Imagine a future where artificial intelligence decides your medical treatments or predicts your life choices, all without your say-so. AI is revolutionizing human subjects research (HSR), and the safeguards designed to protect participants are lagging behind.
Chances are you've been part of human subjects research at some point in your life: joining a clinical trial for a new medication, filling out a survey on your daily health routines, or volunteering for a psychology experiment in college that paid a modest $20. Or maybe you've been on the other side, designing studies as a student or professional researcher. It's a field that touches us all, directly or indirectly.
Key takeaways
- Artificial intelligence is transforming how we conduct research involving people, yet the rules designed to protect participants haven't evolved quickly enough.
- AI holds promise for better healthcare and streamlined research, provided it's developed ethically and with strong oversight.
- Our personal data is often used in unseen ways without our consent, and marginalized groups face the heaviest risks.
At its core, human subjects research, or HSR, involves studying living individuals to gather data, biological samples, or insights through direct interaction. This includes not just biomedical studies like drug trials, but also social, behavioral, and educational research that analyzes private information or biospecimens that could trace back to someone. Think of it as two main categories: one focusing on human behavior and learning, and the other on medical and biological aspects.
To carry out such research, approval from an Institutional Review Board (IRB) is essential. These are ethics committees tasked with safeguarding participants, and any organization receiving federal funding for research must establish one. This could apply to universities, hospitals, public schools, or nonprofits.
Protection for research participants wasn't always a priority. The 20th century saw appalling abuses, such as the Tuskegee Syphilis Study—where African American men with syphilis were deliberately left untreated to observe the disease's progression—and Nazi medical experiments during World War II, which involved horrific procedures without consent. Revelations from Tuskegee in 1972 sparked outrage, leading to the Belmont Report in 1979. This document laid out foundational ethical guidelines: respecting people's right to make their own decisions, minimizing harm while maximizing benefits, and ensuring risks and rewards are shared fairly. These principles underpin the Common Rule, the federal framework governing IRBs and HSR.
Fast-forward to today: it's not 1979 anymore, and AI is rewriting how we do research. Our ethical and legal systems, however, haven't caught up. Enter Tamiko Eto, a Certified IRB Professional (CIP) and expert in HSR ethics and AI oversight. She's the founder of TechInHSR, a consultancy that helps IRBs with AI-related reviews. I recently spoke with her about AI's impact on HSR: the upsides, the downsides, and the urgent need for change. Our conversation, edited for clarity, follows.
With over 20 years in HSR protection, how has AI's rise altered the landscape?
AI has completely overturned traditional research approaches. Historically, we'd examine individuals to draw conclusions about broader populations. Now, AI extracts vast patterns from collective data to inform choices about single people. This exposes weaknesses in our IRB system, rooted in the Belmont Report from nearly 50 years ago. That document focused on physical participants, not their digital footprints. AI shifts the spotlight to 'human data subjects'—people's information fed into systems often unbeknownst to them. Consequently, enormous troves of personal data circulate endlessly among companies, lacking consent or proper checks.
Can you share an example of AI-heavy HSR?
In social-behavioral-educational fields, imagine using student data to train algorithms that pinpoint ways to improve teaching or student outcomes. In healthcare, medical records fuel models that predict diseases like diabetes or cancer. AI has also redefined what counts as 'identifiable' data; information once considered anonymous under older laws can now be traced back to individuals.
What's the source of these outdated definitions?
Healthcare relies on HIPAA standards, which were crafted before AI's data-handling capabilities existed. HIPAA's 'Safe Harbor' method assumes that removing a fixed list of identifiers (names, dates, addresses, and the like) makes re-identification unlikely, but we now know that's false.
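To see why that assumption fails, consider the classic linkage attack: Latanya Sweeney showed decades ago that ZIP code, birth date, and sex alone uniquely identify most Americans. Here's a minimal, self-contained sketch in Python; the tables and values are toy data invented for illustration:

```python
import pandas as pd

# Toy "de-identified" health records: direct identifiers removed,
# but quasi-identifiers (ZIP code, birth year, sex) remain.
health = pd.DataFrame({
    "zip": ["02138", "02138", "60611"],
    "birth_year": [1954, 1987, 1954],
    "sex": ["F", "M", "F"],
    "diagnosis": ["hypertension", "asthma", "diabetes"],
})

# Toy public record (think voter roll) that carries names
# alongside the same quasi-identifiers.
public = pd.DataFrame({
    "name": ["A. Smith", "B. Jones", "C. Lee"],
    "zip": ["02138", "02138", "60611"],
    "birth_year": [1954, 1987, 1954],
    "sex": ["F", "M", "F"],
})

# A simple join on the quasi-identifiers re-attaches names to
# "anonymous" medical records wherever the combination is unique.
linked = health.merge(public, on=["zip", "birth_year", "sex"])
print(linked[["name", "diagnosis"]])
```

No identifier on HIPAA's removal list was needed; the join key is made entirely of fields the standard treats as safe to keep.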
What positives can AI bring to research? Many people don't grasp why IRB rules matter in the first place. What's the case for AI here?
AI can truly enhance healthcare, patient outcomes, and research efficiency—but only if built responsibly. Thoughtfully designed tools can spot issues early, such as detecting sepsis through vital signs or identifying cancer markers in scans by comparing to expert diagnoses.
Yet, I've observed poorly constructed tools causing real harm. I'm advocating for AI to streamline our workflows: handling massive datasets, automating mundane tasks for better productivity. Responsibly used, AI speeds up IRB application processes, helps review risks, and improves team communication. The potential is huge, contingent on ethical development.
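Her sepsis example illustrates what 'comparing to expert diagnoses' means in practice: a supervised model trained on vital signs against expert-labeled outcomes. Here's a minimal sketch with synthetic data; the features, coefficients, and alert threshold are invented for illustration, and this is in no way a clinical tool:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic vital-sign features: heart rate, respiratory rate, temperature.
n = 500
X = np.column_stack([
    rng.normal(85, 15, n),    # heart rate (bpm)
    rng.normal(18, 4, n),     # respiratory rate (breaths/min)
    rng.normal(37.0, 0.8, n), # temperature (deg C)
])

# Labels stand in for expert-adjudicated sepsis diagnoses:
# elevated vitals raise the (synthetic) probability of sepsis.
risk = 0.04 * (X[:, 0] - 85) + 0.2 * (X[:, 1] - 18) + 0.8 * (X[:, 2] - 37.0)
y = (risk + rng.normal(0, 1, n) > 1.0).astype(int)

# Train on expert-labeled history, evaluate on held-out patients.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Flag patients whose predicted risk crosses an alert threshold.
alerts = model.predict_proba(X_te)[:, 1] > 0.5
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}, alerts: {alerts.sum()}")
```

The ethical questions Eto raises live upstream of a model like this: where the training records came from, and whether the labeled population matches the patients the alerts will be run on.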
What are the top immediate risks of AI in HSR?
Familiar dangers include 'black box' decisions—algorithms making calls without transparent reasoning, complicating informed choices. Even if explainability improves, the core ethical issue remains: how data is collected. Was it authorized? Do we have permission to use and even profit from it?
This ties into privacy. Unlike some nations with stronger data rights, the US lacks robust protections for data ownership. We're unable to control collection, usage, or sharing—a gaping hole.
Everything is potentially traceable, which heightens the danger. Studies have demonstrated re-identification from MRI scans (even without names or faces) and from Fitbit step counts tied to locations. Then there's the 'digital twin': a virtual replica of you assembled from medical records, biometrics, social media, movement data, online chats, voice clips, and writing patterns. Used well, it can predict how you'll respond to a medication; misused, it can forge your voice or act on your behalf without consent. And you may have no rights over this version of 'you,' all justified in the name of improving health.
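The Fitbit point is worth making concrete. The sketch below uses synthetic data (not drawn from any study she cites) to show why even coarse activity data is so identifying: in a database of thousands of users' daily step counts, observing just a few days of a target's counts is usually enough to single out one row.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic database: 10,000 users' daily step counts over 30 days,
# stored without names ("de-identified").
n_users, n_days = 10_000, 30
db = rng.integers(2_000, 15_000, size=(n_users, n_days))

# An attacker observes just a handful of days for a target,
# e.g. from a leaked export or a shared screenshot.
target = 4242
observed_days = [3, 11, 19, 27]
observation = db[target, observed_days]

# How many database rows match those few observed days exactly?
matches = np.where((db[:, observed_days] == observation).all(axis=1))[0]
print(f"candidates matching 4 observed days: {len(matches)}")  # typically 1
```

The same logic applies, even more sharply, to richer signals like location traces, where a handful of time-stamped points is typically unique to one person.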
What about longer-term threats?
IRBs can't address broad societal impacts; they're limited to individual harms. Yet AI's real perils—discrimination, inequality, data misuse—occur at scale. Clinicians might hesitate to adopt AI due to liability fears, as companies shift blame to users.
Data quality is another issue. The justice principle requires equitable distribution of benefits and harms. But marginalized groups supply data without consent, then suffer from biased, inaccurate tools. They bear the burden without reaping rewards.
Could bad actors exploit this data?
Definitely. Weak privacy laws allow unregulated sharing with malicious parties, leading to harm.
How can IRB professionals build their AI literacy?
AI literacy goes beyond tech basics; it's about posing the right questions. I've developed a three-stage framework for reviewing AI research that helps IRBs assess risks at each stage of development and accommodate AI's cyclical nature, as opposed to the linear processes traditional reviews assume. It adapts traditional review for iterative AI projects.
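Eto didn't walk me through the framework's internals, so the sketch below is a hypothetical illustration of mine, not her actual stages or questions. It shows the general shape of a lifecycle-keyed review checklist that gets re-run as a project iterates, rather than applied once at approval:

```python
from dataclasses import dataclass, field

# Hypothetical stages and questions, for illustration only;
# these are not Eto's framework.
@dataclass
class ReviewStage:
    name: str
    questions: list[str] = field(default_factory=list)

LIFECYCLE = [
    ReviewStage("data acquisition", [
        "Was the training data collected with consent or a valid waiver?",
        "Could the data re-identify participants when linked to other sources?",
    ]),
    ReviewStage("model development", [
        "Are performance and error rates reported across demographic groups?",
        "Is there a plan to monitor drift as the model is retrained?",
    ]),
    ReviewStage("deployment", [
        "Who is accountable when the model's output harms a participant?",
        "Can affected individuals contest or opt out of automated decisions?",
    ]),
]

def review(project_stage: str) -> list[str]:
    """Return the checklist for wherever the project currently sits.
    Because AI development is cyclical, the same stage may be
    reviewed repeatedly as data and models are updated."""
    for stage in LIFECYCLE:
        if stage.name == project_stage:
            return stage.questions
    raise ValueError(f"unknown stage: {project_stage}")

print(review("model development"))
```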
As AI hallucinations decline and privacy protections strengthen, will more people embrace AI in HSR?
Automation bias leads us to trust computer outputs uncritically, and AI is no exception. In fast-paced environments like clinics, people hurry and skip verification. IRB staff under pressure may do the same, accepting AI results without scrutiny as work piles up.
Hallucinations may lessen, but 'improvement' often means collecting more data—frequently unconsented, scraped from unexpected sources. This isn't true progress; it's exploitation. Genuine advancement needs ethical sourcing, equitable benefits, and limits—likely through laws, transparency, and clinician advocacy.
Companies lobby to avoid liability, shifting it to users. Clinicians, aware of potential mistakes, might stay wary.
Describe the worst outcome—and how to prevent it.
The worst scenario: AI influencing life-altering decisions—jobs, healthcare, loans—built on biased data without oversight. IRBs cover federally funded work, but AI often bypasses them via waivers or lack of review. Unconsented data slips past protections.
We'll trust these systems blindly, embedding inequities. Research-derived tools deploy into society, perpetuating discrimination against underrepresented groups.
To avoid that outcome, start at research's roots: enforce ethical data practices, expand IRB roles, and foster transparency. Prevention means involving diverse voices, questioning biases, and evolving laws to match AI's pace.
What do you think? Do AI's benefits justify the risks, or should unconsented data use be off-limits until stronger protections exist? Share your views in the comments.