Collaboration, not Autonomy: Building AI for Clinical Development
July 2025
AI doesn’t need to be fully autonomous to be transformative. In clinical development, it actually shouldn’t be.
This isn’t a domain of clear rules and repeatable outcomes. It’s filled with ambiguity, trade-offs, political dynamics, and tacit knowledge that can’t be captured in data.
We’re building for a different paradigm at Cori: systems that operate with partial autonomy. That collaborate with humans in the loop. That accelerate decisions without overstepping them. Because in clinical development, full autonomy isn’t the goal – collaborative intelligence is.
Coherence, not control
Clinical development isn’t chaotic because people are disorganised; it’s chaotic because the information is fragmented. Documents live in different places. Prior decisions get buried. People reference different versions of the same thing without knowing. Small misalignments compound until they become costly delays.
That’s the kind of coordination Cori is built for. Not task management. Not timeline chasing. But helping teams work from the same base of structured, verified information.
Cori flags contradictions, surfaces missing context, and brings past decisions back into view when they’re relevant again. It acts like a second brain for your trial, watching for drift and nudging you when things don’t line up.
But it doesn’t make those calls for you.
When trade-offs arise – between patient burden and data quality, between speed and regulatory risk – resolving them still requires human judgment. Strategy, politics, and precedent don’t follow clean rules.
And much of what matters isn’t written down: a method you remember working once before, a site you’ve heard struggles with patient engagement, a reviewer who prefers data in a particular layout.
AI can’t intuit that. But it can make sure the rest of the system is aligned, complete, and visible, so your judgment stands on a solid foundation.
Surfacing, not deciding
What AI can do is bring the key decisions to the surface, so that humans – clinicians, regulators, and operators – can make better, more informed decisions, faster.
In clinical development, the biggest risks often come from blind spots – trade-offs made without full context, inconsistencies that go unnoticed, or assumptions left unstated.
Take the clinical trial protocol process.
Drafting a protocol involves hundreds of interdependent decisions about eligibility, endpoints, assessments, and regulatory considerations. AI can accelerate this process by rapidly surfacing relevant past trials, suggesting commonly accepted standards, and flagging inconsistencies or gaps. But it can’t decide which trade-offs to make in a specific therapeutic area, or how to tailor the protocol to a sponsor’s risk appetite or operational realities. AI can help you decide, but it can’t make the decision for you.
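To make the division of labour concrete, here is a minimal sketch of the kind of inconsistency check described above. The field names, rules, and data shape are illustrative assumptions, not Cori’s actual implementation – the point is that the system flags issues and a human decides what to do about them.

```python
# Hypothetical sketch: flagging simple internal inconsistencies in draft
# protocol parameters. Field names and rules are illustrative only.

def flag_inconsistencies(protocol: dict) -> list[str]:
    """Return human-readable flags; the human resolves the trade-offs."""
    flags = []
    # The eligibility age window must be a valid range.
    if protocol["min_age"] >= protocol["max_age"]:
        flags.append("Eligibility: min_age is not below max_age.")
    # Every scheduled assessment should support at least one endpoint.
    endpoints = set(protocol["endpoints"])
    for visit, assessments in protocol["schedule"].items():
        unused = [a for a in assessments if a not in endpoints]
        if unused:
            flags.append(f"{visit}: assessments {unused} map to no endpoint.")
    return flags

draft = {
    "min_age": 18, "max_age": 65,
    "endpoints": ["HbA1c", "fasting_glucose"],
    "schedule": {"week_4": ["HbA1c"], "week_12": ["HbA1c", "lipid_panel"]},
}
for flag in flag_inconsistencies(draft):
    print(flag)  # surfaces the issue; does not decide how to fix it
```

Note that the output is a list of flags, not an edited protocol: surfacing the mismatch between `lipid_panel` and the declared endpoints is the machine’s job; deciding whether to add an endpoint or drop the assessment is the team’s.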
From tool to teammate
This collaborative approach has already shown promise in diagnostic settings.
For instance, GPTs customised to work collaboratively have improved clinicians’ diagnostic reasoning. In one study, both the clinician and the AI made independent diagnostic assessments, which the AI then synthesised into a combined assessment, highlighting points of agreement and disagreement and offering commentary on each. This collaborative workflow outperformed the traditional one, achieving an average diagnostic accuracy of 83.5%, compared with 75% using traditional clinical resources.
It demonstrates the promise of collaborative interfaces that combine the complementary skills of AI and human experts. While AI alone performed best of all in the study, it was tested on clinical vignettes – short reports about a patient’s case and history – not real-life patients. These accurately represent what a student doctor faces in an exam, but they’re far less messy than real-world patients and their problems.
That’s where human expertise still holds an edge. In radiology, for instance, AI performs well on large-scale image analysis, but humans remain better at edge cases involving ambiguity, context, or prior experience.
In short, the best performance comes from working with AI as a collaborator, not as a tool to replace human work.
An AI teammate for clinical trials
The workflow we’re building reflects this approach.
For example, when it comes to drafting your documents, the platform doesn’t start generating them immediately. It works with you to ensure you have all the information and context you need to create the right documentation.
Our platform starts by generating a list of the information required for the specific protocol you’re creating, based on templates and previous trials. It then checks your existing documentation against those requirements and flags each one as “healthy”, “medium”, or “non-existent”, depending on how much of the required information is present. If all the information is there, Cori can start drafting. If there are gaps, Cori will prompt you for additional information to fill them before writing anything.
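The gap check described above can be sketched in a few lines. The thresholds, scores, and function names here are assumptions for illustration, not Cori’s actual implementation.

```python
# Illustrative sketch of the readiness check: each required item gets a
# 0–1 coverage score, mapped to the three flags used in the text.
# Thresholds and names are assumptions, not Cori's implementation.

def status(coverage: float) -> str:
    """Map a coverage score to a flag."""
    if coverage >= 0.8:
        return "healthy"
    if coverage > 0.0:
        return "medium"
    return "non-existent"

def check_requirements(required: list[str],
                       coverage: dict[str, float]) -> dict[str, str]:
    """Flag each required item; drafting starts only when all are healthy."""
    return {item: status(coverage.get(item, 0.0)) for item in required}

required = ["primary_endpoint", "eligibility_criteria", "dosing_rationale"]
coverage = {"primary_endpoint": 0.95, "eligibility_criteria": 0.4}
report = check_requirements(required, coverage)
# Any item that is not "healthy" triggers a prompt back to the user
# before any drafting begins.
gaps = [item for item, s in report.items() if s != "healthy"]
print(report)
```

The design choice worth noticing is that a gap blocks drafting rather than being silently worked around: the system asks the human for the missing context instead of inventing it.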
This creates a more collaborative, back-and-forth relationship.

Why human-AI collaboration is the future of clinical development
We need humans in the loop because there is still a lot they do better than machines. The question is not whether AI replaces people, but how AI and humans can best work together to improve efficiency, decision-making, and – ultimately and most importantly – patient outcomes.
Let AI handle the organisation, prioritisation, and acceleration of repetitive work, freeing humans to focus on what they’re good at: applying judgement and experience to the bigger strategic and political decisions, and to the messy real-world edge cases.
AI should fit into human systems, not override them.