The Guidance Says What. The Next 12 Articles Show How
Over the past four months, I've published 12 articles on AI validation for life sciences, starting with the case for interdisciplinary expertise in AI validation and covering everything from technical implementation (HITL/HOTL in practice, transparency architecture) to the business case for a robust validation strategy.
Since my first post, I launched Britt Biocomputing Insights and filed Britt Biocomputing as an LLC on November 25th. On the regulatory side, the FDA announced its internal use of agentic AI on December 1st, and just weeks later, on January 14th, the FDA and EMA jointly released the Good AI Practice Principles (https://www.kaylabritt.com/blog-1-1/fda-amp-ema-just-released-ai-guiding-principles-for-drug-development-heres-what-they-actually-mean).
Prior to launching my consultancy publicly, I shared my core framework on my website:
CoU Definition → Risk → Eval Design and Development → Acceptance Criteria → Deployment and HITL Control → Continuous Monitoring
I also noted that rigor always scales with the risk tier. The FDA/EMA Good AI Practice Principles formalized these same pillars.
Now that the foundations are laid, the next phase of this blog addresses the harder operational questions: agentic and multimodal architectures, human performance validation, full worked examples, and vendor qualification. These are the specifics sponsors need as they navigate the changing landscape of AI architectures in drug development. The guidance tells sponsors what to demonstrate but deliberately stops short of how. That's where the next phase of this work lives.
The foundations are built. Now we stress-test them.