Research without systems is incomplete. Systems without research are fragile.
This section documents my work at the intersection of connected surgical intelligence, multimodal perception, and high-stakes AI systems—examining not only models, but the infrastructure, reliability, and economic constraints required to deploy them responsibly.
Explore by Domain
Each domain reflects a layer of research and system architecture. Click to explore.
Connected Surgical Intelligence
Designing measurable, verifiable surgical systems that reduce preventable error.
AI Architecture & Infrastructure
Production-grade machine learning systems, deployment patterns, and infrastructure economics.
Governance & Reliability
Risk management, compliance, and building accountable AI systems for healthcare.
Vision & Perception Systems
Multimodal sensing, calibration, and structured scene understanding in high-stakes environments.
Foundation Models & Learning Dynamics
Representation learning, structured adaptation, and the practical limits of large models.
Featured
The Tool Is Eating the Task
Why the durable advantage is deciding what should be done, not doing what was requested.
The Knowledge Problem No One Talks About
Why the Next Operating System Will Be Built for AI Agents
The Architecture of Trustworthy AI
Five domains, sixteen pillars, two foundational layers: a working standard for building AI systems that deserve trust. Not just systems that perform, but systems that survive, reason correctly, earn social acceptance, and endure economically.
When Less Is More: The Quiet Revolution in How AI Learns to See
Your brain encodes information sparsely. A few neurons fire with precision; the rest stay quiet. Now consider how today's most powerful AI systems represent information. They do the opposite. What if we taught AI to be sparse by design?
Grokking: The Long Road to Understanding
Grokking, Delayed Generalisation, and What We Still Don't Know About How Neural Networks Learn
AI Architecture & Infrastructure
Connected Surgical Intelligence
Verification, Not Classification
Our first attempt at automated validation used classification. It achieved 97% accuracy in the lab. In production, it failed catastrophically on the first day. Why embedding-based anomaly detection outperforms traditional classification for industrial validation.
The Architecture of Trust
Why Hybrid Intelligence Will Define Clinical AI
Governance & Reliability
The Mirage of Intelligence
Why Superficial AI Narratives Are Dangerous — and What Real Leadership Looks Like
The Line Between a Prediction and a Patient
What happens when AI is wrong, and who bears the consequences? Why the engineers who understand where probabilistic predictions end and consequential decisions begin will matter most.