What is Model Informatics? What does it mean to "Verify" AI?
We first started exploring verifiability almost 3 years ago and first shared a few initial ideas in early 2024. We were learning, alongside everyone, what AI could really do. To be honest, we still don’t fully grasp the potential.
Then, as now, we were focused on answering a deeper question:
How do we know which model we’re using?
We should be able to know:
When a judicial AI is used to determine sentencing for a convicted person.
When a financial AI is used to determine loan approvals and interest rates.
When healthcare AI is used to perform an early-diagnosis and generate a treatment plan for a complex disease.
We determined that to actually know what a model is, to recognize it, and to verify properties about it, we needed to learn…a lot. And we needed new tools to make sense of these complex new systems (or agents, or machine intelligences, or token-prediction machines, or …). These tools helped us derive properties about the systems we were exploring. We didn’t really know what to call this work until recently: Model Informatics.
So what is Model Informatics?
Inspired by Bioinformatics, Model Informatics refers to the systematic study and management of AI models themselves as complex information systems.
Bioinformatics as a discipline has become essential to modern biology and medicine, enabling breakthroughs in understanding genetic diseases, developing new treatments, and advancing our knowledge of life processes at the molecular level. People can even get PhDs in bioinformatics from elite universities.
We believe the same level of inquiry and exploration should be applied to AI, especially as it permeates every facet of society, as we expect (and hope) it will.
And to enable this inquiry, we will need better tools. There will be equivalents of the gene-sequencing devices that significantly brought down costs, algorithmic breakthroughs that provide novel insights, and open-source tools that democratize who can do the exploring.
Knowing is Half the Battle
It’s not enough to simply know something about a model, or a group of models, used in our critical applications. If we can’t verify these models and their properties at the moment we are using them, that knowledge is of little use. This is especially important as we increasingly rely on models created by different labs, hosted by different providers, and integrated into the fabric of all applications.
Even more tools and techniques will be required to peer into these complex systems and gather what we can: to check that the foundations of these systems are sound and match what we expect. This is where verifiability meets model informatics. Each depends on the other.
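As a deliberately simple illustration of verifying "which model we're using" at the moment of use, consider fingerprinting a model's weights. The sketch below is our own hypothetical example, not an established protocol: it assumes weights are stored as files on disk and hashes them in a stable order with SHA-256, so the running model can be compared against a fingerprint published by the model's creator. Real deployments would need much more (signed attestations, provenance for the serving stack), but the core idea is the same.

```python
import hashlib
from pathlib import Path

def fingerprint_model(weight_dir: str) -> str:
    """Produce a single reproducible fingerprint for a model's weight files.

    Files are visited in sorted order so the digest is deterministic;
    each file's name and bytes both feed the hash, so renames and edits
    alike change the fingerprint.
    """
    digest = hashlib.sha256()
    for path in sorted(Path(weight_dir).rglob("*")):
        if path.is_file():
            digest.update(path.name.encode("utf-8"))
            digest.update(path.read_bytes())
    return digest.hexdigest()

# At load time, compare against the fingerprint the model's creator published:
#   assert fingerprint_model("./model") == EXPECTED_FINGERPRINT
```

Any tampering with the weights, or a silent swap to a different model, yields a different fingerprint. The harder questions (who vouches for the expected fingerprint, and how to verify properties beyond raw identity) are exactly where the tooling gap lies.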
Join Us
We are excited to build these tools, to deepen our understanding of these fascinating, enigmatic new entrants to the world, and to share what we learn. If you’re an explorer like us, we’d love to hear from you.
A note about Mechanistic Interpretability
We do not claim to be the first group to ask these epistemic questions about AI. There is active research in “Mechanistic Interpretability” which we deeply respect and are inspired by. But there is additional work to do to understand how AI works with and within human-centered systems: governments, enterprises, households, schools, etc. These systems have their own ways of being, and it will be fascinating to see how they invite AI to participate alongside them.
Mechanistic interpretability is like studying cellular biology to understand how individual cells work, while model informatics is like building a comprehensive medical records system that tracks patient health, treatment history, and outcomes across entire healthcare systems. Both are critical and necessary to achieve the goal of improving the longevity and quality of life of a person.