We’re building the legal and technical infrastructure for transparent, accountable AI decision-making in government systems that affect millions of lives.
The Hammurabi Institute advances algorithmic transparency and due process rights in automated government decision-making systems, with particular focus on disability determinations and social safety net programs.
Just as Hammurabi’s ancient code made laws visible by carving them in stone for all to see, we work to ensure that AI systems making critical decisions about people’s lives operate with transparency, accountability, and respect for constitutional due process.
Through a unique combination of patent-protected technical innovations, stakeholder education, policy advocacy, and strategic litigation support, we're creating enforceable standards for transparent AI in government systems. Our goal is to transform black-box algorithms into glass-box systems that can be understood, challenged, and improved.
Developing technical standards and frameworks for transparent AI adjudication systems, backed by three foundational patents that create enforceable transparency requirements.
Providing education to legal advocates, administrative law judges, and government officials on AI transparency issues through partnerships with NOSSCR, AALJ, and NADE.
Working with legislators and regulators to establish transparency requirements for AI systems used in government decision-making, particularly in disability and benefits determinations.
Supporting impact litigation challenging black-box AI systems that violate due process rights, providing technical expertise and patent-based legal frameworks.
Licensing our transparency-enabling patents to ensure government contractors and agencies implement accountable AI systems with proper audit trails and explainability.
Building public awareness through media engagement, thought leadership, and strategic communications about the constitutional issues posed by opaque AI systems.
Founder & Executive Director
The Hammurabi Institute
Legal technologist and patent holder with expertise in AI transparency and automated adjudication systems. Leading the Institute’s mission to ensure constitutional due process in government AI deployments.
Board Member
Formerly Kadena
Technology executive with deep expertise in cybersecurity, blockchain, and distributed systems. Former Head of Research & Networks at Kadena, bringing critical technical perspective on secure and transparent systems.
Board Member
Formerly Salesforce
Enterprise technology leader with extensive experience in AI implementation and data systems at scale. Brings valuable perspective on deploying transparent AI in large organizations.
Board Member
2x Peabody Award Winner
Award-winning journalist, filmmaker, and TED Resident. Creator of groundbreaking documentaries on criminal justice and social issues. Her storytelling expertise helps translate complex AI issues for public understanding.
Board Member
CTO & Co-Founder, Sizzle AI
AI engineer and educator revolutionizing learning through transparent AI systems. Former Head of ML at Abnormal Security, bringing cutting-edge expertise in building explainable AI that empowers rather than replaces human decision-making.
Our patents create enforceable transparency standards for AI systems
Technical framework for transparent AI decision-making in legal contexts, requiring explainable reasoning paths and complete audit trails for every automated determination.
Revolutionary approach using game engine technology to create immutable, real-time audit chains for legal proceedings, ensuring complete transparency in AI-assisted adjudication.
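The patent summary above is high-level, so as a purely illustrative sketch (not the patented method, and with all names and structures invented for this example), the core idea of an immutable audit chain can be shown as a hash-linked log: each entry commits to the hash of its predecessor, so any after-the-fact edit breaks verification.

```python
import hashlib
import json


class AuditChain:
    """Illustrative append-only, hash-linked audit log.

    Each entry stores the hash of the previous entry, so tampering
    with any earlier record invalidates every later hash.
    (A sketch only; not the Institute's patented mechanism.)
    """

    GENESIS = "0" * 64  # placeholder hash for the first entry

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> str:
        """Record an event and return its chained hash."""
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        record = {"event": event, "prev": prev_hash}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**record, "hash": digest})
        return digest

    def verify(self) -> bool:
        """Recompute every hash; return False if any entry was altered."""
        prev = self.GENESIS
        for entry in self.entries:
            record = {"event": entry["event"], "prev": entry["prev"]}
            digest = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev"] != prev or entry["hash"] != digest:
                return False
            prev = entry["hash"]
        return True
```

In a real adjudication system, each appended event might be a ruling, an evidence submission, or a model inference, giving reviewers a complete, tamper-evident record of every automated step.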
Method for breaking down AI confidence scores into component factors, allowing legal professionals to understand and challenge the basis for automated decisions.
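To make the decomposition idea concrete, here is a minimal sketch, assuming a simple weighted-average model (the factor names, weights, and function are invented for illustration and are not the patented method): an overall confidence score is split into per-factor contributions that an advocate can inspect and contest individually.

```python
def decompose_confidence(factors: dict, weights: dict) -> dict:
    """Split an overall confidence score into per-factor contributions.

    Illustrative only: assumes the overall score is a weighted average
    of component factor scores, so each contribution is
    (factor weight / total weight) * factor score.
    """
    total_weight = sum(weights.values())
    contributions = {
        name: (weights[name] / total_weight) * score
        for name, score in factors.items()
    }
    return {
        "overall": sum(contributions.values()),
        "contributions": contributions,
    }


# Hypothetical disability-determination factors (example values only):
result = decompose_confidence(
    factors={"medical_evidence": 0.9, "work_history": 0.6},
    weights={"medical_evidence": 2.0, "work_history": 1.0},
)
```

Surfacing the contribution of each factor, rather than a single opaque number, is what lets a claimant's representative challenge the specific basis of an automated decision.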
Join us in ensuring AI serves justice, not obscures it