Advancing AI Transparency in Legal Systems

We’re building the legal and technical infrastructure for transparent, accountable AI decision-making in government systems that affect millions of lives.

✓ NYS Incorporated
✓ EIN: 39-4684384
⏳ 501(c)(3) Pending

Our Mission

The Hammurabi Institute advances algorithmic transparency and due process rights in automated government decision-making systems, with particular focus on disability determinations and social safety net programs.

Just as Hammurabi’s ancient code made laws visible by carving them in stone for all to see, we work to ensure that AI systems making critical decisions about people’s lives operate with transparency, accountability, and respect for constitutional due process.

Through a unique combination of patent-protected technical innovations, stakeholder education, policy advocacy, and strategic litigation support, we’re creating enforceable standards for transparent AI in government systems—transforming black box algorithms into glass box systems that can be understood, challenged, and improved.

Our Core Work

🔬 Research & Development

Developing technical standards and frameworks for transparent AI adjudication systems, backed by three foundational patents that create enforceable transparency requirements.

📚 Education & Training

Providing education to legal advocates, administrative law judges, and government officials on AI transparency issues through partnerships with NOSSCR, AALJ, and NADE.

⚖️ Policy Advocacy

Working with legislators and regulators to establish transparency requirements for AI systems used in government decision-making, particularly in disability and benefits determinations.

🛡️ Legal Support

Supporting impact litigation challenging black box AI systems that violate due process rights, providing technical expertise and patent-based legal frameworks.

💡 Patent Licensing

Licensing our transparency-enabling patents to ensure government contractors and agencies implement accountable AI systems with proper audit trails and explainability.

📢 Public Engagement

Building public awareness through media engagement, thought leadership, and strategic communications about the constitutional issues posed by opaque AI systems.

Board of Directors

Lino Medina Mendez

Founder & Executive Director
The Hammurabi Institute
Legal technologist and patent holder with expertise in AI transparency and automated adjudication systems. Leading the Institute’s mission to ensure constitutional due process in government AI deployments.

Monica Quaintance

Board Member
Formerly CYREN, Kadena
Technology executive with deep expertise in cybersecurity, blockchain, and distributed systems. Former Head of Research & Networks at Kadena, bringing critical technical perspective on secure and transparent systems.

Rui Susan Chen

Board Member
Formerly Salesforce
Enterprise technology leader with extensive experience in AI implementation and data systems at scale. Brings valuable perspective on deploying transparent AI in large organizations.

Sue Jaye Johnson

Board Member
2x Peabody Award Winner
Award-winning journalist, filmmaker, and TED Resident. Creator of groundbreaking documentaries on criminal justice and social issues. Her storytelling expertise helps translate complex AI issues for public understanding.

Jeshua Bratman

Board Member
CTO & Co-Founder, Sizzle AI
AI engineer and educator revolutionizing learning through transparent AI systems. Former Head of ML at Abnormal Security, bringing cutting-edge expertise in building explainable AI that empowers rather than replaces human decision-making.

Patent-Protected Innovations

Our patents create enforceable transparency standards for AI systems

Synthetic AI Adjudicator

Technical framework for transparent AI decision-making in legal contexts, requiring explainable reasoning paths and complete audit trails for every automated determination.
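
As a rough illustration only, here is a minimal sketch of what a determination record with an explainable reasoning path and a complete audit trail could look like; every class, field, and rule identifier below is a hypothetical placeholder, not the patented framework.

```python
# Minimal sketch of a determination record carrying an explainable reasoning
# path and an append-only audit trail. All names here are illustrative
# assumptions, not the patented design.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ReasoningStep:
    rule_id: str      # the policy or regulatory rule that was applied
    evidence: str     # the evidence the rule was applied to
    conclusion: str   # the intermediate finding it produced


@dataclass
class Determination:
    case_id: str
    outcome: str      # e.g. "approved" / "denied"
    reasoning_path: list[ReasoningStep] = field(default_factory=list)
    audit_trail: list[str] = field(default_factory=list)

    def log(self, event: str) -> None:
        """Record a timestamped audit entry for every automated action."""
        self.audit_trail.append(f"{datetime.now(timezone.utc).isoformat()} {event}")


# Each finding is stored with the rule and evidence that produced it, so the
# determination can be reviewed, explained, and challenged step by step.
det = Determination(case_id="CASE-2025-0001", outcome="denied")
det.log("model scored functional-capacity evidence")
det.reasoning_path.append(
    ReasoningStep(rule_id="severity-step",
                  evidence="functional capacity assessment",
                  conclusion="impairment found not severe"))
```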

Game Engine Legal Case Management

Revolutionary approach using game engine technology to create immutable, real-time audit chains for legal proceedings, ensuring complete transparency in AI-assisted adjudication.
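
To illustrate the immutable audit chain idea on its own (separately from the game engine aspect of the patent), here is a conceptual sketch in which each event is hash-linked to the previous one so any later edit is detectable; the structure and field names are assumptions, not the patented system.

```python
# Conceptual sketch of a tamper-evident, append-only audit chain: each entry
# stores the hash of the previous entry, so rewriting history breaks the chain.
import hashlib
import json
from datetime import datetime, timezone


class AuditChain:
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def append(self, event: dict) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "prev_hash": prev_hash,
        }
        # The hash covers the timestamp, the event, and the previous hash.
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append(record)

    def verify(self) -> bool:
        """Recompute every hash; any edit to an earlier entry is detected."""
        prev = "0" * 64
        for rec in self.entries:
            body = {k: rec[k] for k in ("timestamp", "event", "prev_hash")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev_hash"] != prev or rec["hash"] != expected:
                return False
            prev = rec["hash"]
        return True


chain = AuditChain()
chain.append({"actor": "model", "action": "evidence scored"})
chain.append({"actor": "reviewer", "action": "score overridden"})
assert chain.verify()
```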

Confidence Decomposition Algorithm

Method for breaking down AI confidence scores into component factors, allowing legal professionals to understand and challenge the basis for automated decisions.
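
As a conceptual example of confidence decomposition, the sketch below splits a score into per-factor contributions under a simple additive (logit-space) assumption; the factor names and weights are hypothetical, and the patented method may decompose scores differently.

```python
# Illustrative decomposition of a confidence score into named factor
# contributions, assuming a simple weighted-sum model in logit space.
import math


def decompose_confidence(factors: dict[str, float],
                         weights: dict[str, float],
                         bias: float = 0.0) -> tuple[float, dict[str, float]]:
    """Return the overall confidence and each factor's contribution (in logits)."""
    contributions = {name: weights[name] * value for name, value in factors.items()}
    logit = bias + sum(contributions.values())
    confidence = 1.0 / (1.0 + math.exp(-logit))   # sigmoid squashes the logit to (0, 1)
    return confidence, contributions


confidence, parts = decompose_confidence(
    factors={"medical_evidence": 0.8, "work_history": 0.4, "prior_denials": -0.3},
    weights={"medical_evidence": 2.0, "work_history": 1.0, "prior_denials": 1.5},
)
# `parts` shows how much each factor pushed the score up or down, so an
# advocate can see which inputs drove the decision and contest them individually.
```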

Get Involved

Join us in ensuring AI serves justice rather than obscuring it

📍 244 Fifth Avenue, Suite L225, New York, NY 10001

📧 info@hammurabi-institute.org

🔗 EIN: 39-4684384