Research from Rota, Inc.

Trust Intelligence Research

We develop the science and tools for trust decisions—whether verifying an AI agent's output, approving a financial transaction, or validating a network configuration.

4+ Research Papers
10 Open Source Projects
96% Sandbagging Detection Accuracy

Featured Research

Field-Theoretic Memory Systems for AI Agents

Subhadip Mitra · January 2026

Memory systems modeled as continuous fields enabling smooth interpolation, natural decay, and principled attention over long contexts.

Read Paper · Code
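The field idea can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: each memory is a Gaussian bump on a continuous time axis whose amplitude decays exponentially, so reads interpolate smoothly between nearby entries instead of doing discrete lookups.

```python
import math

class FieldMemory:
    """Toy continuous-field memory (illustrative; not the paper's method)."""

    def __init__(self, width=1.0, half_life=50.0):
        self.width = width                      # kernel width (smoothness)
        self.decay = math.log(2) / half_life    # decay rate from half-life
        self.entries = []                       # (time, value) pairs

    def write(self, t, value):
        self.entries.append((t, value))

    def read(self, t, now):
        # Field value at query time t: decay-weighted kernel interpolation.
        num = den = 0.0
        for t_i, v in self.entries:
            w = math.exp(-((t - t_i) / self.width) ** 2)  # smooth kernel
            w *= math.exp(-self.decay * (now - t_i))      # natural decay
            num += w * v
            den += w
        return num / den if den else 0.0

mem = FieldMemory()
mem.write(0.0, 1.0)
mem.write(10.0, 3.0)
print(mem.read(5.0, now=10.0))  # smooth blend of both entries, older one decayed
```

Reading midway between two writes returns a weighted blend, with the older entry contributing less because it has decayed longer.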

Verity: Verified Synthesis via Confidence-Error-Effort-Progress

Subhadip Mitra · January 2026

Framework for verified code generation using four measurable signals that predict correctness without ground truth.

Read Paper · Code
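An illustrative sketch of the CE2P idea (the paper's actual scoring rule and weights are not reproduced here): combine the four measurable signals into one accept/reject decision for a generated program, with no ground-truth oracle. The weights and threshold below are hypothetical.

```python
def ce2p_accept(confidence, error, effort, progress, threshold=0.7):
    """Toy CE2P-style gate; all signals normalized to [0, 1].

    Weights are hypothetical, for illustration only.
    """
    score = (0.4 * confidence      # model's self-assessed confidence
             + 0.2 * (1 - error)   # fewer observed errors is better
             + 0.2 * (1 - effort)  # little remaining effort suggests done
             + 0.2 * progress)     # steady progress toward the spec
    return score >= threshold

print(ce2p_accept(confidence=0.9, error=0.1, effort=0.2, progress=0.8))
```

A high-confidence, low-error generation passes the gate; a low-confidence, high-error one does not.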

Research Tracks

Track | Focus | Status
Field-Theoretic Memory | Continuous field memory for AI agents | Complete
Verified Synthesis | CE2P hypothesis for code verification | Complete
Red Queen | Evolutionary adversarial testing | Complete
Trust Cascade Theory | Formal ROI-based decision routing | Complete
Sparse Trust Circuits | Mechanistic interpretability for trust | Active
Graph Trust Propagation | GNNs for entity trust networks | Active

Open Source

rotalabs-cascade

Domain-agnostic trust cascade routing. Routes decisions to the cheapest sufficient layer.

pip install rotalabs-cascade

GitHub
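The cascade-routing idea can be sketched generically (this is not the rotalabs-cascade API): try verification layers from cheapest to most expensive, and stop at the first layer whose confidence clears the required threshold. The fallback to a human reviewer is an assumption for illustration.

```python
def route(decision, layers, threshold=0.9):
    """Route a decision to the cheapest sufficient layer (toy sketch).

    layers: list of (name, cost, verify_fn); verify_fn(decision)
    returns (verdict, confidence).
    """
    spent = 0.0
    for name, cost, verify in sorted(layers, key=lambda l: l[1]):
        spent += cost
        verdict, confidence = verify(decision)
        if confidence >= threshold:
            return {"layer": name, "verdict": verdict, "cost": spent}
    # No layer was confident enough: escalate (assumed human fallback).
    return {"layer": "human", "verdict": None, "cost": spent}

# Toy layers: a cheap heuristic that is unsure, and a pricier judge that isn't.
layers = [
    ("heuristic", 0.01, lambda d: (True, 0.6)),
    ("llm-judge", 1.00, lambda d: (True, 0.95)),
]
print(route("approve txn", layers))  # escalates past the heuristic
```

Most decisions resolve cheaply; only the uncertain ones pay for the expensive layers, which is what makes the routing ROI-positive.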

rotalabs-probe

Sandbagging detection via metacognitive probes. 90-96% accuracy.

pip install rotalabs-probe

GitHub
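A minimal sketch of probe-based detection (not the rotalabs-probe API): a linear probe reads a model's hidden activations and scores whether the model is deliberately underperforming. The weights below are toy values; in practice a probe is fit on labeled honest-vs-sandbagging activations.

```python
import math

def probe_score(activation, weights, bias=0.0):
    """Logistic probe: sigmoid of a learned linear readout (toy sketch)."""
    z = sum(w * a for w, a in zip(weights, activation)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical fitted probe weights and a sample activation vector.
weights = [0.8, -0.5, 1.2]
activation = [0.9, 0.1, 0.7]
flagged = probe_score(activation, weights) > 0.5  # flag if score > 0.5
print(flagged)
```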

rotalabs-steer

Runtime behavior control with steering vectors.

pip install rotalabs-steer

GitHub
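The steering-vector idea can be sketched generically (this is not the rotalabs-steer API): at inference time, add a scaled direction vector to a layer's hidden activations to push behavior along that direction. The "honesty" direction below is a made-up example; real directions are typically derived from contrastive activations.

```python
def steer(hidden, direction, alpha=1.0):
    """Shift a hidden-state vector along a steering direction (toy sketch)."""
    return [h + alpha * d for h, d in zip(hidden, direction)]

# Toy 4-dim hidden state nudged along a hypothetical "honesty" direction.
hidden = [0.2, -0.1, 0.5, 0.0]
honesty_dir = [1.0, 0.0, -1.0, 0.5]
print(steer(hidden, honesty_dir, alpha=0.1))
```

The scale `alpha` controls steering strength; `alpha=0` leaves the activations unchanged.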

About

Rotalabs is the research division of Rota, Inc., a trust intelligence company. We publish open research and open-source tools for trust decisions in AI systems and regulated industries.

research@rotalabs.ai