TL;DR: We’ve released rotalabs-context on both PyPI and npm. It gives AI agents a shared context layer - ingest records from any source, search them with keyword or semantic matching, and subscribe to updates in real time. It works locally with zero config, or connects to the Rotascale platform for production.
The Problem: Agents in Silos
Most multi-agent systems have a context fragmentation problem. Each agent maintains its own view of the world. Agent A processes a support ticket about a login failure. Agent B investigates a latency spike in the auth service. Agent C reviews an incident report from the on-call engineer. All three are looking at different facets of the same outage, but none of them know what the others know.
This isn’t a hypothetical. It’s the default state of nearly every production agent deployment we’ve seen.
The consequences compound quickly:
- Duplicated work. Multiple agents independently investigate the same root cause.
- Missed correlations. Signals that would be obvious in aggregate stay invisible when scattered across isolated contexts.
- Inconsistent responses. Different agents give different answers about the same situation because they’re working from different information.
- Trust degradation. When agents contradict each other, operators lose confidence in the entire system.
We’ve written before about trust dynamics in multi-agent networks and coordination without centralized control. Shared context is a prerequisite for both. Agents can’t establish trust or coordinate effectively if they can’t access the same ground truth.
What rotalabs-context Does
rotalabs-context is a context intelligence engine. It provides three core operations:
- Ingest - Push records from any source (ETL pipelines, webhooks, manual uploads) with automatic enrichment.
- Search - Query stored context using keyword, semantic, or hybrid search with filtering by tags, sources, scopes, and sensitivity levels.
- Subscribe - Register callbacks for context events, so agents get notified when relevant new information arrives.
The API surface is intentionally small. Three operations cover the core use case: put context in, get context out, react when context changes.
Getting Started
Install for Python:
pip install rotalabs-context
Or for Node.js:
npm install @rotalabs/context
Both packages work out of the box with zero configuration - they default to an in-memory backend that’s useful for development and testing.
Ingesting Context
When an agent encounters information worth sharing, it pushes it to the context engine.
Python:
from rotalabs_context import ContextEngine
ctx = ContextEngine()
result = ctx.ingest(
records=[
{"content": "Auth service latency spike at 14:32 UTC", "title": "Incident #891"},
{"content": "User reports login failures across EU region", "title": "Support ticket #4501"},
],
source="monitoring",
tags=["incident", "auth"],
scope="team:platform",
sensitivity="internal",
)
print(f"Ingested {result.entries_created} entries")
TypeScript:
import { ContextEngine, Scope, Sensitivity } from "@rotalabs/context";
const ctx = new ContextEngine();
const result = await ctx.ingest({
records: [
{ content: "Auth service latency spike at 14:32 UTC", title: "Incident #891" },
{ content: "User reports login failures across EU region", title: "Support ticket #4501" },
],
source: "monitoring",
tags: ["incident", "auth"],
scope: Scope.TEAM,
sensitivity: Sensitivity.INTERNAL,
});
console.log(`Ingested ${result.entriesCreated} entries`);
Every record gets a content hash for deduplication, a timestamp, and the metadata you attach (source, tags, scope, sensitivity). This metadata becomes the basis for access control and filtering later.
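To make the deduplication idea concrete, here is a minimal sketch of content-hash dedup. This is illustrative only, not the package's internals: the `content_hash` and `ingest_deduped` helpers are hypothetical, and the real engine may normalize content differently.

```python
import hashlib

def content_hash(content: str) -> str:
    # Normalize whitespace so trivially different copies still collide.
    normalized = " ".join(content.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def ingest_deduped(records, store):
    """Insert records into `store` keyed by content hash; skip duplicates."""
    created = 0
    for record in records:
        key = content_hash(record["content"])
        if key not in store:
            store[key] = record
            created += 1
    return created

store = {}
first = ingest_deduped(
    [{"content": "Auth service latency spike at 14:32 UTC"}], store
)
# Same content with extra whitespace hashes identically, so nothing new is created.
second = ingest_deduped(
    [{"content": "Auth service latency spike  at 14:32 UTC"}], store
)
```

The payoff is that multiple agents can push overlapping observations without bloating the store: identical content collapses to one entry, and the attached metadata (source, tags, scope) still accumulates on ingest.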
Searching Context
When an agent needs to make a decision, it queries the context engine instead of relying only on what it already knows.
Python:
results = ctx.search(
"auth service failures",
top_k=5,
mode="hybrid",
filters={
"tags": ["incident"],
"scopes": ["team:platform"],
}
)
for hit in results.results:
print(f"{hit.score:.2f} - {hit.entry.title}")
print(f" {hit.highlights[0]}")
TypeScript:
const results = await ctx.search("auth service failures", {
topK: 5,
mode: "hybrid",
filters: {
tags: ["incident"],
scopes: [Scope.TEAM],
},
});
for (const hit of results.results) {
console.log(`${hit.score.toFixed(2)} - ${hit.entry.title}`);
console.log(` ${hit.highlights[0]}`);
}
Three search modes are available: keyword for exact matching, semantic for meaning-based retrieval (platform mode), and hybrid which combines both. Filters narrow results by tags, sources, scopes, sensitivity, and date ranges.
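A common way hybrid search works is to blend a keyword score with a semantic similarity score. The sketch below is one plausible scheme, not necessarily the package's ranking function; `keyword_score`, `hybrid_rank`, and the `alpha` weight are all hypothetical names for illustration.

```python
def keyword_score(query: str, text: str) -> float:
    """Fraction of query terms that appear in the text."""
    terms = query.lower().split()
    hits = sum(1 for t in terms if t in text.lower())
    return hits / len(terms) if terms else 0.0

def hybrid_rank(query, docs, semantic_scores, alpha=0.5):
    """Blend keyword overlap with precomputed semantic similarity.

    alpha=1.0 is pure keyword matching, alpha=0.0 pure semantic.
    """
    scored = [
        (alpha * keyword_score(query, d) + (1 - alpha) * s, d)
        for d, s in zip(docs, semantic_scores)
    ]
    return sorted(scored, key=lambda pair: pair[0], reverse=True)

docs = [
    "Auth service latency spike at 14:32 UTC",
    "User reports login failures across EU region",
]
# In a real system the semantic scores come from embedding similarity.
ranked = hybrid_rank("auth service failures", docs, [0.9, 0.7])
```

The appeal of a blend like this is robustness: keyword matching catches exact identifiers ("ticket #4501") that embeddings can miss, while semantic scores catch paraphrases that share no terms with the query.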
Subscribing to Updates
Agents shouldn’t have to poll for new information. The event system pushes relevant context to them as it arrives.
Python:
def on_new_incident(data):
entry_ids = data["entry_ids"]
# Trigger investigation workflow
print(f"New incident context: {entry_ids}")
ctx.on("context.created", on_new_incident, filter={"tags": ["incident"]})
TypeScript:
ctx.on("context.created", { filter: { tags: ["incident"] } }, (data) => {
// Trigger investigation workflow
console.log(`New incident context: ${data.entryIds}`);
});
Subscriptions support tag-based filtering, so agents only receive events that match their domain. A security agent subscribes to security tags. An ops agent subscribes to incident tags. No one gets buried in irrelevant notifications.
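The tag-filtered delivery described above can be sketched as a tiny event bus. This is a toy model of the behavior, assuming any-tag-overlap semantics; the `EventBus` class is hypothetical and the real package's dispatch may differ.

```python
class EventBus:
    """Minimal sketch of tag-filtered subscriptions (illustrative only)."""

    def __init__(self):
        self._subs = []

    def on(self, event, callback, filter=None):
        self._subs.append((event, callback, filter or {}))

    def emit(self, event, data):
        for ev, cb, flt in self._subs:
            if ev != event:
                continue
            wanted = set(flt.get("tags", []))
            # Deliver when there is no tag filter, or any tag overlaps.
            if not wanted or wanted & set(data.get("tags", [])):
                cb(data)

bus = EventBus()
received = []
bus.on("context.created", received.append, filter={"tags": ["incident"]})
bus.emit("context.created", {"entry_ids": ["e1"], "tags": ["incident", "auth"]})
bus.emit("context.created", {"entry_ids": ["e2"], "tags": ["billing"]})
```

Under this model the billing event never reaches the incident subscriber, which is the point: each agent's callback queue contains only events in its domain.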
Access Control
Not all context should be visible to all agents. The package supports two dimensions of access control:
Scopes define visibility boundaries:
- global - Visible to everything
- team - Team-specific context
- project - Project-scoped
- agent - Private to a single agent
- user - User-specific
Sensitivity levels classify the data:
- public - No restrictions
- internal - Organization-wide
- confidential - Restricted access
- restricted - Highly sensitive
When an agent searches, filters constrain what it can see. An agent with team:platform scope won’t accidentally surface agent:billing-bot context.
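A plausible visibility check combining both dimensions looks like the sketch below. This is an assumption about how scopes and sensitivity compose, not the package's actual enforcement logic; `can_see` and the clearance ordering are hypothetical.

```python
# Sensitivity levels ordered from least to most restricted.
SENSITIVITY_ORDER = ["public", "internal", "confidential", "restricted"]

def can_see(agent_scopes, clearance, entry_scope, entry_sensitivity):
    """True if the entry's scope is held by the agent and within clearance."""
    in_scope = entry_scope == "global" or entry_scope in agent_scopes
    cleared = (
        SENSITIVITY_ORDER.index(entry_sensitivity)
        <= SENSITIVITY_ORDER.index(clearance)
    )
    return in_scope and cleared

# A platform-team agent sees its own team's internal context...
ok = can_see(["team:platform"], "internal", "team:platform", "internal")
# ...but not another agent's private context, even at the same sensitivity.
blocked = can_see(["team:platform"], "internal", "agent:billing-bot", "internal")
```

Treating the two dimensions independently matters: scope answers "whose context is this?", sensitivity answers "how guarded is it?", and an entry must pass both checks to be visible.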
Local vs. Platform
The package runs in two modes:
Local mode is the default. Zero config, in-memory storage, keyword search. Useful for development, testing, and single-process deployments.
ctx = ContextEngine() # Just works
Platform mode connects to the Rotascale Context Engine, which implements what we call the ETL-C framework - Extract, Transform, Load, Contextualize. The idea is that traditional ETL strips business context from data. Records lose meaning as they move through pipelines, and AI systems hallucinate to fill the gaps. ETL-C adds a contextualization step that preserves and enriches that meaning.
ctx = ContextEngine(api_key="rot_...")
In platform mode, the same ingest and search calls gain access to:
- Contextual joins - Semantic matching across datasets using embeddings and metadata. The platform matches records by meaning, not just exact keys. “J. Smith” connects to “Jane Smith” at 95% confidence, with an audit trail.
- Managed context store - A unified storage layer combining vector databases for embeddings, graph databases for relationships, and time-series data for temporal patterns, all behind the single API you’re already using.
- Adaptive pipeline orchestration - Pipelines that adjust based on context signals. Financial data processes differently during market volatility. Customer segments get tailored enrichment based on business events.
- Streaming updates - Real-time semantic queries optimized for RAG applications, with contextually rich results that go beyond simple vector similarity.
The API is identical in both modes. Code written against the local backend works unchanged when you switch to platform mode. The only difference is what happens behind the scenes.
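To make the ETL-C idea concrete, here is a toy rendering of the four stages. Everything in this sketch is hypothetical: the function names, the record shape, and the `business_context` payload are illustration only, showing how a contextualization step re-attaches meaning that transformation alone discards.

```python
def extract(raw_rows):
    """Extract: pull rows from a source system."""
    return list(raw_rows)

def transform(rows):
    """Transform: normalize fields (and, incidentally, strip provenance)."""
    return [{**r, "name": r["name"].strip().title()} for r in rows]

def load(rows, store):
    """Load: write into the target store."""
    store.extend(rows)
    return store

def contextualize(rows, business_context):
    """Contextualize: re-attach the business meaning ETL normally loses."""
    return [{**r, "context": business_context} for r in rows]

store = []
rows = contextualize(
    transform(extract([{"name": " j. smith "}])),
    business_context={"incident": "#891", "region": "EU"},
)
load(rows, store)
```

The loaded record now carries both the normalized value and the situation it belongs to, which is what downstream AI systems need to avoid filling the gap with guesses.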
Pipeline Integration
Context ingestion often happens as part of existing data pipelines. The Python package includes an Apache Airflow operator for this:
from rotalabs_context.adapters.airflow import AirflowContextOperator
ingest_task = AirflowContextOperator(
task_id="ingest_daily_metrics",
source="etl-pipeline",
records_callable=lambda: extract_records(),
tags=["metrics", "daily"],
scope="global",
)
This fits into standard DAGs without requiring changes to existing pipeline architecture.
Why This Matters for Agent Trust
We’ve been focused on trust infrastructure at Rotalabs - steering vectors for controlling behavior, RedQueen for adversarial testing, statistical evaluation for measuring what’s real. Context sharing is the connective tissue between all of these.
An agent that can’t access shared context can’t be properly evaluated against it. An agent that operates on stale or incomplete information will produce outputs that fail verification. And an adversarial agent that poisons shared context can undermine every other agent that depends on it.
Getting the context layer right is prerequisite infrastructure. It’s not glamorous work, but it’s the foundation that everything else sits on.
Links
- PyPI: rotalabs-context - pip install rotalabs-context
- npm: @rotalabs/context - npm install @rotalabs/context
- Platform: Rotascale Context Engine
- GitHub: rotalabs
If you’re building multi-agent systems and running into context fragmentation, we’d like to hear about your use case. Reach out at research@rotalabs.ai.