For Developers

Memory that knows when it's wrong.

Most AI memory systems store facts and hope for the best. Brain stores facts, tracks where they came from, detects when they contradict each other, and tells your agent before it acts on bad data. It's memory with a built-in immune system.

Newest overwrites best

One bad upload silently replaces ten good sources. Your agent doesn't know the difference.


Contradictions go undetected

Source A says revenue is $10M. Source B says $8M. Your agent averages them and moves on.


Memory doesn't learn

Correct a fact today. The system doesn't learn which source was wrong. Same mistake, different fact, next week.

How Brain compares to what you're using now.

Vector stores, key-value memory, and append-only graphs solve storage. They don't solve truth.

| Capability | Vector memory | KV / append-only | Graph RAG | Brain |
|---|---|---|---|---|
| Structured relationships | No — flat embeddings | No — key-value | Yes | Yes — consensus-scored |
| Multi-source tracking | No — one embedding per fact | No — newest wins | Partial — no scoring | Yes — every claim tracked to source |
| Contradiction detection | None | None | None | 13 parallel detectors, proactive |
| Confidence scoring | Cosine similarity only | None | Basic — no source weight | Multi-dimensional: evidence, authority, expert, temporal |
| Self-improvement | Requires re-embedding | None | None | Every resolution trains the model |
| Entity deduplication | None | None | String matching | 4-stage: exact → pattern → semantic → LLM |
| Query-time governance | None | None | None | ALLOW / WARN / BLOCK per query |
| Audit trail | None | Write log only | Basic versioning | Hash-chained, tamper-proof |

See the difference in one API call.

Left: what your agent gets from a typical memory system. Right: what it gets from Brain.

Typical memory response
// GET /memory?query="Acme Corp revenue"

{
  "result": "Acme Corp revenue is $10M",
  "similarity": 0.94,
  "source": "pitch_deck_v3.pdf"
}

// Looks great. Except...
// Board minutes from last month say $8M.
// The memory doesn't know.
// Your agent doesn't know.
// The wrong number ships.
Brain response
// POST /gate/query  { "query": "Acme Corp revenue" }

{
  "verdict": "WARN",
  "query": "Acme Corp revenue",
  "conflicts": [
    {
      "claim_a": "$10M",  // pitch_deck_v3 (auth: 0.71)
      "claim_b": "$8M",   // board_minutes (auth: 0.87)
      "type": "numerical_contradiction"
    }
  ],
  "consensus": 0.52,  // low confidence — contested
  "recommendation": "present both values with sources"
}
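A minimal sketch of what an agent might do with a gate response. The field names follow the response shape shown above; the routing policy itself (`act_on_gate` and its return strings) is illustrative, not part of the SDK.

```python
def act_on_gate(response: dict) -> str:
    """Branch agent behaviour on the gate verdict."""
    if response["verdict"] == "ALLOW":
        return "proceed"
    if response["verdict"] == "WARN":
        # Surface every contested value instead of silently picking one.
        pairs = [f'{c["claim_a"]} vs {c["claim_b"]}' for c in response["conflicts"]]
        return "surface both: " + "; ".join(pairs)
    return "halt"  # BLOCK: refuse to act on contested knowledge
```

The point is that the verdict, not the similarity score, drives the control flow.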

What you get.

Everything a memory system should do — plus everything they don't.

Proactive conflict detection

13 parallel detectors continuously scan for contradictions using structural graph queries. Conflicts are found before your agent encounters them — not after.
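The fan-out pattern can be sketched in a few lines. One toy detector is shown; the claim shape, detector signature, and thread-pool fan-out are assumptions for illustration, not Brain's internals (which run structural graph queries).

```python
from concurrent.futures import ThreadPoolExecutor

def numerical_contradiction(claims):
    """One detector: flag differing values for the same (entity, attribute)."""
    seen, conflicts = {}, []
    for c in claims:
        key = (c["entity"], c["attribute"])
        if key in seen and seen[key]["value"] != c["value"]:
            conflicts.append({"type": "numerical_contradiction",
                              "a": seen[key], "b": c})
        seen.setdefault(key, c)
    return conflicts

def scan(claims, detectors):
    """Fan the same claim set out to every detector in parallel, merge hits."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda d: d(claims), detectors)
    return [hit for found in results for hit in found]
```

Because detectors are independent, adding a fourteenth is just appending to the list.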

Consensus scoring

Every fact carries a confidence score based on evidence weight, source authority, expert validation, and temporal recency. Your agent always knows how much to trust each answer.
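One way to picture the blend: a weighted sum of the four dimensions. The weights below are illustrative defaults, not Brain's published calibration.

```python
def consensus_score(evidence: float, authority: float,
                    expert: float, recency: float,
                    weights=(0.35, 0.30, 0.20, 0.15)) -> float:
    """Blend four dimensions, each in [0, 1], into one confidence score.

    Weights are an assumed example calibration.
    """
    dims = (evidence, authority, expert, recency)
    return round(sum(w * d for w, d in zip(weights, dims)), 4)
```

A contested fact with middling evidence lands well below a well-supported one, so the agent can rank answers rather than treat them all as equally true.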

Learning flywheel

Every conflict resolution updates source authority scores across the entire graph. Correct a fact about Source X today — every other fact from Source X recalibrates automatically.
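A sketch of the recalibration step, assuming a simple exponential-moving-average update (the learning rate and update rule are illustrative, not Brain's actual training loop):

```python
def recalibrate(authority: dict, source: str,
                was_correct: bool, lr: float = 0.2) -> dict:
    """Nudge one source's authority toward 1.0 (vindicated) or 0.0 (wrong).

    Every fact from that source is then rescored with the new value.
    """
    target = 1.0 if was_correct else 0.0
    updated = dict(authority)  # leave the original mapping untouched
    updated[source] += lr * (target - updated[source])
    return updated
```

Resolving one conflict against `pitch_deck_v3` lowers its authority, which lowers the consensus score of every other claim it backs.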

Multi-source provenance

Every claim is tracked to its source document, timestamp, and extraction context. Never wonder "where did this fact come from?" again. Full chain of custody.

Query Gate API

One API call before any action. Returns ALLOW, WARN, or BLOCK based on the consensus state of knowledge relevant to that query. Deterministic, not probabilistic.
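"Deterministic" means the mapping from consensus state to verdict is a fixed rule, roughly like this sketch (the threshold values are assumptions, not Brain's defaults):

```python
def gate_verdict(consensus: float, has_open_conflicts: bool,
                 allow_at: float = 0.8, block_below: float = 0.3) -> str:
    """Map consensus state to a verdict. Same inputs, same answer, every time."""
    if consensus < block_below:
        return "BLOCK"
    if has_open_conflicts or consensus < allow_at:
        return "WARN"
    return "ALLOW"
```

The $10M/$8M example earlier — consensus 0.52 with an open conflict — falls squarely in WARN territory under these thresholds.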

4-stage entity resolution

Exact match → pattern matching (with formatting normalization) → semantic comparison → LLM adjudication. Catches "Microsoft Corp" = "MSFT" = "Microsoft" while keeping "Apple Inc" ≠ "apple extract."
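The staged design can be sketched as cheap checks first, escalating only on a miss. The normalization rules, alias table, similarity threshold, difflib stand-in for semantic comparison, and the stubbed `llm_judge` hook are all illustrative assumptions:

```python
import re
from difflib import SequenceMatcher

def _norm(name: str) -> str:
    """Lowercase, drop punctuation and common corporate suffixes."""
    s = re.sub(r"[.,]", "", name.lower())
    return re.sub(r"\b(corp|corporation|inc|ltd)\b", "", s).strip()

def resolve(a: str, b: str, aliases=None, threshold=0.85, llm_judge=None):
    """Return the stage that matched, or None."""
    aliases = aliases or {}
    canon = lambda s: aliases.get(s, s)
    if a == b:
        return "exact"
    if _norm(a) == _norm(b):
        return "normalized"                  # formatting differences only
    if _norm(canon(a)) == _norm(canon(b)):
        return "pattern"                     # ticker / known-alias match
    if SequenceMatcher(None, _norm(a), _norm(b)).ratio() >= threshold:
        return "semantic"                    # stand-in for embedding similarity
    if llm_judge and llm_judge(a, b):
        return "llm"                         # expensive last resort
    return None
```

Most pairs resolve in the first two stages; only genuine near-misses pay for the expensive ones.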

Temporal awareness

Domain-configurable decay rates. Fintech metrics go stale in weeks. Clinical trial data stays relevant for years. Your agent always knows how fresh each fact is.
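A common way to model this is exponential decay with a per-domain half-life. The presets below are illustrative, not Brain's shipped defaults:

```python
HALF_LIVES_DAYS = {            # illustrative presets
    "fintech_metric": 30,      # stale in weeks
    "clinical_trial": 3 * 365, # relevant for years
}

def freshness(age_days: float, domain: str) -> float:
    """Temporal confidence in [0, 1]: halves every half-life."""
    return 0.5 ** (age_days / HALF_LIVES_DAYS[domain])
```

The same 90-day-old fact scores very differently depending on its domain, which is exactly the behaviour a single global decay rate can't give you.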

MCP native

29 tools exposed via Model Context Protocol. Any MCP-compatible agent runtime can use Brain as its memory layer out of the box. Claude, OpenAI, LangChain — your choice.

Sealed audit trail

Every write, every conflict, every resolution — hash-chained into an immutable ledger. Export to CSV or PDF. EU AI Act Article 12 compliant out of the box.
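The hash-chaining idea in miniature, using stdlib SHA-256 (the entry shape and genesis value are illustrative, not Brain's ledger format):

```python
import hashlib
import json

GENESIS = "0" * 64

def append_entry(ledger: list, event: dict) -> list:
    """Each entry's hash covers the event AND the previous entry's hash,
    so editing any past record breaks every hash after it."""
    prev = ledger[-1]["hash"] if ledger else GENESIS
    payload = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    return ledger + [{"event": event, "prev": prev, "hash": digest}]

def verify(ledger: list) -> bool:
    """Recompute the chain from genesis; any tamper shows up as a mismatch."""
    prev = GENESIS
    for entry in ledger:
        payload = json.dumps({"event": entry["event"], "prev": prev},
                             sort_keys=True)
        if entry["prev"] != prev or \
           entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True
```

Silently rewriting a past entry is detectable by anyone who replays the chain.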

Where Brain sits in your stack

┌─────────────────────────────────────────────────────────────┐
│                      Your Application                       │
│   LangChain · LangGraph · CrewAI · custom agent · any LLM   │
└────────────────────────────┬────────────────────────────────┘
                             │ brain.gate("query")
┌────────────────────────────┴────────────────────────────────┐
│                     Brain — Truth Layer                     │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────────┐   │
│  │  Query Gate  │  │   Conflict   │  │    Consensus     │   │
│  │ ALLOW | WARN │  │  Detection   │  │  Scoring Engine  │   │
│  │   | BLOCK    │  │ (13 parallel)│  │ (self-improving) │   │
│  └──────────────┘  └──────────────┘  └──────────────────┘   │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────────┐   │
│  │    Entity    │  │    Expert    │  │   Audit Ledger   │   │
│  │  Resolution  │  │   Routing    │  │  (hash-chained)  │   │
│  │  (4-stage)   │  │  (weighted)  │  │                  │   │
│  └──────────────┘  └──────────────┘  └──────────────────┘   │
└────────────────────────────┬────────────────────────────────┘
                             │
┌────────────────────────────┴────────────────────────────────┐
│     Knowledge Graph (Neo4j) + Vector Store (Pinecone)       │
│     Entities · Relationships · Claims · Consensus Scores    │
└─────────────────────────────────────────────────────────────┘

Start in 5 minutes.

Add Brain to an existing agent in three steps. No data migration required.

Step 1 — Install

Add the SDK

Python, TypeScript, REST, or MCP — pick your interface.

pip install brain-sdk
# or: npm install @theup/brain
Step 2 — Connect

Wrap your agent

Three lines of config. Brain runs as middleware.

from brain import BrainClient

brain = BrainClient(
    api_key="sk-brain-..."
)
Step 3 — Query

Check before acting

One call returns a verdict with full context.

verdict = brain.gate(
    "What is Acme revenue?"
)
# → ALLOW | WARN | BLOCK
Interfaces: Python · TypeScript · LangChain · LangGraph · CrewAI · MCP · REST API · CLI

Give your agent memory it can trust.

Brain drops into your existing stack as middleware. No rewrite, no migration. Start in advisory mode — Brain logs everything but blocks nothing. Harden to strict when you're ready.
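The advisory-to-strict progression can be expressed as a one-line policy switch. This wrapper (`enforce`, its `mode` values, and the log list) is an illustrative sketch of the rollout pattern, not an SDK API:

```python
def enforce(verdict: str, mode: str = "advisory", log: list = None) -> bool:
    """Advisory mode records every verdict but never blocks;
    strict mode stops the action on BLOCK."""
    if log is not None:
        log.append(verdict)  # audit what strict mode *would* have done
    if mode == "advisory":
        return True
    return verdict != "BLOCK"
```

Running in advisory mode first lets you review the would-be blocks in the log before flipping to strict.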