Pre-launch · July 2026

AI memory that actually understands what it remembers.

Nastalgic runs a 4-stage extraction pipeline — Resolver → Facts → Inference → Graph — turning raw conversations into a knowledge graph your agents can reason over. Not compressed chat logs. Structured intelligence.

No spam. One email when we launch.

The problem

AI agents have amnesia. You already know this — you've shipped around it.

Existing tools either compress chat logs into embeddings, or paywall the actually useful parts behind enterprise plans. Nastalgic does neither.

How it works

Extraction, not compression.

Four stages turn unstructured conversation into a queryable knowledge graph. Same pipeline runs on every message, in every vault.

extraction pipeline · 5 nodes
step 0 · Messages (raw input)
step 1 · Resolver (entities)
step 2 · Facts (claims · evidence)
step 3 · Inference (causal chains)
step 4 · Knowledge Graph (queryable · traversable) · output
01 · structured_extraction

Structured Extraction

Four stages. We don't compress chat logs. We extract entities, facts, relationships, and causal chains — the units your agent can actually reason with.
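As a rough sketch, the four stages read as a dataflow. Everything below is illustrative Python, not the SDK: the alias table, the `subject|predicate|object` message format, and every name are invented for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    subject: str
    predicate: str
    obj: str
    evidence: str  # id of the source message

# Stage 1 (Resolver): map surface mentions to canonical entity ids. Toy table.
ALIASES = {"Sarah": "person:sarah", "she": "person:sarah", "the repo": "artifact:repo"}

def resolve(mention: str) -> str:
    return ALIASES.get(mention, mention)

# Stage 2 (Facts): parse toy "subject|predicate|object" messages into claims.
def extract_facts(messages):
    return [
        Fact(resolve(s), p, resolve(o), msg_id)
        for msg_id, text in messages
        for s, p, o in [text.split("|")]
    ]

# Stage 3 (Inference): chain facts where one claim's object is another's subject.
def infer_chains(facts):
    return [(a, b) for a in facts for b in facts if a.obj == b.subject]

# Stage 4 (Graph): index claims by subject so agents can query by entity.
def build_graph(facts):
    graph = {}
    for f in facts:
        graph.setdefault(f.subject, []).append((f.predicate, f.obj, f.evidence))
    return graph

messages = [("m1", "Sarah|maintains|the repo"), ("m2", "artifact:repo|depends_on|lib:qdrant")]
facts = extract_facts(messages)
graph = build_graph(facts)
```

Each stage consumes the previous stage's output; nothing downstream ever touches raw text again, which is what makes the result queryable.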

02 · knowledge_graph

Knowledge Graph

13 entity types. Directed edges with confidence scores. Causal chain detection. Not a vector store with a marketing budget.
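The shape being claimed, in a hypothetical sketch. Field names are assumptions, not Nastalgic's schema; the node types and confidence values are invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    id: str
    type: str          # e.g. "person", "organization" (13 types in the real graph)

@dataclass(frozen=True)
class Edge:
    src: str
    dst: str
    relation: str
    confidence: float  # every directed edge carries a score in [0, 1]

def neighbors(edges, node_id, min_confidence=0.5):
    """Traverse outgoing edges above a confidence threshold."""
    return [e for e in edges if e.src == node_id and e.confidence >= min_confidence]

nodes = [Node("person:sarah", "person"), Node("org:acme", "organization")]
edges = [
    Edge("person:sarah", "org:acme", "works_at", 0.92),
    Edge("person:sarah", "event:launch", "attended", 0.41),
]
```

Confidence on edges is what lets an agent decide how much to trust a hop, not just whether the hop exists.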

vs. Mem0 — graphs on every plan, not the $249/mo tier.
03 · vault_isolation

Vault Isolation

Per-tenant from day one. Free = logically isolated. Paid = dedicated DB and vector store. Enterprise = dedicated infrastructure. Architecture, not upsell.
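The tier model can be pictured as a routing table. Backend names below are placeholders, not Nastalgic's real infrastructure.

```python
# Toy routing table for the tier model above; every name is illustrative.
BACKENDS = {
    "free":       {"db": "shared-pg",    "vectors": "shared-qdrant",    "isolation": "logical"},
    "paid":       {"db": "tenant-pg",    "vectors": "tenant-qdrant",    "isolation": "dedicated-db"},
    "enterprise": {"db": "dedicated-pg", "vectors": "dedicated-qdrant", "isolation": "infrastructure"},
}

def backend_for(plan: str, vault_id: str) -> dict:
    config = dict(BACKENDS[plan])
    config["vault"] = vault_id   # every query is vault-scoped, regardless of tier
    return config
```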

04 · two_stage_rag

Two-Stage RAG

Qdrant vector similarity, then CrossEncoder reranking. Good retrieval isn't just embeddings — it's embeddings plus a model that reads.

The fundamentals

Built on research, not buzzwords.

Nastalgic implements well-established, published techniques from NLP, information retrieval, and knowledge representation. No magic. Just the field's vocabulary, applied carefully.

coreference_resolution

Neural Coreference Resolution

Pronouns, ellipses, and "that thing we talked about" get resolved back to the canonical entity. When memory recalls a conversation about Sarah from three sessions ago, it knows which Sarah.

// output: every reference linked to a single entity ID
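A toy version of that contract. The real resolver is neural; the "most recent entity" antecedent rule here is deliberately naive, and all names are invented.

```python
# Toy coreference pass: link unresolved mentions back to the last known entity.
def resolve_references(mentions):
    """mentions: (surface_form, entity_id) pairs; entity_id=None means unresolved."""
    resolved, antecedent = [], None
    for surface, entity_id in mentions:
        if entity_id is None:            # pronoun, ellipsis, "that thing..."
            entity_id = antecedent       # link back to the canonical entity
        else:
            antecedent = entity_id
        resolved.append((surface, entity_id))
    return resolved

mentions = [("Sarah", "person:sarah"), ("she", None), ("her project", None)]
resolved = resolve_references(mentions)
```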
entity_recognition · entity_linking

Named Entity Recognition + Linking

Thirteen entity types — people, organizations, locations, events, artifacts, and more — extracted from raw conversation and linked across the entire vault. Same entity, same node, regardless of how it was referenced.

// output: 13 typed entities, vault-scoped
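Sketched as a toy recognizer plus linker. The type and alias tables are invented; what matters is the contract: same entity, same node id, whatever the surface form.

```python
# Toy NER + linking tables; not the real model or schema.
ENTITY_TYPES = {"person:sarah": "person", "org:acme": "organization"}
ALIAS_TO_NODE = {"Sarah": "person:sarah", "Sarah Chen": "person:sarah", "Acme": "org:acme"}

def link(vault_id: str, surface: str):
    node = ALIAS_TO_NODE.get(surface)
    if node is None:
        return None                      # unknown mention: nothing linked here
    # Vault-scoped node id plus its type label.
    return (f"{vault_id}/{node}", ENTITY_TYPES[node])
```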
open_information_extraction

Open Information Extraction

Free-form messages get parsed into structured (subject, predicate, object) claims with attached evidence. The pipeline isn't compressing the conversation — it's extracting the facts inside it.

// output: structured claims with provenance
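A deliberately tiny open IE sketch, assuming clean "Subject verb Object." sentences. Real open IE handles far messier language; the output shape is what this shows.

```python
import re

# Toy pattern: one-word subject, one-word predicate, free-form object.
PATTERN = re.compile(r"^(?P<subj>\w+) (?P<pred>\w+) (?P<obj>.+)$")

def extract_claims(message_id: str, text: str):
    claims = []
    for sentence in text.split(". "):
        match = PATTERN.match(sentence.rstrip("."))
        if match:
            # (subject, predicate, object) plus provenance back to the message
            claims.append((match["subj"], match["pred"], match["obj"], message_id))
    return claims

claims = extract_claims("m7", "Sarah joined Acme. Acme acquired Initech.")
```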
causal_chain_detection

Causal Chain Detection

Beyond what happened, the inference stage tracks why. Cause-and-effect relationships, temporal ordering, and dependency chains get surfaced as first-class graph structure.

// output: directed causal edges, traversable
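What "first-class graph structure" buys you, in a toy traversal over directed cause-to-effect edges. Event names are invented.

```python
from collections import defaultdict

# Walk cause -> effect edges from a starting event to recover the full chain.
def causal_chain(edges, start):
    graph = defaultdict(list)
    for cause, effect in edges:
        graph[cause].append(effect)
    chain, frontier = [start], [start]
    while frontier:
        node = frontier.pop()
        for effect in graph[node]:
            chain.append(effect)
            frontier.append(effect)
    return chain

edges = [("deploy_failed", "rollback"), ("rollback", "postmortem")]
```

Because causality lives in edges rather than prose, "why did the postmortem happen?" is a traversal, not a re-read of the transcript.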
provenance_tracking

Provenance Tracking

Every fact, every node, every edge links back to its source message via typed evidence edges. No floating claims, no "the model said so." Every assertion has a receipt.

// output: typed evidence edges, fully auditable
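The audit property, as a toy check. The dict shapes are illustrative, not the production schema.

```python
# Every assertion must carry an evidence pointer to a real source message.
def audit(assertions, message_ids):
    """Return the assertions that lack a valid receipt (empty list = auditable)."""
    return [a for a in assertions if a.get("evidence") not in message_ids]

message_ids = {"m1", "m2"}
assertions = [
    {"claim": ("person:sarah", "works_at", "org:acme"), "evidence": "m1"},
    {"claim": ("org:acme", "based_in", "loc:berlin"), "evidence": None},  # floating claim
]
```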
two_stage_retrieval

Two-Stage Hybrid Retrieval

Dense vector similarity for recall (Qdrant), then cross-encoder reranking for precision. The top candidates from stage one get reordered by a model that actually reads — not one that just matches embeddings.

// output: ranked, attributed context
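The same two-stage shape, with trivial stand-ins: hand-rolled cosine similarity where the real system uses Qdrant, and a scoring callable where it uses a CrossEncoder. Vectors and documents are invented.

```python
# Stage-one stand-in: cheap dense similarity for recall.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

def retrieve(query_vec, docs, rerank, k=2):
    # Stage 1: similarity over the whole vault, keep a shortlist of k.
    candidates = sorted(docs, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)[:k]
    # Stage 2: rerank the shortlist with a model that reads the text.
    return sorted(candidates, key=lambda d: rerank(d["text"]), reverse=True)

docs = [
    {"text": "sarah shipped the launch", "vec": [1.0, 0.1]},
    {"text": "unrelated grocery list",   "vec": [0.9, 0.2]},
    {"text": "qdrant config notes",      "vec": [0.1, 1.0]},
]
ranked = retrieve([1.0, 0.0], docs, rerank=lambda text: len(text))
```

The split is the point: stage one keeps recall cheap at vault scale, stage two spends model time only on the shortlist.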

// none of these are new. that's the point.

What's coming

Production launches July 2026.

Python and TypeScript SDKs. REST APIs. Docker self-hosting. Built for builders, shipped when it's ready.

One email when we launch. That's it.