A structured, portable format for extracting and preserving knowledge from AI conversations. Works with any LLM that can read a system prompt.
Every day, millions of people have substantive AI conversations — making decisions, debugging systems, designing products, researching topics. Valuable knowledge is produced: decisions with rationale, insights, patterns, open questions, concrete next steps.
Then the conversation ends, and all of it vanishes into a chat log. Chat is an interaction medium pretending to be a storage medium. You can't ask "what did I decide about authentication last month?" and get a structured answer. You can't trace how your thinking evolved across six sessions.
LoreSpec fixes this. It defines a structured format for extracting the durable knowledge from any AI conversation and preserving it in a way that compounds over time.
Every conversation produces two kinds of knowledge — the experience and the content. LoreSpec captures both, mirroring how human memory actually works.
The Session Arc — the story of the conversation. Where it started, the pivots where thinking changed direction, where it landed. Pivots aren't mistakes; they're evidence of active sensemaking.
Knowledge Objects — the extractable content. Decisions, insights, patterns, solutions. Each one standalone, searchable, and linked to the episodic context that produced it.
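The two-layer model can be sketched as a minimal data model. This is an illustrative sketch only: the class and field names here (`Pivot`, `SessionArc`, `KnowledgeObject`, `session_id`, and so on) are assumptions for clarity, not the spec's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Pivot:
    """A point where the conversation's thinking changed direction."""
    summary: str  # what changed and why (illustrative field)

@dataclass
class SessionArc:
    """Episodic layer: the story of the conversation."""
    started_with: str
    pivots: list[Pivot] = field(default_factory=list)
    landed_on: str = ""

@dataclass
class KnowledgeObject:
    """Semantic layer: a standalone, searchable piece of content."""
    kind: str        # e.g. "decision", "insight", "pattern"
    body: str
    session_id: str  # link back to the episodic context that produced it

# Example: one session yields both an arc and an extracted decision.
arc = SessionArc(
    started_with="How should ownership transfer work?",
    pivots=[Pivot("Shifted from dealer-push to buyer-request")],
    landed_on="Buyer-request/seller-approve model",
)
decision = KnowledgeObject(
    kind="decision",
    body="Use buyer-request/seller-approve model",
    session_id="2024-06-01-transfer-design",  # hypothetical ID scheme
)
```

The point of the split: the arc stays attached to its session, while each knowledge object stands alone but keeps a pointer back to the episode that produced it.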
A minimal ontology that covers every kind of durable knowledge an AI conversation can produce. Validated across product strategy, authentication design, DevOps, competitive analysis, and more.
Docs, specs, code, plans, frameworks produced during the session
Full argumentative structure — issue, positions, arguments, warrant, status
Context-free knowledge retrievable without the original conversation
Procedural knowledge — knowing how, not just knowing that
Questions raised but not answered, with partial progress and blockers
Tools, companies, articles, repos discovered and why they matter
Commitments that emerged, with urgency and dependencies
Problem → fix → why it works → caveats. Standalone debugging value.
Most tools treat decisions as flat facts. LoreSpec captures the full argumentative structure from IBIS and Toulmin — not just what was decided, but the reasoning that makes the decision evaluable and revisable.
The warrant — the unstated assumption connecting evidence to conclusion — is consistently the most valuable field. It tells future-you what belief would need to change to revisit the decision.
```yaml
decision: Use buyer-request/seller-approve model
issue: How should ownership transfer work?
positions:
  - Dealer-push — dealer enters buyer email
  - Buyer-request — buyer scans QR, requests
  - Mark-as-sold with open claim
warrant: The party with the strongest incentive should drive the process
qualifier: Settled for Phase 1
status: settled
```
An isolated decision is a fact. A decision linked to the insights that informed it, the alternatives rejected, and the actions implied — that's knowledge. Connections are not metadata. They are the knowledge base.
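One way to picture connections as the knowledge base itself: objects become nodes, and typed edges carry the meaning. The connection-type names below (`informed_by`, `rejects`, `implies`) and the object IDs are illustrative assumptions, not the spec's actual seven connection types.

```python
# Typed edges between knowledge objects: (source, connection_type, target).
# Names here are hypothetical examples, not the spec's connection vocabulary.
connections = [
    ("decision:transfer-model", "informed_by", "insight:incentive-alignment"),
    ("decision:transfer-model", "rejects",     "position:dealer-push"),
    ("decision:transfer-model", "implies",     "action:build-qr-flow"),
]

def neighbors(obj_id, conns):
    """Everything a given object is connected to, with the edge type."""
    return [(kind, dst) for src, kind, dst in conns if src == obj_id]
```

Asking `neighbors("decision:transfer-model", connections)` surfaces the insight that informed the decision, the alternative it rejected, and the action it implies: the difference between a stored fact and usable knowledge.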
A trail is a named path through connected objects across multiple sessions. When conversations about the same topic happen weeks apart, trails link them into a coherent narrative.
Inspired by Bush's associative paths (1945), Luhmann's branching sequences, and Tulving's episodic threads. The Scribe is Bush's "trail blazer," automated.
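A trail, concretely, is just a named ordered path through object IDs drawn from different sessions. The dict shape and IDs below are assumptions for illustration, not the LORE.md representation.

```python
# A named path through connected objects across multiple sessions.
# Shape and IDs are illustrative, not the spec's serialization.
trail = {
    "name": "ownership-transfer",
    "steps": [
        ("2024-05-10-session", "insight:incentive-alignment"),
        ("2024-06-01-session", "decision:transfer-model"),
        ("2024-06-14-session", "action:build-qr-flow"),
    ],
}

def sessions_spanned(t):
    """Distinct sessions a trail crosses, in order of first appearance."""
    seen = []
    for session, _obj in t["steps"]:
        if session not in seen:
            seen.append(session)
    return seen
```

Here three conversations weeks apart read as one narrative: an insight surfaces, becomes a decision, and implies an action.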
Every design decision in LoreSpec maps to an established framework from cognitive psychology, information science, or epistemology. Independently validated against primary sources.
Associative trails through linked information → cross-session trails
Warrants and qualifiers → the "why behind the why" in decisions
Issue → position → argument → Decision object structure
Episodic vs. semantic memory → the two-layer digest
Atomicity and linking → knowledge objects and connections
Declarative → procedural knowledge → Pattern objects
Information → knowledge transition → what the Scribe does
Externalization → Combination → the knowledge spiral
The standard. Defines the LORE.md format — 8 object types, 7 connection types, session classification, trails.
System prompt that extracts LORE.md from any conversation. Works with Claude, ChatGPT, Gemini — any LLM.
Import LORE.md into Open Brain as properly chunked, typed, tagged thoughts optimized for retrieval.
Process conversation exports into LORE.md files from the command line.
Serve your lore library to any AI client via the Model Context Protocol.
Landing page, documentation, and community hub for the standard.
Copy the Scribe system prompt into a Claude Project, ChatGPT custom GPT, or any LLM system message.
Paste a conversation export or transcript. The Scribe classifies the session and extracts a structured LORE.md.
Import into Open Brain, Obsidian, Notion, or any vector store. Trails form. Knowledge compounds across sessions.
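Once lore accumulates as typed objects, the payoff query from the introduction ("what did I decide about authentication last month?") becomes a simple filter. This is a hypothetical sketch over an assumed in-memory record shape; the sample bodies are invented data and LORE.md parsing is out of scope here.

```python
# Hypothetical in-memory lore library; record shape and contents are
# illustrative assumptions, not a parsed LORE.md.
lore = [
    {"kind": "decision", "topic": "authentication",
     "body": "Use short-lived JWTs with refresh rotation", "date": "2024-06-03"},
    {"kind": "insight", "topic": "authentication",
     "body": "Session revocation is the hard part", "date": "2024-06-03"},
    {"kind": "decision", "topic": "pricing",
     "body": "Flat tier for Phase 1", "date": "2024-06-10"},
]

def decisions_about(topic, objects):
    """Structured answer to 'what did I decide about <topic>?'"""
    return [o for o in objects
            if o["kind"] == "decision" and o["topic"] == topic]
```

Because objects are typed and tagged, the same filter works whether the store is a flat list, Obsidian vault, or vector database.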
LoreSpec is MIT-licensed and open to contributions. Test the Scribe, propose new object types, build integrations.