v0.1 — Open Standard

The Open Standard for AI Conversation Outputs

A structured, portable format for extracting and preserving knowledge from AI conversations. Works with any LLM that can read a system prompt.

Read the Spec → Get Started
SOUL.md defines who the agent is. LORE.md defines what the conversation produced.

Chat is a trap

Every day, millions of people have substantive AI conversations — making decisions, debugging systems, designing products, researching topics. Valuable knowledge is produced: decisions with rationale, insights, patterns, open questions, concrete next steps.

Then the conversation ends, and all of it vanishes into a chat log. Chat is an interaction medium pretending to be a storage medium. You can't ask "what did I decide about authentication last month?" and get a structured answer. You can't trace how your thinking evolved across six sessions.

LoreSpec fixes this. It defines a structured format for extracting the durable knowledge from any AI conversation and preserving it in a way that compounds over time.


Two layers of memory

Every conversation produces two kinds of knowledge — the experience and the content. LoreSpec captures both, mirroring how human memory actually works.

📖 Episodic Layer

The Session Arc — the story of the conversation. Where it started, the pivots where thinking changed direction, where it landed. Pivots aren't mistakes; they're evidence of active sensemaking.

🧠 Semantic Layer

Knowledge Objects — the extractable content. Decisions, insights, patterns, solutions. Each one standalone, searchable, and linked to the episodic context that produced it.


8 knowledge types

A minimal ontology that covers every kind of durable knowledge an AI conversation can produce. Validated across product strategy, authentication design, DevOps, competitive analysis, and more.

- Artifact (Tangible Outputs): docs, specs, code, plans, frameworks produced during the session
- Decision (Choices Made): full argumentative structure — issue, positions, arguments, warrant, status
- Insight (Facts & Observations): context-free knowledge retrievable without the original conversation
- Pattern (Reusable Methods): procedural knowledge — knowing how, not just knowing that
- Open Question (Unresolved Issues): questions raised but not answered, with partial progress and blockers
- Reference (Resources Found): tools, companies, articles, repos discovered, and why they matter
- Next Step (Concrete Actions): commitments that emerged, with urgency and dependencies
- Solution (Problems Fixed): problem → fix → why it works → caveats; standalone debugging value
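As one illustration, a Solution object could serialize along these lines. The scenario and field names below are invented from the description above; they are not the spec's normative schema:

```yaml
# Hypothetical Solution object; scenario and field names are illustrative only.
type: solution
title: Stale permissions after role change
problem: Users kept old permissions until their JWT expired
fix: Re-issue the access token whenever a role changes
why_it_works: Claims are baked into the token at issue time
caveats: Adds one token round-trip per role change
```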

Decisions get special treatment

Most tools treat decisions as flat facts. LoreSpec captures the full argumentative structure from IBIS and Toulmin — not just what was decided, but the reasoning that makes the decision evaluable and revisable.

The warrant — the unstated assumption connecting evidence to conclusion — is consistently the most valuable field. It tells future-you what belief would need to change to revisit the decision.

```yaml
decision: Use buyer-request/seller-approve model
issue: How should ownership transfer work?
positions:
  - Dealer-push — dealer enters buyer email
  - Buyer-request — buyer scans QR, requests
  - Mark-as-sold with open claim
warrant: The party with the strongest incentive
  should drive the process
qualifier: Settled for Phase 1
status: settled
```

The network is the knowledge

An isolated decision is a fact. A decision linked to the insights that informed it, the alternatives rejected, and the actions implied — that's knowledge. Connections are not metadata. They are the knowledge base.
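As a minimal sketch of why typed connections matter, here is a tiny in-memory graph. The object IDs and the connection names ("informed_by", "implies") are invented for illustration; they are not LoreSpec's normative seven connection types.

```python
# Minimal sketch: knowledge objects as nodes, typed connections as edges.
# IDs and connection names here are hypothetical, not the spec's normative set.

objects = {
    "dec-1": {"type": "decision", "title": "Use buyer-request/seller-approve model"},
    "ins-1": {"type": "insight", "title": "Buyers have the strongest incentive to act"},
    "ns-1":  {"type": "next_step", "title": "Design the QR claim flow"},
}

connections = [
    ("dec-1", "informed_by", "ins-1"),  # the insight that informed the decision
    ("dec-1", "implies", "ns-1"),       # the action the decision implies
]

def linked(obj_id, relation):
    """Titles of objects reachable from obj_id via the given connection type."""
    return [objects[dst]["title"]
            for src, rel, dst in connections
            if src == obj_id and rel == relation]

print(linked("dec-1", "informed_by"))
# → ['Buyers have the strongest incentive to act']
```

Querying `linked("dec-1", "informed_by")` answers "what belief was this decision built on?" — exactly the question a flat chat log cannot.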

Trails

A trail is a named path through connected objects across multiple sessions. When conversations about the same topic happen weeks apart, trails link them into a coherent narrative.

Inspired by Bush's associative paths (1945), Luhmann's branching sequences, and Tulving's episodic threads. The Scribe is Bush's "trail blazer," automated.
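In code, a trail can be as simple as a named, ordered list of object references. Everything in the sketch below is hypothetical: the session labels, the IDs, and the dict shape; the spec's actual trail syntax may differ.

```python
# Sketch: a trail as a named, ordered path of (session, object-id) steps.
# Session dates and object IDs are hypothetical.

trail = {
    "name": "ownership-transfer",
    "steps": [
        ("2025-05-02", "ins-1"),  # insight from the first session
        ("2025-05-02", "dec-1"),  # the decision it informed
        ("2025-05-23", "sol-1"),  # a fix from a session three weeks later
    ],
}

def replay(trail):
    """Return the trail's steps in narrative order, oldest first."""
    return [f"{session}: {obj_id}" for session, obj_id in trail["steps"]]

print(replay(trail))
```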


Grounded in research

Every design decision in LoreSpec maps to an established framework from cognitive psychology, information science, or epistemology. Independently validated against primary sources.

- 1945 · Bush's Memex: associative trails through linked information → cross-session trails
- 1958 · Toulmin: warrants and qualifiers → the "why behind the why" in decisions
- 1970 · Rittel & Kunz's IBIS: issue → position → argument → the Decision object structure
- 1972 · Tulving: episodic vs. semantic memory → the two-layer digest
- 1981 · Luhmann's Zettelkasten: atomicity and linking → knowledge objects and connections
- 1983 · Anderson's ACT-R: declarative → procedural knowledge → Pattern objects
- 1989 · Ackoff's DIKW: the information → knowledge transition → what the Scribe does
- 1995 · Nonaka & Takeuchi's SECI: externalization → combination → the knowledge spiral


The ecosystem

- LoreSpec (v0.1 · Ready): the standard. Defines the LORE.md format — 8 object types, 7 connection types, session classification, trails.
- The Scribe (Ready): system prompt that extracts LORE.md from any conversation. Works with Claude, ChatGPT, Gemini — any LLM.
- Open Brain Import (Planned): import LORE.md into Open Brain as properly chunked, typed, tagged thoughts optimized for retrieval.
- Lore CLI (Planned): process conversation exports into LORE.md files from the command line.
- Lore MCP (Planned): serve your lore library to any AI client via the Model Context Protocol.
- lorespec.org: landing page, documentation, and community hub for the standard.


Get started

1. Grab the Scribe: copy the Scribe system prompt into a Claude Project, a ChatGPT custom GPT, or any LLM system message.
2. Feed it a conversation: paste a conversation export or transcript. The Scribe classifies the session and extracts a structured LORE.md.
3. Connect & compound: import into Open Brain, Obsidian, Notion, or any vector store. Trails form. Knowledge compounds across sessions.
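A sketch of the import step: splitting a LORE.md into one chunk per knowledge object before embedding. It assumes, purely for illustration, that each object is a `## ` markdown section; the spec defines the real file layout.

```python
# Sketch: one chunk per knowledge object, ready for embedding.
# Assumes (hypothetically) that each object is a "## " markdown section.

def chunk_lore(text):
    """Return (heading, body) pairs, one per '## ' section."""
    chunks, heading, body = [], None, []
    for line in text.splitlines():
        if line.startswith("## "):
            if heading is not None:
                chunks.append((heading, "\n".join(body).strip()))
            heading, body = line[3:].strip(), []
        elif heading is not None:
            body.append(line)
    if heading is not None:
        chunks.append((heading, "\n".join(body).strip()))
    return chunks

sample = """# LORE.md
## Decision: Use buyer-request model
warrant: strongest incentive drives the process
## Insight: QR codes lower buyer friction
context-free and retrievable
"""

for title, body in chunk_lore(sample):
    print(title, "->", len(body), "chars")
```

Chunking per object (rather than per fixed token window) keeps each retrieved passage self-contained, which is the point of making objects atomic in the first place.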

Stop losing what you figure out

LoreSpec is MIT-licensed and open to contributions. Test the Scribe, propose new object types, build integrations.

View on GitHub Read the Spec