You explain the same things every session.
Who you are, what you're working on, why you made that decision three weeks ago, what "Val" means, which project is which. Every time.
Context for AI assistants
A structured way to store context — projects, people, decisions, vocabulary — in plain Markdown inside your Obsidian vault, so any AI assistant can load it.
vault/
|-- CLAUDE.md
|-- COPILOT.md
|-- TASKS.md
`-- memory/
    |-- glossary.md
    |-- context/
    |-- people/
    |-- projects/
    `-- decisions/
No database, no embeddings, no API. Markdown files in folders, readable by humans and machines alike.
Glossary, projects, people, tasks, decisions — structured in your vault, loaded on demand. The AI reads; you stop repeating yourself.
How it works
There's no clever trick. You organize context into files with a consistent structure, and the AI follows rules about what to read first.
Projects, people, vocabulary, tasks, decisions — each in its own file, with frontmatter for metadata. Human-readable, grep-friendly.
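For instance, a project file might look like this. The frontmatter fields shown here are illustrative, not a schema the guide mandates:

```markdown
---
type: project
status: active
updated: 2026-03-16
---

# Concordance

Mapping medieval pigment recipes to spectroscopy data.
Related: [[marta-delvaux]], [[DEC-001 - Concordance data format]]
```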
A master file and a memory index tell the AI what to read first and what to load only when relevant. Context windows are finite.
The guide includes review routines. Stale context is worse than no context, so maintenance is part of the design.
See it in action
The repo includes a working example — a fictional art conservator who maps medieval pigment recipes to spectroscopy data. Different domain, same system.
minimal-vault/
|-- CLAUDE.md
|-- TASKS.md
`-- memory/
    |-- ContextSummary.md
    |-- glossary.md
    |-- context/
    |   |-- professional.md
    |   `-- personality.md
    |-- people/
    |   |-- marta-delvaux.md
    |   `-- tobias-ackermann.md
    |-- projects/
    |   |-- concordance.md
    |   `-- gallery-work.md
    `-- decisions/
        |-- ContextSummary.md
        `-- DEC-001 - Concordance data format.md
# Context for AI sessions
## Who I am
- **Name:** Elena Voss
- **Role:** Art conservator and pigment researcher
- **Base:** Berlin, Germany (studio in Kreuzberg)
- **Day job:** Senior conservator at the Gemäldegalerie
- **Side project:** Reverse-engineering historical pigment recipes using spectroscopy data and period source texts
## How to interact with me
- **Tone:** Direct, technical when needed, no hand-holding
- **I value:** Precision in terminology, proper citation
- **When I say "the manuscript"** I mean Strasbourg MS (15th c.) unless I specify otherwise
## Vault structure
- Research/ — Pigment analysis, treatise translations
- Concordance/ — The mapping project: recipes → spectra
- Gallery/ — Work notes, condition reports
- memory/ — AI memory system (this folder)
## Acronyms
| Term | Meaning |
|------|--------------------------------------|
| XRF | X-ray fluorescence |
| FORS | Fiber optic reflectance spectroscopy |
| GG | Gemäldegalerie |
| LTY | Lead-tin yellow |
## Internal terms
| Term | Meaning |
|---------------|--------------------------------------|
| Concordance | Pigment recipe → spectroscopy project|
| the manuscript| Strasbourg MS (15th c.) |
| Gallery 5 | Current Cranach survey assignment |
| Ghent samples | XRF dataset from KIK-IRPA collab |
## Nicknames
| Nickname | Person |
|------------|--------------------------------------|
| Tobias | Tobias Ackermann, lab colleague |
| Dr. Krause | Dr. Ingrid Krause, department head |
| Marta | Marta Delvaux, KIK-IRPA Brussels |
# DEC-001 - Concordance data format
**Status:** Accepted
**Date:** 2026-03-16
**Scope:** Concordance project
## Context
The Concordance project needs a data format for
storing recipe-to-spectrum mappings. Marta suggested
SQLite. Tobias mentioned Airtable.
## Decision
Use Markdown files (one per entry) with structured
headings + companion CSV files for spectral data.
## Alternatives considered
- **SQLite** — Better for queries, but adds tooling
dependency. Breaks "everything in Obsidian" principle.
- **Airtable** — Good UI, but proprietary and cloud-
dependent. No wikilinks, no version control.
- **JSON** — Machine-readable but painful to edit by
hand. Markdown is both human- and machine-readable.
## Consequences
- Entries are part of the Obsidian knowledge graph
- Querying requires grep or Python, not SQL
- Revisit if dataset grows past 500 entries
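The "grep or Python, not SQL" consequence fits in a few lines. A sketch of a hypothetical helper (not part of the repo) that scans a folder of Markdown entries for a term:

```python
from pathlib import Path

def find_entries(root: str, term: str) -> list[str]:
    """Return paths of Markdown files under root that mention term (case-insensitive)."""
    hits = []
    for path in Path(root).rglob("*.md"):
        if term.lower() in path.read_text(encoding="utf-8").lower():
            hits.append(str(path))
    return sorted(hits)
```

Plain `grep -ril "lead-tin" memory/` does the same job from the shell.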
Design choices
No database, no vector store, no embeddings. Your context is in files you can read, edit, version, and move with standard tools.
Claude Code, Copilot, Cursor, ChatGPT — the same files work across tools. You adapt the loading mechanism, not the data.
This came from actual work — writing, software, personal context, long-running projects. The structure reflects what survived contact with reality.
The system stores the rationale behind changes. Future sessions get the "why", not just the latest snapshot.
Folders, Markdown files, frontmatter, a loading order. That's the entire stack. Complexity goes into the content, not the plumbing.
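That stack is small enough to script against. A minimal frontmatter reader, assuming the flat `key: value` fields shown above (no external YAML library, so nested structures are out of scope):

```python
def read_frontmatter(text: str) -> dict[str, str]:
    """Parse simple key: value frontmatter from a Markdown file's text."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":  # closing delimiter: frontmatter ends here
            return meta
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return {}  # no closing delimiter: treat as no frontmatter
```

For anything fancier, a real YAML parser is the right tool; the point is that nothing in the system requires one.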
The repo includes examples for packaging workflows as installable commands and skills, for tools that support them.
Compatibility
- **Claude Code** — Reads CLAUDE.md natively. Best integration out of the box.
- **GitHub Copilot** — Uses COPILOT.md and workspace context. Plugin flows available.
- **ChatGPT** — Attach or paste the relevant files when starting a session.
- **Other tools** — Adapt the files to each tool's context mechanism. The data doesn't change.
Get started
The whole system is in the repo. Fork it, copy what works, change what doesn't. It's Markdown — you'll figure it out.
FAQ
**Do I need Obsidian?** No. The system is Markdown files in folders — any text editor works. Obsidian is where this was built and where it works best (wikilinks, graph view, frontmatter), but it's not a dependency.
**Does it only work with Claude?** No. Claude Code has the best native integration, but the files work with any AI that can read them. The guide covers multiple tools.
**Does the AI update the memory for me?** No. You maintain the files, you choose what to load. The AI reads structured context — it doesn't build it for you.
**Why not a vector database?** Different layer. A vector database is good for semantic search over thousands of documents. This system handles the 50–500 files that define who you are — identity, preferences, active projects, people, decisions. That kind of curated personal context doesn't need embeddings; it needs structure and transparency.
With Markdown you get full visibility into what the AI "knows", version control via git, zero infrastructure, and portability across any AI tool that reads text. A vector store gives you none of that. If you're already running RAG for other reasons, this system complements it — it's the personal layer that RAG is bad at.