Give AI the Same Context You Give Yourself

How I use my Obsidian vault to route context to AI agents across coding, writing, and work tasks — and how to know it's actually working.


I needed to integrate curriculum events from Ministry Grid — our training platform — into an engagement analytics adapter, so I asked an AI agent to draft the Jira story. The AI needed to understand the curriculum domain well enough to suggest which events actually matter, like "Downloading a Leader Guide" or "Completing a Training Plan." What I got back was repetitive and unfocused — vague event names, no understanding of which user actions are meaningful in the curriculum space, and acceptance criteria I'd have to rewrite from scratch.

After setting up what I'll walk through in this post, the same prompt produces a story scoped to the right curriculum events, pointing to the correct Ministry Grid codebases, with acceptance criteria tied to specific user actions. The AI knows the domain because it has already loaded my curriculum and Ministry Grid context files.

The difference isn't a better prompt. It's that the AI already knows my conventions before I say a word — because it reads from the same notes I use to plan my work. I use Obsidian, a markdown-based notes app, as the single source of truth for how I work: coding conventions, Jira formats, domain glossaries, writing style guides. An AI coding agent called OpenCode reads those files at the start of every session.

Here's how I set it up.

# The Routing Chain

The whole thing starts with one line in OpenCode's config file:

```json
{
  "instructions": ["~/code/personal/notes/LLM Context/AGENTS.md"]
}
```

That points to a single markdown file in my Obsidian vault. Instead of cramming everything into one massive file, that root file acts as a router — it looks at what I'm doing and loads the right context.

```markdown
I do a few different types of tasks:

- Coding or Programming
- Writing
- Learning

If I'm coding, use the additional context at `@coding/AGENTS.md`
If I'm writing, use the additional context at `@writing/AGENTS.md`
If I'm learning, use the additional context at `@learning/AGENTS.md`

By default, I am doing coding.
```

When I open a coding session, it loads my coding philosophy and language-specific preferences. When I'm writing a blog post, it loads my writing style guide. Same entry point, different paths.

Each of those activity files can route further. My work context, for example, branches into application-specific documentation:

```markdown
## Applications

### Ministry Grid

A platform for church leaders to manage their ongoing curriculum
and to train their leaders. See `@apps/ministry_grid.md` for more
details.

### Curriculum Service

Manages curriculum content and formats. See
`@apps/curriculum_service.md` for more details.

## Jira

See `@jira_conventions.md` for guidelines on creating Jira tickets.

## Terms and Definitions

See the `@glossary.md` file to understand acronyms and company-specific terms.
```

Each application, convention, or domain concept gets its own file. The AI loads only what's relevant to the current task. When I'm working on curriculum, it knows which repos to explore. When I'm writing a Jira ticket, it knows my team's format. I don't have to explain any of this — it's already in the files I maintain anyway.

One practical note: since these context files live outside the working directory, the AI agent needs explicit permission to read them. In OpenCode, that's an `external_directory` permission in the config. This detail is important but very dependent on your tooling — Claude Code's `CLAUDE.md` lives in the repo by default, Cursor's rules are project-scoped, and Claude Projects has no file system to worry about. Check your tool's docs for how it handles external file access.
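Putting the two pieces together, the config might look something like this. Treat it as a sketch: the exact key names and schema vary by OpenCode version, so verify the permission syntax against the current docs before copying it.

```json
{
  "instructions": ["~/code/personal/notes/LLM Context/AGENTS.md"],
  "permission": {
    "external_directory": "allow"
  }
}
```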

Routing tree:

```
opencode.json                        ← Layer 1: Entry Point
└── LLM Context/AGENTS.md
    ├── coding/AGENTS.md             ← Layer 2: Activity Router
    │   ├── coding_philosophy.md     ← Layer 3: Domain Files
    │   └── typescript.md
    └── writing/AGENTS.md
        ├── blog/AGENTS.md
        └── engineering/AGENTS.md
```

But here's the thing about a chain like this: when it works, it's invisible. The AI just produces better output and you don't think about why. When it breaks, it's also invisible — the AI quietly ignores your conventions and you waste time wondering what went wrong. So how do you know the right context actually loaded?

# The Feedback Loop

I treat this the same way I'd treat instrumenting code. In my root context file, I added:

```
I am testing my context loading process. After each prompt, tell me what
context files were used. Highlight this with a unique emoji each time.
```

Now every response ends with a summary of which files were loaded. If that summary is missing, something broke in the chain.

I also embedded identity phrases in each activity context. The writing context says:

```
On every response, address me as "the person who is trying to write"
so that I know this context is working correctly.
```

The coding context says:

```
I am coding! Call me "person who is trying to code!"
```

If I'm writing a blog post and the AI calls me "person who is trying to code," I know the wrong branch loaded. These are cheap, immediate signals — debugging tools for context, not code.

Once you have that visibility, something interesting happens: you start noticing exactly where the AI's output diverges from what you'd produce yourself. And that changes how you maintain the files themselves.

# Context Files Are Living Documents

The routing chain gives you structure. But the real value comes from what's inside the files — and those files are never finished.

The rule I follow: if I'm going to have to explain something to an agent again, it belongs in a context file.

The Jira conventions file didn't start as the clean spec I showed earlier. The first time I asked an AI to write a Jira ticket, the output was nothing like what I'd write. So I added a note about keeping descriptions concise. Next time, the description was better but the structure was wrong — no technical details section. I added that. Over a few iterations, the conventions file evolved into:

```markdown
## Story/Task Description Format

Keep descriptions concise with two sections:

1. **Brief description** (1-2 sentences) - What needs to be done and why
2. **Technical Details** (bullet list) - Specific implementation details,
   repos, configurations
```

Now the AI's output is close enough that I rarely edit it. I didn't get there in one shot — I got there by noticing what was wrong and updating the context file each time.

Same pattern with writing. The first AI-assisted draft I worked on read like nine separate articles stitched together — repetitive, wordy, and full of end-of-section summaries that restated what the reader had just read. The content was accurate, but it didn't value the reader's time. Every sentence should earn its place, and too many of them didn't. So I updated my writing context: remove purple prose, cut unnecessary qualifiers, never summarize what the reader just experienced. Prompt, notice what's wrong, update the context, prompt again. Over time, you stop correcting. I'm still tuning the granularity.

Feedback loop:

```
Route:  load the right context   (the AI reads the context files)
Verify: confirm it's working     (feedback markers show what loaded)
Refine: update context files     (fix what's wrong, add what's missing)
        ... then back to Route
```

# Start Where You Are

None of this requires Obsidian or OpenCode specifically. It requires a place where you keep notes and an AI tool that can read them. If you already plan in Notion, start there. If you don't use a notes app at all, a folder of markdown files works. If you're not an engineer, Claude Projects lets you paste context directly into a project — no terminal, no file system, no config files.

The pattern scales in both directions. A single context file in a repository with your coding conventions is a version of this. A full vault with activity routing and domain branching is another. Start where you are and add layers when you feel the friction.
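For the small end of that spectrum, a single repo-level context file is enough to start. A hypothetical sketch — the conventions here are placeholders for your own:

```markdown
# AGENTS.md

## Coding conventions

- Prefer small, focused functions with descriptive names.
- Write tests alongside new behavior.

## Commit messages

- Imperative mood, subject line under 72 characters.

If I'm writing documentation, use the additional context at `@docs/AGENTS.md`.
```

The last line is the seed of the routing pattern: one file pointing at another, added only once you need it.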

If you want to see someone taking this further, Teresa Torres — a product coach and author — runs her entire business from two Claude Code terminals and a notes app. Worth watching if you're curious about what this looks like at scale.

The tools will keep changing. The principle won't: give the AI your context, verify it's working, and refine as you go.