Give AI the Same Context You Give Yourself

How I point AI tools at the Obsidian notes I already keep so they stop producing generic output, and how I verify the context actually loaded.


Every session starts the same way. You open an AI tool, type a prompt, and spend the first few minutes re-explaining yourself. Your preferred code style, the terms your team uses, how you like Jira tickets formatted. The AI doesn't remember any of it, so you repeat yourself or accept generic output and fix it by hand.

This is a solved problem. Most AI coding tools can read from a file at the start of every session, and if you already keep notes about how you work, you can point the AI at those notes directly. If you don't keep notes, start with one file. The AI reads your conventions before you say a word, and the output reflects them immediately.

Here's how to set it up, verify it's working, and make it better over time.

# Start With One File

Create a markdown file with a few of your conventions. This doesn't need to be comprehensive. Three to five things you find yourself repeating to AI tools is enough.

If you're a developer, your file might look something like this:

```markdown
# My Conventions

- Design before coding - understand the problem completely first
- Simple is better than clever - optimize for understanding

## Rust-Specific

- All aggregate IDs wrap UUIDs using the newtype pattern
- Use thiserror for library error types, anyhow for application error types
```

If you're a writer:

```markdown
# My Writing Style

- Keep sentences short. No commas where a period works.
- Never summarize what the reader just read.
- Define abbreviations on first use.
- Written from a place of humility; confidence is fine, arrogance is not.
```

Now point your AI tool at it. In OpenCode, that's one line in the config:

```json
{
  "instructions": ["~/notes/CONTEXT.md"]
}
```

Claude Code uses a CLAUDE.md file. Kiro reads steering files from .kiro/steering/. Claude Projects lets you paste context directly into the project instructions. The mechanism varies by tool, but the principle is the same: give the AI your context before the conversation starts.

# Verify It's Working

This is where most people stop, and it's the reason most context setups quietly fail. Maybe the file never loads and the output stays generic, or maybe it loads and you have no way to prove it. Either way, you're guessing. Context failures are silent by default, and that's what makes them dangerous.

Add a verification line to your context file:

```markdown
After every response, tell me which context files you loaded.
```

Now every response ends with a receipt. If the receipt is missing, something broke in the loading chain.

You can take this further by embedding identity phrases as cheap debugging tools. My writing context includes:

```markdown
On every response, address me as "the person who is trying to write"
so that I know this context is working correctly.
```

And my coding context says:

```markdown
I am coding! Call me "person who is trying to code!"
```

If I'm writing a blog post and the AI calls me "person who is trying to code," I know the wrong context loaded. These are dumb, obvious signals, and that's the point. You want context failures to be loud, not silent.

Try it yourself: add a verification line to your context file, start a new session, and confirm it appears in the response. That's the first feedback loop, and you'll use it constantly from here.
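Once you have more than one context file, you can make that check part of your routine before a session even starts. Here's a minimal sketch of a hypothetical pre-flight helper (the `marker` string and the `notes_dir` layout are assumptions; substitute whatever phrase your own verification line uses):

```python
from pathlib import Path


def missing_receipts(notes_dir: str, marker: str = "context files you loaded") -> list[Path]:
    """Return every markdown context file under notes_dir that lacks
    the verification line, so silent loading failures surface early."""
    return [
        p
        for p in sorted(Path(notes_dir).rglob("*.md"))
        if marker not in p.read_text()
    ]
```

Run it over your notes directory; any file it returns has no receipt line and will fail silently if it's ever the one that loads.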

# What Changes

Once you've seen the verification line appear, test the actual output. Ask the AI to do something your conventions cover and compare the result to what you'd produce without the context file.

I needed to integrate curriculum events from Ministry Grid, our training platform, into an analytics adapter. The AI needed to understand which user actions matter in our domain: things like "Downloading a Leader Guide" or "Completing a Training Plan." Without context, the output was repetitive and unfocused. Vague event names, no understanding of which actions are meaningful, acceptance criteria I'd have to rewrite from scratch.

After pointing the AI at my context files, the same prompt produced a story scoped to the right curriculum events, pointing to the correct codebases, with acceptance criteria tied to specific user actions. The AI knew the domain because it had already loaded my notes.

The difference isn't a better prompt. It's that the AI already knows your conventions before you say a word.

# Scale Beyond One File

One file works, but as you add more context, a single file gets unwieldy. You don't want the AI loading your Jira conventions when you're writing a blog post. The natural next step is to split your context into a routing chain: a root file that points to others based on what you're doing.

```markdown
I do a few different types of tasks:

- Coding or Programming
- Writing

If I'm coding, use the additional context at `@coding/AGENTS.md`
If I'm writing, use the additional context at `@writing/AGENTS.md`

By default, I am doing coding.
```

When you open a coding session, it loads your coding preferences. When you're writing, it loads your style guide. Same entry point, different paths.

Each of those files can route further. My work context, for example, branches into application-specific documentation:

```markdown
## Applications

### Ministry Grid

A platform for church leaders to manage their ongoing curriculum
and to train their leaders. See `@apps/ministry_grid.md`
for more details.

### Curriculum Service

Manages curriculum content and formats.
See `@apps/curriculum_service.md` for more details.

## Jira

See `@jira_conventions.md` for guidelines on creating Jira tickets.

## Terms and Definitions

See the `@glossary.md` file to understand acronyms and company-specific terms.
```

Each application, convention, or domain concept gets its own file, and the AI loads only what's relevant to the current task. When I'm working on curriculum, it knows which repos to explore. When I'm writing a Jira ticket, it knows my team's format. I don't have to explain any of this because it's already in the files I maintain anyway.

*Routing tree: `opencode.json` (Layer 1: entry point) loads `LLM Context/AGENTS.md` (Layer 2: activity router), which routes to `coding/AGENTS.md` or `writing/AGENTS.md`, which in turn load domain files such as `coding_philosophy.md`, `typescript.md`, `blog/AGENTS.md`, and `engineering/AGENTS.md` (Layer 3).*

One practical note: since these context files might live outside the working directory, the AI agent needs explicit permission to read them. In OpenCode, that's an external_directory permission in the config. Claude Code's CLAUDE.md lives in the repo by default. Check your tool's docs for how it handles external file access.
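For OpenCode, that could look something like the fragment below. Treat the shape as an assumption: the `external_directory` key is the one named above, but verify where it nests against the config schema of the version you're running.

```json
{
  "instructions": ["~/notes/CONTEXT.md"],
  "permission": {
    "external_directory": ["~/notes"]
  }
}
```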

# Keep Refining

The routing chain gives you structure, but the real value comes from what's inside the files, and those files are never finished.

The rule I follow: if I'm going to explain something to an agent again, it belongs in a context file.

My Jira conventions file didn't start as a clean spec. The first time I asked an AI to write a Jira ticket, the output was nothing like what I'd write. So I added a note about keeping descriptions concise. Next time, the description was better but the structure was wrong because there was no technical details section. I added that too. Over a few iterations, the file evolved into:

```markdown
## Story/Task Description Format

Keep descriptions concise with two sections:

1. **Brief description** (1-2 sentences) - What needs to be done and why
2. **Technical Details** (bullet list) - Specific implementation details,
   repos, configurations
```

Now the AI's output is close enough that I rarely edit it. The same pattern showed up with writing. The first AI-assisted draft I worked on read like nine separate articles stitched together: repetitive, wordy, and full of end-of-section summaries that restated what the reader had just read. The content was accurate, but it didn't value the reader's time. So I updated my writing context: remove purple prose, cut unnecessary qualifiers, never summarize what the reader just experienced.

*Feedback loop: Route (the AI reads the right context files) → Verify (feedback markers show what loaded) → Refine (fix what's wrong, add what's missing in the context files) → back to Route.*

The verification techniques from earlier are what surface these opportunities. When the AI's output doesn't match what you'd produce yourself, that gap is a signal. Either your context file is missing something or the wrong file loaded. Both are fixable in under a minute. Prompt, notice what's wrong, update the context file, prompt again. Over time, you stop correcting.

# Go Try It

The tools will keep changing. The principle won't: give the AI your context, verify it's working, and refine as you go.

If you want to start right now:

  1. Create a file with 3-5 of your conventions
  2. Add a verification line so you can confirm it loaded
  3. Point your AI tool at it
  4. Ask the AI to do something covered by those conventions
  5. Notice what it gets wrong. Update the file. Ask again.

If you want to see someone taking this further, Teresa Torres, a product coach and author, runs her entire business from two Claude Code terminals and a notes app. Worth watching if you're curious about what this looks like at scale.