AI Principles That Won't Expire

Most AI advice expires quickly. These five practices won't, because they're not about AI.


If you read nothing else...

Five practices that hold up regardless of which AI tool you use:

  1. Write down what you want before you prompt.
  2. Point your AI tool at the notes you already keep.
  3. Read the output line by line before you run it.
  4. Store your context in files you control, not inside a vendor.
  5. Know what you want from a session before you start one.

Pick one. Try it in your next session.

Before you prompt an AI tool, write down what you want it to produce. One sentence is enough. That single habit will outlast every tool you're using right now, because it has nothing to do with AI. It's just disciplined thinking.

Most AI advice doesn't hold up this well. Six months ago, you were told to chunk your documents and manage your context window carefully. Then context windows grew by orders of magnitude, and Google's own documentation acknowledged that conventional wisdom no longer applied. Retrieval-Augmented Generation went from "always do this" to "do this when you need it" after Databricks showed you could skip retrieval entirely for smaller datasets. Chain-of-thought prompting was a standard technique until newer models baked the reasoning directly in. The pattern is the same every time: reasonable advice becomes obsolete because the underlying capability changed. If a recommendation references a specific tool, model, or capability threshold, it has an expiration date.

What follows are five practices that don't. They work regardless of which model or tool you're using, because none of them are about AI specifically. They're about being a disciplined engineer. There's a natural arc to how they show up in your workflow, from before you touch the tool through how you operate across tools over time. Start with whichever one is closest to where you already feel friction.

# Plan Before You Prompt

AI makes it easy to skip thinking and jump straight to output. The cost of producing code has dropped so far that the temptation is to generate something and see if it works. But the cheaper code is to produce, the more important it is to ask whether you should produce it at all.

Planning means clarifying what you want the AI to produce, what constraints matter, and how you'll know the result is right. This doesn't need to be formal. A single sentence before you open the tool changes the dynamic: you're evaluating output against a standard instead of accepting whatever the AI gives you.

Some tools encode this discipline directly. OpenCode ships with distinct "Plan" and "Build" modes, forcing a deliberate shift from thinking to execution. Kiro takes it further with spec-driven development, where you define requirements and design before the AI writes a line of code. The specific tools will evolve. The discipline they encode won't.

Try it! Before your next AI prompt, write one sentence: "I want it to produce ___." After you get the result, compare what you asked for to what you accepted. Were they the same thing?

# Provide Your Context

The quality of AI output is directly proportional to the context you provide. This was true with search engines; it's true with prompts; and it'll be true with whatever comes next. A developer who's good at providing context to an AI assistant is also good at writing clear tickets, useful documentation, and precise bug reports. The skill compounds across everything you do.

Most AI tools can read from a file at the start of every session. If you already keep notes about how you work (your coding conventions, your team's terminology, your preferred formats), point the AI at those notes. The output changes immediately, because the AI is working from your standards instead of guessing.

I wrote a full walkthrough of how to set this up in *Give AI the Same Context You Give Yourself*. The short version: create a markdown file with your conventions, point your tool at it, and verify it loaded.
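As a rough sketch, creating that file can be a one-time shell step. The filename and the conventions below are illustrative, not prescriptive; check your tool's documentation for the filename it actually reads (some tools look for a file like `AGENTS.md` in the project root):

```shell
# Create a plain-markdown context file with a few of your conventions.
# The filename CONVENTIONS.md is an example; use whatever your tool reads.
cat > CONVENTIONS.md <<'EOF'
# Working conventions
- Python code follows PEP 8, with a 100-character line limit.
- Commit messages use Conventional Commits (feat:, fix:, docs:).
- Prefer explicit error handling over bare except clauses.
EOF

# Sanity check: confirm the file exists and has content.
cat CONVENTIONS.md
```

Because it's plain markdown under your control, the same file doubles as onboarding notes for human teammates.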

Try it! Create a file with three of your conventions. Point your AI tool at it. Ask it to do something those conventions cover and compare the output to what you'd get without the file. If you want the full setup, including how to verify the context loaded and how to scale beyond a single file, the linked post walks through each step.

# Verify the Output

AI acts on your behalf. It has your credentials but not your judgment. You know not to run destructive queries against production. The AI doesn't. Treat it like a new team member who has full access and no institutional knowledge, because that's functionally what it is.

A colleague, CJ Taylor, put this simply: the safe zone is the overlap between what AI says and what you can verify. Everything outside that intersection is where problems accumulate. If the AI generates a database migration, can you trace through the SQL and confirm it does what you intend? If it writes a business logic function, do you understand the edge cases well enough to spot what it missed? If the answer to either question is no, you're not ready to ship it.

The principle of least privilege is as old as computing. AI just makes it urgent again. Tests, sandboxed execution, code review, checking sources: these aren't new practices. What's new is that they've shifted from good hygiene to the only thing standing between you and shipping a hallucination.

If you want to go deeper on how to decide what's safe to delegate and what isn't, I wrote about the framework I use in *AI Output Looks Right Until It Doesn't*.

Try it! Next time AI generates code for you, read it line by line before you run it. Find one thing you'd write differently. That's your verification muscle, and it gets stronger every time you use it.

# Stay Portable

It's tempting to build your entire workflow around whatever tool you're using right now: its proprietary context format, its specific API, its way of organizing projects. But if your process only works inside one tool, you'll rebuild from scratch when you switch. And you will switch.

Markdown, plain text files, version-controlled configuration: these aren't exciting, but they're durable. The same applies to how you structure context for AI. If your carefully curated prompts and system instructions are locked inside one vendor's platform, they're not yours. They're the vendor's. Choose formats and processes that survive the transition.

Try it! Check where your AI context lives right now. Is it inside a vendor's platform, or in a file you control? If it's vendor-locked, copy it to a markdown file you own. That file will outlast the tool.

# Be Intentional

AI usage escalates in ways that aren't always obvious. A debugging session that starts as "help me understand this error" turns into a long back-and-forth where each message carries the full conversation context. Suddenly you've spent an hour on a problem you could have reframed in five minutes. This happens because AI sessions have no natural stopping point. The tool will keep going as long as you keep prompting.

The specific economics will change: pricing models, rate limits, what's free and what isn't. But the discipline of knowing what you want from a session before you start one won't. Unfocused usage wastes more than money. It wastes your attention, produces lower-quality output, and trains habits that compound in the wrong direction. Before you engage the tool, know what you're trying to get out of it. That forcing function keeps you asking the right questions regardless of what the tool costs.

Try it! Before your next AI session, set a timer for 10 minutes. When it goes off, ask yourself: am I still working on what I started, or did the session drift? If it drifted, that's the signal to reset.

# Where to Start

These five practices aren't about AI. They're about the discipline you bring to any tool. The models will change. The advice will change. But the habit of thinking before you build, providing your context, verifying what you get back, keeping it portable, and knowing what you want before you start: that doesn't expire.