Tecknoworks Blog

AI-Assisted Development
Cheat Sheet

Your guide to working fluently in markdown, prompting best practices, and the essential rules to follow


AI-assisted development can transform how you write, debug, and deliver code, but getting consistent, high-quality results takes skill. After a year of working hands-on with AI coding tools across real-world projects, I’ve compiled the techniques I use every day into a single, practical cheat sheet. 

The One Thing That Matters Most

Output quality equals input quality. AI doesn’t know the full scope of your codebase, your domain context, your runtime state, or your intentions. It has a limited context window, and what isn’t in that window doesn’t exist. The developers who get the most from AI are the ones who’ve learned to manage context deliberately.

This demands a shift in how we work, from code writers to delegators. Your job is to shape the context so the AI gets it right on the first pass: clear instructions, references to specific files and extension points, examples and analogies, constraints that define “done,” and any external knowledge the AI wouldn’t have on its own. Every technique in this guide is a way to do one of these things better.

But delegation doesn’t mean detachment. To use these tools effectively, you need to maintain an accurate mental model of your codebase and its desired state. If you lose that alignment, if you stop tracking what the AI is changing and why, the model quietly starts making architectural and design decisions for you. That’s when unexpected issues surface, and they compound fast. For quick prototypes or throwaway experiments, this matters less. But the moment code needs to be maintained in active production, there’s no shortcut around understanding what’s being built. 

See Chapter 1 (TL;DR) for all 13 core practices at a glance. 

Plan First, Then Implement

The most impactful habit I’ve adopted is separating planning from implementation. It sounds obvious, but the temptation to jump straight into coding with AI is strong, and it’s a trap, especially on complex tasks.

The planning phase is where the agent explores the codebase in read-only mode, gathers relevant context, and breaks the task into small, precise subtasks with file references and test strategies. Only after you review and approve that plan does it move to implementation.

Why is this so effective? LLMs have limited context windows. If the agent tries to solve the entire problem in one pass, it has less attention for the details that matter. Plan mode lets it consider everything first, then execute each step one by one, focused on implementation, not distracted by planning.

The caveat: you cannot outsource the thinking. Even in plan mode, you need to supply the domain context, the problem definition, the preferred architecture, the conventions to follow, and relevant tests. If your mental model is wrong, the AI will confidently build on that flawed foundation. Pour your energy into the plan: under-planning costs hours; over-planning costs minutes.

Chapter 3 walks through a complete plan example with acceptance criteria and test strategies. 

Context Windows Are a Resource You Have to Manage

Context is finite, and how you use it matters more than how much you have. Don’t overwhelm the agent with a broad, unfocused problem and expect precision; scope each conversation tightly. One chat per feature or bug, not one mega-thread for the whole repository. Keep only the relevant files attached. Give clear direction rather than throwing everything in and hoping.

What erodes context fast? File searching, code flow analysis, editing and reviewing diffs, test output, MCP tool responses (especially verbose JSON), and back-and-forth corrections. When answers start degrading, don’t keep chatting; start a fresh session with a compact recap.

I use a compaction recipe that works well: ask the agent to compress your current context into a markdown summary, review and tag it, then start a new conversation with that file attached. A good compaction captures what you’re working on, the specific files that matter, key decisions so far, and what was tried and didn’t work.
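A compaction summary doesn’t need to be elaborate. Here’s a sketch of the structure I’d aim for — the headings, file names, and decisions below are placeholders, not a prescribed format:

```markdown
# Context Handoff: <task name>

## What I'm working on
One or two sentences on the goal and current status.

## Files that matter
- `src/auth/session.ts`: token refresh logic under change
- `tests/auth.spec.ts`: failing test that reproduces the bug

## Key decisions so far
- Chose refresh-on-read over a background timer (simpler invalidation).

## Tried and didn't work
- Extending the token TTL: masked the race condition instead of fixing it.
```

Attach this file to the new conversation and you’ve carried over the signal without the noise.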

Markdown is the universal format for all of this. It’s the format LLMs parse most naturally, effectively their native language. For plans, research summaries, context handoffs, and implementation specs, reach for markdown first.

As a rule of thumb, aim to keep context window utilization under roughly 40%. Beyond that, the model has to sift through more noise to find the signal, and quality drops.

Learning a Codebase with AI

Before diving into implementation on an unfamiliar project, invest time in having AI generate documentation about key flows. Ask it to create markdown files that describe authentication flows, data model relationships, API request lifecycles, or deployment pipelines, complete with Mermaid sequence diagrams. These diagrams are ideal for AI-generated docs because they’re text-based, easy for LLMs to produce, and render beautifully in any markdown viewer. 
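For instance, a login-flow doc might carry a sequence diagram like this — the participants and steps here are illustrative, not from any particular codebase:

```mermaid
sequenceDiagram
    participant U as Browser
    participant A as API
    participant D as Database
    U->>A: POST /login (credentials)
    A->>D: look up user, verify password hash
    D-->>A: user record
    A-->>U: session cookie / JWT
```

Because the diagram is plain text, the AI can also keep it updated as the flow changes.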

This isn’t just about convenience. It’s about building your own intuition before you start making changes. The documentation AI generates is only as good as the codebase it reads and the questions you ask, so use it to build understanding, then validate what you’ve learned.

Chapter 4 includes the exact prompts and a Mermaid diagram example. 

The Stale Knowledge Problem

Whether your AI-assisted implementation works or fails often comes down to one thing: whether the AI has accurate, up-to-date information about the libraries, frameworks, and APIs you’re using. By default, most coding agents do not use web search. They rely on training data that may be months or years out of date.

For stable, mature libraries, the training data is usually fine. But for framework major versions, cloud integrations, security implementations, or breaking changes in dependencies, you need current information. I’ve found three approaches that work.

The first is MCPs — Model Context Protocol servers — which fetch live documentation directly into your AI session. The second, and often most effective, is a research-first approach: run a deep query using tools like Perplexity or Claude Deep Research, export the response as markdown, and drop it into your AI assistant before asking it to implement. You’re compressing expert-level, current research into the context window before the AI writes a single line of code. The third is inline web search, which is convenient but typically less thorough than dedicated research tools.

Chapters 5 and 9 cover the research-first workflow and ready-to-use MCP configurations. 

Debugging: Let the Logs Do the Talking

AI can solve the vast majority of bugs by itself, if it has the necessary context. The key is bridging the gap between what the code says and what actually happens at runtime. My debugging playbook is straightforward: ask the AI to instrument the code with detailed logging (expected versus actual values at each step), reproduce the bug, then paste the full log output back into the conversation. If the first attempt doesn’t solve it, add more granular logging to the area where behavior diverges and repeat.

This works because bugs are fundamentally context problems. The AI already has the code, but it doesn’t know the runtime state. Logs bridge that gap by showing exactly what happened, not what you think happened.
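A toy sketch of the instrumentation step — the function, values, and bug here are made up purely to show the expected-versus-actual pattern and the log capture:

```shell
# 1) Instrument the suspect step with expected-vs-actual logging.
compute_total() {
  local expected=10
  local actual=$(( 3 + 4 ))   # deliberate bug: should be 3 + 7
  echo "[debug] compute_total: expected=$expected actual=$actual" >&2
}

# 2) Reproduce, capturing the log output to a file.
compute_total 2> debug.log

# 3) Paste the captured log back into the AI session.
cat debug.log
```

The log line pinpoints exactly where behavior diverges, which is precisely the runtime context the AI was missing.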

Chapter 6 has the full 4-step logging approach and a quick debugging reference table. 

The Biggest Productivity Unlock: Parallel Agents

If there’s one technique that delivers an outsized return, it’s running multiple AI sessions in parallel using git worktrees. Create three to five worktrees from the same repository, each on a different branch, each with its own AI session. Instead of waiting for one task to complete, you’re working on multiple features, tests, and refactors simultaneously.

The patterns are flexible: one worktree for a new feature, while another writes tests for something else, and a third handles refactoring. Or use one for AI planning while another executes a finished plan. Quick shell aliases make navigation between worktrees effortless.
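The setup is lighter than it sounds. A minimal sketch — the throwaway repo under `/tmp` stands in for your real project, and the branch names are illustrative:

```shell
# Demo repo (in practice, start from your existing project checkout).
mkdir -p /tmp/worktree-demo && cd /tmp/worktree-demo
git init -q main && cd main
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "init"

# Three worktrees from the same repository, each on its own branch,
# each ready for its own AI session:
git worktree add ../feature-search -b feature/search
git worktree add ../tests-search   -b tests/search
git worktree add ../refactor-auth  -b refactor/auth
git worktree list
```

A few aliases in your shell profile (e.g. `alias wtf='cd /tmp/worktree-demo/feature-search'`) make hopping between sessions a single keystroke.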

Let AI Run Commands

Enabling your AI assistant to execute shell commands unlocks workflows beyond code editing. Infrastructure management with Azure CLI or Terraform, remote debugging via SSH, complex git operations, running test suites, analyzing failures, and package management all become conversational. You describe what you want to investigate, and the AI runs commands, interprets output, and suggests next steps.

Start with approval mode so you review commands before execution, use read-only queries before modifications, and limit auto-execution to non-production environments. Every command is logged in the conversation history, creating a natural audit trail.

Chapter 8 includes Azure CLI and SSH examples plus a safety checklist. 

MCPs: Extending AI Beyond Code Editing

Model Context Protocol servers are one of the most powerful ways to extend your AI’s capabilities. They provide standardized access to databases (query and explore schemas using natural language), live documentation (keep the AI current on framework APIs), cloud resources (Azure, AWS), and browser automation (AI-driven end-to-end testing). Additional MCPs exist for filesystem access, GitHub, Slack, Google Drive, and persistent memory across sessions.
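Wiring one up is usually a small JSON file; the exact filename and location depend on your tool. A sketch of the common shape, with a placeholder server and connection string — check your assistant’s docs for the package name and config path it expects:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost:5432/dev_db"
      ]
    }
  }
}
```

Once registered, the agent can list the server’s tools and call them mid-conversation, e.g. to inspect a schema before writing a migration.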

One important note: MCPs are for local development productivity, not production pipelines. Responses are non-deterministic and subject to network variability.

Configuration Files: Less Is More

Over-stuffed configuration files are a common mistake in AI-assisted development. LLM-generated config files can make agents perform worse by adding redundant, distracting context. Only include what the model cannot derive from the codebase itself.

Start without a config file. Only create one when you observe the agent making the same mistake repeatedly, and only after trying to fix the root cause in the codebase first. The fix order is: restructure the code so the right approach is obvious, then improve tooling (better tests, type checks, linting), and only then add a targeted rule as a last resort. Delete and re-test with every model upgrade, because newer models need fewer rules.

Chapter 10 covers the “pink elephant effect” and what to include versus what to avoid. 

Build and Share Custom Skills

If you do something repeatedly, if it takes meaningful effort to figure out how to solve effectively with AI, or if it’s a common problem across your team, make it a skill. Skills are reusable prompt templates that encode workflows, patterns, and domain knowledge into repeatable actions.
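The exact format varies by tool, but a skill is ultimately a markdown file with instructions the agent follows. A sketch — the name, steps, and checks here are illustrative:

```markdown
# Skill: Prepare Pull Request

## When to use
Before opening a PR on any service in this repository.

## Steps
1. Run the full test suite and fix any failures.
2. Summarize the change in 3-5 bullets for the PR description.
3. Check the diff for leftover debug logging, TODOs, and secrets.
4. Verify new public functions have doc comments.

## Output
A PR title, a description, and a checklist of items verified.
```

Invoking the skill replaces re-typing (and re-forgetting) the same instructions every time.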

The most effective way to scale AI-assisted development across an organization is to maintain a shared internal repository of curated skills. This standardizes approaches, distributes proven workflows, compounds productivity as every skill benefits all teams, and creates a feedback loop where teams share improvements. Examples include tech debt finders, PR preparation workflows, project structure validators, document converters, and product requirements analyzers.

Chapter 11 includes six example skills and the shared repository approach.  

Recovery and Safety: Git Is Your Real Safety Net

Commit when meaningful progress is made. Git, not AI checkpoints, is your real safety net. Prefer more, smaller files for granular context. When things break, don’t try too hard to fix them in the same conversation. Revert to a known-good state, consider changing models, and start fresh with what you’ve learned.

Pay attention to the trajectory of your conversation. A pattern of “you ask, AI does the wrong thing, you say fix it, AI does the wrong thing again” is a signal to stop and start over, not to keep pushing. The AI sees that pattern of failure and predicts more of the same. Start fresh, include what was tried and didn’t work, and let the model approach the problem cleanly.

Choosing the Right Workflow

Not every task needs the full ceremony. For a quick fix in a single file, just ask the agent. For a feature in familiar code with known technology, plan then implement. For unfamiliar code, use sub-agents to explore before planning. When integrating a new library or API, gather the latest documentation first. And for large refactors or architectural changes, orchestrate multiple sessions yourself.

When in doubt, invest more in the plan. Under-planning costs hours. Over-planning costs minutes.

AI-Assisted Development
Cheat Sheet

Everything in this article, and much more, is captured in a single, printable reference designed to sit open on your second monitor while you work. It covers all thirteen topics with detailed tables, code examples, ready-to-use MCP configurations, and tool-specific guidance for GitHub Copilot, Claude Code, Cursor, and