Opening
Welcome everyone. Today we're going to talk about something that separates teams who dabble with coding agents from teams who actually ship with them — and that's configuration.
Most of us installed Claude Code or OpenCode, typed a prompt, and got something useful back. That's great. But if that's where you stopped, you're leaving an enormous amount of value on the table.
Over the next twenty minutes or so, I'm going to walk you through every configuration surface these tools offer — from the basics that take five minutes, all the way to subagent architectures and lifecycle hooks. By the end, you'll have a concrete plan to take back to your repo this week.
Why This Matters
Let me frame the problem. A coding agent is not autocomplete. It reads your entire codebase. It runs shell commands. It makes coordinated edits across multiple files. It runs your tests to verify its own work. It can commit changes. This is a fundamentally different tool from what we had two years ago.
Look at these numbers. Teams with well-configured agents report around sixty percent less time on boilerplate, three to five times faster PR throughput, and significantly fewer context switches because the agent handles the mechanical work while you focus on design decisions.
But here's the catch — these numbers come from configured setups. Without configuration, your agent doesn't know how to build your project. It doesn't know your test runner. It doesn't know that you use a real database in tests, or that you never import from that internal package directly. It guesses. And it guesses wrong. A lot.
Configuration is the multiplier. It's the difference between a talented contractor on day one and a senior team member who has read every doc, every PR, and every Slack thread you've ever written.
What We'll Cover
Here's our roadmap. We'll start with the foundations — where configuration files actually live, and what goes in your main instructions file. That alone will get you eighty percent of the benefit.
Then we'll layer on permissions, rules, skills, and subagents. These are the power-user features that let you build real workflows — things like automated PR review, test generation, and sprint planning.
If you're new to this, focus on the first three or four sections. That's your starting kit. The rest is there when you're ready to go deeper. And everyone should stay for the action plan at the end — that's five concrete steps you can take this week.
Where Configuration Lives
Both tools use a dot-directory in your project root. Claude Code uses dot-claude. OpenCode uses dot-opencode. Inside, you'll find slots for settings, skills, agents, commands, and more. The structures are remarkably similar.
The important thing to know is that these are just files in your repo. You commit them to git. Your whole team gets the same configuration. When someone on your team figures out the right way to handle something, everyone benefits on their next pull.
One detail worth calling out — both tools share the Agent Skills open standard. That means a skill you write for Claude Code will also work in OpenCode, and vice versa. OpenCode even reads Claude Code's CLAUDE.md as a fallback if you don't have an AGENTS.md. So you're not locked into one tool.
OpenCode has a couple of extras you won't find in Claude Code — a dedicated tools directory for custom TypeScript definitions, a modes directory, and a themes directory. Claude Code has built-in path-scoped rules and auto-memory, which OpenCode handles through plugins.
Your Most Important File
If you take one thing away from this talk, it's this file. CLAUDE.md — or AGENTS.md if you're using OpenCode — is loaded into every single agent session. It's the first thing the agent reads. It shapes every decision it makes.
Four things go in here. First, your build and test commands. This is the highest-impact single thing you can add. When the agent knows that your test runner is npm run test, not pytest, not cargo test, not make check — it stops guessing wrong. That alone eliminates most of the frustration new users report.
Second, a brief architecture overview. Your tech stack, your directory layout, how you access the database. Third, your coding conventions — naming, error handling, import patterns. And fourth, the gotchas. The things that surprise even experienced developers. "Tests use a real local database, run the reset script first." "Never import from the internal package directly."
Keep this file under two hundred lines. I know that sounds tight, but longer files actually reduce instruction adherence — the agent starts ignoring parts of it. If your instructions are growing, split them into rules or skills, which we'll cover next.
Both tools have an init command that auto-generates a starting file by scanning your repo. Run it. It gives you a solid first draft. Then review it, tighten it, and commit it.
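To make that concrete, here's a sketch of the kind of file you'd end up with after tightening the auto-generated draft. Every project name, command, and package here is invented for illustration — substitute your own:

```markdown
# Project Notes for the Agent

## Build & Test
- Install: `npm install`
- Test: `npm run test` (single file: `npm run test -- path/to/file.test.ts`)
- Lint: `npm run lint`

## Architecture
- TypeScript monorepo: `apps/web` (React), `packages/api` (Express), `packages/db` (Prisma).
- All database access goes through `packages/db` — apps never query the client directly.

## Conventions
- Errors: throw typed `AppError` subclasses; never return error strings.
- Imports: use the `@acme/*` path aliases, not relative paths across packages.

## Gotchas
- Tests use a real local Postgres — run `npm run db:reset` first.
- Never import from `packages/internal` directly.
```

Notice what's not in there: nothing your linter already enforces, nothing the agent can learn by reading the code. Just commands, layout, conventions, and gotchas — and well under two hundred lines.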
Controlling What the Agent Can Do
Once you've told the agent about your project, the next question is what it's allowed to do. The permission model has three levels. Allow means the agent can use that tool without asking you. Ask means it needs your approval every time. Deny means it's blocked entirely — the agent can't use it at all.
The critical rule to remember is that deny always wins. If something is denied at any level — managed settings, project settings, or user settings — it's blocked everywhere. No one can override it. This is how security teams enforce policy.
My recommended starting point: deny access to your .env files and any secrets directories. Allow your test commands and basic git operations without prompting. Leave everything else on ask until you've built enough trust. You can always loosen permissions later. It's much harder to tighten them after an incident.
The settings cascade matters for larger teams. Managed settings, which your platform team or admins control, override everything. Then project settings. Then personal user settings. This means you can enforce organisation-wide rules while still letting individual teams and developers customise their experience.
Permissions in Practice
Here's what a real settings.json looks like. The allow array lists tools and commands the agent can use freely — no confirmation prompt. The ask array requires your approval every time. The deny array blocks access completely.
Notice the prefix wildcard syntax. Bash(git:*) matches any git command — git status, git commit, git diff. You don't need to list every variation. Read(.env*) blocks any file starting with .env.
Three settings files cascade. Managed settings from your admin override everything. Project settings in .claude/settings.json are committed to git — shared by the team. And settings.local.json is gitignored for personal preferences. This lets you enforce team policy while still giving individual developers flexibility.
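As a starting point, a minimal .claude/settings.json along the lines of what's on the slide might look like this — the specific commands and paths are placeholders for your own:

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run test:*)",
      "Bash(git status:*)",
      "Bash(git diff:*)"
    ],
    "ask": [
      "Bash(git push:*)",
      "Bash(npm install:*)"
    ],
    "deny": [
      "Read(.env*)",
      "Read(secrets/**)"
    ]
  }
}
```

Test runs and read-only git operations flow freely, pushes and dependency changes prompt you, and secrets are off-limits no matter what. That's the "start restrictive, loosen later" posture in about fifteen lines.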
Splitting Instructions Into Modules
As your team grows and your instructions get longer, you'll want to split them into separate files. That's what the rules directory is for. Each markdown file in dot-claude-slash-rules covers one topic — code style, testing, API conventions, database queries.
The powerful feature here is path scoping. You can attach glob patterns to a rule so it only loads when the agent is working on matching files. Your React component rules only load when the agent touches files in your frontend directory. Your database query rules only load for backend files. This saves context tokens and reduces noise — the agent isn't distracted by irrelevant instructions.
There's an important distinction between rules and skills. Rules load automatically — either every session or conditionally by file path. Skills load on demand, when you invoke them or when the agent decides they're relevant. Use rules for "always know this" and skills for "know this when I ask."
In OpenCode, this feature isn't built in natively — there's a community plugin called opencode-rules that adds it. Alternatively, you can point at any markdown files using the instructions array in your opencode.json config.
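Here's a sketch of what a path-scoped rule file might look like. Treat the frontmatter key as illustrative — the exact field name for glob scoping depends on your tool and version, so check the docs before copying this:

```markdown
---
paths:
  - "apps/web/**/*.tsx"
---

# React Component Rules

- Components are function components with typed props; no class components.
- Co-locate styles: `Button.tsx` next to `Button.module.css`.
- State shared across components goes through the store, not prop drilling.
```

The rule costs zero context tokens until the agent touches a matching file in your frontend directory — then it loads automatically.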
Reusable Workflows
Skills are the most flexible extension point in both tools. A skill is a directory with a SKILL.md file that contains YAML frontmatter — a name, a description — and then markdown instructions. The name becomes a slash command. So a skill named deploy becomes slash-deploy.
There are four ways a skill gets invoked. You can call it directly as a slash command. The agent can auto-load it when it reads the description and decides it matches your prompt. You can pre-load it into a subagent's context at startup. Or the agent can programmatically call the Skill tool mid-conversation.
The allowed-tools field in the frontmatter is particularly powerful. When you declare that a skill is allowed to use certain tools, those tools are granted without per-use approval for the duration of that skill. So your deploy skill can run git commands and bash scripts without prompting you every time.
And here's the strategic point — skills follow an open standard. They're portable across Claude Code, OpenCode, and even OpenAI's Codex. Your investment in writing skills is not locked to any one vendor. That's a meaningful advantage for teams evaluating their options.
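Here's a sketch of a SKILL.md for the deploy example — the script path and steps are hypothetical, but the shape is what both tools expect:

```markdown
---
name: deploy
description: Deploy the current branch to the staging environment and verify health checks.
allowed-tools: Bash(git:*), Bash(./scripts/deploy.sh:*)
---

# Deploy to Staging

1. Confirm the working tree is clean with `git status`; stop and report if it isn't.
2. Run `./scripts/deploy.sh staging`.
3. Poll the health endpoint until it returns 200, then report the deployed commit SHA.
```

Because of that allowed-tools line, the git commands and the deploy script run without per-use prompts — but only while this skill is active.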
Your Agent Team
Subagents are where things get really interesting. Each one runs in its own isolated context window with its own tools, its own model, and its own permissions. When it finishes, it sends a summary back to your main conversation. Your main context stays clean.
Let me walk through the six examples we've defined here. The Code Reviewer is read-only — no write tools at all. It scans diffs for bugs, style issues, and security concerns. Simple, focused, and safe.
The Explore agent, which we call the Spec Oracle, is a pattern I want to highlight specifically. When your build agent is deep in implementation and hits a product question — "what should this endpoint return for this edge case?" — instead of loading product specs into the build agent's context, you delegate that question to the Spec Oracle. It has the product context. The build agent stays focused on code. This keeps context budgets sustainable across long sessions. It's one of the most effective patterns we've found.
The Sprint Planner works in analysis mode. It reads your codebase and project context to decompose epics into tasks with estimates, dependencies, and acceptance criteria. It doesn't modify anything.
The Technical Writer generates and maintains your documentation — READMEs, architecture decision records, API references — from actual code changes. Pre-load it with your docs conventions skill and it produces consistent output every time.
The QA Guardian is not a passive test generator. This is an active collaborator. It authors test plans before you start implementing. It reviews designs for testability. It writes Playwright end-to-end tests. And it focuses hard on edge cases — the things human reviewers tend to miss when they're tired at four PM on a Friday. Configure it with strict permissions and your team's quality standards skill.
And the Researcher deep-dives into documentation, external APIs, and codebases to surface prior art and options. Read-only, web-search enabled, cannot touch your code.
Anatomy of a Subagent Definition
This is a real agent file. The frontmatter at the top is your configuration — model, tools, permissions. The markdown body below becomes the agent's system prompt. That's it. One file, two sections.
Notice this code reviewer has Read, Glob, Grep, and git diff — but Edit and Write are explicitly disallowed. It can look at everything, change nothing. The permissionMode is set to plan, which means it operates in read-only mode. This is defence in depth.
The body is where you put the agent's personality and instructions. What should it look for? How should it report findings? What severity levels should it use? This is the system prompt that makes one subagent different from another.
You invoke subagents either explicitly — specifying the subagent_type in an Agent tool call — or let the main agent delegate automatically based on the task. Both work. Explicit is better when you want guaranteed routing.
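Putting the pieces together, a read-only reviewer along the lines described above might look like this — the field names mirror what we've covered, but treat it as a sketch and check your tool's agent-file reference for the exact schema:

```markdown
---
name: code-reviewer
description: Reviews diffs for bugs, style issues, and security concerns. Read-only.
tools: Read, Glob, Grep, Bash(git diff:*)
permissionMode: plan
---

You are a meticulous code reviewer. For each diff:

- Flag logic bugs, unhandled edge cases, and security issues first.
- Note style deviations second, referencing the project's conventions.
- Report findings with severity levels: critical, warning, nit.
- Never edit files — describe the change and let the author decide.
```

Edit and Write simply aren't in the tools list, and plan mode backs that up. Two independent mechanisms, one guarantee.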
Deterministic Automation
Everything we've talked about so far — instructions, skills, agents — relies on the LLM choosing to follow them. And most of the time it does. But for things that absolutely must happen, every single time, you want hooks.
Hooks fire at specific lifecycle points. Session start. Before a tool executes. After a tool completes. When the agent finishes its turn. They're deterministic — they always fire. The agent doesn't choose whether to run them.
The classic example is a PostToolUse hook that matches Write or Edit and runs your formatter. Every file the agent touches gets auto-formatted. Guaranteed. No relying on the agent to remember your Prettier config.
The more powerful use case is PreToolUse hooks for security. A PreToolUse hook can inspect a bash command before it runs and block it. And here's the critical part — a deny decision from a PreToolUse hook cannot be overridden, even in bypass-permissions mode. This is how you enforce hard policy.
Claude Code gives you four handler types: shell commands, HTTP endpoints, LLM prompt evaluation, and full agent-based hooks. OpenCode uses a JavaScript plugin system that subscribes to the same lifecycle events. Different implementation, same concept.
Real Hook Configurations
Two practical examples here. The first is an auto-formatter. Every time the agent writes or edits a file, Prettier runs automatically. This is deterministic — it always happens, it doesn't depend on the LLM remembering to format. This is the single most popular hook I see teams adopt first.
The second is a safety gate. Before any bash command executes, a prompt hook asks a fast LLM: "is this command safe?" If the LLM says no, the command is blocked. This catches things like accidental rm -rf or commands that would modify production resources.
The matcher syntax is pipe-separated tool names. Write|Edit fires on either tool. You can also use the if field for finer matching — like only matching bash commands that start with git.
Four hook types: command runs a shell script, prompt asks an LLM, agent spawns a full subagent with tools, and http POSTs to a URL. Most teams start with command hooks and graduate to prompt hooks for safety gates.
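In settings.json form, those two examples might look roughly like this. The formatter command and the environment variable are assumptions — consult the hooks reference for your version before relying on them:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          { "type": "command", "command": "npx prettier --write \"$CLAUDE_FILE_PATHS\"" }
        ]
      }
    ],
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "prompt",
            "prompt": "Is this command safe to run in a development environment? Deny anything destructive or anything that could touch production resources."
          }
        ]
      }
    ]
  }
}
```

The matcher is the pipe-separated tool-name syntax from the slide; the first entry is a command hook, the second a prompt hook acting as the safety gate.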
Team Shortcuts
Custom commands are the simplest extension. Drop a markdown file in your commands directory, and it becomes a slash command. The filename is the command name — review.md becomes slash-review.
The real power is standardisation. Instead of every engineer writing their own review prompt, you define it once. Slash-review runs git diff against main, injects the actual diff into the prompt, and gives the agent specific criteria to evaluate — bugs, security issues, missing test coverage, performance concerns. Everyone on the team gets the same quality bar.
Same for onboarding. A slash-onboard command that explains your codebase architecture, key patterns, and dev setup. New team members get a consistent, always-up-to-date introduction from an agent that has actually read every file in the repo.
In Claude Code, commands have been merged into the skills system — your existing command files still work, but new ones get extra features if you use the skills format. In OpenCode, commands support argument placeholders so you can pass parameters — slash-test Button substitutes "Button" into the template.
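Here's what a review.md command file might contain — a sketch, with the criteria as placeholders for your team's actual quality bar:

```markdown
---
description: Review the current branch's diff against main.
---

Run `git diff main...HEAD` and review the changes. Evaluate specifically:

1. Bugs and unhandled edge cases
2. Security issues — injection, secrets committed to code, unsafe deserialisation
3. Missing test coverage for changed behaviour
4. Performance concerns on hot paths

Report findings grouped by file, with a severity level and a suggested fix for each.
```

One file, committed to git, and every engineer's slash-review applies the same standard.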
Connecting to the Outside World
So far everything we've covered is about the agent interacting with your codebase. MCP servers extend the agent's reach to external systems — your database, Slack, GitHub, Jira, internal APIs. The agent can query your database, post to Slack, create GitHub issues — whatever tools the MCP server exposes.
MCP supports local servers, which run as child processes on your machine, and remote servers over HTTP with optional OAuth. Both tools support MCP but configure it slightly differently — Claude Code uses a dot-mcp.json file, OpenCode puts it in the main config.
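A dot-mcp.json with one local and one remote server might look roughly like this. The package name and internal URL are examples, and the exact remote-transport fields vary by version, so verify against the MCP configuration docs:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_TOKEN": "${GITHUB_TOKEN}" }
    },
    "internal-api": {
      "type": "http",
      "url": "https://mcp.example.internal/v1"
    }
  }
}
```

The first runs as a child process on your machine; the second is reached over HTTP. Keeping the token in an environment variable, rather than the file, matters because this file gets committed.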
One warning about MCP servers — each one adds to your context window. The GitHub MCP server alone can consume thousands of tokens just in tool descriptions. Be selective. Only enable the servers your agent actually needs for the current project.
Custom tools in OpenCode deserve a mention. You can define TypeScript functions in the dot-opencode-slash-tools directory that the agent can call. Type-safe with Zod schemas. Claude Code achieves the same thing through MCP servers or plugins.
And plugins are the packaging layer that bundles everything — skills, hooks, agents, MCP servers — into a single distributable unit. Claude Code uses git-based marketplaces. OpenCode uses npm packages. Both support local development and testing.
Controlling Agent Behaviour
Modes and output styles control how the agent behaves without changing your permanent configuration. Think of them as runtime switches.
Both tools ship with Build mode, which has full tool access, and Plan mode, which is read-only — the agent can analyse and suggest but cannot modify files. OpenCode adds Review and Debug modes out of the box, and you can define custom modes as markdown files.
The practical tip here is to use Plan mode when you're exploring. When you want to understand a new codebase or evaluate architecture options, switch to Plan mode — you can ask aggressive, exploratory questions without worrying that the agent will accidentally change something. Switch back to Build when you're ready to implement.
Claude Code has output styles, which change how the agent communicates rather than what it can do. The Learning style is worth knowing about — it inserts TODO-human markers in code for the developer to fill in, turning the agent into a teaching tool. Consider this for onboarding junior developers.
Learning Across Sessions
Here's something that trips people up. Every session starts with a completely fresh context window. The agent has no memory of what you did yesterday. Memory is the feature that bridges that gap.
Claude Code has auto-memory built in. As the agent works, it automatically saves observations — commands it discovers work, patterns it notices, architectural insights it picks up. These accumulate in a MEMORY.md index file, and the first two hundred lines load at the start of every session.
Agent memory is the team-level version. When a subagent discovers something about your codebase, it can persist that knowledge in a shared directory that gets committed to git. Every team member's future sessions benefit from that discovery. There's also a local variant that's gitignored for personal preferences.
In OpenCode, persistent memory isn't built in — the community letta-memory plugin adds this capability. If your team is on OpenCode, I'd recommend evaluating that plugin early.
The practical takeaway: when you correct the agent about something, ask yourself whether that correction should be permanent. If you've corrected the same thing twice, it should go in your config — either as a rule, a CLAUDE.md update, or formalised from an auto-memory entry.
Agent Memory — How It Works
This is the mechanism that makes subagents genuinely useful over time. You add one line to your agent's frontmatter — memory: project, or local, or user — and the agent gets persistent storage.
Here's what happens under the hood. When the agent starts, Claude Code loads the first two hundred lines of its MEMORY.md file into context. Read, Write, and Edit tools are automatically enabled for the memory directory. The agent decides what's worth remembering — patterns it discovers, solutions to problems, architectural insights — and writes them to its memory files.
Three scopes. Project memory lives in .claude/agent-memory/ and gets committed to git. This is the most powerful option — when your code reviewer learns something about your codebase, every team member's future reviews benefit. Local memory is gitignored, for personal notes. User memory is global across all projects on your machine.
The key insight: you don't write the memories. The agent does. Your job is to choose the right scope and let the agent accumulate knowledge naturally. Over weeks and months, a subagent with project memory becomes increasingly valuable — it knows your codebase's quirks, your team's patterns, your common pitfalls.
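In frontmatter terms, the whole feature really is one line. A hypothetical reviewer agent with project-scoped memory might be declared like this:

```markdown
---
name: code-reviewer
description: Reviews diffs for bugs, style issues, and security concerns.
tools: Read, Glob, Grep
memory: project
---
```

Swap project for local or user to change the scope; everything else about accumulating and loading memories happens automatically.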
The Five-Stage Journey
Let me give you the full adoption lifecycle in five stages. Stage one is Bootstrap. Run slash-init. It takes two minutes. It generates your first CLAUDE.md or AGENTS.md from your actual codebase. Commit it immediately. You're already ahead of most teams.
Stage two is Configure. This is your first week. Set permissions. Deny access to sensitive files. Add rules for your code style and testing conventions. Define your first skill — PR review is the obvious starting point.
Stage three is Integrate. Connect MCP servers for the services your team uses — your database, GitHub, Slack. Add custom tools for internal APIs. Install community plugins that solve problems you've already identified.
Stage four is Iterate, and this is the stage that never ends. Use the agent every day. When it makes a mistake, don't just correct it — encode that correction into your configuration. Every correction is a configuration improvement waiting to happen. Auto-memory captures some of these automatically, but the important ones should be formalised into rules or skills.
Stage five is Scale. Commit your configuration to git. Onboard the rest of the team. Add managed settings for organisation-wide policies. At this point, your agent configuration is living infrastructure — it improves with every sprint, just like your CI pipeline.
The key message here is that this is not a one-time setup. The best configurations I've seen evolve daily. Treat it like your CI pipeline — living infrastructure that gets better every week.
Choosing Your Tool
I won't read every row of this table — it's here as a reference you can come back to. But let me highlight the key differences.
Claude Code has built-in rules with path scoping, built-in auto-memory, and a richer hook system with four different handler types — shell commands, HTTP, LLM prompts, and agent-based hooks. It's deeply integrated with Anthropic's models.
OpenCode has a dedicated custom tools directory, built-in modes and themes, and supports any LLM provider — Anthropic, OpenAI, local models, whatever you want. Its plugin system uses JavaScript and TypeScript modules, which gives you more programmatic control.
The shared ground is actually very large. Both support the Agent Skills standard. Both have agents and subagents. Both support MCP. Both have custom commands. Skills you write for one tool work in the other.
My recommendation: if your organisation is standardised on Anthropic models, Claude Code offers the tightest integration. If you need multi-provider flexibility or want to use open-source models, OpenCode gives you that freedom. Either way, the configuration concepts are transferable.
What the Best Teams Do
Six practices that separate teams who get real value from teams who are still fighting with their agent.
First, commit your config to git. This is the single highest-leverage thing you can do. When one engineer figures out the right test command, discovers a gotcha, or writes a skill that saves time — the entire team benefits on their next git pull.
Second, start with the workflow you repeat most. Don't try to build the perfect configuration upfront. Pick one thing. PR review is usually the best starting point because every engineer does it every day. Get that working, iterate on it, then expand.
Third, keep instructions specific and concise. Every line in CLAUDE.md consumes context tokens. Write what the agent can't learn from reading your code. If it's enforced by your linter, don't repeat it in your agent config.
Fourth, iterate on every correction. Every time you correct the agent, that's a signal. If you've corrected the same thing twice, it needs to go in the config. Period.
Fifth, use deny rules for safety. Block .env files, secrets directories, credential stores. Deny rules can't be overridden, even in bypass mode. They're your safety net.
And sixth, scope permissions tightly. Start restrictive and loosen over time. It's much easier to grant new access than to recover from a misconfigured agent that had too much access.
Your Action Plan for This Week
I want to make this very concrete. Five steps, each one under fifteen minutes. You can do all of these by end of day Friday.
Step one: open a terminal in your project directory and run slash-init. It takes two minutes. Review the generated file, make any obvious corrections, and commit it. That's it. You're configured.
Step two: open that file and make sure your build, test, and lint commands are correct. This single edit eliminates the majority of "wrong command" errors. Five minutes.
Step three: add deny rules for your sensitive files. .env, secrets directory, anything with credentials. Five minutes, and it prevents the most common security concern people raise about these tools.
Step four: create one team skill. I suggest PR review. Write a markdown file that describes how your team reviews PRs — what to check, what standards to apply, what to flag. Put it in your skills directory. Commit it. Now everyone on the team has the same review workflow.
Step five: use the agent on a real task. Not a toy project. Not a hello-world. Real work. You'll immediately discover where your configuration has gaps — and those gaps become your next improvement. That's the iterate stage kicking in.
I'm happy to pair with anyone who wants help getting their first configuration set up. Hands-on pairing is the fastest way to build confidence with these tools.
Closing
Here are the links to the official documentation for both tools. Bookmark them. Both have excellent docs with interactive explorers and real examples.
If you have questions after today, drop them in our agentic-engineering channel. Share your configurations there too — when someone writes a good skill or discovers a useful hook, the whole team should know about it.
I'll leave you with this. Your coding agent is only as good as the context you give it. Five minutes of configuration today saves hours of correction tomorrow. And the best time to start was when you first installed the tool. The second best time is right now.
Step one is slash-init. It takes two minutes. Go do it.
I'll take questions now.