Managing AI Coding Tools Is Just Managing Developers. That's the Point.
The discipline that makes AI coding tools work isn't new — it's the same engineering hygiene every team should already have. Here's what we've learned running Claude Code across a seven-person dev team.

Jason M. Lemkin recently made a distinction that stuck with me: the difference between an AI tool and an AI teammate. A tool helps you work faster. A teammate completes work autonomously with oversight. Most companies think they're building teammates. They're building tools.
He's right. But I'd add something from the trenches: if you actually treat AI like a teammate, you quickly realise it needs managing like one. Same discipline. Same hygiene. Same problems when you skip it.
At Eebz we have seven developers working with Claude Code daily. One of our lead devs, Callum Roberts, has been pushing the boundaries of what's possible, and what we've learned in the process has been genuinely surprising — not because the AI is magic, but because the challenges are so familiar.
claude.md is your new onboarding doc
If you're not using Claude Code, here's the setup: you can provide Claude with a context document called claude.md that tells it how to behave, what your codebase looks like, and what rules to follow. Think of it as the onboarding doc you give a new developer on day one.
This file is the single most important thing in our workflow. Get it right and Claude produces code that fits your codebase. Get it wrong — or skip it — and you're in for a rough time.
We run Claude in plan mode first, then review before it writes anything. That alone saves hours of cleanup. But the real leverage comes from how specific you are in that context document.
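To make that concrete, here's a minimal sketch of the kind of thing a claude.md can contain. The rules mirror the ones we describe in this article; the specific variable and component names are illustrative, not our actual file:

```markdown
# Project context

- Framework: Nuxt with BootstrapVue Next. Always use existing BootstrapVue Next
  components (e.g. BButton, BModal) rather than building new ones.
- Styling: use the named colour variables from our shared stylesheet
  (e.g. --brand-primary). Never hard-code hex values or write inline styles.
- Indentation: two spaces. Match the formatting of the surrounding files.

# Workflow

- Start in plan mode: outline the change and wait for approval before editing.
- Before creating a new component, search the codebase for an existing one
  that already does the job.
```

The point isn't the exact rules; it's that every rule you'd tell a new hire on day one goes in here, stated explicitly.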
The same old dev hygiene, but it matters more now
Here's what nobody tells you about AI coding tools: the fundamentals don't change. They just get louder.
Decided on two-space or three-space indents? Enforce it. Use named colour variables? Enforce that too. Have a component library? Tell Claude to use it explicitly: "use this button component, don't invent your own."
We use Nuxt and BootstrapVue Next as our framework, and being prescriptive about this has been critical. When Claude knows it should reach for an existing component, it does. When you leave it to figure things out on its own, it builds something new. And that something new will look slightly different, behave slightly differently, and cause you headaches three weeks later.
This is standard dev hygiene. It's the stuff every team should be doing anyway. But with AI, the cost of not doing it is amplified massively because Claude can produce code so fast that inconsistencies multiply before you catch them.
The vibe coding trap
This is probably our biggest learning so far, and I haven't seen many people talking about it honestly.
It is incredibly easy to vibe code something with Claude. You describe what you want, it builds it, it works, you ship it. Brilliant. Then five weeks later someone on the team vibe codes something similar — a different component that does roughly the same thing but with different font weights, different spacing, slightly different behaviour. Now you've got two components that should be one, and you're spending ten follow-up prompts telling Claude to match the font size here, use that colour there, align this with that.
It's a hell house. And it's entirely avoidable if you enforce the same discipline you'd expect from any developer: understand what already exists before you build something new.
Ironically, this is the exact problem mid-level developers have always had. They're fast at coding but don't spend the time understanding the existing codebase and component library. Their code goes off-piste, and you end up with a trail of inconsistencies. Claude has the same tendency — it just moves faster, so the mess accumulates quicker.
For this reason, we've so far had the best results avoiding the latest trend of inline CSS. It's too easy for Claude to redo everything from scratch when styles aren't abstracted into reusable classes. We strictly enforce named colour variables and tell Claude to use them. It's not glamorous, but it works.
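As a sketch of what that abstraction looks like in practice (the variable and class names here are invented for illustration, not our real ones):

```css
/* Named colour variables in one place, the single source of truth
   Claude is told to use. */
:root {
  --brand-primary: #1a6aff;
  --brand-surface: #f7f8fa;
}

/* A reusable class instead of per-component inline styles. When Claude is
   told to use classes like this, two "similar" components can't drift apart. */
.card-panel {
  background: var(--brand-surface);
  border: 1px solid var(--brand-primary);
}
```

With styles abstracted like this, "match the existing design" becomes a one-line instruction instead of ten follow-up prompts.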
The tricks that are actually working
We've found that how you describe Claude's role makes a real difference to the output.
When we're working on existing code, we frame it as: "As a senior developer who believes strongly in DRY code..." This primes Claude to look for existing patterns and reuse what's there rather than reinventing it.
When we want to innovate on something new — a fresh UI, a new feature with room for creativity — we frame it differently: "As an innovative front-end developer, inspired by websites like [specific examples]..." That gives Claude permission to be creative within boundaries.
We also lean heavily on our framework's documentation. Nuxt and BootstrapVue Next have solid docs, and pointing Claude at them is one of the most effective ways to keep it on track. Though we still have to explicitly say "use the existing xyz component" rather than letting Claude invent its own — that's probably the single biggest source of wasted time if you skip it.
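Put together, a typical prompt for work on existing code might look something like this (the feature and component are stand-ins, not a real task of ours):

```text
As a senior developer who believes strongly in DRY code, add a confirmation
step to the delete flow. Use the existing BModal component from BootstrapVue
Next (check the BootstrapVue Next docs for its props) and our named colour
variables for any styling. Plan first; don't write code until I approve.
```

Role framing, an explicit component, a pointer to the docs, and the plan-first rule, all in one prompt. Each piece on its own helps a little; together they're what keeps Claude inside the codebase's guardrails.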
The real challenge: context at scale
Here's where it gets genuinely hard.
The claude.md context document fills up too quickly. Even with the higher size limits for organisations, there's only so much you can fit. And sharing that context effectively across a seven-person team is a real operational challenge. Everyone needs to be working from the same playbook, but the playbook has a character limit.
We're still figuring this out. It's an evolving art — deciding what goes in claude.md, what gets handled through prompting conventions, and what you just have to enforce through code review. If anyone has cracked this at scale, I'd genuinely love to hear how.
The point Lemkin is making, from the inside
The tool-versus-teammate distinction is real. But here's what it looks like from inside a team that's actually living it: managing an AI teammate requires the same leadership as managing a human one. Set clear standards. Enforce consistency. Make sure they understand the existing codebase before they start building. Review their work.
The companies that will get the most out of AI coding tools aren't the ones with the best prompts. They're the ones with the best engineering discipline. The irony is that everything we've learned about managing Claude effectively is stuff we should have been doing all along.
This is the second in a series on building an AI-native product company. The first article, on how Claude Code transformed our development productivity, is also on the blog.