Using LLMs for coding? Don’t offload thinking, offload typing

Large Language Models (LLMs) and code assistants like Claude Code or Codex can be huge productivity boosters.

If you use them the right way.

The idea of “vibe coding” an entire app sounds great, and sometimes it even works, up to a point.

Most of the time, it doesn’t.

Apps fall apart as soon as you ask for changes. Security holes creep in: leaked user emails, hard-coded API keys, even dropped production databases. I see enough of these stories on my tech feed on X to know the hype rarely matches reality.

I’ve been using code assistants for a while now. Claude Code is my favourite at the moment, and yes, it’s a massive productivity boost. It helps me skip tedious scaffolding, speed up long debug sessions, and sometimes spots mistakes I might have missed. Sometimes it even adds small improvements I hadn’t thought of.

But it can go wrong just as easily. Claude has written duplicated logic all over the place and produced overly complex solutions.

Code assistants like adding code. They almost never remove it.

Left unchecked, that can turn a clean codebase into a mess.

So the question isn’t whether the model is good enough.

The question is how you use it.

Prompting is really just advanced requirements engineering.

Using code assistants doesn’t mean you stop thinking.

It means you stop typing.

Here’s what I’ve learned:

1. Write clear requirements

Being able to write functional and non-functional requirements is a superpower. The clearer you are about behaviour, constraints, performance, security, and style, the better the output.
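
As an illustration, a prompt for a small feature might look like this (the endpoint, limits, and numbers are all made up):

```
Add a password-reset endpoint.

Functional requirements:
- POST /auth/reset-request accepts an email address and always returns
  200, whether or not an account exists.
- Reset tokens are single-use and expire after 30 minutes.

Non-functional requirements:
- Rate-limit the endpoint to 5 requests per hour per IP.
- Never log email addresses or tokens.
- Match the error-handling style of the existing auth routes.
```

Behaviour, constraints, security, and style are all spelled out, which leaves the assistant far less room to guess.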

2. Keep scopes small

Vibe-coding an entire app is fun for experiments, but not for real products. Break work into small, focused tasks that are easy to review.

3. Provide examples

I want my codebase to stay uniform. Linters help with formatting, but not with structure. When I ask for a feature, I point the assistant to existing examples, inside or outside the repo. It dramatically improves consistency.
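
In practice that can be as simple as naming a reference file in the prompt (the paths and names here are hypothetical):

```
Add a ProjectService for CRUD operations on projects.
Follow the structure of services/user_service.py: same error handling,
same repository pattern, same naming conventions.
```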

4. Require tests

I’ve always worked test-driven, and AI makes it even more important. Tests are your guardrails. Asking the assistant to write tests first is a great way to check if it got the requirements right.
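
Here is a minimal sketch of that in pytest; `slugify` and its rules are invented for the example, and the tests are written before any implementation exists:

```python
# test_slugify.py -- the spec, written before the implementation.
# `slugify` and its behaviour are invented for this example.
import pytest

from myapp.text import slugify  # doesn't exist yet; the assistant writes it


def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"


def test_drops_characters_outside_letters_and_digits():
    assert slugify("Rock & Roll!") == "rock-roll"


def test_rejects_empty_input():
    with pytest.raises(ValueError):
        slugify("")
```

If the generated implementation makes these pass untouched, it understood the requirements. If it tries to rewrite the tests instead, that’s a red flag.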

5. Work in parallel

When scopes are small and tests protect you, you can run several coding sessions at once, almost like managing a small team. Complex features still need more care, but simple scaffolding tasks work great in parallel.

6. Use dedicated agents

Claude Code lets you create specialised “agents”, which are basically just tailored system prompts. I have separate ones for things like writing tests or handling translations. Focused agents give much better results.
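
At the time of writing, Claude Code reads these from markdown files with YAML frontmatter under `.claude/agents/` in your project (or `~/.claude/agents/` for personal ones). A rough sketch; the name, tool list, and prompt are my own choices, not a recommended setup:

```markdown
---
name: test-writer
description: Writes pytest tests from requirements, before any implementation.
tools: Read, Grep, Glob, Write
---

You write tests and nothing else. Given a requirement, produce small,
focused pytest tests that encode it. Never modify application code, and
never weaken an existing assertion to make a test pass.
```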

7. Don’t be afraid to restart

Even with perfect instructions, an LLM can drift as the chat history grows. Big context windows don’t guarantee accuracy.

If a session goes off the rails, just start over: clear the branch, start a new session, and re-prompt.

8. Review everything

I never ship AI-generated code unchecked. I review everything: I make sure I understand what was written, that it matches my quality standards, and that it won’t cause problems later.


Bottom line

I never ask an LLM to do something I can’t do myself.

I ask it to do the things I don’t want to do anymore: repetitive scaffolding, boilerplate, long debug sessions.

I see it as autocomplete on steroids: it takes away the tedious work so I can spend my time designing, architecting, and solving the hard problems.

Use LLMs with intent, and they’ll speed you up.

Use them blindly, and they’ll create a mess faster than you can clean it up.
