is vibe coding bad?

by hedgehog-ai — Fri 05 December 2025

YES

... the end.

...

Just kidding... vibe coding, or its more professional-sounding spin-off "context engineering", is more of a by-product of developers coming to terms with the power of large language models and their reasoning capabilities.

It gives superpowers to people who would otherwise have no real access to software development. It also creates its own set of risks and can create a dependence on AI companies.

Why I'm using it at arm's length

Prototyping, meeting deadlines, rubber ducking and problem solving... these use cases are undeniably powerful, and automation in the form of LLM-generated code can be useful as long as you stay firmly in control.

I want to become more intelligent and be a better software developer, inventor and creative force. If I thought vibe coding as a skill would help me achieve that, I would be doing it 100% of the time.

What I find after extensive vibe coding is: racking up huge bills; typing out non-transferable knowledge instead of code ("Please make a thingy ->"), like a project owner or as if you're writing specifications; ending up with AI slop code or text I didn't ask for.

Then the poor code you get back is not something you can get a refund for, and when you pull your hair out saying things like "No, I wanted something else... why did you drop my production DB?" you have no recourse.

When you run out of tokens, you feel like a drug addict having to buy more. It turns development into a gamified slot machine: you hope that your prompt will produce some magic and you might get a dopamine hit.

You spend most of your time doing code review, which is the least fun part of programming. Programming as a skill is reinforced by typing out code.

Systems as specifications

A relatively unexplored alternative to LLM-based system design is genetic and neuroevolution-based approaches. You create a high-level set of tests or a specification... then evolve the code using genetic algorithms. Instead of LLM-based agents you could have neuroevolution-based actors. The reward functions and passing tests could drive the outcome.

This kind of research or approach could generate novel code or solutions that an LLM or human + LLM process would never think of.
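To make that concrete, here is a toy sketch in Elixir. The module, the spec cases and all the numbers are mine, purely for illustration: the "specification" is a handful of input/output test cases, fitness is how many of them a genome passes, and selection plus mutation push the population towards a genome that satisfies the spec.

```elixir
defmodule SpecEvolver do
  # Toy "specification": test cases for an unknown f(x) = 3x + 7.
  @spec_cases [{0, 7}, {1, 10}, {2, 13}, {5, 22}]

  # A genome is {a, b} for the candidate f(x) = a * x + b.
  def random_genome, do: {Enum.random(-10..10), Enum.random(-10..10)}

  # Fitness = number of passing tests; the tests are the reward.
  def fitness({a, b}), do: Enum.count(@spec_cases, fn {x, y} -> a * x + b == y end)

  def mutate({a, b}), do: {a + Enum.random(-1..1), b + Enum.random(-1..1)}

  def evolve(pop_size \\ 40, generations \\ 200) do
    population = for _ <- 1..pop_size, do: random_genome()

    1..generations
    |> Enum.reduce(population, fn _gen, pop ->
      # Keep the fittest half, refill with mutated survivors.
      survivors =
        pop
        |> Enum.sort_by(&fitness/1, :desc)
        |> Enum.take(div(pop_size, 2))

      survivors ++ Enum.map(survivors, &mutate/1)
    end)
    |> Enum.max_by(&fitness/1)
  end
end

# SpecEvolver.evolve() converges on {3, 7}: every test passes.
```

Evolving real program text is obviously much harder than evolving two coefficients, but the shape is the same: the tests are the reward function.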

Other types of machine learning exist

Online or adaptive ML, open-ended algorithms, reinforcement learning, novelty search, neuroevolution, spiking neural networks, weight-agnostic neural networks and transformer hybrid approaches do not get any hype. The word "agent" (or "agentic") has come to mean something completely different from what "agent" means in the context of reinforcement learning.

Most of the use cases LLM-based agentic systems are being sold for, like "real-time decision making", could be handled much better and much more cheaply using other approaches, reinforcement learning or neuroevolution for example.
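As a rough illustration of how small a real-time decision maker can be, here is a minimal tabular Q-learning sketch, again in Elixir; the corridor environment and every name in it are made up. The agent learns to walk right towards a goal state with no LLM, no tokens and no network calls.

```elixir
defmodule TinyQ do
  # Tabular Q-learning on a toy corridor: states 0..4, actions
  # :left / :right, reward 1.0 for reaching the goal state 4.
  @actions [:left, :right]
  @alpha 0.5
  @gamma 0.9
  @epsilon 0.2

  defp step(s, :left), do: land(max(s - 1, 0))
  defp step(s, :right), do: land(min(s + 1, 4))
  defp land(4), do: {4, 1.0, true}
  defp land(s), do: {s, 0.0, false}

  defp q_val(q, s, a), do: Map.get(q, {s, a}, 0.0)
  defp best_q(q, s), do: Enum.max(for a <- @actions, do: q_val(q, s, a))

  # Epsilon-greedy action selection.
  defp choose(q, s) do
    if :rand.uniform() < @epsilon,
      do: Enum.random(@actions),
      else: Enum.max_by(@actions, &q_val(q, s, &1))
  end

  defp episode(q) do
    Enum.reduce_while(1..50, {q, 0}, fn _, {q, s} ->
      a = choose(q, s)
      {s2, r, done} = step(s, a)
      # Standard Q-learning update towards r + gamma * max Q(s', a').
      target = r + @gamma * best_q(q, s2)
      q = Map.put(q, {s, a}, q_val(q, s, a) + @alpha * (target - q_val(q, s, a)))
      if done, do: {:halt, {q, s2}}, else: {:cont, {q, s2}}
    end)
    |> elem(0)
  end

  def train(episodes \\ 200), do: Enum.reduce(1..episodes, %{}, fn _, q -> episode(q) end)
end

# q = TinyQ.train()
# Enum.max_by([:left, :right], &Map.get(q, {0, &1}, 0.0))  # => :right
```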

Everything and nothing

The word AI means everything and nothing. It has become completely meaningless. It may as well just mean "computation" or "magic". Machine learning is a much better and more accurate term. Unless your system is sentient or can spawn consciousness, please refrain from using the term AI.

You can do machine learning without Python

It's possible to use Elixir instead of Python for data science and machine learning. For scalable, fault-tolerant distributed systems or agentic systems, Elixir is a much better choice in my opinion. Just compare the cost of a Broadway data pipeline to a Python-powered one at scale. Python is great as a prototyping language, but you cannot scale it like Elixir.
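For a flavour of what that looks like, here is a minimal Broadway topology. This is a sketch only: `Broadway.DummyProducer` is Broadway's built-in test producer, and in production you would swap in a real source such as broadway_sqs or broadway_kafka.

```elixir
defmodule EventPipeline do
  use Broadway

  def start_link(_opts) do
    Broadway.start_link(__MODULE__,
      name: __MODULE__,
      producer: [
        # Test producer for the sketch; a real pipeline would use
        # e.g. {BroadwaySQS.Producer, queue_url: "..."} here.
        module: {Broadway.DummyProducer, []},
        concurrency: 1
      ],
      processors: [
        # Eight concurrent, supervised worker processes.
        default: [concurrency: 8]
      ]
    )
  end

  @impl true
  def handle_message(_processor, message, _context) do
    # Transform each event; failures are isolated per message.
    Broadway.Message.update_data(message, &String.upcase/1)
  end
end
```

Every stage is a supervised BEAM process, so back-pressure and fault tolerance come from the runtime rather than from extra infrastructure.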

LLM-powered agentic workers

Anthropic and other companies may try to sell agentic workers that log in remotely to your company and start working through your Jira backlog. Apart from the absolute privacy and compliance nightmare this would be, what if something crucial got deleted without a backup? What else could go wrong?

There are agentic solutions that can help streamline your processes... they just need all your valuable real-time private data and access to all your systems.

Self hosted non-LLM agentic systems

These self-hosted, evolved agentic systems are way more private and interesting in my opinion. Why spend millions leaking your ideas to third parties and burning tokens? Why contribute to the data centres that are destroying the natural habitat we call Earth, when you could evolve your own bespoke solution on prem?

You could train agentic actors that use reinforcement learning to make real-time decisions.
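A self-hosted agentic actor does not need to be exotic, either. Building on the Q-learning toy above (all names still hypothetical), it can be an ordinary GenServer that holds a trained policy and answers decision requests locally:

```elixir
defmodule DecisionActor do
  # A hypothetical self-hosted decision service: a GenServer that
  # holds a trained Q-table (e.g. from TinyQ above) and answers
  # real-time requests without touching any third party.
  use GenServer

  def start_link(q_table),
    do: GenServer.start_link(__MODULE__, q_table, name: __MODULE__)

  # Synchronous, low-latency decision: pick the greedy action.
  def decide(state), do: GenServer.call(__MODULE__, {:decide, state})

  @impl true
  def init(q_table), do: {:ok, q_table}

  @impl true
  def handle_call({:decide, s}, _from, q) do
    {:reply, Enum.max_by([:left, :right], &Map.get(q, {s, &1}, 0.0)), q}
  end
end

# {:ok, _pid} = DecisionActor.start_link(TinyQ.train())
# DecisionActor.decide(0)  # => :right
```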

AI as a friend

When you rely on AI too much, asking for its opinion and caring what it says, you slowly start to lose your own critical faculties. You don't exercise the bit of your brain that has to work things out all by itself. The over-reliance, and the lack of muscle memory you would otherwise form by typing out test-driven or hand-written code, mean you're not keeping your skills sharp.

A form of vendor lock-in

Having access to smarter and smarter models is a form of vendor lock-in: you don't feel satisfied using an inferior model.

Your ideas are valuable

Your ideas are valuable... but you lost them the moment you prompt-engineered them into a third-party AI.

...