
AI has made writing code dramatically faster. That much is undeniable. But what I consistently see across teams building mobile and digital platforms is that the speed gain rarely translates directly into faster delivery. It often just shifts the bottleneck.
Code that once took days to write now appears in hours, only to queue up in code review or wait for testing. The coding phase accelerates; everything around it struggles to keep pace.
Chief AI Officer at Miquido.
AI researcher Andrej Karpathy described this shift as Software 3.0: instead of writing every line manually, teams now describe what they want the system to do and let AI produce large sections of the implementation.
In a recent interview, Karpathy revealed that by late 2024 his own working ratio had flipped, from writing roughly 80% of code himself to delegating 80% to agents. The new verb, he argues, is no longer “coding” but “manifesting”, expressing intent to systems that implement it.
The agentic era is here, as tools like Claude Code, released in May 2025, and OpenAI’s Codex agent, released in October 2025, have moved far beyond autocomplete. They can now autonomously plan, write, and debug entire features.
The initial phase of any project feels almost frictionless. You can go from a vague idea to a working proof of concept in a single afternoon. However, complications appear once that initial version has to fit into the actual product.
The new code still needs to work with existing services, handle real user traffic, and stay reliable as the rest of the platform evolves. Faster generation doesn’t remove these steps. It moves them downstream, and concentrates them.
Engineering teams end up spending more time reviewing, integrating, and stabilizing output that was produced quickly but without full visibility into the wider system. Code review queues grow. Test suites have to work harder.
Features that look complete in isolation reveal subtle inconsistencies only once everything is connected and running under realistic load.
There is a subtler challenge that rarely gets discussed: what developers actually do while they wait. Working with an agent means delegating a task and then sitting with downtime.
The developers who use that time well, by preparing the next prompt, spinning up a parallel agent on another part of the system, or reviewing architecture, see compounding gains. Those who don't can lose their deep-work rhythm entirely. In practice, many developers also gravitate toward AI tools to reduce effort rather than multiply output.
Individual efficiency rises, but team delivery velocity doesn’t always follow. I see this regularly in our own teams and in the organizations we work with. Managing this gap requires active project leadership, clear expectations, and a genuine shift in developer mindset. The tools are only half the story.
When the whole process catches up
The teams that do achieve real delivery acceleration across the full cycle, not just the coding phase, have redesigned how they work, not just which tools they use. Three things make the difference. First, upfront architecture investment.
Agents produce far better output when given clear structural constraints. Investing serious time in system design before prompting pays back many times over in review and integration effort saved.
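One way to make that architecture investment concrete is to encode structural constraints as data and render them into every agent prompt, so the guardrails travel with each task rather than living in someone's head. The sketch below is illustrative: the class name, fields, and constraint wording are assumptions, not any specific tool's API.

```python
from dataclasses import dataclass, field

@dataclass
class ArchitectureConstraints:
    """Structural guardrails handed to a coding agent before any task prompt.

    All field names here are illustrative, not a real tool's schema.
    """
    layers: list = field(default_factory=list)         # allowed layers, outer to inner
    forbidden_imports: list = field(default_factory=list)
    max_public_symbols: int = 10                       # exported symbols per module

    def to_prompt_preamble(self) -> str:
        # Render the constraints as plain instructions the agent sees first.
        lines = ["Follow these structural constraints exactly:"]
        lines.append("- Layer order (outer to inner): " + " -> ".join(self.layers))
        for pkg in self.forbidden_imports:
            lines.append(f"- Never import {pkg} directly; go through the adapter layer.")
        lines.append(f"- Expose at most {self.max_public_symbols} public symbols per module.")
        return "\n".join(lines)

constraints = ArchitectureConstraints(
    layers=["api", "service", "repository"],
    forbidden_imports=["requests", "sqlalchemy"],
)
print(constraints.to_prompt_preamble())
```

The point is less the specific fields than the discipline: constraints written once, versioned with the codebase, and prepended to every prompt instead of restated ad hoc.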
Second, agents checking agents. This means using dedicated review agents to check generated code for security vulnerabilities, architectural consistency, and compliance with your quality standards. These agents catch issues early before they move further down the pipeline.
It also includes test generation agents that create tests from tester-written specifications and run them continuously. On large projects, regression testing that once took weeks of manual effort now runs in a fraction of the time.
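The "agents checking agents" pattern is easiest to see as a gate in the pipeline: generated code is not merged until every review agent returns empty findings. In the minimal sketch below, each checker function stands in for a dedicated review agent; in practice these would be LLM calls with security or architecture prompts, but the stand-in regex checks make the pipeline shape runnable. All names and checks here are hypothetical.

```python
import re

def security_check(code: str) -> list:
    """Stand-in for a security review agent."""
    findings = []
    if re.search(r"\beval\(", code):
        findings.append("security: eval() on dynamic input")
    if re.search(r"(api_key|password)\s*=\s*['\"]", code, re.IGNORECASE):
        findings.append("security: hard-coded credential")
    return findings

def consistency_check(code: str) -> list:
    """Stand-in for an architectural-consistency review agent."""
    return ["consistency: raw SQL outside repository layer"] if "SELECT " in code else []

def review_gate(generated_code: str, checkers) -> tuple:
    """Run every review agent; block the merge if any finding comes back."""
    findings = [f for check in checkers for f in check(generated_code)]
    return (len(findings) == 0, findings)

ok, findings = review_gate(
    'password = "hunter2"\nresult = eval(user_input)',
    [security_check, consistency_check],
)
print(ok, findings)
```

Swapping the stub checkers for real review agents changes the cost of each call, but not the control flow: nothing reaches human review, let alone production, with open findings.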
Third, giving agents the right context and capabilities. An agent working from vague instructions will produce vague results. This starts with how requirements are written: well-structured product requirements documents that are precise and detailed enough for an agent to execute from, not just for humans to read and interpret.
It extends to connecting agents to the right sources of truth: your design system so UI output stays consistent, your project management tools so agents understand current requirements, your documentation so they are not working from guesswork. This is where institutional knowledge compounds into a durable edge.
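Wiring agents to those sources of truth often reduces to a context-assembly step: gather the relevant snippets from each system into one prompt payload, under a size budget. The sketch below is an assumption about shape, not a real integration; the source names, the budget, and the crude truncation are all placeholders for what a production setup would pull from its design system, ticket tracker, and docs.

```python
def assemble_context(task: str, sources: dict, budget_chars: int = 4000) -> str:
    """Concatenate source-of-truth snippets for an agent, under a rough size budget."""
    sections = [f"## Task\n{task}"]
    used = len(sections[0])
    for name, snippet in sources.items():
        block = f"## {name}\n{snippet}"
        if used + len(block) > budget_chars:
            break  # crude truncation; real systems would rank and summarize instead
        sections.append(block)
        used += len(block)
    return "\n\n".join(sections)

context = assemble_context(
    "Add a dark-mode toggle to the settings screen",
    {
        "Design system": "Use token color.surface.inverse; never hard-code hex values.",
        "Current ticket": "MOB-142: toggle must persist across sessions.",
        "Docs": "Settings are stored via the PreferencesService wrapper.",
    },
)
print(context)
```

Even this toy version shows why institutional knowledge compounds: the better the snippets each source can supply, the less the agent has to guess.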
Adoption looks different depending on context. Startups are leading the charge. With funding harder to secure than a few years ago, there is real pressure to show results fast, and early-stage teams can afford to move quickly without deep security or compliance constraints. Vibe coding a first version is now simply how startups operate.
Larger enterprises tend to move more cautiously because their systems are more complex, compliance requirements are tighter, and the risks to their reputation are much greater. Adoption is happening, but generated code goes through significantly more review before reaching production.
According to the JetBrains State of Developer Ecosystem 2025 survey, 85% of developers now regularly use AI coding tools, and 41% of all code written in 2025 was AI-generated. The tools are ubiquitous; the discipline around them is not.
The changing role of engineers
What is shifting most fundamentally is the nature of the engineering role itself. Developers are becoming system directors rather than implementers. The day-to-day work is now less about writing beautiful code and more about defining architecture, managing agent output, ensuring security, and thinking about scalability.
The weight has moved from writing to verifying and orchestrating. Karpathy puts it precisely: the bottleneck is no longer the keyboard. Strong engineers can now work effectively in languages they have never used before. The barriers between frontend and backend are dissolving.
Entire MVPs ship from teams of one or two people. A proof of concept that would once have taken weeks can be built in an afternoon and sent to a client the same day, something that genuinely changes competitive dynamics in pitches and early engagements.
The advantages are clearest where patterns are well-established: standard integrations, repeatable workflows, and routine business logic. The further you move from that territory into complex, long-lived systems with years of accumulated context, or into questions of security and scalability, the more human judgment remains essential.
Software 3.0 is real. The acceleration in the coding phase is genuine and significant.
But the teams extracting the most value are not the ones generating the most code. They are the ones who have rebuilt their processes around the new reality: investing in architecture up front, using agents to verify agents, giving agents the right context to work from, and managing the human dynamics of a fundamentally changed working day.
The bottleneck is no longer writing code. It is judgment about what to build, how to structure it, and whether what the agent produced actually belongs in a system that has to perform reliably under real conditions. That is what engineering discipline looks like in the Software 3.0 era.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro




