As someone who has used LLMs for coding since 2023, I've seen insane progress in these tools, to the point where they pose an existential threat to software engineering as an occupation.
AI-generated code is extremely cheap. And currently it is used to solve problems from a time when we didn't have extremely cheap AI-generated code. Some of those problems won't be relevant anymore; some approaches to monetization won't be relevant anymore. If a company doesn't have a moat, like network effects, market dominance, or heavy regulatory compliance, it's cooked. It feels like we are heading toward a market correction in the coming years.
New companies starting today don't need engineers in the numbers they did before. Founders think differently now: they might need product designers and AI-enabled engineers at a 1:1 ratio. Designers will create high-fidelity prototypes, and engineers will mostly productionize those prototypes (with the help of AI, of course). I believe the 1:10 ratio of previous times is over. On one hand this seems positive, as it reduces the risk of starting a new business, but on the other hand the competition will be severe, and many companies might just not get big enough to compete with the established giants.
The established giants will adopt AI coding assistance at an unprecedented pace. It will help them keep and widen their moats. Inertia will help preserve current jobs, but hiring will be drastically reviewed. In markets with weaker job protections, layoffs will continue. To hire new engineers, teams will have to prove that they need a new problem solver, a new AI orchestrator: that they are currently at capacity and spinning up another cloud AI agent will not help.
On top of that, we might not even need that much software. How many personalized fitness plan apps are out there? How many of them actually adapt to the user's needs (like traveling or recovering from sickness)? Could AI solve this better?
It might not offer the best interface or the best advice, but it is already good enough for many use cases. And critique will only serve as a direction for improvement for AI apps.
My assumption here is that AI tools do increase productivity in the medium to long run, rather than just being useful for flashy demos. Last year I heard a lot of skepticism on this point: companies running AI pilots reported productivity improvements of only 10% in the best case.
We'll see how it goes this year, as I've observed that tool quality has increased dramatically in practical cases (e.g., I don't care how good a model is at solving PhD-level math problems, but I do care how good it is at solving my problems, and it usually solves them quite well).
I've also seen how good local models can be: just running Mistral's 8B on my old MacBook Air M1 with 16 GB of RAM is quite comparable to the state of the art from a year ago. Other open-source and open-weights models are also available to run on your own hardware.
This year I've already seen a change in opinion from industry heavyweights like Linus Torvalds and DHH. And other notable people, such as Kent Beck, Martin Fowler, and the whole O'Reilly publishing house, are already on board. The question now is not whether AI tools are useful, but what impact they will have on the industry.
And it does not look good. The software industry is starting to resemble agriculture before the industrial revolution: we definitely need it and it will definitely grow, but we won't need that many people working in it. The problem is that this change is happening much faster, and people won't be able to adapt. I've been working as a software engineer for the past 17 years, but right now it feels like the industry in its current state has maybe just a couple of years left.