Artificial Intelligence (AI) is currently sweeping over the software development industry. The creators of newly formed AI technology companies warn us that software developers will be a relic of the past within 6-12 months because their tools are so good at writing code that developers will no longer be needed. In this post I would like to explore the consequences of a world where code is no longer written by humans and the inevitable slowdown that will cause on software projects.

First, let's break down some assumptions. I think these are mainly factual assumptions, but they could change with future breakthroughs, although I don't foresee that happening.

  • AI produces code faster than humans
  • How we prompt AI, with natural language and especially English, is not precise

The way we produce code from Large Language Models (LLMs) is by prompting them with natural language. The first thing to note is that while the output comes faster than a human's, the code quality is not better than a human's. I think this is pretty obvious to anyone who has read AI-generated code. Humans can produce much more compact and understandable code than an AI. However, if humans are no longer reading code, this may not be a bad thing. Almost no one reads assembly code anymore, so if we can take humans out of the loop this may be a moot point. But if humans are still required to interact with code, this could be a problem.

The next thing to note is that since language is imprecise, a human must be in the loop for planning and directing an LLM to code. Many LLMs already have a planning feature to help design and architect solutions; however, this process will always involve a human because you can't go from a high-level prompt to a detailed plan without some sort of feedback loop between a human and an AI. This discovery process happens today and still must happen with AI; however, its importance changes since AI generates the code with far less feedback. AI will not complain about a complex algorithm or bad business logic. AI does what you prompt it to do, so the precision of your planning may now have a bigger impact on the generation of your code.

Pre-AI State of Software Development

Software development, pre-AI, I would classify as semi-professional. It is more like being a carpenter than an architect. You can learn software development on your own and be quite successful. You can also learn software development at a university and be successful. There are also many in-between methods to learn, like coding courses and bootcamps. This is why in development we have books titled "Learn Python in 24 hours!". While that is most certainly not realistic, it is closer to reality than a book titled "Learn to be an Architect in 24 hours!" or "Learn to be a Doctor in 24 hours!". Those titles are overtly absurd. However, you can learn Python or another language in 24 hours and start to do something useful. You will still need to learn more and practice, but you can start writing some simple software.

Software is also often built in a semi-professional manner. The most common practice of software development in this state is to build small, learn, and iterate. Eventually, with iteration and learning, you build something bigger and more professional. This allows the software to be released quickly and evolve over time into something better and more useful. In the terminology of the cathedral and the bazaar, I would argue that pre-AI the software industry built more bazaars than cathedrals. There are definitely plenty of cathedrals out there, but the industry by and large follows the approach of building in small iterations for small-to-medium-sized projects.

Professional Engineers

I think it is important to take a quick aside to talk about what I mean by "semi-professional". Different types of engineering can be broken into professional and non-professional. Professional engineers must be licensed by a professional organization and often cannot practice without that license. This is akin to a doctor, who cannot practice without a license. Professional engineers and doctors often require longer training and practical experience before they can be licensed. Non-professionals, by contrast, do not need a license to practice. For example, an architect or a civil engineer has to be licensed to practice; aerospace engineers, in contrast, generally do not need to be licensed. There is an aerospace professional organization, but most aerospace engineers do not need its license to work. During my career in aerospace, I never once met a professionally licensed aerospace engineer, and I have worked on projects such as the Space Shuttle and satellite operations. Note that there are sectors of aerospace, especially on the aeronautical side, where licensing is more common.

So while many people would consider an aerospace engineer more prestigious than a civil engineer, a civil engineer has more rigorous requirements to qualify to build roads than an aerospace engineer has to build rockets. Aerospace as an industry is still new and is still "figuring out" what an aerospace engineer should know to hold the title. Civil engineers, by contrast, have a long history and a generally well-defined area of expertise they must possess. With that said, no one is going to hire an aerospace engineer without some schooling. This is what I mean by semi-professional. In software engineering in the pre-AI era, you could get a development job by simply passing a coding interview test. Of course, to do that you would need schooling or practice, but no license is required.

I take this aside because, while I don't think coding will become professionalized in the AI era, I will show that it will move in that direction. More training and experience will be required to use AI to code if you wish to produce quality software.

The AI Era: Cottage Apps

Before I dive into the productivity trap, I would like to note that the AI era will produce an industry of small cottage apps. I think there will definitely be a wide variety of small niche apps made by experts in those areas. These experts will be able to sufficiently define the requirements of the application for AI and know what the output should be in order to successfully build useful apps. These apps may have loads of bugs and vulnerabilities, but they will be useful. Individual applications may not spread over more than a handful of computers, but they will increase productivity significantly for the individuals involved. However, the upfront expertise cost to create these applications successfully is high. I predict these applications will not change the economy as a whole much, though they might spawn a small cottage industry of niche app makers. In terms of productivity, this will seem like an exponential jump for the people involved, but the benefits of this productivity will not be felt widely.

The AI Productivity Trap

For quality software and good general engineering, you must be able to prove that the thing you're building can do the thing you said it can do. Much of this proof, pre-AI, is done while writing the code. If you have a simple and small enough amount of code, you can often reason about the code doing the right thing just by reading it. This becomes less obvious with AI-generated code. It can still occur, but it requires a higher degree of expertise to read and reason about the code. If a junior engineer writes code, though, they are usually writing code they understand by the mere act of doing it. Of course this isn't always the case; sometimes a developer will copy code from somewhere else and use it without understanding it. But with AI this is continually happening, while with hand-written code it only happens sometimes. Thus, I would argue the code generated by AI is going to be less understood and less well reasoned about than hand-written code.

Another mechanism of proof is the hands-on implementation of an algorithm or plan. Often while implementing an algorithm or plan, by doing the thing, I find the thing is too complex or doesn't really work the way I thought. Writing code and implementing the thing facilitate a learning process. Almost every technical plan I've developed in my life has had to be changed because of something I learned while building the thing. This is evident in the software industry as a whole in the fact that "agile" practices are widely adopted.

This is not to say that projects using AI coding cannot be agile and learn during the build-out. The trap is that because you are not doing these practices while writing the code, they get pushed out to the edges. With AI coding you now must spend more time planning if you need to control the algorithms or implementation tightly, and you must spend more time verifying the output. Granted, these practices are informal in hand-written code, so many times we add in large formal planning and verification efforts anyway. But because the informal practices occur, we can often have good-quality software that does not implement the formal practices immediately. A very common pattern is to have less testing and planning at the beginning of a software project and, as the project matures, to add these practices in. The trap with AI code will be the more immediate need for these practices.

Many AI coders boast of writing ten thousand lines of code in a day. I've written many projects under that line count and been able to produce quality code and output with very little formal testing and planning. If you are producing that much code daily, there is no way to maintain quality and security without very formalized testing and planning. The trap of AI-generated code is that you immediately fall into the pit of corporatization. Today, most software projects do not fall into the large-codebase category. With AI, you can reach that size in a couple of days, and now if you want to maintain quality, you must adopt slow corporate processes. Large processes aren't inherently bad, but they are inherently slow and not something we should have to adopt immediately.

The Planning Trap

Since English and natural language are inherently imprecise, and this is how we prompt AI to generate code, more time must be spent on planning with AI. Often in hand-coded projects, we can come up with some really ill-defined plans, start coding, learn along the way, and adjust. Especially with new features and new projects, the practice of coding helps me learn and figure out what needs to be done. This is why Windows wasn't decent until version 3.11: it's hard to plan out an operating system until you've written one a few times. With AI, since engineers will be writing less code, they will lose a big part of how we currently learn to create software.

Of course, there are other ways to learn about creating software. Software engineering will need to switch to more formal educational processes. With civil engineers, you can't really build your own road, so their education includes more formal certifications to make sure they know what they're doing. Civil engineering projects must also go through a more rigorous planning period before anything is built. Doctors must go through much more education and training before being able to practice on their own. The same thing must happen with software engineers in the AI era. Software engineers will have to spend more time studying software design patterns and architecture and produce better plans before ever generating a piece of code. We may already have this class of engineers in people who have been formally trained in computer science at school. But it may be that the self-taught and bootcamp class of developer declines, at least in terms of using AI to code.

The AI planning trap is that with AI-generated code, software projects will require more formal planning, which can only be done by more experienced developers. So while code generation will be fast, more formal planning processes can easily eat away any productivity gains in coding.

The Verification Trap

Since we will understand the details of AI code less, we will need more rigorous testing of AI-coded projects to verify output. AI code generation also produces more lines of code and thus more liability, which necessitates more testing and verification. Today we can manage with less testing because we understand our code more and produce fewer changes. Yes, this is informal, but especially with smaller projects it is highly effective in practice. The problem is that AI code generation pushes us quickly into large-project territory, where maintaining quality requires rigorous testing. More testing increases the time to iterate. Additionally, when errors occur, if the AI cannot debug them, human debugging will take more time and expertise. While verification may not be as big a trap as planning, the problem is that you are forced into formal verification faster. If you are really producing 10k lines of code a day, you will probably need formal testing within the first week of your project.

Summary

The problem I foresee with AI coding is that it destroys the "middle class" of development we have today. You can use it to code small niche projects very effectively by knowing exactly what the software should do and keeping the feature set tight. These applications will have ugly code, bugs, and security holes, but it won't matter much because they will not be widely distributed and will still be very useful. Once you start using AI to code larger projects, within a few days you will be pushed to do more detailed planning to control the AI and to implement a rigorous set of tests to verify its output. The skill set to plan well and test well will also require experienced or more formally trained engineers. All of these things slow down software development overall even though the productivity of the code generation increases. In the enjoyable "middle class" of coding we have today, we often learn during the process and can produce smaller, maintainable projects. The constraint of slower code generation forces us to cut features and algorithms that would be easily thrown into an AI project. These constraints make for better, higher-quality software.

Some may argue AI will produce better code in the future. Even if this is true, the planning trap will still be a problem because, at the end of the day, AI can't read our minds yet. The testing trap may be lessened if the generated code becomes shorter and more readable, allowing us to do more verification by code review. However, you still have the fact that a human didn't write the code, and if a human has to approve the code and put their reputation on the line, they will want more testing. So unless you don't care about the quality of your end product, which many software projects don't, your AI software project will most certainly fall into a productivity hole.