AI and the Primates That Forgot How to Make Fire
There’s no denying that AI is an incredibly disruptive technology. Disruptive. Or whatever fancy words marketers will use to make advancements sound cooler than they are by the time you are reading this.
I’m neither for nor against AI, but I think people are lazy, greedy and careless. Humans are not going to perish because AI rises up. We will perish because we became too dumb to survive.
At some point in time, cave dudes learnt how to make fire by banging sticks and rocks together and then they all had a good ol’ BBQ and cooked some woolly mammoth steaks on the grill. Good times. They taught those skills to the next generation, and the knowledge continued to be passed down through their descendants. At some point between then and now, this knowledge has become almost mythical. Some people genuinely believe that making a fire with two sticks is only possible in cartoons. While it may be true that it is quite difficult to generate enough heat from this process to guarantee a fire, it is still possible. It is more likely that cave dudes used flint and pyrite to generate sparks and then fire [1], or maybe they waited for natural fires to occur and then kept their fires burning as long as they could. Regardless of how they produced fire, being able to harness it was a massive event in human history.
If you were left out in the middle of nowhere right now and you wanted a fire, are you certain you could create one on your own? For those of us who aren’t Bear Grylls, some kind of survivalist, or blessed with magical pyromancy abilities, the answer is likely no, not without tools. Thankfully, we have many companies producing tools that make the process of creating fire a lot easier.
With advancements in technology comes the ability to give up the requirement to learn new skills. This is a brutal consequence of our fragile human nature. For something as simple as fire, the effects of this consequence are not so bad; we used it to create new tools and new materials, and in a way it was fundamental to any advancements made thereafter. What happens when we invent something that seemingly gives us the ability to give up the requirement to learn? As I said earlier, people are lazy and greedy.
AI is supposedly already responsible for 41% of the code on GitHub [2]. If that number is true (and I hope it is well above the real number), then how much of that 41% do you think people understood when they accepted it into their codebases? Even if the code the AI gave them compiles and makes the program work, how many people do you think questioned its logical correctness in the context of the application it was used in?
When it comes to software development, every problem has several possible solutions. A solution has three properties worth considering: correctness, optimality and conciseness.
- Correctness - whether the solution works or not. A simple yes or no for whether the solution actually solves the problem.
- Optimality - whether the solution is the most efficient one for the context of the problem being solved. For example, a brute-force search on an ordered list might be a correct solution, yet it is unlikely to be optimal (see the sketch after this list).
- Conciseness - whether the solution is written with the least amount of unnecessary code. We don’t expect someone to code-golf every solution, but we do expect that a solution doesn’t contain abstractions for the sake of abstraction, variable assignments that don’t reflect their actual lifespan (for example, don’t assign a property of a class to a variable if that variable is only used once), etc. Cross-cutting concerns like logging would not be counted in a solution’s measurement of conciseness; the cross-cutter would be examined separately.
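To make the optimality point concrete, here’s a rough sketch in Python (the function names and example list are just made up for illustration). Both functions correctly answer “is this value in the sorted list?”, but only the second one takes advantage of the ordering.

```python
from bisect import bisect_left


def contains_brute_force(sorted_values, target):
    # Correct, but ignores the ordering entirely: O(n) comparisons.
    for value in sorted_values:
        if value == target:
            return True
    return False


def contains_binary_search(sorted_values, target):
    # Also correct, and a better fit for a sorted list: O(log n) comparisons.
    index = bisect_left(sorted_values, target)
    return index < len(sorted_values) and sorted_values[index] == target


if __name__ == "__main__":
    values = [2, 3, 5, 8, 13, 21, 34]
    # Both give the same answers; only one exploits the fact that the list is sorted.
    assert contains_brute_force(values, 13) and contains_binary_search(values, 13)
    assert not contains_brute_force(values, 4) and not contains_binary_search(values, 4)
```

Both versions pass the same checks, so if you only ask “does it work?” you’d wave either one through; the difference only shows up when you consider the context the code runs in.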
AI coding assistants simply don’t get enough context to create solutions that meet all three requirements. The large language models are getting a lot better; however, they all lack the ability to introspect on the information they produce. They’re exactly like the shittiest book-smart junior programmers you can get, and they’re basically copy/pasting their way through helping you. They don’t produce new and creative solutions to problems; instead they pump out code that looks largely related to the task at hand, with limited contextual information substituted in.
Universities produce coding assistants through computer science degrees, and only some of those graduates go on to become good software developers. The distinction between a programmer and a software developer is that a programmer only writes code, while a software developer designs digital solutions and can also write code. AI is currently a cheap alternative to hiring a junior straight out of university.
It is incredibly annoying to search for software development articles on the internet now. Too many of them are starting to be written by AI, and it’s obvious when they are: they provide no references and the content is full of hallucinations describing things that don’t exist. They have that GPT-esque way of writing that always follows the same format.
The problem with training these models on data from the internet is the presumption that anything written there is correct and that the data scientists prepping the data can clean it properly. Eventually, the models may reinforce their own knowledge using the shit code they generated. If we all begin relying on an AI that is stuck in this trance, we may begin to lose the ability to think critically. This isn’t just going to affect software development; it will affect all areas.
I still think AI is quite cool and definitely worth exploring. I don’t think it will be all Terminators and death in the future. We just need to be cognizant of our human shortcomings and refuse to let ourselves be slaves to our own laziness. Always understand the code you write, the code you use, how you’re using AI and its limitations, and most importantly, continue to evolve your critical thinking skills.