6 Feb 2026
To be honest, I’m not scared of AI taking away what I do. I see it as a tool (however inefficient it is, whether by depleting the planet’s resources or by stealing others’ copyrighted material) that abstracts away a lot of things. It’s honestly useful sometimes for getting up to speed on how to do things or quickly seeing what a feature would look like! But maintenance and critical infrastructure will always need human approval. Would anyone like AI in their pacemaker? Talk about giving someone a heart attack! :P
In fact, I think AI slop highlights the waste we already endure!
Let’s take movies, for example. When Avatar (the blue people movie) came out, it was highly rated because of the graphics. But if you were to watch it now, the graphics are nothing new, and you’d actually be focusing on how sparse the plot is. Same with reading emails. How many times have you read something long-winded and thought to yourself, “Just get to the point already!”? In art, I see AI being used to get the pretty visuals out of the way so humans can focus on the more meaningful storyline. And in text, I expect more people to respond concisely, dealing the death blow to the verbose, flowery prose that texting and social media had already begun to sideline.
It’s the same with software engineering. What is programming exactly? It’s the marriage between mathematics and language. That’s it. AI (specifically LLMs) understands the language part of programming really well but struggles with mathematics. And this is due to the way AI is structured: it’s a statistical machine that “guesses” what is likely to come next. Someone said it’s like “keyboard autocorrect on steroids” and I’d be inclined to agree.
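To make the “autocorrect on steroids” intuition concrete, here’s a toy sketch of my own (real LLMs are neural networks over subword tokens, not word-pair counts, but the core idea of guessing the likely next token is the same):

```python
from collections import Counter, defaultdict

# Toy "autocorrect on steroids": count which word follows which,
# then always guess the statistically most frequent successor.

def train(text):
    words = text.split()
    successors = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        successors[current][following] += 1
    return successors

def guess_next(successors, word):
    if word not in successors:
        return None  # never seen this word mid-sentence
    # Pick the most likely continuation seen in training.
    return successors[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train(corpus)
print(guess_next(model, "the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

No understanding of cats or mats is involved; the model just replays statistics, which is why the language side of programming comes easily and the math side doesn’t.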
Since we’ve defined programming as the marriage between language and mathematics, I can see a future where AI slowly replaces the language part. Here are the stages I can see AI going through:
- Happening right now: Mainly used by engineers to scaffold big features and iterate quickly. Humans manually verify and optimize.
- Short term: Test-driven development. Specifications will turn into test cases, and AI basically codes against them until the tests pass. Humans will still have to manually optimize and verify.
- Medium term: Runtime issues will be pervasive under test-driven development, so there will be a shift to type- and memory-safe languages like Rust, which sidestep the pointer bugs introduced by languages like C and C++ as well as the slowness of languages such as Python. Humans will still have to manually optimize and verify.
- Long term: Humans will want to start minimizing the time they spend optimizing and verifying (e.g. [1]). One solution could be that AI generates mathematical functions to build the application you specify (similar to something like Lisp?), and then you formally verify that those functions are correct by doing proofs on them. Basically, we’re back in the realm of mathematics. Maybe by then AI gets good at math? I’ve briefly heard of AI systems discovering new physics and mathematical results today, but haven’t heard much about them since. Who knows.
- ? term: Companies start partly or fully open-sourcing their codebases. Not only does this make sense because AI can train on the codebase to offer better suggestions, but other companies will also start to rely on your codebase (meaning more business customers). The tradeoff is that if people don’t like your direction, they can simply fork your project. More people using your codebase means more maintainers that you essentially get for free. Why vibe-code something inaccurately multiple times when you can use a library that everyone shares and that does things correctly once?
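The short-term, test-first stage above can be sketched in a few lines (the `slugify` function and its spec are hypothetical names of mine, just for illustration): the human writes the specification as tests, and whatever produces the implementation, AI or human, merely has to make them pass.

```python
import re

# Spec-as-tests: the human encodes the requirement as test cases first.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Already--clean  ") == "already-clean"
    assert slugify("") == ""

# A candidate implementation (could be AI-generated) gated by the tests:
# lowercase, collapse any run of non-alphanumerics into one hyphen,
# and strip hyphens at the edges.
def slugify(title):
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

test_slugify()  # raises AssertionError if the implementation misses the spec
print("all spec tests passed")
```

The tests are the contract; the human still has to read the implementation to verify and optimize it, which is exactly the cost the later stages try to reduce.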
I think companies will eventually realize that coding isn’t simply about the number of lines you write, but about the knowledge you gain from maintaining the codebase and interfacing with customers about their needs. In other words: trust.
So if AI is an abstraction and there is a lot of work still left to do (both to get AI to a viable state and to build the features that customers are requesting), why are there so many layoffs going on? Simple: it’s a long-running macroeconomic recession. Ask yourself the following questions:
- If I were running OpenAI and AI were truly a boon to the software industry, wouldn’t I want to keep it to myself and develop software to outcompete my competitors?
- If I were running Amazon, Apple, Google, etc. and AI were truly a boon to the software industry, wouldn’t I want to keep the software engineers I have and instead enhance them with AI to outcompete my competitors?
The fact that OpenAI didn’t guard its most prized possession from others, and that technology companies aren’t pumping out useful features by the truckload, shows that something else everyone is missing is causing the slowdown in the global economy.
In fact, when it’s time for the AI software companies to pay back their dues, I think software companies will realize too late that they are too dependent on Claude, ChatGPT, and Gemini. Their codebases will be full of AI-generated code, and there will barely be any engineers who truly understand them. And then it will be a gold rush for hiring engineers once again (similar to 2020) to try and monkey-patch the codebase. In fact, there’s already a term for how these AI companies are operating: predatory pricing. Humans are the expensive resource that AI companies are angling to get rid of, but in the process they are losing everyone’s trust.
It doesn’t have to be this way. We can set up a system where every human is valued and create a future where most would like to live, not just the top few percent. Instead of depleting the Earth’s resources or harming other humans in a system that values greed, I believe the best way to do this is to create a worldwide social safety net: Worldwide UBI. I’ll hopefully go into this a bit in an upcoming post, but we’ll see when that comes out.
And on the point of consciousness: we’ll have a conscious AI once we can define what consciousness truly is. :P