12 Apr 2026
Seriously, what’s the point of software engineering interviews?
You’re telling me that in order to be hired by a company, you 1) get a clear-cut problem and 2) solve it using your own brain without any external help at all. This would be a good test if you were expected to solve puzzles all day at work, but that’s not the entirety of software engineering at all! You are expected to collaborate, read specs, refresh on coding semantics, learn new paradigms, and solve the ticket (typically via AI/the Internet).
Give the same LeetCode-style problem to AI, and it’ll solve the problem faster than a human. So what exactly are we testing here? That you can monkey-code faster than AI? That you’re easily replaceable?
Give me a break. None of this reflects the reality of software engineering!
Imagining a Better Process
Now that AI has escaped Pandora’s Box, we have to ask ourselves what the point of software engineering is. Let’s zoom out a bit.
At the minimum, a Software Engineer is supposed to:
- Develop code
- Maintain the codebase
- Fix bugs
- Write tests
- Identify features to add
- Read/write specifications
- Interact with other engineers from different disciplines/companies
At the minimum, a Software Company is supposed to:
- Develop new features/products
- Increase share value
- Outcompete competitors
AI is already pretty good at developing code, fixing bugs, and writing tests. However, it’s not great at the human side of the job: understanding tradeoffs/risks (I’m not talking about just code!), deciding features to add/remove/change, interacting with others, or maintaining the codebase. Don’t believe me? Just look at Microslop.
In other words, you need software engineers to be long-term maintainers: someone who understands what belongs in the codebase, identifies risky changes, asks annoying (but necessary!) questions, pushes back when something is wrong, and treats tests as both a safety check and documentation. You can’t get that if you are constantly firing and hiring people.
Code generated by AI should aim to be as readable to regular humans as possible. Even if there are optimizations the AI could make right now, they would be looked down upon, because changes built on top of that clever code will be a pain to review later.
Don’t get me wrong, AI is a pretty great revolution. But it isn’t going to replace humans any time soon. In fact, companies should be scared, because local LLMs will help usher in new competition. The only reason layoffs are happening now is that we are in a blue-/white-collar recession. It’s not a recession for the rich… yet. Maybe once the recession comes for everyone, only then will companies try to hire as many people as possible again, to prevent competitors from having human resources. Just like in 2020.
So… what now?
That’s the key: humans will likely become reviewers of AI-generated code and will occasionally have to write code themselves. This won’t hold true across the board: just like there are civil engineers who do manual labor even when heavy machinery is involved, there will be instances where only human expertise will do.
So how do we test for these skills? Let’s back up; are we even asking the right question?
To me, it doesn’t seem like a test can accurately reflect these skills. If companies switch to prioritizing long-term maintainers, they should probably look for those qualities/work experience. E.g.
- Looking at (F)OSS contributions
- Running a short, paid one-month internship where candidates are assessed among their peers
The last bullet is especially relevant for those fresh in the field (not just college grads), as they will likely have few, if any, (F)OSS contributions.
There may be other indicators of long-term expertise that a company could look for, but these are the only two I can come up with. I would’ve also suggested looking at their contributions at other companies, but actually verifying those would be hard.
As you can see, this is probably not ideal for companies looking for a short and quick hire (though at this point they’ve probably switched to fully using AI anyway). Just like how you wouldn’t marry the first person who comes into your life, a company should also be careful about whom it hires. AI just highlights this even more now.
Hopefully this leads to changes in firing practices as well, e.g. ending the practice of firing the bottom 10% through no fault of their own. Having a cutthroat company like that is bad for both morale and business; why would employees care about the work they are doing? They would only be there because the external system (e.g. the social safety net) has failed them.
A company that seems to be doing exceptionally well during all this AI stuff is Valve. Maybe it’s because they don’t have to listen to their shareholders since there are none?
See, the less employees are treated like slaves and the more like humans, the better your company will be. But I’m getting ahead of myself. The whole world needs to create structural incentives for everyone to be treated like a human being.