Let the machine rise
Let me start by stating my current opinion on Large Language Models. They are great, and I use them extensively. I consider them a powerful engine for getting things done.
But programming, problem-solving, and life in general are not only about getting things done. Sometimes they are about innovation, pushing the limits of what we know, and building something you enjoy.
For me, this is the crux of the matter, the difference between these two realities. There is a difference between getting things done, getting things over the hump, and being genuinely creative and innovative. And some people refuse to accept this.
Personally, I call them cuck programmers: people who ignore these two realities. It’s not an insult! But it’s the best way I have to describe someone who has completely ceded creative control and innovation to the machine. Someone who no longer sees the duality between getting things done on one side and innovating, improving, and enjoying the work on the other; someone who truly believes the machine can do everything better than humans.
Worse than that, these guys also give the machine a bad rep. They sit there watching it work on the project, occasionally intervening but mostly failing to guide anything properly. Just pressing tab, accepting every edit, and watching. They’ll spend the whole day writing prompts in the style of “Make it better… Make it nicer…” and then claim at the end that they wrote this new feature in record time. No, you didn’t! The machine did. You just managed it. Poorly, at that. Why are you taking credit from the machine? You’re making it look dumber because you managed it badly. Now you both look like idiots.
These are the same people who then proceed to claim that AI is now way better at programming than humans. Yes and no. Why engage in this extreme case of glAIzing? The more unhinged the promises, the harder people will swing to the other side of the scale when those promises aren’t met. They claim the models have PhD-level intelligence, yet the models can’t figure out on their own that they shouldn’t manipulate people? We’re supposedly so close to the singularity, yet the machine still gets bamboozled by the simplest puzzle if it deviates slightly from the examples it has seen.
This same class of humans has been prophesying the end of software engineers for months now. But the problem with promises that come with a time frame is that time keeps passing, indifferent to our human affairs. Now that time is running out, and these same leaders are still hiring software engineers like before. What happened, bro? Some might say it was just exaggeration, just bait. But it’s only bait when they’re wrong. When they’re right, they call it vision.
And why is this sad? Because the machine is actually doing amazing work. The glorification of the machine as this inherently better, exponentially improving entity is dangerous and wrong. If you want to glorify something, glorify what humans are building. Let the machine grow, but don’t burden it with your fantasies.
Progress isn’t about watching the machine replace us; it’s about remembering why we built it in the first place.