Friday, July 19, 2024

We All Know AI Can't Code, Right?

If anyone is telling you that AI can code what you need coded and build what you need built, they are lying to you. This is not speculation. This is not bombast. This is not a threat. We know enough now about how AI works, and especially GenAI, to be able to say this with confidence. And I'm not just talking about knowledge gained over the last two years, but knowledge gained over the last two decades. I was there at the beginning. I know.

For a lot of you, I'm telling you something you already know as well. But your work here is far from over. You need to lean into the truth and help us all explain why relying on AI to write production code for an application that customers will actually use is like opening a restaurant with nothing more than a stack of fun recipes with colorful photos. They look great on paper, but paper doesn't taste very good.

The Boring Structural Work Matters

To put this into a perspective that everyone can understand, let me ask you a question:

Q: How would you know if this article was written by AI?

A: Because it would suck.

Yeah, maybe the bots could imitate my vibe, adopt my writing tics, and lean into the rule of threes as I often do, but even then, the jury is still out on how closely they can replicate my style beyond a sentence or two.

Banana. Screw you, AI.

The thing I'm 100 percent sure AI can't do is take my decades of experience in the topics I choose -- topics that need to be timely across an ever-changing technical and entrepreneurial landscape -- and use my snarky words and questionable turns of phrase to put insightful, actionable thoughts into the heads of the maximum number of people who would appreciate those thoughts.

That's structure. It's foundational. It's boring. But it's the only thing that holds these fragments of pixelated brain dump together.

Look, if you want to write about a technical or entrepreneurial topic, you either need to a) spend a lifetime doggedly nerding down those paths with real-world, real-life stakes and consequences, or b) read a bunch of articles written by people who have done just that and then summarize those articles as best you can without understanding half of what those people are actually talking about.

Which one sounds more like AI, a) or b)?

Now let's talk about how that relates to code, because hopefully you can already see the connection.

AI Is Not an Existential Threat

Real coders know. The threat that AI presents to your average software developer is not new.

Raise your hand if you've ever used GitHub or Stack Overflow or any other kind of example code or library or whatever to help you get started on the foundational solution to the business problem your code needs to solve.

Now, put your hand down if you've never once had to spend hours, sometimes days, tweaking and modifying that sample code a million times over to make it work the way you need it to work to solve your unique problem.

OK. All of you who put your hands down: get out of the room. Seriously. Go. We can't have a serious discussion about this.

Cheap, flawed, technical-debt-inducing, easily breakable code has been a threat to software developers since they first started letting us kids bang on BASIC -- let alone the threat of any technology solution that ends with the word "-shoring."

The AI threat just seems existential because of the constant repetition of a few exaggerated truths: that it's "free," that it's "original," and that it "works."

Here's why that's going to be a race to failure. Position yourself.
"AI" "Can" "Code" That's the most judgy, snarky, douchey section header I've ever written. But in my defense, there's a reason why every word is in quotes. Because this is how the lie propagates. Yes, what we're calling AI today makes an admirable attempt at slapping syntax together in a way that compiles and runs. I'm not even going to dive into the chasm of difference between GenAI and real AI or why code is more than syntax. But I will point to the fact that -- even beyond those quibbles -- we're not at anything I'd call viable yet. Damning words from an IEEE study follow: [ChatGPT has] a success rate ranging from anywhere as poor as 0.66 percent and as good as 89 percent -- depending on the difficulty of the task, the programming language, and a number of other factors. I'll let you determine how "difficulty," "programming language," and "other factors" impacts the success rate. Quotes again. Sorry. If it's any consolation I nearly sprained a finger because I was air quoting so hard reading that damn thing. A conclusion of the study (italics are mine): "ChatGPT has not been exposed yet to new problems and solutions. It lacks the critical thinking skills of a human and can only address problems it has previously encountered." So much like my example of why AI-generated articles suck, if you're trying to solve new problems by inventing new solutions, AI has zero experience with this. OK, all you "ChatGPT-4o-is-Neo" bros can come at me now. But it isn't just the syntax where AI has problems. Aw, AI Came Up With This All by Itself Code in a vacuum is worthless. Every software developer reading this just went, "Yup." Beyond all the limitations that AI exposes when it creates syntax out of "thin air" (or to use the technical term, "other people's code"), deeper problems start to expose themselves when we try to get the results of that code into a customer's hands. Code without design, UI, UX, functional requirements, and business requirements is a classroom exercise in futility. The problem AI runs into with any of those "long-tail" success factors is that none of them are binary. Zero. So, for example, Figma had to temporarily pull back on its AI design feature when it was alleged that its AI is just copying someone else's design. "Just describe what you need, and the feature will provide you with a first draft," is how the company explained it when the feature launched. I can do that without AI. I can do that with cut and paste. Figma blamed poor QA. Which one sounds more true? AI Is Great at a Lot of Things But not elegance. If your code is not infused with a chain of elegance that connects the boring structural-solution work to the customer-facing design and UX, you can still call it "code" if you want to, but it will have all the value of an AI-generated avatar reading aloud AI-generated content over AI-generated images. Have you ever seen that? It'll stab you in the soul. There's a right way to do things and there's a way to do things well, and I'm not naive enough to rail against the notion that sometimes you just can't do both. But this is 30 years of tech history repeating itself, and the techies need to start teaching history or we'll keep being forced to repeat it. So I'd ask my software developer friends to raise your hand if you've ever had to come in and fix someone's poorly structured, often broken, debt-laden, and thoroughly inelegant code. OK. Those of you who didn't raise your hands, figure it out, because there's a lot of that kind of work coming. 
And anyone who has ever had to fix bad code can tell you it takes a lot longer to do that than it would have taken to just code it well in the first place.

Expert opinion by Joe Procopio, founder, TeachingStartup.com (@jproco)
