Friday, October 18, 2024
The Future of AI Is AI Building AI for AI
Last week, I read an article about YCombinator taking some blowback for backing an AI startup that admits it basically cloned another AI startup. Short version: a buncha bros released an AI code editor that was cloned from other AI code editors, then used ChatGPT to write their own closed license, “overwriting” (my quotes) the Apache open source license.
Yo! I’m a founder now! I pushed a button and made some software happen!
I found this to be hilarious. I’m a huge nerd.
Now, the snafu is more about open-source software and fair use and trying to pass off open-source code as their own. At least I think so. Because honestly, I don’t care. I can’t stop thinking about AI building AI to steal AI from someone else’s AI.
Allegedly.
And, to be super honest with you, as someone who helped invent this current AI flavor some 14 years ago, I always saw this as the eventual end game: AI building AI to be used by AI. Not humans. Humans would set the course and get the results and make the decisions, sure, but I never saw a mass-adoptable use case of humans sitting down to have a pleasant chat with a bot, let alone the kind of human/AI relationship depicted in the movie Her.
Even so, I never would have predicted a well-known incubator getting wrist-slapped online for investing in a clone of a clone of a clone of AI, or whatever, but here we are.
So here’s a snarky, satirical, but painfully probable list of scenarios where AI won’t just be the provider, but also the user and the customer, and maybe even the one taking your vacations for you.
You’re going to laugh, but be warned: that’s exactly what they want you to do!
AI Writers and Readers
This first one is not too far-fetched, nor is it self-defeating, so we need to start paying attention to it.
Think about asking ChatGPT to write your ChatGPT prompt for you, a vicious cycle of douchery! But as we humans fold more AI, especially generative AI, into everyday use cases, good or bad, we’ll skip codifying more and more of the UI until it’s just AI talking to AI.
Crazy?
Last week I wrote a column about how I can pretty quickly tell when people use generative AI and how it’s starting to work against those people in certain situations. While there was nuance in several of my hyper-judgmental rants, there were two situations where I was very clear.
One, if you’re using ChatGPT to write a post or article and passing it off as human-written for whatever reason, I’m not down with that. For obvious reasons, some of them selfish, others ethical.
Two, if you’re using ChatGPT to comment on an article, especially one of mine, I’m totally down with that. Because it’s the time you spent on me that I appreciate. See, your time, your effort, and your energy in response to my human-generated content make my content more valuable. Selfish reasons bordering on ethical reasons.
I’m no saint.
But what if I jettisoned ethics entirely and skipped rule number one?
The next logical step is for AI to write an article intended to be read by AI, which then comments on and even promotes the article using AI, creating an auto-generated swirl of noise that, unless advertisers and paywallers catch on and stay ahead of it, generates real ill-gotten gains for someone.
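For the nerds in the back, here’s a minimal sketch of that loop, assuming the OpenAI Python client and an illustrative model name (both my assumptions, not anyone’s confirmed stack). The point is that every step is one API call, and no human is required after you hit run.

```python
# A closed loop of machine-generated "engagement," no humans required.
# Assumes the OpenAI Python client (pip install openai) and an API key
# in the OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumed model name; swap in whatever you use

def generate(prompt: str) -> str:
    """One round trip to the model."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: AI writes the article.
article = generate("Write a 400-word article about AI productivity tools.")

# Step 2: AI "reads" the article and leaves a glowing comment.
comment = generate(f"Write an enthusiastic one-paragraph comment on this article:\n\n{article}")

# Step 3: AI promotes the article.
promo = generate(f"Write a punchy social post promoting this article:\n\n{article}")

print(comment, promo, sep="\n\n")
```

Wire steps 2 and 3 up to a comments section and a scheduler, and you’ve got the swirl.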
This is already happening. If you want to help me fight it, it can be as easy as joining my email list.
AI Influencers and Followers
This is happening too, and has been since the dawn of social media. Now it’s just totally automated.
I know what pops into your head when I say “AI Influencer”: a generated, gorgeous, angular, maybe scantily clad avatar of a model (or whatever, I’m being traditionalist), using sugary sweet words to promote sketchy products to their millions of followers on the TikTok.
But it’s not about the traditionalist allure of the model or the silver tongue. It’s about those millions of followers, and how many of them are bots.
See, influencers don’t get paid by their followers; they get paid by advertisers. Followers are what they sell, and bots are cheap and getting cheaper. I could have a million followers tomorrow if I bought them. And Maker’s Mark and Caesars Resorts, if you’re reading this, I already use the hell out of your products.
Just putting that out there.
AI Parents and Kids
There’s a great Silicon Valley episode in which Tres Comas billionaire Russ Hanneman lets his house AI nanny be the “bad guy” to his kid when it’s time for bed.
“We disrupted fatherhood.”
Gets me every time.
Look, my kids are already old enough that they stopped listening to me long ago, so I’ll leave it to you to tell me how much of this is going on today. But based on the stress of my experience over the past 20 years, I’m assuming a lot.
And I’m not 100 percent sure here, but I think most of Gen A is already AI. Either that or we really are living in the Matrix.
AI Government and Voters
Here’s where you might start thinking I’m just spouting dystopian nonsense. But I’ve got two words for you.
Smart contracts.
Nobody seems to be able to let the awfulness of this idea go. And all I have to do is point to this upcoming November. I don’t care what your politics are or what you think may or may not happen; I’m just sure the word “smooth” won’t be in your description.
But let’s take that to the extreme. It’s all about money anyway, and when it’s not, it’s about influence. So why not just digitize that and have AI vote for us based on what it knows about us, and then let AI officials make AI decisions about our very real lives?
Woof. I’m sorry. I hate myself for this entire section. I don’t even have a good joke here. I should have just stopped at “smart contracts,” which probably made you smile a bit.
AI Employers and Employees
This is also already happening, but it needs its own column. I’m on it. In the meantime, allow me to depress the holy hell out of you.
AI Programmers and Users
Back to the YCombinator kerfuffle, and I’ll get semi-serious here. This too is already happening, and honestly, it’s not a horrible use case, within reason and except for all the “misrepresentation.”
I’ve already had more than one business plan cross my desk for a startup company that leans into AI for ideation, requirements development, coding and testing, sales and marketing, and most definitely support and customer communication. Not one or a few of those things. All of them.
One of the major problems with AI right now (and again, I’m an OG and still consider myself a champion of the technology) is that the people working on it have decided that general purpose AI, like chatbots or Amazon Echo or general search engine results, is the way to go: the way to bring in the most money in the shortest time with the most barely acceptable results.
I hate it. But I think we’re finally getting around to collectively learning something I learned when we first set out developing a generative AI platform: the primary use case of generative AI is writing when humans can’t write, or when the data is too rich, or when the output audience is too fragmented.
In the “real” AI sense, it’s about making decisions and taking actions when humans can’t, when they’re not there physically, or are too slow, or when the expertise needed to do it isn’t worth its expense.
You know, tasks like (pains me to say it) programming. Results may vary.
That doesn’t mean you and I can’t sit down and have a pleasant chat with an AI bot, but that’s not where the money is heading.
EXPERT OPINION BY JOE PROCOPIO, FOUNDER, TEACHINGSTARTUP.COM @JPROCO