Tuesday, July 25, 2023

HERE'S WHERE THE SMART A.I. MONEY IS GOING NEXT

The crazy thing about what a lot of folks are calling artificial intelligence these days is that it's not so much intelligence as it is a question-and-answer routine with some very powerful "magic behind the curtain."

And there's nothing wrong with that. But let's peek behind that curtain a bit, because the magic is what makes the money.

The dumbest explanation of AI ever

My machine-learning friends will drag me for this, but the whole concept of artificial intelligence dumbs down to three simple actions: Gather input, make decisions, respond. 

The "intelligence" part is something we humans do a million times every day. For example, if you've read this far in this article, you made a decision to read it, then you'll either continue to read it or click away. 

The "artificial" part is something computers have been doing since they were invented. For example, you hit the power button, the computer receives binary input from a mechanical switch, then responds by firing up its bootloader to turn itself on. 

My computer scientist friends will probably drag me for that, too. 
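
In code, that three-step loop is almost embarrassingly small. Here's a minimal Python sketch of gather-decide-respond -- the function names are mine, purely illustrative, not any particular library's API:

```python
# A minimal sketch of the gather-decide-respond loop.
# The function names are illustrative, not a real library's API.

def gather_input() -> str:
    """Gather input: here, a single line of text from the user."""
    return input("> ")

def decide(signal: str) -> str:
    """Make a decision: map the input to an action."""
    if signal.strip().lower() in {"quit", "exit"}:
        return "stop"
    return "echo"

def respond(signal: str) -> None:
    """Respond: act on the decision."""
    print(f"You said: {signal}")

# The loop itself: gather, decide, respond -- over and over.
while True:
    signal = gather_input()
    if decide(signal) == "stop":
        break
    respond(signal)
```

Everything interesting happens in decide(); the rest is plumbing.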

However, the neat thing about human decision-making is that it's inconceivably fast. While reading this article, you gather uncountable bits of input data, from how much sleep you got last night to your personal judgment call on every similar article you've ever read, and even the meaning you derive from each of the first few hundred words I used to get you here. You do all of it at blinding speed.

Today's computers are getting faster and more accessible. They can gather input, make decisions, and respond in nanoseconds. That's the magic currently printing the money. But if you want to follow current money to future money, the question is: How much data is being input?

And the answer is "excruciatingly little," at least when compared to our brains, our experience, and our environment. 

Better answers require better questions

When you think about "strong" AI versus "weak" or "narrow" AI, the difference comes down to the complexity of the question asked.

This is a concept I explored often at Automated Insights, where I helped invent the first publicly available Natural Language Generation engine in 2010, then spent the next seven years teaching machines to answer questions of increasing complexity. 

We started in sports, so to give an example of the complexity scale, let's use once-in-a-lifetime baseball player Shohei Ohtani:

Did Shohei Ohtani hit a home run last night? That's an extremely simple question for a computer to answer -- it's binary for the most part and doesn't require any external or adjacent data. It's yes or no.

Ratchet up the complexity -- How did Shohei Ohtani play last night? -- and the question requires much more processing. Did he play last night? Did he bat? Or pitch? Or both? How did he perform at each? How did that performance compare to his norm? How did it impact the game? 

Plus, you need a lot more data to answer what seems like a simple change to the question. You need data from Ohtani's career, all the data from the game down to each pitch, all the data from all the players in the game, even adjacent data like injuries, time of day, and weather. 
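
To see that jump in complexity in code, here's a Python sketch. The data structures are made up for illustration -- no real stats feed or API is assumed:

```python
# Illustrative only: toy data structures, not a real stats API.
from dataclasses import dataclass
from typing import List

@dataclass
class PlateAppearance:
    batter: str
    outcome: str  # e.g. "home_run", "single", "strikeout"

@dataclass
class Game:
    plate_appearances: List[PlateAppearance]

def hit_home_run(game: Game, player: str) -> bool:
    """The binary question: one pass over one game's data, yes or no."""
    return any(
        pa.batter == player and pa.outcome == "home_run"
        for pa in game.plate_appearances
    )

def how_did_he_play(game, career_stats, pitch_log, roster, injuries, weather, player):
    """The complex question. Same player, same night, but the answer now
    depends on career norms, pitch-level detail, and adjacent context.
    Left as a stub on purpose -- the point is the parameter list,
    which keeps growing as the question gets richer."""
    ...

game = Game(plate_appearances=[
    PlateAppearance(batter="Shohei Ohtani", outcome="home_run"),
    PlateAppearance(batter="Some Other Batter", outcome="strikeout"),
])
print(hit_home_run(game, "Shohei Ohtani"))  # True
```

The first function is a one-liner over one table. The second needs every input named in its signature before it can even start.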

Imagine the complexity needed to answer this much more important question (at least in a business context): What is Shohei Ohtani's value as a baseball player? 

The answer to that question can mean big monetary differences for a lot of people, beginning with but certainly not ending with Ohtani himself. And I can assure you that you need a shedload of data to even begin to answer that question.

Getting to the right question

I advise entrepreneurs at startups and innovators at large companies. I've been doing this for decades, and about three years ago, I started trying to automate the question-and-answer part of giving advice (not automating the answers themselves -- that's a whole other kettle of fish). 

Anyway, I'm trying to solve one of the first tricky issues I learned about in my advising career: coming up with the right answer is nowhere near as difficult as coming up with the right question. I get asked a lot of questions, and truthfully, most of the ones that come out of the blue aren't especially valuable, because there's no complexity to them. 

Consider a question like, "How do I succeed with my startup?" That's too simple a question to yield any helpful answer. But as the complexity of the question goes up, I can bring my wealth of data -- experience accumulated over decades -- to bear and generate a better, more helpful, more valuable answer. I can't do it in nanoseconds, but at this point, neither can any machine.

The smart money is on asking the right questions

Creating long, complex, well-formed prompts for GPTs to respond to is starting to become a science in itself. But even now, the best advice you'll find on writing better prompts is essentially trial and error -- which makes me think of optimization by evolutionary algorithm, which is just fancy trial and error.
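
To make that concrete, here's a toy Python sketch of "fancy trial and error" applied to prompts. Everything in it is illustrative: score_prompt is a stand-in for whatever quality signal you have (human ratings, an eval suite, etc.), and the mutation operator is deliberately crude.

```python
# A toy evolutionary loop for prompts: mutate, score, keep the best.
import random

def mutate(prompt: str) -> str:
    """Trial: make one small random edit to the prompt.
    Real mutation operators would be far richer than this."""
    tweaks = [" Be concise.", " Show your reasoning.", " Give one example."]
    return prompt + random.choice(tweaks)

def evolve(seed: str, score_prompt, generations: int = 10, pop_size: int = 8) -> str:
    """Error: keep only the variants that score best, then iterate."""
    population = [seed]
    for _ in range(generations):
        candidates = population + [
            mutate(random.choice(population)) for _ in range(pop_size)
        ]
        candidates.sort(key=score_prompt, reverse=True)
        population = candidates[:pop_size]
    return population[0]

# Scoring by length is nonsense, but it shows the loop runs end to end.
best = evolve("Summarize this article.", score_prompt=len)
print(best)
```

Swap in a real scoring function and the loop doesn't change -- which is exactly why it's still just trial and error.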

Even more limiting: the size and breadth of today's models, and the processing required to provide the most valuable answers to the most complex questions, still aren't there -- at a certain level of complexity, the output degrades into errors or nonsense. 

The next wave of generative AI will be able to provide specific answers to unique and complex questions. The magic will be found in automating the creation of those questions.


BY JOE PROCOPIO, FOUNDER, TEACHINGSTARTUP.COM, @JPROCO
