Wednesday, October 30, 2024
3 Things You Can Do to Stay Relevant as AI Alters the Workplace
While a few of the senior executives that I coach are on the cutting edge of AI, most are more focused on staying relevant in this time of massive change.
You may have heard that, “If your job is not replaced by AI, it will likely be replaced by someone more skilled at using AI.” In my coaching sessions with executives, we use this point to brainstorm how they can leverage AI to improve their communication.
Here are three specific opportunities for how you can elevate your communication effectiveness and stay relevant:
1. AI and Enhanced Writing
Gone are the days of simply running spell-checks before sending emails or reports. AI-powered tools like Grammarly and ChatGPT assist in drafting and refining documents, improving clarity and tone. Leveraging these tools can improve your communication, but over-reliance can depersonalize messages.
How to stay relevant:
Improve your writing by leveraging AI at every stage of the writing process, from brainstorming to outlining to drafting.
Then evaluate your writing beyond spell-check by checking tone and flagging potentially biased language.
Use these tools to boost the efficiency and effectiveness of your writing, while always maintaining your personal voice.
2. AI for Meeting Notes and Summaries
Software tools like Microsoft Teams, Zoom Workplace, and Otter.ai now generate detailed meeting summaries automatically. I just started doing this myself and it is a game-changer! Now I can focus more on real-time communication in meetings and coaching sessions.
Assessing the meeting summary takes just a few minutes and serves to reinforce the key meeting takeaways. And the bonus? Knowing that the meeting is being summarized keeps our conversation more on track.
How to stay relevant:
Download software to summarize your online meetings.
After every important meeting, add five minutes to run through the meeting summary. Edit to ensure the main points and next steps are clearly identified, then forward the summary to appropriate colleagues.
3. Video Analytics
AI-powered video analytics can help with your personal communication effectiveness as well as analysis of team dynamics. With software such as Yoodli.ai, you can upload a video of your formal presentation – whether onstage in front of a big audience or in a virtual meeting. You can evaluate your tone, filler words (“umm…”), pace, intonation, body language, and more.
You can also use AI to monitor and analyze virtual meetings, attention spans, participation levels, and engagement metrics.
How to stay relevant:
Leverage AI to measure the effectiveness of your formal presentations over time.
Use insights from AI to improve meeting facilitation skills, ensuring everyone stays engaged and aligned.
Upskill Your Communication with AI
AI is no longer a distant concept—it’s now embedded in how we interact at work. From chatbots to automated meeting summaries, it is revolutionizing workplace communication both externally with customers and other stakeholders, as well as internally with our colleagues.
Staying relevant means embracing these technologies while honing our human touch. The future of workplace interactions will be a blend of AI-driven tools and human communication.
EXPERT OPINION BY ANDREA WOJNICKI, EXECUTIVE COMMUNICATION COACH AT TALK ABOUT TALK
Tuesday, October 29, 2024
All Those Zoom AI Note-Taking Apps Have Gotten Out of Hand and Are Ruining Meetings
At first, the idea that you could have an AI robot transcribe a Zoom meeting you couldn’t attend seemed amazing. You’d go on with whatever it was you had that was more important than attending the meeting and you’d get an email with a transcript later on. What could be better than that?
Of course, if you’re in the meeting, the whole thing is kind of weird. I was in a meeting recently with around a dozen or so attendees. Except, four of those attendees were just a blank screen that said something like “Fireflies AI” or “Otter.ai.” Those AI robots were listening to everything we said, recording it, and transcribing it, whether anyone else on the call liked it or not.
That means that an entire third of the attendees of that meeting had something more important to do than attend the meeting. I think we can all agree it’s getting out of hand. After all, there’s probably a reason you were invited to the meeting. Presumably, it’s because you have something to offer.
Really, there are three ways these AI note-taking robots are ruining meetings:
Too Easy to Skip
First, and this seems obvious, but it gives people who should be in a meeting too easy a way out. A thing that seems to be true is that if you can avoid a meeting by sending a robot to take notes, you’re more likely to avoid that meeting. I suppose that says something about the fact that technology is capable of such a thing, but I think it says a lot more about the way we think about meetings.
To be clear, I think it’s fine for a meeting to be recorded and transcribed. I just think it’s weird that people are sending their AI robot to attend on their behalf so they don’t have to sit through the meeting. As a general rule, if you feel like sending an AI note-taking robot to attend a meeting in your place, you’re pretty much suggesting that the meeting isn’t very important to you at all.
What you’re really saying is “This meeting isn’t important enough to be on my calendar, but I’ll read the summary.”
To be fair, that’s probably true of a lot of meetings. A thing that definitely seems to be true is that Zoom has made it easier for people to schedule meetings, maybe too easy. A lot of Zoom meetings should just be an email or Slack message, but I’m just not sure this is the right solution. I think a much better way to make meetings better is to invite the right people and make sure everyone clearly understands the purpose of the meeting.
An Invasion of Privacy
Second, having multiple transcription robots on a Zoom call creates a weird dynamic. Not only does it create feelings about the people who skipped the meeting and sent the robots in the first place, but it also creates weird feelings about the meeting itself because you know someone who isn’t there is having it recorded and transcribed. It feels like a gross invasion of privacy.
Expectations are Everything
Finally, even if everyone is okay with the fact that someone is recording the meeting, there’s an expectation that if you skip a meeting and have it transcribed, you’re actually going to listen to or read it. It seems reasonable that everyone else might just assume you’ll be up to speed. The thing is, I’m not sure that’s a good assumption to make. If the meeting wasn’t important enough to show up to in the first place, it’s not a given that it’ll be important enough to read about later.
The lesson here is pretty simple—just because you can, doesn’t mean you should. That actually applies in a lot of areas of our lives, but especially when it comes to meetings. Just because Zoom makes it easier to have a meeting, doesn’t mean you should. And, just because you can send your AI robot to take notes, doesn’t mean it wouldn’t be better for everyone if you show up in person.
EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN
Friday, October 25, 2024
Tech execs ask: Why have product managers when you can just keep using AI?
I’ve known her since she founded her startup while still in college in the late 2000s and built it up to a pretty sizable concern before ultimately winding it down after five years. She made many of the mistakes most first-time entrepreneurs make but also did many things right that most first-time entrepreneurs never do right.
She’s not tech. She knows how to create, innovate, and sell, especially in the consumer market, with all its complexities and shifting priorities.
In other words, she knows what the people want and how to bring it to them.
In the sense of “product” – product science, product development, product management – this is, like, must-have skill number one. At the end of the day, “product” is all about solving a problem at a price that a lot of people are willing to pay.
But while she bounced around in various lead sales and marketing positions for several startups in the time since she shuttered her own business, she had never thought of herself as a “product person.”
“I’ve been doing fractional work around brand and product strategy for the past two years,” she told me. “I’ve recently become interested in moving more toward product in tech.”
Again, she’s not tech. So she wanted some insight into the tech product world. She called me a “product legend.” I said absolutely, but if she used the word “legend” again the meeting was off.
She laughed. I didn’t have the heart to tell her tech product is dying.
Who am I kidding? Of course, I told her that!
Here’s what we talked about.
The product role has evolved again and not in a good way
Look, I’m not only not a product legend, but in my heart of hearts, I don’t really consider myself a “product person,” even though I’ve been a chief product officer for 15 years across three companies.
I just kind of fell into it and now it’s a role of convenience because what I do is hard to explain.
I’m an entrepreneur who has sold companies and an innovator who holds a patent. I know how to get shit done, and I have a background in software development and an education in systems and data. But my top skill is that I know what people want and how to bring it to them.
So that makes me a “product person.” And since I’ve been doing it longer than it’s been a thing, it makes me an OG product person.
Like any OG, I guess, I’m now longing for the good old days. But not like you might think. See, the product role stumbled into existence with the expansion of software from business to consumer, from desktop to Web to mobile, and from corporate-driven to innovation-driven.
Not that software was never that, but in the mid/late 2000s it was becoming a lot more of that very quickly. Someone needed to shepherd all that innovation from the company’s collective brain into the users’ collective hands, almost literally.
As an entrepreneur who knew what the people wanted and who could code, lead, and speak, I fit this role pretty well.
Eff it. I’ll do it.
Product is still being confused with project
From those early days, most corporations outside of the startup world didn’t have a handle on how to make the product role work for them, so the conflation of product management with project management was born.
I’ve been fighting this for almost 20 years.
Both roles are valid and often needed. But while product is about creating the thing, from ideation to use case to upsell, project is about delivering the thing, from requirements to launch.
I don’t mind when these roles overlap. I don’t mind when product management has to absorb project management or even perform it.
What I do mind is when people who have no sense of innovation, no understanding of customers and markets, and who consistently prioritize timelines above top and bottom lines start thinking they can do the product role.
By the mid/late 2010s, there was an understanding of the difference between the two functions and all the roles, and things were good. Then they weren’t. Now, product management is evolving again into its own form of project management.
About a month ago, I wrote a column called “The Slow, Painful Death Of Agile and Jira,” in which I asked when Agile stopped being an optional methodology and became a religion. The only negative feedback I got – in an overwhelming landslide of positive feedback – was for calling Agile a methodology, thus proving that I didn’t understand it.
I didn’t respond to any of that criticism there, but I will here.
Naw, man. It’s a methodology now. That’s my entire point. Sorry I gave you intellectual credit and didn’t hit you over the head with it.
Ah, I feel so much better now.
Product has its own demons now
It’s not just the crushing misinterpretation of the Agile manifesto or the constant stop-start-stop pace of Scrum or the horrible micromanagement UX of Jira turning product management back into project management. Over the past couple of years, it’s become a misinterpretation of the product role itself.
Experimentation is being deemphasized for the sake of an unbending roadmap. Create it. Set it. Forget it.
Innovation is being pushed out in favor of consistent recurring revenue, even if, over time, the overall numbers are smaller, as long as they’re predictable.
When innovation is mentioned, it’s become a discussion of how to fit square AI pegs into round business goal holes, and just assuming revenue will go up as more people adopt AI universally.
Go-to-market is becoming another methodology unto itself, with flywheels imagined as perpetual motion machines, product-market fit sought for an ever-expanding collection of unnecessary features, and target audiences the size of a broad side of a barn. All of this leads to a lack of mechanics to, you know, narrowly define the target market, properly fit it, and spin the flywheel until you get traction.
This kind of thinking ultimately led to an unfortunate but inevitable outcome.
Product got kneecapped in the great tech RIF
I don’t know this first-hand, but anecdotally, based on who I’m hearing from, if software developers got slapped around hard in the mass layoffs of 2023 and 2024, “product people” got just plain decimated.
Here’s why.
Executives can make a case that AI can replace human ingenuity for tasks like developing a roadmap and sticking to it, hitting and maintaining consistent recurring revenue targets, and finding increasingly efficient and cost-effective ways to reach deeper into ever-broadening markets.
If you were a “product person” sitting in front of Jira and Confluence and spreadsheets all day, you likely felt the ax, or at least the rush of air as it came down.
Product is not dead yet
As usual, I threw some confusing (clickbaity) histrionics in the title. I’m not saying product management is dead. Yet. I’m just saying this is how it’s going to happen and it would indeed be untimely. Because if we’re going to get out of tech’s current malaise, it’s going to take the risk tolerance for innovation, the spirit of entrepreneurship, and knowing what the people want and how to bring it to them.
That’s what I told my friend. This didn’t put her off; in fact, it fired her up. Because on its face, this might not be so much a demise as it is a reset, just like I believe it is in software development.
See, when AI produces the same product built on the same code for the same target market – and does that over and over again enough times, well, now you need someone to come in and innovate your way out of that death loop.
Real “product people” are going to be just as critical as real “software developers.”
So now that we’ve crash-landed back on the bottom floor, it’s probably a pretty good time to get in. Maybe you’d like to ride along with me.
EXPERT OPINION BY JOE PROCOPIO, FOUNDER, TEACHINGSTARTUP.COM @JPROCO
Thursday, October 24, 2024
This Expert Says AI Efficiency Soars When It Uses Video to Crunch Numbers
The newest feature of the smartest AI systems is called multimodality, a fancy term that means AIs can respond to prompts besides typed text. You can combine a query like “make up a logo for my fab new vintage furniture refurbishment company” with an image of your favorite shade of yellow, and a shell or other inspirational picture, for example. AI researcher Simon Willison recently found a clever way to use this capability to solve a tedious math problem. The way he did it shines a light on how we may use AI chatbots differently in the near future.
Willison was working on one of those everyday accounting tasks that sounds simple but inevitably ends up being time-consuming. He wanted to tally all the charges he’d incurred for using a cloud company’s services. But, as news site Ars Technica notes, Willison’s data was embedded all over the place in lots of different emails and so on, so finding it all and manually extracting the info would be one of those soul-destroying office jobs.
Then inspiration struck. Willison turned on his computer’s “screen recording” system, which creates a video of everything you do on the desktop, and then he navigated between all the different emails and sources of the numbers he needed, simply scrolling past the right data along with all the other info in each message. Then he put that video into Google’s AI Studio system, which, as Ars explains, lets users try out “several versions of Google’s Gemini 1.5 Pro and Gemini 1.5 Flash” AI models. Willison prompted the AI to look at the video, telling it to pull out any relevant numbers it could see, and then put them in a specially formatted file that could be easily loaded into a spreadsheet, including specific information like dates and exact price amounts.
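For the curious, here is a minimal sketch of that pattern using Google’s generative AI Python SDK. Willison worked in the AI Studio web interface rather than through code, so treat this as illustrative: the file name, prompt wording, and CSV columns are assumptions, not his setup.

```python
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Upload the screen recording; video files are processed asynchronously,
# so poll until the file is ready for prompting.
video = genai.upload_file(path="screen_recording.mp4")
while video.state.name == "PROCESSING":
    time.sleep(5)
    video = genai.get_file(video.name)

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content([
    video,
    "Watch this screen recording and extract every charge that appears "
    "on screen. Return CSV with columns: date, description, amount_usd.",
])
print(response.text)  # paste-ready rows for a spreadsheet
```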
The task took moments, was effectively free because of the experimental nature of AI Studio, and apparently delivered accurate data that Willison was able to verify—saving him a lot of time.
So far, so very nerdy. But why should you care about this feat, other than admiring Willison’s lateral thinking?
Because, as Willison noted, screen recording like this means there’s no real limit to where the data you’re prompting an AI with can come from. There’s “no level of website authentication or anti-scraping technology that can stop me from recording a video of my screen while I manually click around inside a web application,” he noted in a blog post, meaning any user could record their scrolling through a website, flicking through pages of a complex Excel sheet, or even scanning proprietary company emails.
This may soon be how we all use AIs in our work and with other tasks. When OpenAI revealed its next-gen ChatGPT model in May, it showed how its computer apps could “watch” what users are doing on screen, acting in a kind of angel-on-your-shoulder role. You could then ask the AI to process what it had seen, without going through the tedious task of typing in lots of words or numbers—similar to Willison’s screen recording. In an office, this means you could get AI help with, say, a complex financial analysis of your company’s revenue data merely by showing it to the AI.
There’s an inherent security risk here, obviously: data uploaded to an AI may be used to train its algorithms, and lead to sensitive info “leaking” from the AI to other users later on. Microsoft’s slightly similar, slightly more eerie, AI Recall feature sparked controversy and criticism this summer for similar reasons.
But AI systems like this are on the way, and are being threaded ever deeper into our work PCs and smartphones. OpenAI, for example, just revealed its PC ChatGPT app, which will likely be able to do the “watching your screen” trick. And Apple is preparing to release the first integration of ChatGPT into its famous Siri chatbot.
All of this is a reminder that learning how to use AIs is not a one-and-done task: You’ll need to keep training your staff to stay up to date on the latest tricks. And you’ll also have to keep reminding them to be wary of showing an AI the wrong kind of sensitive company data.
BY KIT EATON @KITEATON
Monday, October 21, 2024
Job Seekers Are Using AI to Fight AI
“I’m serious. I lined up three job interviews in two days.”
What?!?
I mean, no offense.
That’s what “Sam” told me last week to finally get me to meet him at a Starbucks and show me the AI app he had been using to scrape LinkedIn and apply for tech jobs while he worked on various side projects on his second laptop.
Sam is not an AI developer. He’s not even super technical. All he did was download an app he read about in a forum, an app that uses AI on behalf of the job seeker to battle the AI being used on behalf of the company’s applicant tracking system — to even the odds with … volume and regurgitation, I guess.
This shouldn’t be shocking to me. Over the past couple of weeks alone, I’ve written thousands of words on the posting of fake jobs, the prospect of AI using AI, and the scourge of automated content screening — all of this congealing to make a mess of the average everyday job search.
I don’t know. I knew this kind of software was out there. I just didn’t think it worked.
Let me assure you. It “works” exactly as I’d feared.
Throwing Gasoline on a Dumpster Fire
I also need to reassure you, especially if you didn’t click on any of those links above (but if you did, you should join my email list), I’ve become somewhat of a Robin Hood for the tech job seeker. In spirit only. I’m not wearing tights and I’m not handing out money. I’m also not stealing from the rich. Much.
In other words, I’ve got no ties to “big HR.” And I’ve also got over a decade of experience with generative AI.
My first take on Sam’s story was the same as when anyone sees David chuck a stone at Goliath and connect. Nice shot. That was immediately replaced by fear, with my inner old man kicking in to remind me that I, too, may need a job someday, and while I can play this game, I’d rather stab myself in the brain than tweak my own job-hunting algos.
And then dread swooped in and that’s where I’m mired right now, as I contemplate the further devolution of a tech industry that I once thought was kind of all right.
OK, Fine, You Miscreants, Let’s Talk About the App
When I say Sam isn’t super technical, I mean he’s somewhat technical. He’s not a developer, but he understands tech, just enough to get himself into trouble, both good and bad. Like he was hard into crypto.
He’s got two laptops, and he’s not above burning one. He’s also not afraid to download Python and screw around with GitHub and have an OpenAI account. Which is exactly what you need to get started with AI Hawk, the AI job application bot creator in question.
I’m not going to link to the AI Hawk GitHub, because LinkedIn might ban you for using it and I don’t want to encourage the behavior. But here’s a pretty good rundown of the AI Hawk shenanigans in real time, including a secondhand case of someone using it to apply for 2,843 positions.
Short version: You download Python, download the GitHub code, tweak it, connect an OpenAI account, and you’re off. It’ll make your résumé and cover letter ATS-friendly, and then fill in all the LinkedIn prompts for your application. It’s buggy and it barfs the same way all GPTs barf, but it “works.”
Why Applying to 2,843 Jobs Is Not the Answer
Ultimately, AI is not going to magically fix the damage in hiring that AI itself has wrought. This is just my opinion as a tech and AI OG, and maybe it’s reckless speculation, but it’s just going to make things worse.
The people I talk to who know these things tell me that they can spot this too, and all it does is create an avalanche of mismatched candidates:
Which puts more stress on HR’s workload
Which means it’s all the more they have to automate out of the system
Which means tightening the settings on the automated candidate screening
Which means even fewer talented people ultimately get a look from a human
Yeah, that’s what I thought.
And I’ll jump in and say that — AI or no — the easier it is to apply for a job, the harder it is to get. I don’t love that, but that’s how it works, and it’s why I always tell people to spend more time networking than applying.
Oh, Sam Won’t Get Any of Those Jobs
Again, no offense. He wasn’t qualified for any of them. He wasn’t lying, he used his real name and his real résumé, but he … let’s say, he got real aspirational, and applied for jobs that were way above his pay grade, so to speak. He turned all three interviews down.
This is the biggest problem when fighting AI with AI.
AI doesn’t know if you’re qualified for a position or not, but it won’t let that stand in its way of producing the best first impression of the best “you” that it can. And while neither Sam nor I know if it was AI on the other side that got him in the door, I can assume that the two machines shook hands and let him in.
Sam put garbage in, maybe not even on purpose (and relax, I bought Sam lunch and he knows I love him). AI took that and turned it into exactly what the AI on the other side wanted.
That. That is what depresses me, not just about AI Hawk and automated candidate screening, but AI in general. We’re so far “over our skis” with this that if it were driverless vehicles we’d just be letting them mow down crowds and shrugging our shoulders while OpenAI raises billions.
As a lifelong technologist, I know that calling for a return to the good old days is as useless as it is unwise. Instead, I’ll just tell you that if you’re looking for a job, take your time, apply to the ones that seem like the best fit, make attempts at networking outside of the digital arena, or at least use the digital arena to make connections to the inside.
It’s daunting and hard work and takes forever, and it may not even be the right answer. But I feel like it’s better than doing the same thing 2,843 times and expecting a different result.
That said, if you need a job and you get a job using one of these bots, more power to you.
EXPERT OPINION BY JOE PROCOPIO, FOUNDER, TEACHINGSTARTUP.COM @JPROCO
Friday, October 18, 2024
The Future of AI Is AI Building AI for AI
Last week, I read an article about Y Combinator taking some blowback for backing an AI startup that admits it basically cloned another AI startup. Short version, a buncha bros released an AI code editor that was cloned from other AI code editors, and then they used ChatGPT to clone their own closed license, “overwriting” (my quotes) the Apache open source license.
Yo! I’m a founder now! I pushed a button and made some software happen!
I found this to be hilarious. I’m a huge nerd.
Now, the snafu is more about open-source software and fair use and trying to pass off open-source code as their own. At least I think so. Because honestly, I don’t care. I can’t stop thinking about AI building AI to steal AI from someone else’s AI.
Allegedly.
And, to be super honest with you, as someone who helped invent this current AI flavor some 14 years ago, I always saw this as the eventual end game: AI building AI to be used by AI. Not humans. Humans would set the course and get the results and make the decisions, sure, but I never saw a mass-adoptable use case of humans sitting down to have a pleasant chat with a bot, let alone the kind of human/AI relationship like in the movie Her.
Even so, I never would have predicted a well-known incubator getting wrist-slapped online for investing in a clone of a clone of a clone of AI, or whatever, but here we are.
So here’s a snarky, satiric, but painfully probable list of several scenarios where AI won’t just be the provider, but also the user and the customer and maybe even take your vacations for you.
You’re going to laugh, but be warned–that’s exactly what they want you to do!
AI Writers and Readers
This first one is not too far-fetched, nor is it self-defeating, so we need to start paying attention to it.
Think about asking ChatGPT to write your ChatGPT prompt for you – a vicious cycle of douchery! But as we humans eventually employ more AI, especially generative AI, into everyday use cases, good or bad, we’ll skip codifying more and more of the UI until it’s just AI talking to AI.
Crazy?
Last week I wrote a column about how I can pretty quickly tell when people use generative AI and how it’s starting to work against those people in certain situations. While there was nuance in several of my hyper-judgmental rants, there were two of those situations where I was very clear.
One, if you’re using ChatGPT to write a post or article and passing it off as human-written for whatever reason, I’m not down with that. For obvious reasons, some of them selfish, others ethical.
Two, if you’re using ChatGPT to comment on an article, especially one of mine, I’m totally down with that. Because it’s the time you spent on me that I appreciate. See, your time, your effort, and your energy in response to my human-generated content make my content more valuable. Selfish reasons bordering on ethical reasons.
I’m no saint.
But what if I jettisoned ethics entirely and skipped rule number one?
The next logical step is for AI to write an article intended to be read by AI, which then comments on and even promotes the article using AI, creating an auto-generated swirl of noise that – unless advertisers and paywallers start catching on and stay ahead of it – creates real ill-gotten gains for someone.
This is already happening. If you want to help me fight it, it can be as easy as joining my email list.
AI Influencers and Followers
This is happening too, and has been since the dawn of social media. Now it’s just totally automated.
I know what pops into your head when I say “AI Influencer” – a generated, gorgeous, angular, maybe scantily clad model of an avatar (or whatever – I’m being traditionalist), using sugary sweet words to promote sketchy products to their millions of followers on the TikTok.
But it’s not about the traditionalist allure of the model or the silver of the tongue. It’s about those millions of followers, and how many of them are bots.
See, influencers get paid not by their followers – they get paid by advertisers. Followers are what they sell, and bots are cheap and getting cheaper. I could have a million followers tomorrow, if I bought them. And Maker’s Mark and Caesars Resorts, if you’re reading this, I already use the hell out of your products.
Just putting that out there.
AI Parents and Kids
There’s a great Silicon Valley episode in which Tres Comas billionaire Russ Hanneman lets his house AI nanny be the “bad guy” to his kid when it’s time for bed.
“We disrupted fatherhood.”
Gets me every time.
Look, my kids are already old enough that they stopped listening to me long ago, so I’ll leave it to you to tell me how much of this is going on today. But based on the stress of my experience over the past 20 years, I’m assuming a lot.
And I’m not 100 percent sure here, but I think most of Gen A is already AI. Either that or we really are living in the matrix.
AI Government and Voters
Here’s where you might start thinking I’m just spouting dystopian nonsense. But I’ve got two words for you.
Smart contracts.
Nobody seems to be able to let the awfulness of this idea go. And all I have to do is point to this upcoming November, and I don’t care what your politics are or what you think may or may not happen, I’m just sure the word “smooth” won’t be in your description.
But let’s take that to the extreme. It’s all about money anyway, and when it’s not, it’s about influence, so why not just digitize that and have AI vote for us based on what it knows about us, and then AI officials can make AI decisions about our very real lives.
Woof. I’m sorry. I hate myself for this entire section. I don’t even have a good joke here. I should have just stopped at “smart contracts,” which probably made you smile a bit.
AI Employers and Employees
This is also already happening, but it needs its own column. I’m on it. In the meantime, allow me to depress the holy hell out of you.
AI Programmers and Users
Back to the Y Combinator kerfuffle, and I’ll get semi-serious here. This too is already happening, and honestly, it’s not a horrible use case, within reason and except for all the “misrepresentation.”
I’ve already had more than one business plan cross my desk for a startup company that leans into AI for ideation, requirements development, coding and testing, sales and marketing, and most definitely support and customer communication. Not one or a few of those things. All of them.
One of the major problems with AI right now, and again – I’m an OG and still consider myself a champion of the technology – is that the people working on it have decided that general purpose AI, like chatbots or Amazon Echo or general search engine results, is the way to go – to bring in the most money in the shortest time with the most barely acceptable results.
I hate it. But I think we’re finally getting around to collectively learning something I learned when we first set out developing a generative AI platform: the primary use case of generative AI is writing when humans can’t write, or when the data is too rich or the output audience is too fragmented.
In the “real” AI sense, it’s about making decisions and taking actions when humans can’t, when they’re not there physically, or are too slow, or the expertise needed to do it isn’t worth the expense of that expertise.
You know, tasks like – pains me to say it – programming. Results may vary.
That doesn’t mean you and I can’t sit down and have a pleasant chat with an AI bot, but that’s not where the money is heading.
EXPERT OPINION BY JOE PROCOPIO, FOUNDER, TEACHINGSTARTUP.COM @JPROCO
Wednesday, October 16, 2024
Inside AI’s $1 Trillion Cash Bonfire
Generative AI has lured an enormous amount of capital. Indeed, Goldman Sachs estimates companies will spend $1 trillion to use AI chatbots in their operations. One recent example of such capital flows is OpenAI’s $6.6 billion capital raise. This October 2024 investment nearly doubled the ChatGPT provider’s private market value to $157 billion, according to The New York Times.
A look inside the mindset of executives who are driving the investment of this capital reveals a powerful duality, according to my book Brain Rush.
The battle between two fears, missing out on the next big thing and big lawsuits or reputational damage, makes it hard for companies to earn a return on that investment. Keeping that $1 trillion from going up in flames depends on whether generative AI can find a killer app.
A case in point is Apple’s iTunes Store — which resulted in a near-quadrupling of iPod sales. The iTunes Store was a killer app because it made the iPod so much more useful for consumers. For example, joggers who formerly listened to music on a Sony Walkman flocked to the iPod. Why? It was smaller and lighter and enabled them to customize playlists.
Could generative AI find a killer app?
Some technology waves change the world — notably the World Wide Web. Others don’t — such as virtual reality. Despite the massive hype and enormous investment, generative AI appears unlikely to change the world.
Waves that change the world pass three tests:
They make life much better for people.
Many people eagerly pay a high price for the new technology.
Future profits more than offset the investment required to build it.
Sadly, generative AI does not pass any of these tests.
Before getting into the reasons, I should note that I have observed ChatGPT doing interesting things — but so far nothing significant enough to change the world.
For instance, in January, one of my Babson College students uploaded a book I assigned for a course into ChatGPT. He told me he spoke his questions to ChatGPT and the book responded in “a very high quality” voice. I recently used Google’s Deep Dive to create a podcast about Brain Rush led by two AI hosts.
Generative AI has yet to make people’s lives much better
Such uses do not make people willing to pay a high price to keep using the technology.
In a September Babson class, only two students — out of 30 — used ChatGPT occasionally for research. Because of hallucinations, the students had to double-check its results — reducing the technology’s value.
Hallucinations are a feature — rather than a bug — of large language models. That’s because they are trained on data — not all of which is accurate — to make predictions about the next word in a sentence. Sometimes the LLM guesses right, sometimes not.
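To see why, consider a toy sketch of next-word sampling (invented words and probabilities, nothing like a production model’s scale): the model picks a plausible next word, and nothing in that step checks the pick against facts.

```python
import random

# Toy next-word predictor: the "model" assigns probabilities to
# candidate next words for "Three Mile Island melted down in ___".
candidates = {"1979": 0.55, "1982": 0.25, "1974": 0.20}  # invented numbers

def next_word(probs: dict) -> str:
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Usually the sample is "1979", but the model will sometimes assert a
# wrong year with equal fluency, a hallucination by construction.
print(next_word(candidates))
```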
The price users pay does not cover generative AI’s costs
Neither ChatGPT nor Microsoft Copilot earns enough revenue to cover its costs. In 2024, OpenAI expects to generate about $3.7 billion in annual revenue while spending $8.7 billion — producing a $5 billion loss, noted the Times.
While Microsoft declined to quantify its Copilot revenue in the most recent quarter, the AI-powered assistant’s costs are significantly higher than its revenue. For example, GitHub Copilot, a service that helps programmers create, fix, and translate code, costs between $20 and $80 per month to run — way above the service’s $10 per month subscription fee, according to The Wall Street Journal.
People are not willing to pay enough for ChatGPT and Copilot because they fail the critical test of a killer app — they do not relieve customer pain more effectively than current products. Performance and cost issues with Copilot are causing customers to pause their $30-per-month-per-user Copilot contracts for Office 365, according to The Information.
Profits may fall short of the investment needed to build AI chatbots
A big part of companies’ investments in generative AI is the electricity the data centers need to keep AI chatbots running.
Indeed, in September 2024, Microsoft committed an estimated $16 billion to a 10-year contract with Constellation Energy, the company that operates Three Mile Island — one of whose reactors famously melted down in 1979, according to my September 2024 Forbes post.
As I noted in my Value Pyramid case study, most generative AI use cases help people overcome creator’s block — such as the anxiety about writing an email. Fewer generative AI applications help improve the productivity of business functions such as customer service or coding. And few, if any, applications of AI chatbots enable companies to add new sources of revenue.
Until leaders deploy AI chatbots to make life much better for many people, their $1 trillion investment could go up in smoke.
EXPERT OPINION BY PETER COHAN, FOUNDER, PETER S. COHAN & ASSOCIATES @PETERCOHAN
Monday, October 14, 2024
Here’s Why OpenAI’s Deal With Hearst Should Scare Google
If AI really is the future of digital tech, it comes with a pretty awful side dish: AI data scraping. Popular large language model applications are so hungry for data that AI makers have frequently resorted to questionable methods when grabbing as much data as possible from wherever they can, regardless of any moral or legal implications. But in a surprising reversal of this trend, the Wall Street Journal reports that market-leading AI brand OpenAI has signed a deal with giant publishing house Hearst, giving the AI maker access to the stable of over 20 magazine brands and 40 newspapers that Hearst owns—some of which are world-famous titles like Esquire and Cosmopolitan.
The Journal notes that the terms of the deal aren’t disclosed. And why would they be? OpenAI is technically still a startup, albeit a giant one, and it doesn’t want to give away too many business secrets. We can speculate about the number of zeros after the dollar sign, of course, but the amount going into Hearst’s bank account almost doesn’t matter.
That’s because it’s what this deal signifies that’s important. OpenAI will have access to millions of words, photos and other content from Hearst’s vast publishing archive. Hearst will see its content “surfaced” inside ChatGPT when users search for a relevant topic—with citations and direct links so that when a user clicks on a link, they’ll be taken to a Hearst site and thus could be shown ads, boosting revenues. OpenAI, by including this material inside its chatbot, helps keep ChatGPT useful and up to date for users, and thus boosts its revenues. Both companies use business jargon to describe this mutual gain, noting the deal will “enhance the utility and reach” of their offerings, the paper says.
Interestingly, the Journal also quotes Hearst Newspapers President Jeff Johnson on the complicated topic of AI “theft” of journalistic IP—a tricky issue that has seen newspapers, notably the New York Times, sue the AI maker. Johnson explained how important it was that journalism by “professional journalists” remain at the forefront of AI products’ output. Is this simply a case of “if you can’t beat ‘em, sell it to ‘em?”
There is a wrinkle here, and one that the Journal doesn’t investigate. The New York Times’ case, like several others against AI companies from newspapers, book authors and even record labels, rests on the fact that real human work is being used to train a very inhuman AI. Once it has the data, it can create all but “cloned” output, mirroring the style and content of the training material. If an AI can, say, write pieces like a well-known newspaper journalist, isn’t that a threat to that person’s livelihood? Music A-listers recently penned an open letter decrying AI as a threat to human creativity, exploring a very similar hypothesis.
Will giving OpenAI access to Hearst’s archive merely mean the next-gen ChatGPT system can sound like any Cosmo writer on any given topic? Will readers act on sex and fashion tips from a digital creator that has no experience of either? (And, if you think about it, does that even matter?)
What this also means is that AI systems really might be the future of the online search experience. Why would users need to exit ChatGPT to find, say, the latest celebrity gossip when they could simply stay logged into the chatbot to read that stuff, contributing to OpenAI’s revenues? This is another shot across Google’s bows, a serious challenge to a search giant that’s kept a stranglehold on search for decades.
BY KIT EATON @KITEATON
Saturday, October 12, 2024
How AI can level the playing field in learning and education
Patricia Scanlon, Ireland’s first AI ambassador, thinks that AI can make learning more efficient and accessible if we rethink our education system.
AI is rapidly transforming almost all spheres of human activity, from the way we work and the way we create content to even the way we find novel solutions to age-old problems.
While there is much excitement around what AI brings with it, some are also cautious about its implications for the future of jobs and the damage advanced AI could do in the wrong hands.
One such area of human endeavour that AI has already started to upend is learning. The power of generative AI is disrupting classrooms in schools and colleges the world over, with educators scrambling to devise policies to prevent the technology’s misuse.
Individualised help
But with every disruption comes an opportunity to make things better. Patricia Scanlon, Ireland’s first AI ambassador, thinks that when it comes to the impact of AI on education, the novel tech can actually be used as a force for good in learning and development.
“There’s a lot of power in being able to take AI and level the playing field in some ways,” Scanlon said in her keynote speech to an audience of more than 350 people at the Learnovation Summit held in the Aviva Stadium in Dublin today (5 October).
The annual summit was organised by the Learnovate Centre, a learning technology research centre based in Trinity College Dublin.
“Not everybody has access to low student-teacher ratios, after school tutors, helpful parents at home, English as their first language – you can see how that more individualised help can really help in education.”
But it’s not just children in schools who can benefit from the equalising ability of AI technology. Adults, too, Scanlon said, can make the most of LLMs [large language models] that can help them learn things they never had access to learning before.
“Maybe somebody never gets to go to college, but they can educate themselves to a certain point with AI. Not to the point of a full formal education system, but a tool to help, and that’s where the productivity aspect comes in,” she explained.
“And then in the working world, AI can be hugely helpful – particularly for people with dyslexia. You’re levelling the playing field for people like that, who struggle to write from a blank page.”
‘Turn education system upside down’
But is the effect of AI on learning, especially for younger people and children, really such a bed of roses? Scanlon said there are ways in which these tools could be harmful to our development if we start to rely on them too much.
“We wonder if kids are ever going to be able to write for themselves or engage in critical thinking. Conversely, AI can help to ensure that integrity – but it’s going to take work,” she went on.
“You can use the LLM to create live questioning that somebody couldn’t possibly be prepared for, and change the questions based on the answers to drill down into someone’s knowledge.
“Then, together with a little bit of security and analytics, or maybe their style of writing or what they said before or what we know that LLMs produce, you can get to something more like an oral assessment or a defence of a thesis if you want, and AI can help that.”
According to Scanlon, the easy thing to do would be to ban AI. But it’s not necessarily the best way forward. What’s far more beneficial, she argued, is to “turn our whole education system upside down” and look at AI in a different light.
“It’s not going away, so we need to think about how we can use this tool to help with critical thinking, to help them [learners] progress in all aspects of teaching and learning.”
Wednesday, October 9, 2024
What Your Business Can Learn From the World’s Greatest Mathematician’s View of AI
Artificial intelligence is capable of some stunning feats, from generating mind-bogglingly convincing imagery and video to chatting in an incredibly human-like voice. Users’ growing embrace of AI for fun and help at work shows the technology already has practical applications–even before it evolves to super-genius levels–and may take over some roles in the workplace. But UCLA math professor Terence Tao, known as the “Mozart of Math,” isn’t particularly worried that AI is coming for his job soon. Tao, considered the world’s greatest living mathematician, spoke to the Atlantic recently about AI, and his words have impact far beyond the world of equations, proofs and hypotheses.
Tao was asked about the impact of AI on the field of math because he’d recently posted some scathing comments about OpenAI’s latest and supposedly greatest GPT o1 model. Touted as the first model from the leading AI brand that can “reason” rather than simply answer user queries, the model nonetheless left Tao unimpressed: he posted on social platform Mathstodon that in his opinion the cutting-edge AI was only as smart as a “mediocre, but not completely incompetent” graduate student.
The magazine elicited a more in-depth explanation of Tao’s views, and what he said was deeply interesting for anyone who’s thinking about embracing AI into their workplace, or those who are worried that AI will displace people from their jobs. Entrepreneur Peter Thiel, for example, recently suggested AI could actually “come for” roles that rely on math first.
Expanding on his criticism of ChatGPT, Tao said his remarks were misinterpreted. What he had been trying to do, rather than dismiss GPT o1’s capabilities, was point out that he “was interested in using these tools as research assistants.” That’s because a research project has “a lot of tedious steps: You may have an idea and you want to flesh out computations, but you have to do it by hand and work it all out.” This sort of methodical task is exactly what “reasoning” AI models should be great at, saving time and money for people like Tao, whose job involves this kind of data processing. Tao thinks AI tech–at least for now–is only good at this type of “assistant” role, and not necessarily a shining example of excellence here either.
And Tao’s concluding remarks are even more telling. Asked about how AI is taking over some methodical math work, Tao pointed out that this has always been true of technology. To adapt, humans simply “have to change the problems you study.” In terms of AI use, Tao noted he’s “not super interested in duplicating the things that humans are already good at. It seems inefficient.” Instead he foresees a time when, “at the frontier, we will always need humans and AI. They have complementary strengths.”
So far, so very math nerdy. But, sitting at your desk in your office, working at tasks that seemingly have little to do with sophisticated mathematics, why should you care what Tao thinks? At least partly because of the kind of sentiment that Thiel voiced about the future of work. In terms of the question “will AI steal my job?” Tao is very definitely on the side of voices that argue “no.”
In a similar way that the PC changed the average office job, AI will simply change what employees do hour by hour. The tech will mechanize some humdrum “research” tasks, and actually allow some of your workers to work more efficiently at tasks that directly generate revenues. So if you’ve been hesitant to embrace AI’s promise thus far, maybe you can go ahead—and reassure your staff that they’re not at risk.
BY KIT EATON @KITEATON
Monday, October 7, 2024
OpenAI Just Announced 4 New AI Features, and They’re Available Now
OpenAI announced a slew of updates to its API services at a developer day event today in San Francisco. These updates will enable developers to further customize models, develop new speech-based applications, reduce prices for repetitive prompts, and get better performance out of smaller models.
OpenAI announced four major API updates during the event: model distillation, prompt caching, vision fine-tuning, and the introduction of a new API service called Realtime. For the uninitiated, an API (application programming interface) enables software developers to integrate features from an external application into their own product.
Model Distillation
The company introduced a new way to enhance the capabilities of smaller models like GPT-4o mini by fine-tuning them with the outputs of larger models, called model distillation. In a blog post, the company said that “until now, distillation has been a multi-step, error-prone process, which required developers to manually orchestrate multiple operations across disconnected tools, from generating datasets to fine-tuning models and measuring performance improvements.”
To make the process more efficient, OpenAI built a model distillation suite within its API platform. The platform enables developers to build their own datasets by using advanced models like GPT-4o and o1-preview to generate high-quality responses, fine-tune a smaller model to follow those responses, and then create and run custom evaluations to measure how the model performs at specific tasks.
OpenAI says it will offer 2 million free training tokens per day on GPT-4o mini and 1 million free training tokens per day on GPT-4o until October 31 in order to help developers get started with distillation. (Tokens are chunks of data that AI models process in order to understand requests.) The cost of training and running a distilled model is the same as OpenAI’s standard fine-tuning prices.
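As a rough illustration of the flow under OpenAI’s Python SDK (the prompt, metadata tag, and training-file ID below are placeholders, and exporting stored completions to a JSONL file happens on the platform side):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: generate high-quality answers with the large model and store
# them on the platform for later dataset building.
completion = client.chat.completions.create(
    model="gpt-4o",
    store=True,  # persist this completion for distillation
    metadata={"purpose": "distillation-demo"},  # hypothetical tag
    messages=[{"role": "user", "content": "Summarize our refund policy."}],
)

# Step 2: once the stored completions are exported as a JSONL training
# file, fine-tune the smaller model to imitate the larger one's answers.
job = client.fine_tuning.jobs.create(
    training_file="file-abc123",  # placeholder file ID
    model="gpt-4o-mini-2024-07-18",
)
print(job.status)
```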
Prompt Caching
OpenAI has been laser-focused on driving down the price of its API services, and has taken another step in that direction with prompt caching, a new feature that enables developers to reuse commonly occurring prompts without paying full price every time.
Many applications that use OpenAI’s models include lengthy prefixes in front of prompts that detail how the model should act when completing a specific task, like directing the model to respond to all requests with a chipper tone or to always format responses in bullet points. Longer prefixes typically improve the model’s performance and help keep responses consistent, but they also increase the cost per API call.
Now, OpenAI says the API will automatically save or “cache” lengthy prefixes for up to an hour. If the API detects a new prompt with the same prefix, it will automatically apply a 50-percent discount to the input cost. For developers of AI applications with very focused use cases, the new feature could save a significant amount of money. OpenAI rival Anthropic introduced prompt caching to its own family of models in August.
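Caching is automatic, so the main job for developers is structuring requests so the long, unchanging instructions come first. Here is a minimal sketch, with an invented style guide standing in for a real prefix (OpenAI has said only prompts of roughly 1,024 tokens or more get cached):

```python
from openai import OpenAI

client = OpenAI()

# Keep the long, unchanging instructions first; caching keys on the
# prompt prefix, so identical leading content across calls gets the
# discount automatically; no special flag is required.
LONG_STYLE_GUIDE = (
    "You are a support agent. Always answer in bullet points..."
)  # imagine ~1,500 tokens of policies and formatting rules here

def answer(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": LONG_STYLE_GUIDE},  # shared prefix
            {"role": "user", "content": question},            # varies per call
        ],
    )
    return response.choices[0].message.content

print(answer("How do I reset my password?"))
```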
Vision Fine-Tuning
Developers will now be able to fine-tune GPT-4o with images in addition to text, which OpenAI says will enhance the model’s ability to understand and recognize images, enabling “applications like enhanced visual search functionality, improved object detection for autonomous vehicles or smart cities, and more accurate medical image analysis.”
By uploading a dataset of labeled images to OpenAI’s platform, developers can hone the model’s performance when it comes to understanding images. OpenAI says that Coframe, a startup building an AI-powered growth engineering assistant, has used vision fine-tuning to improve the assistant’s ability to generate code for websites. By giving GPT-4o hundreds of images of websites and the code used to create them, “they improved the model’s ability to generate websites with consistent visual style and correct layout by 26% compared to base GPT-4o.”
To get developers started, OpenAI will give out 1 million free training tokens every day during the month of October. From November on, fine-tuning GPT-4o with images will cost $25 per one million tokens.
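A sketch of what a single training example might look like in the JSONL file that fine-tuning jobs consume; the image URL, question, and label here are invented:

```python
import json

# One vision fine-tuning example: a labeled image paired with the
# desired assistant output. URL and text are illustrative only.
example = {
    "messages": [
        {"role": "user", "content": [
            {"type": "text", "text": "What component is shown in this photo?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/widget.jpg"}},
        ]},
        {"role": "assistant", "content": "A model X-100 widget."},
    ]
}

# Fine-tuning expects one JSON object per line (JSONL).
with open("train.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")
```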
Realtime
Last week, OpenAI made its human-sounding advanced voice mode available for all ChatGPT subscribers. Now, the company is enabling developers to build speech-to-speech applications using its technology.
If a developer had previously wanted to create an AI-powered application that could speak to users, they’d first need to transcribe the audio, pass the text over to a language model like GPT-4 in order to be processed, and then send the output to a text-to-speech model. OpenAI says this approach “often resulted in loss of emotion, emphasis, and accents, plus noticeable latency.”
With the Realtime API, audio is immediately processed by the API without needing to link multiple applications together, making it much faster, cheaper, and more responsive. The API also supports function calling, meaning applications powered by it will be able to take actions, like ordering a pizza or making an appointment. Realtime will eventually be updated to handle multimodal experiences of all kinds, including video.
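A bare-bones sketch of opening a Realtime session over WebSocket, based on the beta endpoint and event names OpenAI documented at launch; treat the details as illustrative rather than definitive:

```python
# pip install websocket-client
import json
import os
import websocket

url = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview-2024-10-01"
headers = [
    "Authorization: Bearer " + os.environ["OPENAI_API_KEY"],
    "OpenAI-Beta: realtime=v1",
]

ws = websocket.create_connection(url, header=headers)

# Ask the model for a response; with "audio" in modalities, speech
# would arrive as base64-encoded chunks in subsequent server events.
ws.send(json.dumps({
    "type": "response.create",
    "response": {"modalities": ["text"], "instructions": "Say hello."},
}))

# Read a few server events from the stream and show their types.
for _ in range(5):
    print(json.loads(ws.recv()).get("type"))

ws.close()
```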
To process text, the API will cost $5 per 1 million input tokens and $20 per 1 million output tokens. When processing audio, the API will charge $100 per 1 million input tokens and $200 per 1 million output tokens. OpenAI says this equates to “approximately $0.06 per minute of audio input and $0.24 per minute of audio output.”
Friday, October 4, 2024
Longshoremen’s Fight Against Automation Confronts an AI Future
A strike by dockworkers across the U.S. threatens to close ports on the East and Gulf coasts and seriously impact thousands, if not millions, of business supply chains, causing retailers to brace for potential shortages of some products and disappointed customers. This kind of disruption is the goal of strikes, of course, but the longshoremen’s major demand, beyond higher wages, is quite startling. As the New York Post put it, the workers’ union is demanding a “total ban on automation,” and is holding the industry hostage for what analysis firm J.P. Morgan estimates as a $5 billion a day impact to the economy.
Specifically, the International Longshoremen’s Association says 85,000 U.S. workers and “tens of thousands” more around the world are demanding a ban on all kinds of automation at cargo ports. That prohibition would apply to cranes, gates, and moving shipping containers around the busy, sometimes chaotic scenes at commercial dockyards, according to the Post. What this means is that when a giant container ship arrives at a dock, every one of those multi-ton shipping containers would be shackled to a crane’s cable, lifted off the ship, moved ashore, stacked, organized, and moved around by trucks and hoists, each with a human at the controls.
This is dangerous, heavy-duty work, and in many cases it requires an expert driver: Forklifts and cranes are complicated machines, and in the case of cranes it’s often necessary to understand the physics of which type of load is being moved by the cable in order to safely lift it. The danger present in this industry is typified by dozens of articles each year documenting crane- or container-related accidents at ports around the world: Two weeks ago, industry news site the Maritime Executive reported on an incident in the Chinese port of Yantian where a crane collapsed onto a container ship, for example. In July, Taiwan News showed dramatic video of a container crane failure at the port of Kaohsiung. There are countless other examples.
But herein lies the problem. Accidents at ports risk not only physically harming people but also causing economic hits through damaged cargo, expensive dockside machinery, or shipboard equipment. Apart from accidents, mislabeling or misdirecting cargo at a port could also hit businesses’ revenue. Replacing fallible human workers could thus save port operators a lot of money.
Tracking cargo as it moves from staging point to staging point and through customs clearance is a job that could be handled extraordinarily well by, say, AI-powered robot lifting vehicles. The task is perfectly suited to digital asset management, enabled by high-precision 5G and internet of things tech. Automated cranes, auditing equipment, and robot trucks can work 24/7, 365, and never ask for more pay or get injured.
In a way, the longshoremen’s demand for job protection echoes much of what’s going on as AI use increases throughout the working world. It strongly recalls 2023’s SAG-AFTRA actors’ strike, which centered on protecting real humans’ incomes against the threat of AI replicas. A recent video game performers’ strike had the same issue at its heart.
The dockworker strike has more than a hint of the ongoing “Will AI steal my office job?” debate, and experts can’t seem to offer a definitive yes or no answer. The longshoremen’s union claims about the threat of automation may even remind history buffs of the Luddite movement in 19th-century England, where workers rioted against the automated textile machines that were replacing them.
But this is the 21st century, and self-driving truck technology really does seem like the coming reality for some aspects of cargo transport. AI sophistication and complexity advance day by day, and robots in the form of AI-driven androids are expected to reach many factory floors over the course of the next several years. Meanwhile, research shows AI won’t necessarily steal office jobs, but simply offers the chance to boost workers’ efforts as they labor, and even to open totally new roles in a new industry, as a recent report about transportation jobs showed. Can manual workers really hold out against “automation” forever?
BY KIT EATON @KITEATON
Wednesday, October 2, 2024
How AI Helps Retail Grow Faster
Generative AI chatbots that respond to natural language questions with cogent sentences have the potential to help retailers grow faster.
If chatbot responses are consistently accurate and effective, customers will happily use them at all hours of the day. If the chatbots too frequently hallucinate — confidently reply to user questions with incorrect answers — customers could stop doing business with the retailer.
The power and peril of generative AI for customer service
Consider the experience of Rick McConnell, CEO of Dynatrace, a Massachusetts-based software company. Dynatrace is optimistic about the use of generative AI-powered chatbots to improve customer service. “A couple of preliminary killer apps will emerge for generative AI. Based on my recent experiences, one of them will be customer service,” McConnell told me in a February 2024 interview.
“One went very well: I was trying to fix a billing issue with a cellular provider and the chatbot solved the problem fast,” he noted. “The second one went so badly that I will never do business with the company again. I was trying to correlate the contact lenses I received with the prescription. The contact lens provider’s chatbot couldn’t get me a solution. After three different segments, I never got it resolved.”
McConnell sees two keys for companies seeking to deploy generative AI for customer service. “First, they should train their large language models with high-quality data that includes relevant questions and great answers,” he says. “Second, the underlying data source for answering customer questions is essential to the process. The LLM must be accessing accurate and up-to-date data about each customer — that companies do not want to share with ChatGPT,” he said.
Retailers should invest in the highest payoff applications of generative AI. Rather than focusing on helping employees overcome creator’s block or boosting the productivity of coding or other business functions, retailers should use AI chatbots to drive revenue growth, according to my Value Pyramid concept.
Bearing McConnell’s caveats in mind, here are three ways retailers are using generative AI to drive revenue growth.
1. Personalized shopping experiences
Generative AI can use a customer’s prior purchasing behavior to suggest new purchases likely to appeal to each individual. Amazon uses a customer’s entire interaction with the e-tailer — from browsing to buying and paying — to refine recommendations to match each customer’s unique preferences, according to Closeloop.
Amazon is not the only retailer using generative AI to personalize shopping experiences. Carrefour’s Hopla chatbot uses a consumer’s budget, dietary preferences, and menu ideas to offer real-time grocery suggestions and make shopping more engaging, according to Oracle.
2. AI-enhanced virtual reality experiences
When people shop for furniture, they worry whether what looks good in the store or online will work when it arrives at their home. By pairing virtual reality and generative AI, furniture retailers are boosting confidence and achieving higher conversion rates for online shoppers. For example, Ikea lets customers see how furniture and décor will look in their homes, noted Closeloop.
Wayfair, the Boston-based online furniture retailer, introduced Decorify, a free generative AI tool to help customers redesign their living rooms. Customers upload a photo of their living rooms and Decorify creates “photorealistic images” of proposed designs and prompts consumers with real products similar to the ones in the photo, said Fiona Tan, Wayfair’s chief technology officer, according to Brain Rush: How to Invest and Compete in the Real World of Generative AI.
3. Dynamic pricing and promotions
Generative AI can help retailers boost revenue by setting the right price at the right moment. By analyzing “demand fluctuations, competitor activity, customer preferences, and historical sales trends,” according to Closeloop, retailers can boost revenue by quickly adjusting prices and promotions.
For example, Macy’s — which projects AI will drive over $7.5 billion in new business by 2029 — adjusts prices across its online and physical stores. The retail giant uses generative AI to increase or decrease prices dynamically based on how well certain items are selling, and offers “targeted discounts to customers based on their past shopping habits and preferences,” Closeloop noted.
If these three uses of generative AI help retailers exceed investors’ revenue growth expectations, other retailers will try to replicate these AI applications.
EXPERT OPINION BY PETER COHAN, FOUNDER, PETER S. COHAN & ASSOCIATES @PETERCOHAN