Monday, December 30, 2024
GEN-Z IN 2024
They make up one-fifth of the labor force, they have strong entrepreneurial inclinations, and they’re not afraid to make their voices heard—especially when they feel they’ve been wronged. The members of the generation born between 1997 and 2012 are coming into their own and changing the face of the workplace in the process. It’s no wonder that, in 2024, we just couldn’t stop talking about Gen-Z.
This year, Inc. paid close attention to this cohort of entrepreneurs and employees. We’ve taken notice of their drive for change, as editor-in-chief Mike Hofman sat down with the youngest honorees of this year’s Inc. 5000 to understand how Gen-Z business owners are shaping workplace culture. We’ve also dug into the growing pains Gen-Zers are facing as they step into the corporate world for the first time: why they’re “consciously un-bossing,” getting fired from their first jobs, feeling unsatisfied, and quitting as a result. In response, we’ve shared advice for business owners to better hire—and, critically, retain—their Gen-Z talent.
As this generation has increased its buying power, we’ve also paid close attention to its digital habits (as it fueled the seemingly never-ending brat summer) and figured out the ways brands could better market to it, giving in to unhinged content that is decidedly not demure.
With 2025 just around the corner, we’ll be keeping an eye on how this generation continues to reshape the working world—and paying close attention to the Gen-Z entrepreneurs who are building some of the most world-changing companies. There’s a lot to look forward to as Gen-Z continues to carve its own path forward.
—Rebecca Deczynski, senior editor at Inc.
Friday, December 27, 2024
How Top Cybersecurity Firms Are Scaling Faster and Smarter to Win in 2025
The cybersecurity market continues to take off—and so does the competition. With global spending on information security projected to hit $212 billion in 2025, according to consultancy Gartner—a 15 percent increase from 2024—cybersecurity companies face a relentless battle on two fronts: defending against evolving threats and outpacing rivals to seize market share.
The opportunity is massive, but so is the pressure to deliver. In cybersecurity, the buying process is inherently nonlinear. Buying cycles are unpredictable, threats evolve daily, and missed revenue signals can cost millions.
For companies chasing growth, IPOs, or exits, the game has changed. To stay ahead, top cybersecurity companies are modernizing go-to-market strategies, zeroing in on high-growth segments, and creating repeatable, predictable wins.
As CEO and co-founder of Clari, an enterprise revenue platform, I work alongside Fortune 500 companies—including cybersecurity clients Fortinet and Okta—to drive predictable revenue growth. Here’s how they’re doing it:
Scaling revenue growth exponentially
In a hyper-competitive market, predictable revenue growth is the result of discipline, data, and decisive action. To achieve this, companies need a reliable revenue baseline to accurately assess the probability of closing a sales opportunity, identify warning signs to weed out no-decision and slipped opportunities early, and follow rigorous forecasting and sales principles to drive more predictable business outcomes.
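The forecasting discipline described above—weighting each open deal by its probability of closing and flagging stalled deals early—can be sketched in a few lines. This is a hypothetical illustration, not Clari’s actual methodology; the stage weights, deal records, and staleness threshold are all invented for the example.

```python
# Hypothetical sketch: a weighted-pipeline revenue forecast with slip-risk flags.
# Stage-to-win probabilities below are invented for illustration.
STAGE_WIN_RATES = {"discovery": 0.10, "proposal": 0.35, "negotiation": 0.70}

def forecast(deals, max_stale_days=30):
    """Sum stage-weighted deal values; flag deals with no recent activity."""
    total, at_risk = 0.0, []
    for deal in deals:
        total += deal["value"] * STAGE_WIN_RATES[deal["stage"]]
        if deal["days_since_activity"] > max_stale_days:
            at_risk.append(deal["name"])  # warning sign: likely to slip or die
    return total, at_risk

deals = [
    {"name": "Acme", "value": 100_000, "stage": "negotiation", "days_since_activity": 5},
    {"name": "Globex", "value": 50_000, "stage": "discovery", "days_since_activity": 45},
]
expected, risks = forecast(deals)
print(round(expected), risks)  # 75000 ['Globex']
```

The point of a baseline like this is consistency: when every team computes expected revenue the same way, week-over-week changes become meaningful signals rather than guesses.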
Fortinet, a global cybersecurity leader, tackles this problem head-on by centralizing revenue operations (RevOps) data. The company integrates information from separate platforms—like emails, calls, and calendar invites—into a single system. This unified data creates a shared source of truth, enabling Fortinet to build a consistent revenue process.
This isn’t just about centralization; it’s about transformation. Armed with advanced forecasting capabilities and historical trend analysis, Fortinet gains an edge. Automated insights into future performance allow the entire revenue team to act proactively, earlier in the quarter.
With this rigor, Fortinet achieved 97 percent forecasting accuracy. Fortinet’s leadership can make confident decisions about resource allocation, capacity planning, and reinvestment. For a market as high-stakes as cybersecurity, this level of precision doesn’t just drive growth—it gives companies a competitive edge.
Fortinet has proved that when companies eliminate guesswork and operate with precision, growth becomes scalable and success becomes repeatable.
Predictable growth: command and control revenue
Before its IPO, Okta had a problem that most high-growth companies face: the chaos of unreliable forecasting. Sales reps relied on best-guess numbers, rolling up inconsistent estimates into leadership reports. This manual, time-consuming system was destined to break under the weight of Okta’s rapid growth—and for a company racing toward an IPO, that level of unpredictability wasn’t an option.
Okta’s leadership knew that to sustain their momentum, they needed to overhaul not just their processes, but their entire approach to revenue alignment and execution. They implemented a structured, scalable forecasting framework and brought every critical team—sales, marketing, and customer success—into lockstep.
The shift wasn’t just operational; it was cultural. A consistent cadence of pipeline reviews and forecast meetings became the backbone of their revenue strategy, driving collaboration and accountability across the organization.
The results were transformative. Okta gained the visibility and consistency needed to navigate its IPO with confidence. And leadership had clarity and trust in their revenue data, empowering teams to make data-driven decisions that balanced short-term priorities with long-term growth.
By turning forecasting into a disciplined, cross-functional process, Okta didn’t just solve a pain point—it built a foundation for predictable, scalable growth. The story illustrates that when teams align around a shared revenue strategy, chaos becomes control, and growth becomes achievable.
Why operational excellence will define the next cybersecurity leaders
The cybersecurity industry is operating under relentless pressure: Threats are evolving, competition is fierce, and the margin for error is razor-thin. Cybersecurity IPO winners have proved that staying ahead requires more than innovation—winning demands operational excellence across the entire revenue team.
By unifying data, aligning teams, and modernizing forecasting, they’ve used data and technology to deliver precision. For cybersecurity firms aiming to grow, scale, or go public, the lesson is clear: Operational discipline isn’t a choice—it’s the new standard for success.
EXPERT OPINION BY ANDY BYRNE, CEO, CLARI
Wednesday, December 25, 2024
I Called 1-800-ChatGPT and Talked to the AI Chatbot. It Might Be the Smartest Idea I’ve Seen Yet
This morning, I spent 15 minutes on the phone with ChatGPT. You probably know by now that OpenAI released the ability to dial 1-800-ChatGPT to interact with the chatbot via voice call. And, so, for your sake, after my kids left for school, I sat down and made a phone call.
It’s a weird thing to consider, partially because—as a general rule—I try as hard as I can never to talk to anyone on the phone other than my wife or kids. Beyond that, I’d almost always rather communicate by text, or email, or Slack, or anything that doesn’t involve a synchronous voice conversation.
Also, I can’t remember the last time I dialed a 1-800 number that wasn’t to call an airline. I’d pretty much assumed they’d all been taken. Though considering the amount of money OpenAI paid to buy the chat.com vanity URL, I imagine this was a bit easier.
I came prepared with a list of questions to see how the experience is different from all of the other ways you can interact with ChatGPT.
On the one hand, it’s a lot less useful than visiting chat.com or using one of the various apps. It’s more of a novelty or party trick that you might pull out because, hey, why not make a phone call to a chatbot just for fun?
On the other hand, it works exactly like ChatGPT, except more friendly and more polished than voice mode in the app. I used the iOS 18 feature that lets you record phone calls, and when my iPhone gave the “this call is being recorded” alert, ChatGPT responded, “Great, let’s talk!”
Then, I asked what seemed like an obvious question for mid-December: “What are some common things that kids do to try to have a snow day?”
“Kids have some classic snow day rituals,” ChatGPT replied. “They might wear their pajamas inside out, flush ice cubes down the toilet, or even sleep with a spoon under their pillow, all in the hopes of a snow day miracle.”
Which, to be fair, is exactly the correct answer. If you had asked any of our four children, they would have given you that answer almost word for word. Of course, you could get the same information via any of the other ways you can interact with ChatGPT, so why a 1-800 phone number?
Apparently, a lot of people think it’s a great idea. While OpenAI wouldn’t give specifics, CEO Sam Altman tweeted, “Wow people really love 1-800-CHATGPT lol.” It’s hard to measure the success of a product or feature on the basis of a vague tweet from a CEO, but it’s not surprising to me at all that so many people would want to try this out. Again, even if for no other reason than party trick.
I think it’s pretty clear that I am not the audience for this. I have a ChatGPT Plus account, and I regularly use ChatGPT on my iPhone and Mac. I even changed my default search engine in Brave to the ChatGPT extension.
I’m not likely to make a phone call to talk to a robot unless it’s just for fun. In fact, I’ve written before that my least favorite thing in the world is when companies force you to have a conversation with a customer service bot instead of just letting you talk to a real human. That’s not fun.
Fun, it turns out, is a pretty important part of this—and we’ll come back to it in a minute.
One thing, however, did surprise me: ChatGPT is very good at understanding voice prompts. Much better than other voice assistants, at least in my experience with the phone version.
Which, honestly, is kind of brilliant actually. It does not seem far-fetched that, over the next few weeks, as people get together for the holidays, someone will have a conversation or ask a question, and someone else will say, “Hey, I know how we can get the answer to that.” How fun will it be at that moment to just dial 1-800-ChatGPT? If you do, you’ll be demoing ChatGPT to a bunch of people who have probably heard of the chatbot but have never used it in any meaningful way.
This is why this is so brilliant. It’s fun, and it reduces the friction involved with downloading an app or navigating to a website and creating an account. In that sense, it’s exposing an entirely new audience to ChatGPT, in a fun and accessible way. That’s one of the smartest ideas I’ve seen yet.
EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN
Monday, December 23, 2024
Gen-Zers Are Big on Side Hustles, and They’re Using AI to Juggle It All
The gig economy is still very much alive, thanks in part to Gen-Z and Millennials. A new survey from Intuit finds that nearly two-thirds of people between the ages of 18 and 35 have either started or plan to launch a side hustle. And, increasingly, they’re leaning on artificial intelligence to do so.
Gen-Zers are opting for a more entrepreneurial approach to their careers. And this, writes Intuit, represents “a material shift in how younger generations approach work, purpose, and financial independence.”
Gen-Zers and Millennials have a strong desire to be their own boss, according to the survey. Nearly half of the 1,000 people Intuit spoke with said they wanted to be in charge of their own destiny. Another 42 percent said they were pursuing their passions with these side gigs. The flexibility of these jobs and the chance to build something personal and unique were also key motivators.
Gen-Zers’ status as digital natives, the first generation to grow up with the internet as a part of daily life, is a big part of their embrace of side gigs as well. Some 80 percent of Gen-Z business owners started their businesses online or with a mobile component.
Social media is also a tool they’re employing, with 44 percent relying on platforms, including Instagram and TikTok, to market their side hustle’s services and raise brand awareness.
“There’s an entire cultural renaissance happening on social media where creators, business owners and side hustlers are finding their target audience, customer or next gig all in the palm of their hand whenever or wherever they decide to work,” says Intuit consumer trend expert Marissa Cazem.
The side hustles, for now, are being run alongside a regular job. And while 65 percent of those surveyed who are currently operating one say they plan to continue doing so in 2025, finding the time remains the biggest challenge.
That’s where AI comes in. The advances the technology offers in reducing the time required for certain tasks have been a boon for Gen-Zers. They are using AI for things like content creation, customer service, and even logo creation and web design.
Some Gen-Z founders are even leaning on AI to exit their side hustles. In 2018, Ben Zogby launched HighStrike, which offers educational resources and webinars to help people learn how to invest. He worked on the business during nights and weekends after finishing his 9-to-5 engineering job.
Eventually, it found an audience—and earlier this year, the 27-year-old Bostonian sold the business for $1.8 million.
Zogby was able to exit for that impressive amount thanks to Flippa, an online platform that uses AI to connect founders with potential buyers, suggesting valuations in as little as 30 minutes. The tool’s large database of buyers also let Zogby locate targeted bidders, saving the time that might otherwise have gone into looking for the right person to buy the business. In the end, HighStrike found itself the subject of a bidding war, which resulted in the $1.8 million exit.
Just 3 percent of Gen-Z workers with side gigs say they have failed. Most pivot, Intuit said, when things aren’t working. The gigs can be lucrative, as well. On average, side hustles are profitable after three to six months. And a separate study from Bankrate found the average income of these gigs is $891 per month.
“Gen Z and millennials are reshaping the economic landscape,” wrote Cazem of Intuit. “They’re not just participating in the gig economy—they’re leading it, armed with digital tools, entrepreneurial spirit, and a drive for autonomy.”
BY CHRIS MORRIS @MORRISATLARGE
Friday, December 20, 2024
Why Microsoft’s New AI May Speed Up Your Company’s Use of New Technology
While businesses embrace AI systems like OpenAI’s ChatGPT or Google’s Gemini, keen to reap the money- or time-saving benefits they can offer, it’s worth remembering that the technology requires vast, often pricey computing resources. This means companies that want to run their own custom AI systems either have to install expensive facilities or access a third party’s AI via the cloud—a process that can be insecure. Enter Microsoft’s new Phi-4, a much smaller AI model, technologically speaking, than its big-name rivals. But though Phi is small, it’s still mighty: data show it performs as well as, if not better than, the bigger AIs, news site VentureBeat reports.
As VentureBeat notes, enterprises that are deploying AI solutions to help streamline their company’s costs, or turbo-boost worker productivity, can face high bills for the computing and energy resources needed to run conventional “big” AI models. As the site says, “many organizations have hesitated to fully embrace” large AI models due to the cost. But Microsoft’s new Phi-4 doesn’t need such large technological systems, and could even bring cutting-edge AI capabilities within reach of mid-sized companies, or non-tech outfits that lack big IT budgets. As well as being small, data on how Phi-4 works show it’s really good at math problems, making it a promising tool for research, engineering problem-solving, financial modeling, and similar tasks that smaller companies could tackle with a little AI help.
Why else would a smaller company embrace a small AI like Phi-4?
A recent report in the Economist offered a surprising reason. Fast-developing AI may be a threat to some enterprises, since market-leading models are already capable of replicating—perhaps for free—the niche capabilities some companies sell as their core business. But while that threatens their future profitability, other enterprises may find benefits in embracing the tech early and innovatively.
The publication cites an AI-boosted success at the language-learning app Duolingo. The app’s core lessons can be delivered by a chatbot like ChatGPT for free, potentially casting a shadow on Duolingo’s future. But the company leaped to embrace AI, launching a souped-up video chatbot that lets language learners practice speaking and get feedback on their efforts. The company even used this AI avatar as part of a recent financial call with investors. The AI delivered Duolingo’s quarterly results—to critical acclaim.
So how exactly does Phi-4 differ from, say, a large AI model like Google’s Gemini and why should you care?
It’s a question of scale. As VentureBeat explains, models like Gemini can have hundreds of billions—or maybe trillions—of parameters built into their algorithms. These parameters get subtly tweaked when a chatbot AI is “trained” using real-world data. The industry has been advancing on the general principle that bigger is better, with more parameters in the model apparently equating to more sophisticated answers from the chatbot when users query it. But a huge database of parameters needs giant server-scale computers for storage, and countless expensive AI processing chips to trawl through the data when the AI is queried or being trained with new information.
To give a sense of the scale involved, Google and Microsoft have said their next-gen AI systems will need $100 billion investments in hardware and software. But Phi-4 has just 14 billion parameters in its model, making it much more reasonably sized, so it could be run on a typical server that’s affordable for much smaller companies—enterprises that want to run tailor-made AI systems under their own control, to prevent sensitive company info from leaking out via a cloud-based AI service.
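The scale gap is easy to see with back-of-envelope arithmetic. The sketch below estimates only the memory needed to hold model weights, assuming 2 bytes per parameter (16-bit precision), and ignores the extra memory that training and inference require.

```python
def model_memory_gb(params_billions, bytes_per_param=2):
    """Rough memory needed just to store model weights (fp16 = 2 bytes each)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# Phi-4's 14 billion parameters vs. a hypothetical 1-trillion-parameter model:
print(model_memory_gb(14))    # 28.0 (GB) -- within reach of a single beefy server
print(model_memory_gb(1000))  # 2000.0 (GB) -- demands a whole fleet of accelerators
```

Under these assumptions, a 14-billion-parameter model fits in roughly 28 GB, while a trillion-parameter model needs about two terabytes for weights alone, which is why the latter lives only in giant data centers.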
Recently some AI companies, like OpenAI, seem to have stalled a little in pushing for ever-bigger next-generation AI models. So it’s possible that Microsoft’s Phi-4 model shows that in some AI matters, size doesn’t really matter—what makes it good for business is how your company might use it.
BY KIT EATON @KITEATON
Wednesday, December 18, 2024
This Futurist Predicts a Coming ‘Living Intelligence’ and AI Supercycle
Recent advancements in artificial intelligence hold immense disruptive potential for businesses big and small. But Amy Webb, a futurist and NYU Stern School of Business professor, says AI isn’t the only transformative technology that businesses need to prepare for. In a new report, published by Webb’s Future Today Institute, she predicts the convergence of three technologies: artificial intelligence, together with advanced sensors and bioengineering, will create what’s known as “living intelligence” that could drive a supercycle of exponential growth and disruption across multiple industries.
“Some companies are going to miss this,” Webb says. “They’re going to laser focus on AI, forget about everything else that’s happening, and find out that they are disrupted again earlier than they thought they would.”
A ‘Cambrian Explosion’ of sensors will feed AI
Webb refers to AI as “the foundation” and “everything engine” that will power the living intelligence technology supercycle. The exponential costs of computing to train large language models, the report notes, are driving the formation of small language models that use less, but more focused, data. Providing some of that data will be a “Cambrian Explosion” of advanced sensors, a reference to the period of rapid evolutionary development on Earth more than 500 million years ago. Webb anticipates that these omnipresent sensors will feed data to information-hungry AI models.
“As AI systems increasingly demand diverse data types, especially sensory and visual inputs, large language models must incorporate these inputs into their training or risk hitting performance ceilings,” the report reads. “Companies have realized that they need to invent new devices in order to acquire even more data to train AI.”
Webb anticipates personalized data, particularly from wearable sensors, will lead to the creation of personalized AI and “large action models” that predict actions, rather than words. This extends to businesses and governments, as well as individuals, and Webb anticipates these models interacting with one another “with varying degrees of success.”
The third technology that Webb anticipates shaping the supercycle is bioengineering. Its possible futuristic applications include computers made of organic tissue, such as brain cells. This so-called organoid intelligence may sound like science fiction—and for the most part today, it is—but there are already examples of AI revolutionizing various scientific fields including chemical engineering and biotech through more immediate applications like research in drug discovery and interaction. In fact, the scientists who won the Nobel Prize in Chemistry this year were recognized for applying artificial intelligence to the design and prediction of novel proteins.
What it means for businesses
Living intelligence may not seem applicable for every business—after all, a local retail shop, restaurant, or services business may not seem to have much to do with bioengineering, sensors, and AI. But Webb says that even small and medium-size businesses can gain from harnessing “living intelligence.” For example, a hypothetical shoe manufacturer could feel its impact in everything from materials sourcing to the ever-increasing pace of very fast fashion.
“It means that materials will get sourced in other places, if not by that manufacturer, then by somebody else,” she says. “It accelerates a lot of the existing functions of businesses.”
Future-proofing for living intelligence
Webb says an easy first step for leaders and entrepreneurs hoping to prep for change is to map out their value network, or the web of relationships from suppliers and distributors to consumers and accountants that help a company run. “When that value network is healthy, everybody is generating value together,” she says.
Second, she advises entrepreneurs to “commit to learning” about the coming wave of innovation and how it could intersect with their businesses.
“Now is a time for every single person in every business to just get a minimal amount of education on what all of these technologies are, what they aren’t, what it means when they come together and combine,” she says. “It’ll help everybody make decisions more easily when the time comes.”
Finally, she urges companies large and small to plan for the future by mapping out where they’d like to see their company—and reverse engineer a strategy for getting there.
“I know that’s tough. They’re just trying to keep the lights on or go quarter by quarter,” she says. “Every company should develop capabilities and strategic foresight and figure out where they want to be and reverse engineer that back to the present.”
Monday, December 16, 2024
Exclusive: MasterClass Is Introducing AI Mentors, Including a Mark Cuban Chatbot. Any Questions?
MasterClass is bringing its famous teachers into the AI arena.
The online learning platform known for its wide variety of celebrity instructors is launching MasterClass On Call, a standalone product that will allow customers to chat with AI-powered duplicates of the platform’s teachers. The cost will be $10 per month or $84 per year.
MasterClass founder and CEO David Rogier says the company has been experimenting with the concept of AI versions of its instructors since the launch of OpenAI’s GPT-3 in 2022. He sees the technology as the key to unlocking a feature that MasterClass customers have been requesting for years: the ability to ask its celebrity instructors for advice. Big names like Ray Dalio, Richard Branson, and yes, Mark Cuban, have already inked deals to collaborate with MasterClass on these AI personas.
With the rise of generative AI, Rogier says a shift toward on-demand learning is underway. “If I’m negotiating a business deal, I need advice right now,” he says. “I don’t want to sit through an eight-hour class. Just tell me what to do.”
Subscribers to MasterClass On Call will gain unlimited, on-demand access to a collection of AI personas designed to be artificial mentors. For example, Rogier says that aspiring entrepreneurs could ask Cuban’s AI to help improve a pitch and role-play as a potential investor. Cuban said in a statement that the new product is “going to be an important tool for entrepreneurs and something I’m excited to be a part of.”
In an exclusive demo, Inc. got access to AI versions of sleep expert Matt Walker and Black Swan Group founder and former FBI hostage negotiator Chris Voss, the first two personas currently available in the public beta. The AI voices are remarkably similar to their human counterparts, with natural-sounding cadence and fast response times. When asked for help with a hypothetical salary negotiation, the AI-Voss discussed how to approach the conversation, provided tips on how to strike a balance between confidence and humility, and drafted an initial outreach email.
Future updates will bring new in-development personas, enable the AI mentors to remember previous conversations, and give users the ability to upload documents (like pitch decks) for the AI instructors to review.
Creating these AI mentors is no easy feat. MasterClass chief technology officer Mandar Bapaye says that the company links together “an orchestra of multiple AI models” to handle individual components, like providing the mentors’ knowledge base or transforming text into speech.
The knowledge model is trained on information contained in the mentors’ already-existing MasterClass courses, along with a curated selection of writings and audio recordings. In addition, MasterClass holds extensive interviews with mentors to gather both voice samples and data regarding how they respond to a wide variety of questions. Mentors also give periodic feedback to continuously improve the AI’s performance, like choosing which of two responses to the same question is more accurate to the advice the mentor would actually give.
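That pairwise-feedback loop can be sketched as a simple data-collection step. This is a hypothetical illustration of the idea, not MasterClass’s implementation; the function and field names are invented.

```python
# Hypothetical sketch: recording a mentor's pairwise preference between two
# candidate AI responses, producing examples the model can be tuned on later.
def record_preference(question, response_a, response_b, mentor_picks_a, log):
    """Store the preferred/rejected pair as a training example."""
    preferred, rejected = (
        (response_a, response_b) if mentor_picks_a else (response_b, response_a)
    )
    log.append({"question": question, "preferred": preferred, "rejected": rejected})

log = []
record_preference(
    question="How should I open a salary negotiation?",
    response_a="Anchor high immediately.",
    response_b="Start with a calibrated question to gather information.",
    mentor_picks_a=False,  # the mentor judged response B closer to their advice
    log=log,
)
print(log[0]["preferred"])  # Start with a calibrated question to gather information.
```

Collected at scale, preference pairs like this are a standard way to nudge a model toward one person’s voice and judgment rather than a generic answer.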
When MasterClass began internal tests of On Call, Rogier was surprised by how comfortable people were talking to the AI mentors. Early testers shared more freely because they didn’t feel any judgment or pressure to impress anyone. They were empowered to ask the “dumb questions” they might be embarrassed to ask otherwise, says Rogier.
MasterClass On Call is now available in beta with access to Voss’s and Walker’s AI personas. More mentors, including fashion designer and Queer Eye style expert Tan France, superstar chef Gordon Ramsay, and legendary feminist writer Gloria Steinem are expected to be added over the coming months.
BY BEN SHERRY, STAFF REPORTER @BENLUCASSHERRY
Friday, December 13, 2024
OpenAI Just Released Its AI Video Generator, Sora
After months of anticipation, OpenAI has released Sora, its first-ever AI model designed for text-to-video and image-to-video generation. In a livestreamed presentation, OpenAI CEO Sam Altman announced the model’s launch, available now at Sora.com.
The release of Sora is arguably OpenAI’s biggest launch of 2024—one the company’s been teasing since first revealing the model in February. Only a select number of customers have had access to Sora since then, such as Toys “R” Us, which debuted the first Sora-created ad in June. In an early review, technology influencer Marques Brownlee called Sora “horrifying and inspiring at the same time.”
Sora users will be able to generate video in different resolutions, from 480p (SD) to 1080p (HD), with higher resolutions taking more time to generate. The dimensions, length, and speed of the video can also be customized. In addition, users will be able to see other people’s AI-video creations and then remix or alter them. In one example, Brownlee successfully altered a video of a house on a cliff to add a golf course to the background.
Users will also be able to upload images and ask Sora to turn them into videos. Brownlee says he found the most success by generating images with OpenAI’s Dall-E, and then uploading them to Sora.
Brownlee says there are still some major areas where Sora isn’t ready yet. The model struggles with object permanence—with objects often blipping into and out of existence—and it hasn’t quite figured out how to flawlessly recreate physics. The videos also don’t currently include sound of any kind.
As for how this first version of Sora can best be used commercially, Brownlee suggests using the model to create abstract videos and title designs. Sora can generate incredibly detailed textures and complex patterns, so it’s especially good at generating the kind of eye-catching abstracts that can often be found on modern websites. Plus, Brownlee says Sora can be quite accurate at creating titles or logos when given specific words to recreate.
Altman says that Sora will be available for users in the United States later today, but access in the U.K. and most of Europe will take some time.
According to Sora’s product page, ChatGPT Plus subscribers, who pay $20 per month, will be able to generate 50 videos per month at 720p resolution and a maximum length of five seconds. ChatGPT Pro subscribers, who pay $200 per month, will be able to generate unlimited videos, at resolutions up to 1080p and a maximum length of 20 seconds. Pro subscribers will also be able to download videos without a watermark.
Wednesday, December 11, 2024
How Is Using Generative AI Not Considered Theft?
Over the course of 2024, I put everything I’ve ever written behind a paywall.
It’s not something I wanted to do – it’s something I had to do to protect the value of what I write. I have nightmares about some joker in a basement somewhere using my “vibe” to sell crypto scams to old folks.
This isn’t (total) hyperbole. What I just described is merely the worst-case scenario of a very common phenomenon with generative AI, and one that we’re all kind of sweeping under the proverbial rug. I know this because I was working with NLG and generative AI as far back as 2011, before AI ethics were even a thing, and even then I could smell trouble on the horizon.
Well, here comes 2025 and here comes trouble.
See, a lot of LLMs were created during a wild west period of scraping websites for content without permission. Precious little of that was properly verified, let alone properly attributed.
So every time you use generative AI, no matter how altruistic your initiative, you’re running the risk of stealing from other people—writers, designers, musicians, coders, attorneys, et al.—to produce information that may also end up being completely inaccurate.
It’s fine until you get caught, right? And honestly, what are the odds that massive intellectual theft is going to get traced back to you?
I mean, everybody’s doing it, right?
Well, I believe businesses are quickly approaching the not-so-fine line between ethics and penalties when it comes to using generative AI. So if you’re using generative AI to support your business, it’s time to decide whether or not it’s worth it.
Google Doesn’t Like Crap Content
Google is starting to get a little more serious about parasite SEO content – websites that host garbage clickbait content to boost SEO juice, like a sports website running unrelated product reviews – regardless of how that content is made.
A lot of that content now is primarily being produced by generative AI.
In fact, that recent article above (from The Verge) references the case of Sports Illustrated getting caught last year using generative AI. But as I pointed out when I wrote about it at the time, while everyone was (rightfully) blasting SI for using AI, they were missing the point.
SI was using AI primarily to write product reviews unrelated to its content, and this was a content scheme it had been employing for much longer than it had been using AI to create said content.
So why does Google care now? Enough to inflict major search engine setbacks?
It’s Not About the Starving Artists Either
Yeah, suck it up, writer-boy! You should be grateful that you get to clickity-clack on the keyboard!
Except the problem is bigger than artists.
In an article I wrote about how using generative AI works against you, I drew a clear distinction between people using AI as a helper tool and those using it to imitate the work of a reviewer who has actually used the product.
Because the latter is more than theft – it’s also lying. And maybe light fraud.
But even if you’re using ChatGPT to perfect a cover letter for a job you really need, that doesn’t mean the crime – or transaction, I guess – is victimless.
I’ve had offers made to me to scrape all my content and turn it into some kind of advice-slinging Joe-bot. None of them were going to make me rich, or even slightly offset the revenue I’m making from various publishers or readers of my (now) private newsletter.
What I’m saying is, it proves the age-old warning: You get what you pay for.
So even if you’re using generative AI as a tool, even for the most altruistic reasons – and let’s face it, most folks are just trying to make a buck with it – there’s still a very good chance you’re committing theft, as well as a 100 percent chance that you’re getting only a cheap derivative of someone else’s work.
It’s why all my content is behind a paywall now.
So it’s not just an ethical quagmire – it’s also a poor value proposition.
Free AI Is a Myth
There’s no such thing as a free lunch, even a free artificial lunch. And this is where I get speculative and conspiratorial.
You might be paying pennies or dimes for access to someone else’s processing power and words, but believe me, the proprietors of that power and those words still want your money.
Now, let’s talk about Apple and its 30 percent cut of mobile app revenue.
Yeah, it was ridiculously cheap to set up a developer license to get our mobile apps onto Apple’s storefront. They want us to do that, they welcome our business. But of course, anything we create and pump out the other end is going to be subject to almost a third of our revenue going back to Apple.
Try not paying that. Ask Epic Games about it.
So what happens when these ethical AI problems become capital P Problems?
Today, as you read this, there are already various lawsuits underway against proprietors of AI for massive theft (allegedly). And it appears that at least a handful of these are going to be winnable.
On top of that, an AI Wall is approaching, which is basically a law of diminishing returns on adding any more data to AI datasets because of limits on processing power and, well, a general lack of demand for more complex logic.
And if you want to get super conspiratorial, there’s the spooky case of certain names crashing ChatGPT. The reason is still a matter of speculation, but it’s clear there is some privacy monkey business going on behind the wizard’s curtain.
What do you think happens when the resulting revenue problems hit OpenAI, Anthropic, Google, Amazon, and so on?
My guess is that it’s going to severely impact whatever the end users are using the generative AI for – and I’ll bet the language allowing them to impact that end use is already somewhere in those wordy licensing agreements that those end users glossed over (if they read them at all).
You get. What. You pay. For.
Look, people like me (and please join my email list to follow along) have long pointed at crypto and said, “That’s cool and all, but it’s not money.” Now it’s time to point at generative AI magic and say, “That’s really neat, but it’s stealing.”
And eventually, someone is going to have to pay the price. Don’t let it be you.
EXPERT OPINION BY JOE PROCOPIO, FOUNDER, JOEPROCOPIO.COM @JPROCO
Monday, December 9, 2024
AI’s role in scientific research is evolving from a tool to a primary driver of discovery.
Last month, the scientific community experienced a groundbreaking moment with the announcement of the 2024 Nobel Prizes in physics and chemistry. In an unprecedented outcome, both prizes were awarded for achievements involving artificial intelligence—signaling the beginning of an AI-driven era in scientific discovery. This historic event not only honors the visionary minds behind these innovations but also indicates a profound shift: AI is transitioning from being a mere tool to becoming the true driver of discovery itself.
Nobel Prize in Physics: Neural networks propel AI revolution
The Nobel Prize in physics was awarded to Geoffrey Hinton, often referred to as the “Godfather of Artificial Intelligence,” and John Hopfield. In the 1980s, these pioneers laid the foundation for artificial neural networks—mathematical systems inspired by the human brain. Hopfield designed a network capable of storing and reconstructing complex patterns, which Hinton subsequently advanced. Hinton’s application of the Boltzmann machine enabled feature detection and automated learning, forming the backbone of modern AI technologies, including systems like ChatGPT.
Nobel Prize in Chemistry: AI unlocks the protein folding mystery
The Nobel Prize in chemistry was awarded to Demis Hassabis, John Jumper, and David Baker for their pioneering use of AI in protein research. In 2020, Hassabis and Jumper developed AlphaFold2, an AI model that solved a 50-year-old challenge: predicting protein structures. This model can now predict the structure of approximately 200 million proteins and is utilized by researchers in 190 countries for drug development, antibiotic resistance studies, and the creation of enzymes to break down plastic. Baker expanded on this by using AI to design entirely new proteins, paving the way for applications in drug development, vaccines, nanomaterials, and microscopic sensors.
AI: The new discoverer in scientific advancement
This year’s Nobel Prizes in physics and chemistry represent a pivotal moment, elevating AI from a supporting role to a primary force in scientific progress. The breakthroughs in physics, chemistry, and biology that led to these awards were made possible due to neural networks and advanced machine learning tools—not just through human ingenuity.
The combined efforts of Hinton, Hopfield, Hassabis, Jumper, and Baker signify a significant transformation in how scientific research is conducted. The traditional perception of slow, meticulous experiments has shifted to AI-driven acceleration, where new insights are uncovered at speeds never before imagined. As technology continues to evolve, science may be entering an era where an AI system itself could win a Nobel Prize—not just the individuals who developed it.
Startups harness AI potential to shape industries
This transformation goes beyond academia. Major corporations are now putting AI at the core of their scientific efforts. Recently, pharmaceutical giant Eli Lilly appointed Thomas Fuchs as its first AI chief, indicating a new direction for the entire industry.
Startups, too, are leveraging AI’s transformative capabilities. A prime example is Xaira, Nobel laureate David Baker’s new venture, which recently secured a billion-dollar investment to commercialize his discoveries. While this level of funding blurs the line between startup and corporation, it highlights AI’s enormous potential for scientific innovation and entrepreneurship.
Another example is Somite.ai, my company, which developed the DeltaStem platform—named after AlphaFold, the system that earned Hassabis and Jumper their Nobel Prize. Somite.ai’s platform trains foundational models to predict intercellular communication and cell differentiation, enabling the discovery and optimization of novel therapies. It also generates vast amounts of biological data, driving further scientific breakthroughs with the ultimate goal of developing treatments that could potentially cure tens of millions of people.
Indeed, the future of medicine lies at the intersection of artificial intelligence and biology. But perhaps more profoundly, the very essence of “discovery” is being redefined. AI is no longer just assisting human scientists—it may yet become the scientist, pushing the boundaries of what was once thought to be solely within human reach.
As this AI-driven revolution gathers momentum, the future holds limitless potential for companies and startups to reshape industries and improve human lives.
EXPERT OPINION BY MICHA BREAKSTONE, FOUNDER AND CEO OF SOMITE.AI @MICHABREAKSTONE
Friday, December 6, 2024
Here Are the Big 2025 Predictions for AI, From a CEO Who Was Right About This Year’s Developments
It’s that time of year, when tech luminaries offer thoughts on where innovations will take us in the year ahead, and AI promises to be a driving force. Back in December 2023, Bill Gates made a bold prediction about how AI would advance in 2024, guessing that “we are 18-24 months away from significant levels of AI use by the general population.” Gates was largely correct, as the explosive growth of ChatGPT shows, while Apple, Google, and Microsoft integrate AI into their consumer- and business-centric tools.
Looking to 2025, another AI executive with an even more impressive track record has made his forecast, with some startling surprises. Clem Delangue, CEO of Hugging Face, an AI-development platform and user community used by millions of developers and big names like Intel and Qualcomm, expects that we’ll see “the first major public protest related to AI” next year.
Delangue published his predictions as a list of bullet points on his LinkedIn page (almost as if an AI had written them). While he doesn’t detail his thoughts on a very human response to supercharged computing capabilities, based on recent controversies swirling around AI adoption, the pushback could be about anything from AI stealing jobs en masse—perhaps in the style of the Occupy Wall Street protests—to inappropriate use of AI tech by police, government bodies, or health care systems.
Delangue also predicts that a “big company will see its market cap divided by two or more because of AI,” implying AI breakthroughs will suddenly render obsolete some core tech or core business philosophy of a major corporation—perhaps in the way that the arrival of the internet hit newspapers’ core print advertising revenue business model.
The third prediction is interesting because it crosses from software to hardware: “at least 100,000 personal AI robots will be ordered,” Delangue said. This is right in line with AI robot developments from companies like Tesla and Figure. It also tracks with pronouncements from Elon Musk about humanoid robots, including his own plans to put them to work on Tesla production lines.
Delangue also predicted there will be AI breakthroughs in biology and chemistry—resonating with research uses of AI for tasks like drug molecule discovery—and that China will “start to lead the AI race,” a forecast that may interest certain concerned parties, like the U.S. government. Lastly, Delangue said the user base of his own company is likely to rise to 15 million “AI builders,” up from this year’s tally of 7 million users.
Before you dismiss these predictions as entrepreneurial hucksterism, it’s worth noting that many of Delangue’s AI predictions for this year were accurate, including rising general awareness of the monetary and environmental costs of developing better AI models. Delangue’s musings could also be seen as a useful weather vane for how changing AI tech in 2025 may impact your personal digital life, as well as the technology applied in your company.
If your company has been slow to embrace AI tech, this is another reminder that the AI wave is already washing over us, and maybe it’s time to catch up. At the very least, you should maybe try to ensure that it’s not your firm that’s the cause of the first mass public protests against AI tech.
BY KIT EATON @KITEATON
Wednesday, December 4, 2024
Intel Just Forced Out Its CEO. It’s a Brutal Lesson Every Leader Should Learn
Pat Gelsinger was supposed to save Intel.
That was the promise when the company named him to the top job back in February of 2021. A respected CEO and former Intel engineer, Gelsinger checked all the boxes and showed up with a plan to restore the company to its former glory. Not long after he took over as CEO, Gelsinger told an audience his thoughts about the company’s focus moving forward:
“We’re bringing back the execution discipline of Intel. I call it the Grovian culture that we do what we say we will do. That we have that confidence in our execution. That our teams are fired up. That we said we’re going to do x, we’re going to 1.1x, every time that we make a commitment. That’s the Intel culture that we are bringing back.”
I don’t think anyone would disagree that Intel’s culture was a problem. And I don’t think anyone would disagree that if there was anyone who understood the culture of Intel, it was Gelsinger. He spent a good part of his career at Intel and—notably—was the architect of the 80486 processor. He had also arrived as a highly respected CEO, having been voted the best tech CEO while he was at VMware.
If there was a company more in need of being saved than Intel, I can’t think of what it could be. Under Gelsinger’s predecessors, the company had faced major delays in its advanced chip processes. It had also fallen far behind its biggest competitors, especially TSMC.
However, Gelsinger did not turn the company around. In fact, it’s not clear whether Gelsinger succeeded in any meaningful way in restoring the elusive “Grovian culture,” but I’m also not sure it really matters.
Instead, on Monday, the company announced he was out after the board grew impatient with his plan to turn around the iconic chip manufacturer. According to Bloomberg, Intel’s directors gave him the option of retiring or being fired. It’s a dramatic, though not altogether surprising, turn of events for a CEO who—for a number of reasons—couldn’t live up to the promise of fixing a company where he had spent most of his career.
Look, I have no idea whether Gelsinger was the right person for the job, though it is hard to imagine someone with a better overall résumé. When he arrived, it certainly seemed like it. Instead, he saw the company fall further behind, culminating in its removal from the Dow Jones stock index after 25 years.
Today, the company’s market cap is less than half what it was when Gelsinger took over, while, at the same time, its biggest competitors have skyrocketed. Nvidia, for example, was worth $350 billion the day Gelsinger became CEO. Today, it’s worth $3.3 trillion.
It seems, from the outside, like Intel is in a very messy place. It’s a company that, for almost every business reason, shouldn’t exist. Right now, the main rationale for keeping Intel afloat seems to be that it’s critical to national security to have an American company making computer chips. I think that’s certainly true, but it’s just not clear that Intel is going to be that company.
I’m not sure that Intel can—or should—be saved, but that’s not the point. The point—and the real lesson here—is that it really doesn’t matter if you keep your promises if you make the wrong promises in the first place. Just doing what you say you’ll do isn’t actually enough. It’s equally important that you be doing the right thing. You have to be doing something worth doing. Or, said another way, you don’t get bonus points for keeping the wrong promises.
EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN
Monday, December 2, 2024
5 High-ROI Marketing Strategies for Today’s Evolving Landscape
In today’s evolving business landscape, the most effective marketing strategies are shifting, with more companies relying on personal connections in addition to digital platforms and AI.
Recently, Inc. Editor-in-Chief Mike Hofman queried Inc. 5000 Community members on their best marketing channels. Generally speaking, founders said the highest ROI now comes from methods that prioritize authenticity, personal recommendations, and partnerships over flashy ad campaigns. For some, platforms like LinkedIn have proved to be key drivers of business growth, while others are leaning into content marketing, podcasts, or even the boost of being featured in Inc. Honorees also reported that relationships and credibility outweigh traditional advertising in a crowded marketplace. For many Inc. 5000 honorees, referrals, whether from personal connections or online networks, have become indispensable marketing strategies.
Below, a few of our business leaders share insights on what’s working for them:
“Relationships are the heartbeat of trust, communication, and execution.” — Jennifer (Takenaka) Schielke, CEO, Summit Group Solutions
“Referrals haven’t been a sustainable effort for me. Leveraging my Inc. Masters articles with my podcast has led to speaking invitations.” — Gina Anderson, Co-Founder, Luma
“Combined partnerships offer full lifecycle solutions, so that potential clients do not have to shop each piece independently. Our partners in two such efforts are transparent with each other. We realize that by giving potential clients an a la carte approach, full lifecycle services have a much better ROI.” — Beth Maser, CEO, History Associates Incorporated
“Referral, and our experiential marketing events.” — Natasha Miller, Founder, Entire Productions
“Our highest return has come from formatting done by a proposal writing service, LinkedIn strategy engagement, and Inc. Masters features. Additionally, partnerships that offer full lifecycle solutions have a much better ROI than marketing campaigns or social media.” — Paul L. Gunn Jr., CEO, KUOG
“LinkedIn organic networking and account-based engagement is our highest ROI channel right now.” — Lisa Larson-Kelley, CEO, Quantious
BY MARLI GUZZETTA
Friday, November 29, 2024
Microsoft’s Response to Its Major Outage Is the 1 Thing No Company Should Ever Do
On Monday, Microsoft suffered a widespread outage that affected Outlook and Teams. Reports started appearing early Monday morning and escalated throughout the day as more people showed up at work. It’s not entirely clear how many people were affected, but there are more than 400 million Outlook users, according to Microsoft. The company acknowledged the outage, though it stopped short of explaining what happened.
“We’ve identified a recent change which we believe has resulted in impact,” the company wrote in a post on X. “We’ve started to revert the change and are investigating what additional actions are required to mitigate the issue.”
It’s not really clear what that even means. Nothing in that post, or the subsequent thread, explains what that change is, why it caused people to lose access to their email or messaging, or what exactly Microsoft is doing about it. It sounds like someone accidentally pushed the wrong button or somehow introduced a bug, which is a pretty bad look for one of the largest companies on Earth.
On the one hand, I guess it’s good news that it wasn’t a breach or some kind of hack. Certainly, the email service that millions of businesses count on would be a valuable target for bad actors. There’s some consolation that all of those email accounts weren’t breached.
On the other hand, it doesn’t inspire a lot of confidence if the primary form of communication for millions of workers can be brought down by something a company puts out intentionally. This is especially true after the CrowdStrike outage this summer, when that company issued an update to its anti-malware software that caused a fatal error in Windows machines, leaving them unable to boot.
In that case, instead of losing access to email, the consequence included thousands of canceled flights, and hospitals reverting to paper charting when they couldn’t access computer systems. That’s probably worse, but it doesn’t change the fact that this is a bad look for Microsoft. At the same time, the company’s response made things objectively worse.
Look, I get that IT and software professionals speak a different language when it comes to situations like this. The problem is, the people who are trying to do their job don’t care about the nuance of software bugs or unintentional downtime. They care about getting their work done.
To be fair, most companies are really bad at handling this. For the most part that’s because they often don’t immediately know what caused the problem. It takes time to diagnose what went wrong, come up with a fix, and deploy it across a massive network of computers around the world.
Then there’s the fact that companies are hesitant to be transparent about problems if it might make them look bad. What they often fail to understand is that being clear and transparent goes a long way, even when things are going wrong.
Also, this is Microsoft, a $3 trillion company that makes the software that powers most of the world’s computers. This is the kind of thing that isn’t supposed to happen. And, when it does, you’d expect Microsoft—of all companies—to understand that it has to do better.
That means explaining what happened. A lot of people work almost entirely out of their email. Even in the year 2024, it’s still a primary way of communication for hundreds of millions of people. If their email goes down, they deserve to know why, if for no other reason than they should be able to make an informed decision about whether or not they should find another option.
People understand that downtime happens, but—in this case—Microsoft has had a hard time bringing its services back online, and it has had an even harder time talking about what happened. That doesn’t exactly inspire confidence.
The bottom line is that if you make a piece of software that millions of people depend on for their work, trust is your most valuable asset. And trust is a thing you earn through clear and transparent information. Anything less is the one thing no company should ever do.
EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN
Thursday, November 28, 2024
Salesforce CEO Marc Benioff Thinks AI Has Hit a Roadblock
It’s hard to turn on a computer and not see evidence of AI’s advances into our online lives. It’s in the Microsoft or Google tools you use on your work PC, and the social media apps you use to escape the stresses of reality, and it seems that some kind of buzzy new AI advance gets announced almost daily. But are all these AI chatbots, with ChatGPT in the lead, actually as smart as we think they are? One tech leader, Salesforce CEO Marc Benioff, is beginning to doubt the hype. In fact, Benioff thinks we may have hit a ceiling in the development of “large language model” (LLM) AIs, and suggests they won’t get much smarter, despite the news of new models or new capabilities. The real next-gen AI action, Benioff thinks, is in AI agents, not chatbots, and he’s betting big on that prediction within his own company.
In an interview on the Wall Street Journal’s Future of Everything podcast, Benioff explained his thinking. Essentially even though AI companies are desperately trying to push for “next generation” LLM chatbots, like the much-rumored GPT-5 from OpenAI, Benioff thinks we’re “hitting the upper limits of the LLMs right now.”
Benioff admits we have “incredible tools to augment our productivity, to augment our employees, to improve our margins, to improve our revenues, to make our companies fundamentally better,” and even “have higher fidelity relationships with our customers.” But he also says we’re nowhere near the level of AI seen in “these crazy movies”—meaning the kind of super-smart AI seen in popular sci-fi. In particular, Benioff worries that some players in the AI game are evangelizing the tech by suggesting it can solve some of the world’s biggest problems, when it really can’t, and that’s actually a distraction from the actual benefits AI can provide.
What’s really coming up in AI tech, Benioff thinks, isn’t super-smart AIs like in the Terminator movies (and we’re all glad the apocalyptic vision of the franchise hasn’t come to pass. Yet.) but powerful “agentic” AI. While chatbots work in a call-and-response style, answering queries when users ask for help, AI agents are chunks of code that can actually perform “actions” in an online environment, like finding appropriate data and using it to fill in forms, or pressing “buy” on a shopping cart in an online store.
In an X posting yesterday, Benioff argued that government “regulatory, compliance, and political demands” are “consuming up to 40% of budgets,” and they’re growing fast. So it’s time for a “transformation” via AI agents, which can “revolutionize operations—automating reporting, audits, case management,” and more. He suggested it was time to “replace bureaucracy with an agentic layer that serves people, not politics.” He added a personal spin on the idea, saying “Welcome to the future—welcome Agentforce!” in a blatant advert for his company’s recently unveiled agent-based AI system, Agentforce, which can, at launch, act like a digital sales rep.
Why should we care, though? Benioff is a billionaire who certainly has his finger on the pulse of tech, and he’s got skin in the game, but his company isn’t developing cutting-edge AI in the same manner as OpenAI or Google.
Though his posting on X was aimed at a certain sector—the paperwork load from various government offices—Benioff is essentially predicting the near future of AI-assisted work, where many menial or frustrating “bureaucratic” office tasks are dramatically sped up by agent-based tools.
AI critics will worry Benioff is predicting AI will replace some, perhaps more menial, office roles—but arguably he’s saying agents will free up employees’ time to be more effective at the tasks that actually comprise their jobs: for example, if filing a travel expense takes up a few hours of a worker’s day, they’re not going to be contributing to the company’s bottom line…but if an AI can do that job for them, then they’ve gained two useful work hours. And Benioff’s words about slow-paced AI development may ring true in other ways: recently it emerged that AI giant OpenAI was struggling to develop its next-gen ChatGPT engine, and was being forced to try wholly new tactics. If we’re all expecting “smart” AIs to transform our workplace, we may have to wait a while.
BY KIT EATON @KITEATON
Tuesday, November 26, 2024
Gen-Zers Blaze the AI Workplace Trail, but Still Want More Guidance
Companies of all sizes continue the rapid adoption of emerging artificial intelligence (AI) applications in an effort to lower the costs and improve performance of their businesses. Now, a series of recent studies offers owners and managers insights into how their employees are using the tech—especially digital native Gen Zers, who are embracing it faster than their older peers.
The polls confirm the growing inroads that generative AI is making into business, and reflect how Gen Zers are embracing the tech more rapidly than older cohorts. While that may seem a logical role for members of the first generation brought up with digital devices in their hands to assume, it’s also an indicator of how AI use is likely to rapidly snowball. As with social media habits and the adoption of office applications like Zoom or Slack, younger people have tended to blaze the trails and set the pace of new tech use for other age groups to follow—as now seems to be true with artificial intelligence.
While their numbers differ, all the recent surveys indicate Gen Zers are taking to AI for work in a very big way. According to a poll by online tech upskilling company upGrad Enterprise, “73 per cent of Gen Z (are) already integrating GenAI into their daily tasks.” A nearly identical portion of respondents are also using the results those apps supply with minimal or no editing.
A study of 1,000 U.S.-based knowledge workers aged 22-39 released Monday by Google found that 93 percent of Gen Zers regularly use those advanced tech tools. That compares to 79 percent of Millennials and 82 percent across all generations. Perhaps not surprisingly, the most frequent use cases cited were tasks for which early AI applications are widespread and easily accessible.
According to Google—which provides AI-enhanced services like Gmail, Docs, and Drive—respondents frequently used apps for “email responses, writing challenging emails from scratch, or helping to overcome language barriers.” It also noted about 88 percent of participants said those tools eased starting tasks that seem overwhelming, with similar numbers feeling the tech improved their writing and afforded greater work flexibility.
But despite the rising use and influence of AI in the workplace, it’s clear from the polling that employees are also still feeling somewhat torn about the tech in other ways.
For example, upGrad Enterprise’s survey found 52 percent of Gen Z respondents said their company’s AI training was insufficient, and 54 percent said guidelines for the ways the tech may and must not be used aren’t clear enough. Another poll showed 62 percent of younger employees fear AI apps may eventually eliminate their work. That job security concern may explain why 56 percent said they preferred to rely on smart bots for finding answers they need, rather than going to their bosses for help.
A similar ambivalence was reflected in the 52 percent of Gen Z employees who said they regularly discussed AI uses with co-workers, according to the Google study. Yet at the same time, it found 75 percent of people questioned said they had suggested using AI tools to office peers who need help, further fueling overall workplace adoption.
And that, said Google Workspace product vice president Yulie Kwon Kim, suggests ambitious employees of all ages “are not simply using AI as a tool for efficiency, but as a catalyst to help grow their careers.”
However, upGrad CEO Srikanth Iyengar noted his company’s study also reflects not just how “Gen Z is embracing AI but also the urgent need for organizations to establish supportive policies and implement targeted training.”
Maybe once they do, younger employees will feel more comfortable sounding out their older managers than huddling with ChatGPT to learn what they need to know.
BY BRUCE CRUMLEY
Saturday, November 23, 2024
How Mark Cuban, Tim Cook, and Bill Gates Are Using AI to Be Massively More Productive
Generally it’s pretty hard for the average entrepreneur or professional to emulate the productivity habits of the likes of Tim Cook, Mark Cuban, and Bill Gates. Billionaire CEOs have a small army of assistants to manage their days and plan their schedules down to the minute, after all.
But there’s one productivity-boosting trick of theirs absolutely anyone can steal and benefit from—time-saving artificial intelligence hacks.
Generative AI tools like ChatGPT have only been available for public use for two years, but according to a series of recent interviews, they’re already changing how some of the most successful CEOs in the world manage their days.
Former Shark and serial entrepreneur Mark Cuban, Apple boss Tim Cook, and Microsoft founder-turned-philanthropist Bill Gates all recently shared how they’re using AI tools. And handily for everyday workers, all the tools and techniques they mentioned are freely available for anyone to experiment with.
Tim Cook uses AI to summarize his emails
Take Tim Cook’s love of Apple Intelligence’s email summaries feature, for example. If you think your email overload is bad, spare a thought for the Apple CEO who gets upwards of 800 emails a day. Being a conscientious guy, he tried to read them all, he recently told the Wall Street Journal. That was a huge time suck until he started using Apple’s AI tool to summarize the deluge in his inbox every morning.
“If I can save time here and there, it adds up to something significant across a day, a week, a month,” Cook told the WSJ. “It’s changed my life. It really has.”
This could seem like just another CEO touting his company’s offerings (and there is no doubt some element of that going on here), but there are a host of AI email summary tools available for both Mac users and Microsoft fans. If you’re skeptical of Cook’s rave review of Apple’s products, try any of these tools to see if they can change your working life too.
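For the curious, the core move behind most of these summarizer tools is simple: fold the day’s messages into one prompt and hand it to a chat model. Here is a minimal sketch in Python; the field names and prompt wording are illustrative assumptions, not any vendor’s actual API.

```python
# Sketch: batching a morning's emails into one summarization prompt,
# the basic move behind AI inbox-digest tools. Field names and prompt
# wording are illustrative, not any vendor's actual API.

def build_digest_prompt(emails):
    """Fold a list of emails into a single prompt for a chat model."""
    blocks = []
    for e in emails:
        blocks.append(f"From: {e['sender']}\nSubject: {e['subject']}\n{e['body']}")
    instructions = (
        "Summarize each email below in one line and flag any that "
        "appear to need a personal reply:\n\n"
    )
    return instructions + "\n---\n".join(blocks)

inbox = [
    {"sender": "board@example.com", "subject": "Q3 numbers",
     "body": "Revenue up 12 percent..."},
    {"sender": "noreply@example.com", "subject": "Newsletter",
     "body": "This week in tech..."},
]
prompt = build_digest_prompt(inbox)
# In practice you would send `prompt` to whichever chat model you use
# and read back a one-line-per-email digest.
```

The commercial tools wrap this same idea in an interface, but the underlying pattern is just condensing many messages into one model call.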
Mark Cuban’s favorite AI hack
When it comes to Mark Cuban’s recommendation, there is no such conflict of interest. Cuban’s email problem is even worse than Cook’s. He receives thousands of often repetitive emails a day, he recently told CNBC. His solution? Using Gemini, Google’s generative AI assistant, to help him power through his replies in much less time.
“It’s reduced the need for me to write out routine replies,” he told CNBC. “I can spend 30 seconds evaluating its response and hit ‘send’ versus typing it all out myself.”
Cuban called outsourcing much of his email writing to AI the “ultimate time-savings hack.” Other CEOs can certainly experiment with AI tools to see if they could similarly streamline their inbox wrangling.
Bill Gates is a big fan of AI meeting notes
Not every iconic business leader is most excited about using AI to process emails. Bill Gates explained in a recent interview with The Verge that his favorite way to use new AI tools is for taking and searching through meeting notes.
Gates has long been known as extremely detail-oriented and a dedicated note taker. But he used to be a big believer in the old-fashioned pen-and-paper approach.
“You won’t catch me in a meeting without a legal pad and pen in hand—and I take tons of notes in the margins while I read. I’ve always believed that handwriting notes helps you process information better,” Gates once wrote on LinkedIn.
But AI has convinced him to update his note-taking approach, he told The Verge. Now he also has AI sit in on and transcribe meetings so he can reference those records later.
“I’d say the feature I use the most is the meeting summary, which is integrated into [Microsoft] Teams, which I use a lot,” he explained. “The ability to interact and not just get the summary, but ask questions about the meeting, is pretty fantastic.”
There’s no shortage of AI tools to experiment with
Much like Tim Cook’s Apple-boosting reply, Gates is clearly plumping for Microsoft products here. But again, those looking to experiment with using AI for meeting notes aren’t limited to using Microsoft tools. There are tons of competing products to play around with.
The main point here isn’t to try to sell you on any particular tool. It’s to highlight that some of the smartest and most tech-savvy leaders around are already finding massive value in integrating AI into their daily routines.
If you’re not experimenting with AI tools for similar uses, you’re probably missing an opportunity to save yourself time and hassle.
EXPERT OPINION BY JESSICA STILLMAN @ENTRYLEVELREBEL
Wednesday, November 20, 2024
Google’s Latest Search Update Suggests Business Owners Need a New Content Marketing Strategy
A new update to Google’s search algorithm has top SEO consultants in agreement: The rules of Google have changed, and the old playbooks need to be rewritten.
Earlier this week, Google released its latest “Core Update,” meaning the tech giant’s search algorithms and systems are being refreshed and adjusted. This is Google’s third core update of 2024, and while SEO experts say the impact from November’s update won’t be known for a few weeks, they expect it to follow a recent trend of punishing websites for producing spammy or AI-written content. Their advice to business owners? Quality over quantity.
SEO experts like David Riggs, founder of SEO firm Pneuma Media (No. 170 on the 2024 Inc. 5000), say that Google’s recent efforts are intended to “reduce the impact of gamification” on search results. “The SEO strategy of 2010 was to just throw a bunch of keywords in and it’ll rank,” he says. “Now, it’s very different.”
Riggs says that many of the tricks and techniques that SEO pros used to rely on, like filling up articles with backlinks, publishing short “quick hits,” and creating keyword-filled blog posts, are now being actively disincentivized by Google, as the company attempts to fight against AI-generated content intentionally designed to game the system. “Google caught on and changed the cheat codes,” adds Riggs, “and now you’ve got to change your strategy.”
David Kauzlaric, co-founder of SEO consultancy Agency Elevation (No. 461 on the 2024 Inc. 5000), says that the last two years have seen a flurry of core updates that have totally upended how SEO professionals approach their work. “These updates are helping Google’s users,” he says. “They’re not helping business owners who are trying to do SEO. It makes our job far worse and far harder.”
“If you don’t pivot to provide what Google wants,” Kauzlaric says, “you’re going to continue to see a decline in traffic, because Google is getting very particular.”
How can businesses ensure that their websites and content still rank highly in this new era of Google? Steven Wilson, director of SEO at Above The Bar Marketing (No. 614 on the 2024 Inc. 5000) says if you’re using AI to write full blog posts for your website, you need to stop now. “There is a war on AI,” says Wilson, who adds that his own research has found that “the more AI content you have, the less likely that you’ll show up in search.”
Instead of relying entirely on AI, Wilson recommends writing content in a conversational, more casual tone. “AI can’t do that conversational tone,” says Wilson, who also says business owners should be careful not to produce an overabundance of content just for the sake of getting ranked by Google. Wilson says you can still use AI to help write pieces and optimize headlines, but the majority of the writing should come from a human.
Wilson also recommends limiting the majority of your content to topics relevant to your business and that you are an expert in. Google’s algorithm highly values authors that appear to have authority on certain subjects, so sticking to “topic clusters” in your realm of expertise is an efficient way to build that authority.
Another new strategy that seems to be showing promise is deleting old SEO-focused content from your website. Parker Evensen, founder of digital marketing agency Honest Digital (No. 878 on the 2024 Inc. 5000), says that in previous years, “if you had a lot of authority, you could push out huge quantities of content, and that could help your website. But we’ve found that paring down a lot of that content, especially irrelevant content, can actually help your website.”
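As a rough illustration of what such a content audit can look like in practice, here is a small Python sketch that flags low-traffic pages outside your core topic clusters as pruning candidates. The topics, threshold, and page fields are hypothetical, not drawn from any Google guideline.

```python
# Sketch of a content-pruning audit: flag old posts that draw little
# organic traffic and sit outside your core topic clusters. The topics,
# threshold, and page fields here are hypothetical examples.

CORE_TOPICS = {"payroll", "bookkeeping", "small-business-tax"}  # example niche

def prune_candidates(pages, min_monthly_visits=20):
    """Return URLs that are both off-topic and low-traffic."""
    flagged = []
    for page in pages:
        off_topic = page["topic"] not in CORE_TOPICS
        low_traffic = page["monthly_visits"] < min_monthly_visits
        if off_topic and low_traffic:
            flagged.append(page["url"])
    return flagged

pages = [
    {"url": "/blog/payroll-guide", "topic": "payroll", "monthly_visits": 340},
    {"url": "/blog/best-office-chairs", "topic": "furniture", "monthly_visits": 4},
]
print(prune_candidates(pages))  # flags only the off-topic, low-traffic post
```

A real audit would pull traffic numbers from an analytics tool and involve human review before anything is deleted; the point is simply to make the pruning criteria explicit rather than deleting by gut feel.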
“I think what Google is trying to do is get people to stop fighting the algorithm and focus on creating the best, most high-quality content they can,” says Riggs. “They want something from a human perspective that’s creating good value and answering real questions. That’s the content that’s going to win.”
Monday, November 18, 2024
OpenAI, Competitors Look for Ways to Overcome Current Limitations
Artificial intelligence companies like OpenAI are seeking to overcome unexpected delays and challenges in the pursuit of ever bigger large language models by developing training techniques that use more human-like ways for algorithms to “think”.
A dozen AI scientists, researchers and investors told Reuters they believe that these techniques, which are behind OpenAI’s recently released o1 model, could reshape the AI arms race, and have implications for the types of resources that AI companies have an insatiable demand for, from energy to types of chips.
OpenAI declined to comment for this story. After the release of the viral ChatGPT chatbot two years ago, technology companies, whose valuations have benefited greatly from the AI boom, have publicly maintained that “scaling up” current models through adding more data and computing power will consistently lead to improved AI models.
But now, some of the most prominent AI scientists are speaking out on the limitations of this “bigger is better” philosophy.
Ilya Sutskever, co-founder of AI labs Safe Superintelligence (SSI) and OpenAI, told Reuters recently that results from scaling up pre-training — the phase of training an AI model that uses a vast amount of unlabeled data to understand language patterns and structures — have plateaued.
Sutskever is widely credited as an early advocate of achieving massive leaps in generative AI advancement through the use of more data and computing power in pre-training, which eventually created ChatGPT. Sutskever left OpenAI earlier this year to found SSI.
Growth and stagnation
“The 2010s were the age of scaling, now we’re back in the age of wonder and discovery once again. Everyone is looking for the next thing,” Sutskever said. “Scaling the right thing matters more now than ever.”
Sutskever declined to share more details on how his team is addressing the issue, other than saying SSI is working on an alternative approach to scaling up pre-training.
Behind the scenes, researchers at major AI labs have been running into delays and disappointing outcomes in the race to release a large language model that outperforms OpenAI’s GPT-4 model, which is nearly two years old, according to three sources familiar with private matters.
The so-called “training runs” for large models can cost tens of millions of dollars by simultaneously running hundreds of chips. They are more likely to have hardware-induced failure given how complicated the system is; researchers may not know the eventual performance of the models until the end of the run, which can take months.
Another problem is large language models gobble up huge amounts of data, and AI models have exhausted all the easily accessible data in the world. Power shortages have also hindered the training runs, as the process requires vast amounts of energy.
To overcome these challenges, researchers are exploring “test-time compute,” a technique that enhances existing AI models during the so-called “inference” phase, or when the model is being used. For example, instead of immediately choosing a single answer, a model could generate and evaluate multiple possibilities in real-time, ultimately choosing the best path forward.
This method allows models to dedicate more processing power to challenging tasks like math or coding problems or complex operations that demand human-like reasoning and decision-making.
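The “generate several candidates, then keep the best” idea can be sketched in a few lines. To be clear, this is a generic best-of-n sampler, not OpenAI’s actual o1 recipe; the `solve_once` function and its scoring are stand-ins for a model’s sampled answers and a verifier.

```python
import random

# Toy best-of-n sampler illustrating test-time compute: spend more
# inference-time work by drawing several candidate answers and keeping
# the one a verifier scores highest. A generic illustration only, not
# OpenAI's actual o1 recipe.

def solve_once(problem, rng):
    # Stand-in for one sampled model answer: a noisy guess at the true
    # answer, paired with a verifier-style score (higher is better).
    guess = problem["answer"] + rng.choice([-2, -1, 0, 0, 1, 2])
    score = -abs(guess - problem["answer"])
    return guess, score

def best_of_n(problem, n, seed=0):
    rng = random.Random(seed)
    candidates = [solve_once(problem, rng) for _ in range(n)]
    best_answer, _ = max(candidates, key=lambda c: c[1])
    return best_answer

# Larger n buys more chances to land on, or near, the right answer --
# the extra cost is paid at inference time instead of during training.
print(best_of_n({"answer": 7}, n=1), best_of_n({"answer": 7}, n=64))
```

The trade-off is exactly the one the researchers describe: each extra candidate costs more compute at the moment of use, rather than more data or a bigger model up front.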
“It turned out that having a bot think for just 20 seconds in a hand of poker got the same boost in performance as scaling up the model by 100,000x and training it for 100,000 times longer,” said Noam Brown, a researcher at OpenAI who worked on o1, at the TED AI conference in San Francisco last month.
OpenAI has embraced this technique in its newly released model known as o1, formerly known as Q* and Strawberry, which Reuters first reported in July. The o1 model can “think” through problems in a multi-step manner, similar to human reasoning. It also involves using data and feedback curated from PhDs and industry experts. The secret sauce of the o1 series is another set of training carried out on top of “base” models like GPT-4, and the company says it plans to apply this technique with more and bigger base models.
Competition ramps up
At the same time, researchers at other top AI labs, including Anthropic, xAI, and Google DeepMind, have also been working to develop their own versions of the technique, according to five people familiar with the efforts.
“We see a lot of low-hanging fruit that we can go pluck to make these models better very quickly,” said Kevin Weil, chief product officer at OpenAI at a tech conference in October. “By the time people do catch up, we’re going to try and be three more steps ahead.”
Google and xAI did not respond to requests for comment and Anthropic had no immediate comment.
The implications could alter the competitive landscape for AI hardware, thus far dominated by insatiable demand for Nvidia’s AI chips. Prominent venture capital investors, from Sequoia to Andreessen Horowitz, who have poured billions into funding the expensive development of AI models at multiple AI labs, including OpenAI and xAI, are taking notice of the transition and weighing the impact on their expensive bets.
“This shift will move us from a world of massive pre-training clusters toward inference clouds, which are distributed, cloud-based servers for inference,” Sonya Huang, a partner at Sequoia Capital, told Reuters.
Demand for Nvidia’s AI chips, which are the most cutting edge, has fueled its rise to becoming the world’s most valuable company, surpassing Apple in October. Unlike training chips, where Nvidia dominates, the chip giant could face more competition in the inference market.
Asked about the possible impact on demand for its products, Nvidia pointed to recent company presentations on the importance of the technique behind the o1 model. Its CEO Jensen Huang has talked about increasing demand for using its chips for inference.
“We’ve now discovered a second scaling law, and this is the scaling law at a time of inference … All of these factors have led to the demand for Blackwell being incredibly high,” Huang said last month at a conference in India, referring to the company’s latest AI chip.
Sunday, November 17, 2024
Crypto’s Year of Capitulation Is a Joke
Thank God for the crypto visionaries, who heroically declared “We’re leaving banking and finance in the dust” and are now huddled in a panic room, clutching a single American flag. Listen closely and you’ll hear them whispering: “We simply cannot innovate without first being kissed on the forehead by the new president.”
To put it another way, crypto’s capitulation to trad markets and politics over the past year is an absolute joke.
When Bitcoin emerged in 2009, the concept was revolutionary. It promised a decentralized currency that operated without the oversight of banks or governments. In Satoshi Nakamoto’s groundbreaking whitepaper, Bitcoin was described as a “peer-to-peer electronic cash system.” This ambition was radical in its simplicity. Bitcoin offered a way to bypass intermediaries entirely. It would grant people the ability to control their financial interactions and assets.
From day one, Bitcoin was all about decentralization, sticking it to the banks and tearing down the financial establishment. Cut out the middlemen, they said. Liberate the masses, they said. It was a vision of freedom with a side of chaos.
Crypto’s Promise Versus Reality
Then came the gold rush. Bitcoin’s value exploded, altcoins multiplied like weeds, and DeFi platforms popped up. Each one claimed it was about to overthrow Wall Street any day now. True believers swore that crypto could make banks obsolete, that it was building a utopian financial playground where everyone—especially the people the banks ignored—could finally get ahead.
Since then, the same banks and corporations that once sneered at crypto as a scam are now jumping on the bandwagon, especially through shiny new Bitcoin exchange-traded funds. With the U.S. Securities and Exchange Commission’s blessing earlier this year, Wall Street can now offer “crypto exposure” without anyone having to get an actual coin. Such heavyweights as BlackRock and Fidelity wasted no time launching their own ETFs. Institutional money is flooding in.
Crypto firms that once swore to disrupt the system are bending over backward to join it. In the U.K., where the Financial Conduct Authority barely approves a fraction of crypto applications, companies are eagerly adopting know-your-customer and anti-money-laundering protocols. Just to get a foot in the door. The “movement” that should have been finance’s punk rock is now happily cozying up to traditional finance, trading rebellion for respectability.
From Crypto Visionaries to Sell-Outs
In 2024 alone, crypto firms and influencers have shelled out millions in political contributions, with Coinbase and their crew leading the charge, all to butter up the right people and lock down favorable regulations. Lobbying, schmoozing, and campaign donations. It’s a long way from decentralization and “power to the people.”
Companies are now openly aligning with politicians who wave the pro-crypto flag—such as former President Donald Trump, who’s been cheerleading for Bitcoin and the whole digital currency circus. The anti-establishment rebellion is another talking point for politicians who smell votes and dollar signs.
By hitching themselves to politicians and pushing agendas, crypto leaders risk turning the whole industry into just another lobby group clawing for a slice of influence in the swamp of power games. The more idealistic crowd—myself included—see this as a total betrayal of what crypto was supposed to stand for.
Crypto’s got itself a civil war, and it’s as messy as you’d expect. On one side, you’ve got the pragmatists, grumbling about how “mainstream adoption” might require a little soul-selling. Or a lot of soul-selling. Or a damned fire sale.
As the debate rages across Twitter threads, Warpcast, and Discord servers, the real question looms: Can crypto stay true to its anti-establishment roots—or did it already sell out the minute someone printed a whitepaper in Helvetica?
Maybe that’s just the natural life cycle of any “revolution.” Sooner or later, everything goes Hot Topic.
First, you’re the scrappy underdog, shaking your fist at the establishment, shouting about freedom and autonomy. Then you get a taste of the good life—private jets, Davos invites, a little pat on the head from your friendly neighborhood investment banker. Suddenly, you’re not so different from the suits you swore to dethrone. At some point, the righteous battle cry of “decentralize everything” turns into “well, maybe just a little centralization… for regulatory purposes.”
Too Late for a Revolution?
Do I still think crypto matters? In some ways, yes. I know, I know. It’s a lonely hill to die on.
But somewhere under all the jargon, lobbying dollars, and Wall Street handshakes, I still believe there’s a spark left, a shot at reclaiming crypto’s anarchic roots. A system that empowers the individual, shakes off the leeches, and actually challenges the entrenched power structures instead of just asking to sit with them.
If you dig deep enough, there’s still a chance to resurrect that original spark—to build something that truly stands outside the walls of power, rather than bending a knee to get inside them. Because if crypto’s going to mean anything at all, it has to remember what it set out to destroy. Before it becomes just another face in the crowd.
Otherwise, the “decentralized revolution” that spent a decade screaming about autonomy will keep showing up to the big leagues begging for a seat at the same rotten table it swore to flip.
EXPERT OPINION BY JOAN WESTENBERG, FOUNDER AND CEO, STUDIO SELF @JOANWESTENBERG
Wednesday, November 13, 2024
Forget the Nanny, Check the Chatbot. AI May Soon Help With Parenting
As AI technology advances, it’s natural that startups and big tech names want to profit off the revolution by finding ways to put it into more corners of everyday life. Current examples include applications that help you out at the office, assisting in fighting employee burnout, and in more intimate, subtle scenarios like health care. Now, according to Andreessen Horowitz partner Justine Moore, AI is set to help out with something very “human” indeed: the complex, stressful, heartfelt, wonderful job of being a parent.
In a posting on X yesterday, reported by news site TechCrunch, Moore posited an interesting question: “What if parents could tap into 24/7 support that was much more personal and efficient?” The idea is simple, on its face—we’ve been busy loading up all these super-smart AI systems with megatons of real-world data, tapping into it for help making decisions like, “Which marketing campaign should our startup use?”
Within all that data is lots of very practical material, too, including advice that may help a stressed-out parent trying to solve a tricky moment with the kids. Unlike friends and family and even many sources of professional human help, an AI assistant is also always available … even when it’s 3 a.m. and your infant has just thrown up all over the nursery.
Moore went a step further, TechCrunch noted, highlighting what she called a new “wave of ‘parenting co-pilots’ built with LLMs and agents.” Moore touted the opportunity to develop dedicated family-focused AI tools with specialist knowledge and expertise—specific variants of the large language model (LLM) chatbot tech that we’re all getting used to. She suggested that the upcoming wave of AI agents, which are small AI-powered tools that can perform actions all by themselves in a digital environment, could help too. It’s easy to imagine the usefulness of an AI agent that almost instantly finds a deal on the brand of disposable diapers you like and then has them delivered when you need them.
But Moore also highlighted several startups with innovative tech to help with parenting, including Cradlewise, which uses AI connected to a baby monitor to help analyze a baby’s sleep pattern—and even rock the crib. There’s also the opportunity for this sort of AI system to be “always in your corner,” Moore said, ready to just listen to your emotional outbursts, even if they happen just after 3 a.m. while your partner is sleeping and you’re cleaning up baby vomit.
Moore’s words may evoke memories of the Eliza program among tech-savvy readers. It’s a bit of a deep cut, but this was developed way back in the mid-1960s, and was one of the very first chatbots. Primitive as it seems now, Eliza paved the way for lots of much smarter tech that followed it, not least because it was thought by some medical professionals to offer benefits to patients who chatted with it. A 21st-century, parenting-focused AI Eliza could play a role in helping new parents navigate all the challenges of rearing kids.
It’s certainly an idea that may be having its moment. In a post on self-described parenting platform Motherly in April, writer Sarah Boland described what she said was an “unpopular opinion,” and noted that she was using AI to help her parent, including for simple things like task planning. And, in May, popular site Lifehacker set out a list of ways AI can help you with parenting jobs.
But why should we care specifically about Moore’s social media musings?
Firstly, because of whom she works for. Venture capital firm Andreessen Horowitz is one of the biggest names in the business, and it’s recently been heralding a “new era” in venture funding with a $7.2 billion fund it’s drawn together. If a partner at a firm like this, which has already shown its positive thinking about AI technology, takes time to highlight a whole new area that a buzzy tech may be set to exploit, it’s worth paying attention.
The parenting business is already lucrative—analysis site Statista pegs the global market for parenting mobile apps alone as likely to reach $900 million by 2030. Though it may seem a “soft” market that’s more about human feelings than high tech, technology has been becoming a part of child-raising for years. If your AI startup is looking for unexpected ways to leverage your innovation, perhaps it’s time to consider how you could help raise the next generation of kids. They’ll be the first to be born into a world where AI is normal.
Just be thoughtful and perhaps a little wary. AI tech is not without some risks, especially when it comes to younger or more vulnerable users.
BY KIT EATON @KITEATON
Monday, November 11, 2024
Why Gen-Z Workers Are Consciously ‘Unbossing’
Company leaders aren’t happy with their Gen-Z employees. In a recent survey, 60 percent said they’ve fired Gen-Z team members they hired this year. But these leaders could have another problem on their hands: The Gen-Z employees who are sticking around might not be interested in stepping up within the organization, a reluctance otherwise known as unbossing.
That’s according to recent data from Robert Walters, a global recruitment company, in a trend the company deems conscious unbossing. Fifty-seven percent of the U.S. Gen-Z workers they surveyed said they weren’t interested in becoming middle managers. Rather, 60 percent are opting for an “individual route to career progression over managing others.”
Why? According to 67 percent of the Gen-Z respondents, middle management roles “are too high stress with low reward.”
Managers have indeed had plenty on their plates in recent years. Seventy-six percent of HR leaders surveyed by Gartner in 2023 said their managers were “overwhelmed by the growth of their job responsibilities” — which, according to experts who previously spoke with Inc., include managing return-to-office policies, AI developments, and more.
Perhaps it’s no surprise, then, that in a LinkedIn survey this year, 47 percent of managers said they felt burned out — more so than directors or individual contributors.
Adding to this is the fact that many Gen-Zers are already unhappy with their roles. According to data from the workforce management platform Deputy, shared exclusively with Inc., hourly Gen-Z employees experienced twice as many frustrating shifts in the year’s third quarter as they did in the first.
Despite their reluctance, 43 percent of Gen-Z workers do expect that they will need to move into a middle management role at some point, according to the Robert Walters report. And that ascent to management is already underway: According to ADP, Gen-Zers now make up 3 percent of the managerial workforce compared with just over 1 percent in 2020, though their share remains small overall.
Nevertheless, 40 percent of surveyed Gen-Z workers remain resolute in their “unbossing” and “adamant” that they will “avoid middle management altogether,” instead set on taking a career route more focused on “personal growth and skills accumulation.”
This could have serious repercussions for the companies that employ these workers, says Sean Puddle, managing director of Robert Walters New York, as middle managers are often the “driving force” behind an organization’s growth: “If you’ve got a load of people who aren’t interested in moving up into that middle management function, it can actually end up limiting your growth and/or stretching the managers that you have got really, really thin.”
But there are ways that companies can boost their Gen-Z workers’ excitement about their work and thus discourage unbossing. According to Deloitte’s latest Gen-Z and Millennial Survey, 86 percent of Gen-Z respondents say a sense of purpose at work is “important to their overall job satisfaction,” and work-life balance ranks as the top priority when choosing an employer.
There are also ways that companies can better support and prioritize their managers, Puddle says, and in turn make the role of middle managers more appealing to younger workers.
“How are they going to give more autonomy and more decision-making power to that group of people? Are they able to make sure they’re assessing workload regularly so that people aren’t just getting overburdened?” he says. “And what are some of the mechanisms that they could deploy to try and ease that overburden, if that happens?”
BY SARAH LYNCH
Friday, November 8, 2024
The No. 1 Business Skill You’ll Need in 2025
It doesn’t matter if you’re an entrepreneur or technologist or just someone trying to innovate at your company. If you’re doing new things to keep yourself on the cutting edge rather than getting sliced up and left behind in a million pieces, the rules are changing drastically.
Allow me to connect the dots.
Innovation Died This Year and No One Told You
2024 is going to be remembered as the year that promises around artificial intelligence became the go-to substitute for real technical innovation. You can lump the marketing machines for generative AI, machine learning, and artificial general intelligence into the same bucket. And then throw that bucket at a canvas and sell it for billions of dollars.
Now, for those unaware, I’m not anti-AI. I was on the AI commercialization train over a decade ago, we sold to a private equity firm, and then, like any good entrepreneur, I went and did other things.
But as the generative AI bubble was inflating—and all the venture capital money was being sucked into that bubble, and the venture and innovation arms of Google, Microsoft, and Apple leaped into the age of miracles—it quickly became obvious that the scraps left over for normie entrepreneurs and technologists solving complex problems with advancements in technology that weren’t so artificial, those scraps just weren’t going to be enough to compete with “AI for… whatever, it doesn’t matter, just write me a check… LLMs!!!”
The money consolidated quickly, and it helped accelerate a decline in the overall VC investor ranks, as a crop of new venture capitalists that had sprouted during the post-pandemic, pre-inflation era of cheap money suddenly decided that there were better, safer, saner options, and those vests were kind of stupid-looking anyway—and so they quit.
But as it turned out, those corporate options weren’t so safe after all. See, the corporate world was innovating too.
And so here we are, where your entrepreneurial and innovation skills are withering on the vine, because those jobs have all been cut while the new jobs are being created by AI and filled by AI, skipping over your proven, real-world experience while it scans your résumé for AI skills to help AI build more AI.
Here’s the Counterplay
We already know it’s unwise to fight AI with AI, and I don’t want to be the millionth person telling you that you need to learn AI to compete with AI.
Nah, I’m going to get a little ahead of the curve on this, I’ll speculate recklessly, and so some of it might seem like nonsense.
Oh, also, if you’re of a certain age, I’m going to slam a Billy Joel song into your brain for a few days, so I apologize in advance. Seriously, stop reading now if you’re over 40 and you’d like to be able to think clearly over the next 48 hours.
Honesty Is Such a Lonely Word
You were warned.
This is the skill you need. I’m serious.
I’m not talking about not lying. I’m not your mom. I’m talking about how to approach everything that you do on the innovation and technology fronts, from building to positioning to communication to marketing and selling.
I’m talking about intellectual honesty.
Let me go to Wikipedia, because sometimes when I put those words together people think I’m calling them stupid liars and they want to punch me even more than usual. Scrolling down the definition a bit:
“Within the realm of business, intellectual honesty entails basing decisions on factual evidence, consistently pursuing truth in problem-solving, and setting aside personal aspirations.”
Now, in future articles, I’ll cover some of the more rubber-meets-road applications of intellectual honesty, but since our time together today is short, I’m just gonna high-level it.
Ideas Are Back
I actually like this definition better, it’s from something called Wikiversity: “Intellectual honesty is honesty in the acquisition, analysis, and transmission of ideas.”
Yeah, ideas are valuable again.
When people give business advice, one of their go-to moves is to talk about how ideas are cheap and plentiful, and how everyone has loads of them, so they’re worth nothing.
This is kinda true, but kinda true is never actually true.
That advice is usually given in the context of the best business idea in the universe being useless if you don’t execute on it properly. But the reverse is true too. The best minds with the most experience doing everything right cannot save a terrible idea from being terrible.
In a post-AI world—and let’s face it, we’re already there—the proper “acquisition, analysis, and transmission of ideas” is the critical business skill to keep oneself, one’s product, and one’s company on the cutting edge, AI or not.
That means talk is cheap, not the idea itself.
Show, Don’t Tell
You need to be able to show: the value of the idea, the manifestation of that idea into reality, the level of execution required to get there, and finally the coming-to-fruition that results in an exponential return on the resources and investment poured into that idea.
Show, don’t tell. Because anything you can just say, AI can do, at least in the minds of the people you need backing from.
You can’t just slam an idea and a plan into a pitch deck or business proposal anymore. The first question in their minds, whether they ask it or not, is “Why can’t we just have AI do this?”
You’ll need to be able to counter that, with intellectual honesty.
Let’s get honest.
Where Did You Get That Idea?
On the idea acquisition front, bandwagoning is out. And with that, first-mover advantage is probably a thing of the past as well. Ideas that are simple, unoriginal, or just a twist on an existing success are going to be far more likely to be sniped by an AI-driven entity cashing in on any company’s initial traction.
If an idea has weak intellectual property prospects or you can’t develop a strong competitive moat, it’s copyable. And if there’s one thing that no one can argue AI doesn’t do well, it’s copy, at speed.
Is Your Idea Any Good?
I do get a dozen ideas a day, and on initial analysis, almost all of them are immediately identifiable as crap. It’s the ones that aren’t immediately identifiable that get me in trouble.
I can spend years stuck on a bad idea. And I only do this because I also have the experience of taking years to discover one little tweak that turns a bad idea into a brilliant idea. I can’t do that without the idea being on the market.
But AI can determine a million of those little tweaks, try them, and maybe find the right one before you finish breakfast. You need to know and be sure of the value of your idea, and fast.
Can You Communicate the Value of Your Idea?
As the traditional pitch deck heads to the dustbin, so goes the traditional elevator pitch. And while I’ll have more on the demise of the entrepreneurial pitch play in future posts, the only thing that speaks louder than words is numbers. And those numbers only come with action.
It has always been important to focus primarily on those tasks that lead to growth—in customers, revenue, and profit—but now it’s mandatory. It’s time to drop all pretense of traditional business models and metrics that don’t contribute to growth, and preferably rapid growth.
The competition is doing it. They’re cutting everything that isn’t directly responsible for revenue and replacing it with… AI.
You have to do it with… you.
Can you make that happen? Be honest. And please follow along as I head down that path.
EXPERT OPINION BY JOE PROCOPIO, FOUNDER, JOEPROCOPIO.COM @JPROCO
Wednesday, November 6, 2024
Physical Intelligence, now valued at about $2 billion, aims to create AI models that can power robots.
The artificial intelligence start-up Physical Intelligence, which aims to build general-purpose AI models and algorithms for real-world robots, is set to announce a new $400 million funding round today with backing from Jeff Bezos and other big-name investors.
The financing—which has not yet been disclosed on Physical Intelligence’s website but was reported by The New York Times and will reportedly go live today—was led by Bezos, the founder and executive chairman of Amazon, as well as two venture capital firms, Thrive Capital and Lux Capital. The Times reports that the AI giant OpenAI as well as the investment firms Redpoint Ventures and Bond also participated.
A spokesperson for Thrive Capital confirmed the details of the Times story to Inc. Physical Intelligence, which has not yet posted about the round on its blog, directed Inc. to that same spokesperson.
“The fund-raising valued the company at about $2 billion, not including the new investments,” Times reporter Michael J. de la Merced wrote. “That’s significantly more than the $70 million that the start-up, which was founded this year, had raised in seed financing.”
Physical Intelligence, also known as Pi and sometimes stylized as π, currently lists five investors on its website: three of the participants in this latest round (Lux Capital, Thrive Capital, and OpenAI) and two not named in the Times story (Khosla Ventures and Sequoia Capital). Data on Crunchbase indicates that those five firms, along with Outset Capital and Greenoaks, constituted Physical Intelligence’s March 2024 seed round. Thrive confirmed to Inc. that it participated in the seed round.
Physical Intelligence aims to bring general-use artificial intelligence out into the physical world, according to its website, which features videos of robots folding laundry and assembling cardboard boxes.
“What we’re doing is not just a brain for any particular robot,” co-founder and chief executive Karol Hausman told the Times. “It’s a single generalist brain that can control any robot.”
Lux Capital, Redpoint Ventures, and Bond did not immediately respond to a request for comment. Bezos’ family investment office, Bezos Expeditions, could not be reached, but it does list Physical Intelligence among its portfolio companies. An OpenAI spokesperson confirmed the company’s participation in the round.
Last month, The Information reported that Physical Intelligence was pursuing a $300 million round at a valuation of about $2 billion.