Friday, November 29, 2024

Microsoft’s Response to Its Major Outage Is the 1 Thing No Company Should Ever Do

On Monday, Microsoft suffered a widespread outage that affected Outlook and Teams. Reports started appearing early Monday morning and escalated throughout the day as more people showed up at work. It’s not entirely clear how many people were affected, but there are more than 400 million Outlook users, according to Microsoft.

The company acknowledged the outage, though it stopped short of explaining what happened. “We’ve identified a recent change which we believe has resulted in impact,” the company wrote in a post on X. “We’ve started to revert the change and are investigating what additional actions are required to mitigate the issue.” It’s not really clear what that even means. Nothing in that post, or the subsequent thread, explains what that change is, why it caused people to lose access to their email or messaging, or what exactly Microsoft is doing about it. It sounds like someone accidentally pushed the wrong button or somehow introduced a bug, which is a pretty bad look for one of the largest companies on Earth.

On the one hand, I guess it’s good news that it wasn’t a breach or some kind of hack. Certainly, the email service that millions of businesses count on would be a valuable target for bad actors. There’s some consolation that all of those email accounts weren’t breached. On the other hand, it doesn’t inspire a lot of confidence if the primary form of communication for millions of workers can be brought down by something a company puts out intentionally.

This is especially true after the CrowdStrike outage this summer, when that company issued an update to its anti-malware software that caused a fatal error in Windows machines, leaving them unable to boot. In that case, instead of losing access to email, the consequences included thousands of canceled flights and hospitals reverting to paper charting when they couldn’t access computer systems. That’s probably worse, but it doesn’t change the fact that this is a bad look for Microsoft. At the same time, the company’s response made things objectively worse.

Look, I get that IT and software professionals speak a different language when it comes to situations like this. The problem is, the people who are trying to do their jobs don’t care about the nuance of software bugs or unintentional downtime. They care about getting their work done. To be fair, most companies are really bad at handling this. For the most part, that’s because they often don’t immediately know what caused the problem. It takes time to diagnose what went wrong, come up with a fix, and deploy it across a massive network of computers around the world. Then there’s the fact that companies are hesitant to be transparent about problems if it might make them look bad. What they often fail to understand is that being clear and transparent goes a long way, even when things are going wrong.

Also, this is Microsoft, a $3 trillion company that makes the software that powers most of the world’s computers. This is the kind of thing that isn’t supposed to happen. And, when it does, you’d expect Microsoft—of all companies—to understand that it has to do better. That means explaining what happened. A lot of people work almost entirely out of their email. Even in the year 2024, it’s still a primary means of communication for hundreds of millions of people. If their email goes down, they deserve to know why, if for no other reason than to be able to make an informed decision about whether or not they should find another option.
People understand that downtime happens, but in this case Microsoft has had a hard time bringing its services back online, and it has had an even harder time talking about what happened. That doesn’t exactly inspire confidence. The bottom line is that if you make a piece of software that millions of people depend on for their work, trust is your most valuable asset. And trust is something you earn through clear and transparent communication. Anything less is the one thing no company should ever do. EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN

Thursday, November 28, 2024

Salesforce CEO Marc Benioff Thinks AI Has Hit a Roadblock

It’s hard to turn on a computer and not see evidence of AI’s advances into our online lives. It’s in the Microsoft or Google tools you use on your work PC, and the social media apps you use to escape the stresses of reality, and it seems that some kind of buzzy new AI advance gets announced almost daily. But are all these AI chatbots, with ChatGPT in the lead, actually as smart as we think they are? One tech leader, Salesforce CEO Marc Benioff, is beginning to doubt the hype. In fact, Benioff thinks we may have hit a ceiling in the development of “large language model” (LLM) AIs, and suggests they actually won’t get much smarter, despite the news of new models or new capabilities. The real next-gen AI action, Benioff thinks, is in AI agents, not chatbots, and he’s betting big on that prediction within his own company.

In an interview on the Wall Street Journal’s Future of Everything podcast, Benioff explained his thinking. Essentially, even though AI companies are desperately trying to push for “next generation” LLM chatbots, like the much-rumored GPT-5 from OpenAI, Benioff thinks we’re “hitting the upper limits of the LLMs right now.” Benioff admits we have “incredible tools to augment our productivity, to augment our employees, to improve our margins, to improve our revenues, to make our companies fundamentally better,” and even “have higher fidelity relationships with our customers.” But he also says we’re nowhere near the level of AI seen in “these crazy movies”—meaning the kind of super-smart AI seen in popular sci-fi. In particular, Benioff worries that there are some players in the AI game who are evangelizing the tech by suggesting it can solve some of the world’s biggest problems, but it really can’t, and that’s actually a distraction from the actual benefits AI can provide.

What’s really coming up in AI tech, Benioff thinks, isn’t super-smart AIs like in the Terminator movies (and we’re all glad the apocalyptic vision of the franchise hasn’t come to pass. Yet.) but powerful “agentic” AI. While chatbots work in a call-and-response style, answering queries when users ask for help, AI agents are chunks of code that can actually perform “actions” in an online environment, like finding appropriate data and then using it to fill in forms, or pressing “buy” on a shopping cart in an online store.

In an X post yesterday, Benioff argued that government “regulatory, compliance, and political demands” are “consuming up to 40% of budgets,” and they’re growing fast. So it’s time for a “transformation” via AI agents, which can “revolutionize operations—automating reporting, audits, case management,” and more. He suggested it was time to “replace bureaucracy with an agentic layer that serves people, not politics.” He added a personal spin on the idea by saying “Welcome to the future—welcome Agentforce!” in a blatant advert for his company’s recently unveiled agent-based AI system called Agentforce, which can, at launch, act like a digital sales rep.

Why should we care, though? Benioff is a billionaire, and though he’s certainly got his finger on the pulse of tech and has skin in the game, his company isn’t developing cutting-edge AI in the same manner as OpenAI or Google. Though his post on X was aimed at a certain sector—the paperwork load from various government offices—Benioff is essentially predicting the near future of AI-assisted work, where many menial or frustrating “bureaucratic” office tasks are dramatically sped up by agent-based tools.
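To make the chatbot-versus-agent distinction concrete, here is a minimal, purely illustrative Python sketch. The tools and the keyword-based planner are hypothetical stand-ins (real products such as Agentforce rely on LLM-driven planning, which is not shown here); the point is only the structural difference Benioff describes: a chatbot returns text, while an agent chooses and executes actions.

```python
# Illustrative only: a toy "agent" that acts, versus a chatbot that just answers.
# The tools and the planner below are hypothetical stand-ins, not any vendor's API.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]

def chatbot(query: str) -> str:
    # Call-and-response: returns text but takes no action.
    return f"Here is some advice about: {query}"

def agent(goal: str, tools: Dict[str, Tool]) -> str:
    # A real agent would let an LLM plan which tool to call next; the keyword
    # check below is a stand-in for that planning step.
    if "expense" in goal.lower():
        form = tools["fill_form"].run(goal)
        return tools["submit"].run(form)
    return chatbot(goal)  # nothing actionable, fall back to plain chat

tools = {
    "fill_form": Tool("fill_form", "fill out an expense form",
                      lambda g: f"completed form for: {g}"),
    "submit": Tool("submit", "submit a completed form",
                   lambda doc: f"submitted -> {doc}"),
}

print(chatbot("file my March travel expenses"))       # text only
print(agent("file my March travel expenses", tools))  # performs the steps
```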
AI critics will worry Benioff is predicting AI will replace some, perhaps more menial, office roles—but arguably he’s saying agents will free up employees’ time to be more effective at the tasks that actually comprise their jobs: for example, if filing a travel expense takes up a few hours of a worker’s day, they’re not going to be contributing to the company’s bottom line…but if an AI can do that job for them, then they’ve gained two useful work hours. And Benioff’s words about slow-paced AI development may ring true in other ways: recently it emerged that AI giant OpenAI was struggling to develop its next-gen ChatGPT engine, and was being forced to try wholly new tactics. If we’re all expecting “smart” AIs to transform our workplace, we may have to wait a while. BY KIT EATON @KITEATON

Tuesday, November 26, 2024

Gen-Zers Blaze the AI Workplace Trail, but Still Want More Guidance

Companies of all sizes continue the rapid adoption of emerging artificial intelligence (AI) applications in an effort to lower the costs and improve the performance of their businesses. Now, a series of recent studies offers owners and managers insights into how their employees are using the tech—especially digital native Gen Zers, who are embracing it faster than their older peers. The polls confirm the growing inroads that generative AI is making into business, and reflect how Gen Zers are embracing the tech more rapidly than older cohorts. While that may seem a logical role to assume for members of the first generation brought up with digital devices in their hands, it’s also an indicator of how AI use is likely to rapidly snowball. As with social media habits and the adoption of office applications like Zoom or Slack, younger people have tended to blaze the trails and set the pace of new tech use for other age groups to follow—as now seems to be true with artificial intelligence.

While their numbers differ, all the recent surveys indicate Gen Zers are taking to AI for work in a very big way. According to a poll by online tech upskilling company upGrad Enterprise, “73 per cent of Gen Z (are) already integrating GenAI into their daily tasks.” A nearly identical portion of respondents are also using the results those apps supply with minimal or no editing. A study of 1,000 U.S.-based knowledge workers aged 22-39 released Monday by Google found 93 percent of Gen Zers regularly using those advanced tech tools. That compares to 79 percent of Millennials and 82 percent across all generations.

Perhaps not surprisingly, the most frequent use cases cited were tasks for which early AI applications are widespread and easily accessible. According to Google—which provides AI-enhanced services like Gmail, Docs, and Drive—respondents frequently used apps for “email responses, writing challenging emails from scratch, or helping to overcome language barriers.” It also noted about 88 percent of participants said those tools eased starting tasks that seem overwhelming, with similar numbers feeling the tech improved their writing and afforded greater work flexibility.

But despite the rising use and influence of AI in the workplace, it’s clear from the polling that employees are also still feeling somewhat torn about the tech in other ways. For example, upGrad Enterprise’s survey found 52 percent of Gen Z respondents said their company’s AI training was insufficient, and 54 percent said guidelines for how the tech may and may not be used aren’t clear enough. Another poll showed 62 percent of younger employees fear AI apps may eventually eliminate their work. That job security concern may explain why 56 percent said they preferred to rely on smart bots for finding answers they need, rather than going to their bosses for help. A similar ambivalence was reflected in the 52 percent of Gen Z employees who said they regularly discussed AI uses with co-workers, according to the Google study. Yet at the same time, it found 75 percent of people questioned said they had suggested using AI tools to office peers who need help, further fueling overall workplace adoption.
And that, said Google Workspace product vice president Yulie Kwon Kim, suggests ambitious employees of all ages “are not simply using AI as a tool for efficiency, but as a catalyst to help grow their careers.” However, upGrad CEO Srikanth Iyengar noted his company’s study also reflects not just how “Gen Z is embracing AI but also the urgent need for organizations to establish supportive policies and implement targeted training.” Maybe once they do, younger employees will feel more comfortable sounding out their older managers than huddling with ChatGPT to learn what they need to know. BY BRUCE CRUMLEY

Saturday, November 23, 2024

How Mark Cuban, Tim Cook, and Bill Gates Are Using AI to Be Massively More Productive

Generally, it’s pretty hard for the average entrepreneur or professional to emulate the productivity habits of the likes of Tim Cook, Mark Cuban, and Bill Gates. Billionaire CEOs have a small army of assistants to manage their days and plan their schedules down to the minute, after all. But there’s one productivity-boosting trick of theirs absolutely anyone can steal and benefit from—time-saving artificial intelligence hacks. Generative AI tools like ChatGPT have only been available for public use for two years, but according to a series of recent interviews, they’re already changing how some of the most successful CEOs in the world manage their days. Former Shark and serial entrepreneur Mark Cuban, Apple boss Tim Cook, and Microsoft founder-turned-philanthropist Bill Gates all recently shared how they’re using AI tools. And handily for everyday workers, all the tools and techniques they mentioned are freely available for anyone to experiment with.

Tim Cook uses AI to summarize his emails

Take Tim Cook’s love of Apple Intelligence’s email summaries feature, for example. If you think your email overload is bad, spare a thought for the Apple CEO, who gets upwards of 800 emails a day. Being a conscientious guy, he tried to read them all, he recently told the Wall Street Journal. That was a huge time suck until he started using Apple’s AI tool to summarize the deluge in his inbox every morning. “If I can save time here and there, it adds up to something significant across a day, a week, a month,” Cook told the WSJ. “It’s changed my life. It really has.” This could seem like just another CEO touting his company’s offerings (and there is no doubt some element of that going on here), but there are a host of AI email summary tools available for both Mac users and Microsoft fans. If you’re skeptical of Cook’s rave review of Apple’s products, try any of these tools to see if they can change your working life too.

Mark Cuban’s favorite AI hack

When it comes to Mark Cuban’s recommendation, there is no such conflict of interest. Cuban’s email problem is even worse than Cook’s. He receives thousands of often repetitive emails a day, he recently told CNBC. His solution? Using Gemini, Google’s generative AI assistant, to help him power through his replies in much less time. “It’s reduced the need for me to write out routine replies,” he told CNBC. “I can spend 30 seconds evaluating its response and hit ‘send’ versus typing it all out myself.” Cuban called outsourcing much of his email writing to AI the “ultimate time-savings hack.” Other CEOs can certainly experiment with AI tools to see if they could similarly streamline their inbox wrangling.

Bill Gates is a big fan of AI meeting notes

Not every iconic business leader is most excited about using AI to process emails. Bill Gates explained in a recent interview with The Verge that his favorite way to use new AI tools is for taking and searching through meeting notes. Gates has long been known as extremely detail-oriented and a dedicated note taker. But he used to be a big believer in the old-fashioned pen-and-paper approach. “You won’t catch me in a meeting without a legal pad and pen in hand—and I take tons of notes in the margins while I read. I’ve always believed that handwriting notes helps you process information better,” Gates once wrote on LinkedIn. But AI has convinced him to update his note-taking approach, he told The Verge. Now he also has AI sit in on and transcribe meetings so he can reference those records later.
“I’d say the feature I use the most is the meeting summary, which is integrated into [Microsoft] Teams, which I use a lot,” he explained. “The ability to interact and not just get the summary, but ask questions about the meeting, is pretty fantastic.”

There’s no shortage of AI tools to experiment with

Much like Tim Cook’s Apple-boosting reply, Gates is clearly plumping for Microsoft products here. But again, those looking to experiment with using AI for meeting notes aren’t limited to using Microsoft tools. There are tons of competing products to play around with. The main point here isn’t to try to sell you on any particular tool. It’s to highlight that some of the smartest and most tech-savvy leaders around are already finding massive value in integrating AI into their daily routines. If you’re not experimenting with AI tools for similar uses, you’re probably missing an opportunity to save yourself time and hassle. EXPERT OPINION BY JESSICA STILLMAN @ENTRYLEVELREBEL
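For readers who want to experiment with the underlying idea rather than any particular product, here is a rough sketch of what an AI email summarizer does behind the scenes. It assumes the OpenAI Python client and an API key purely as an example; the model name, prompt, and sample messages are placeholders, and any comparable LLM API could be swapped in.

```python
# A minimal sketch of AI email summarization, assuming the OpenAI Python client
# (pip install openai) and an OPENAI_API_KEY environment variable are available.
# The model name and sample emails below are placeholders, not recommendations.
from openai import OpenAI

emails = [
    "From: CFO - Q4 budget review moved to Friday at 2 p.m., please confirm.",
    "From: Vendor - Renewal quote attached; 15% increase over last year.",
    "From: Team lead - Release notes draft is ready for your sign-off.",
]

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model your account offers
    messages=[{
        "role": "user",
        "content": "Summarize these emails in three short bullets, most urgent first:\n\n"
                   + "\n".join(emails),
    }],
)
print(response.choices[0].message.content)
```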

Wednesday, November 20, 2024

Google’s Latest Search Update Suggests Business Owners Need a New Content Marketing Strategy

A new update to Google’s search algorithm has top SEO consultants in agreement: The rules of Google have changed, and the old playbooks need to be rewritten. Earlier this week, Google released its latest “Core Update,” meaning the tech giant’s search algorithms and systems are being refreshed and adjusted. This is Google’s third core update of 2024, and while SEO experts say the impact from November’s update won’t be known for a few weeks, they expect it to follow a recent trend of punishing websites for producing spammy or AI-written content. Their advice to business owners? Quality over quantity.

SEO experts like David Riggs, founder of SEO firm Pneuma Media (No. 170 on the 2024 Inc. 5000), say that Google’s recent efforts are intended to “reduce the impact of gamification” on search results. “The SEO strategy of 2010 was to just throw a bunch of keywords in and it’ll rank,” he says. “Now, it’s very different.” Riggs says that many of the tricks and techniques that SEO pros used to rely on, like filling up articles with backlinks, publishing short “quick hits,” and creating keyword-filled blog posts, are now being actively disincentivized by Google, as the company attempts to fight against AI-generated content intentionally designed to game the system. “Google caught on and changed the cheat codes,” adds Riggs, “and now you’ve got to change your strategy.”

David Kauzlaric, co-founder of SEO consultancy Agency Elevation (No. 461 on the 2024 Inc. 5000), says that the last two years have seen a flurry of core updates that have totally upended how SEO professionals approach their work. “These updates are helping Google’s users,” he says. “They’re not helping business owners who are trying to do SEO. It makes our job far worse and far harder.” “If you don’t pivot to provide what Google wants,” Kauzlaric says, “you’re going to continue to see a decline in traffic, because Google is getting very particular.”

How can businesses ensure that their websites and content still rank highly in this new era of Google? Steven Wilson, director of SEO at Above The Bar Marketing (No. 614 on the 2024 Inc. 5000), says if you’re using AI to write full blog posts for your website, you need to stop now. “There is a war on AI,” says Wilson, who adds that his own research has found that “the more AI content you have, the less likely that you’ll show up in search.” Instead of relying entirely on AI, Wilson recommends writing content in a conversational, more casual tone. “AI can’t do that conversational tone,” says Wilson, who also says business owners should be careful not to produce an overabundance of content just for the sake of getting ranked by Google. Wilson says you can still use AI to help write pieces and optimize headlines, but the majority of the writing should come from a human. Wilson also recommends limiting the majority of your content to topics that are relevant to your business and that you are an expert in. Google’s algorithm highly values authors that appear to have authority on certain subjects, so sticking to “topic clusters” in your realm of expertise is an efficient way to build that authority.

Another new strategy that seems to be showing promise is deleting old SEO-focused content from your website. Parker Evensen, founder of digital marketing agency Honest Digital (No. 878 on the 2024 Inc. 5000), says that in previous years, “if you had a lot of authority, you could push out huge quantities of content, and that could help your website. But we’ve found that paring down a lot of that content, especially irrelevant content, can actually help your website.”

“I think what Google is trying to do is get people to stop fighting the algorithm and focus on creating the best, most high-quality content they can,” says Riggs. “They want something from a human perspective that’s creating good value and answering real questions. That’s the content that’s going to win.”

Monday, November 18, 2024

OpenAI, Competitors Look for Ways to Overcome Current Limitations

Artificial intelligence companies like OpenAI are seeking to overcome unexpected delays and challenges in the pursuit of ever bigger large language models by developing training techniques that use more human-like ways for algorithms to “think.” A dozen AI scientists, researchers and investors told Reuters they believe that these techniques, which are behind OpenAI’s recently released o1 model, could reshape the AI arms race, and have implications for the types of resources that AI companies have an insatiable demand for, from energy to types of chips. OpenAI declined to comment for this story.

After the release of the viral ChatGPT chatbot two years ago, technology companies, whose valuations have benefited greatly from the AI boom, have publicly maintained that “scaling up” current models through adding more data and computing power will consistently lead to improved AI models. But now, some of the most prominent AI scientists are speaking out on the limitations of this “bigger is better” philosophy. Ilya Sutskever, co-founder of AI labs Safe Superintelligence (SSI) and OpenAI, told Reuters recently that results from scaling up pre-training — the phase of training an AI model that uses a vast amount of unlabeled data to understand language patterns and structures — have plateaued. Sutskever is widely credited as an early advocate of achieving massive leaps in generative AI advancement through the use of more data and computing power in pre-training, which eventually created ChatGPT. Sutskever left OpenAI earlier this year to found SSI.

Growth and stagnation

“The 2010s were the age of scaling, now we’re back in the age of wonder and discovery once again. Everyone is looking for the next thing,” Sutskever said. “Scaling the right thing matters more now than ever.” Sutskever declined to share more details on how his team is addressing the issue, other than saying SSI is working on an alternative approach to scaling up pre-training.

Behind the scenes, researchers at major AI labs have been running into delays and disappointing outcomes in the race to release a large language model that outperforms OpenAI’s GPT-4 model, which is nearly two years old, according to three sources familiar with private matters. The so-called ‘training runs’ for large models can cost tens of millions of dollars by simultaneously running hundreds of chips. They are more likely to have hardware-induced failure given how complicated the system is; researchers may not know the eventual performance of the models until the end of the run, which can take months. Another problem is large language models gobble up huge amounts of data, and AI models have exhausted all the easily accessible data in the world. Power shortages have also hindered the training runs, as the process requires vast amounts of energy.

To overcome these challenges, researchers are exploring “test-time compute,” a technique that enhances existing AI models during the so-called “inference” phase, or when the model is being used. For example, instead of immediately choosing a single answer, a model could generate and evaluate multiple possibilities in real time, ultimately choosing the best path forward. This method allows models to dedicate more processing power to challenging tasks like math or coding problems or complex operations that demand human-like reasoning and decision-making.
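As a rough illustration of that “generate and evaluate multiple possibilities” idea, here is a generic best-of-N sketch of test-time compute in Python. It is not OpenAI’s actual o1 method; the candidate generator and scoring function are hypothetical stand-ins for a model’s sampler and a verifier, and the toy arithmetic problem simply makes the effect easy to see: spending more compute at inference time raises the odds that a good answer is found.

```python
# A generic sketch of "test-time compute": sample several candidate answers and
# keep the best-scored one. generate_candidate() and score() are hypothetical
# stand-ins for a model's sampler and a verifier; this is not OpenAI's o1 recipe.
import random

def generate_candidate(problem: str, rng: random.Random) -> int:
    # Stand-in for sampling one answer from a model: here, just a noisy guess.
    return rng.randint(0, 20)

def score(problem: str, answer: int) -> float:
    # Stand-in for a verifier or reward model; for this toy arithmetic problem
    # we can check the answer directly (eval is fine for a trusted toy string).
    return 1.0 if answer == eval(problem) else 0.0

def solve(problem: str, n_samples: int, seed: int = 0) -> int:
    # More samples means more inference-time compute and a better chance that
    # at least one candidate scores well; the best-scoring one is returned.
    rng = random.Random(seed)
    candidates = [generate_candidate(problem, rng) for _ in range(n_samples)]
    return max(candidates, key=lambda a: score(problem, a))

print(solve("7 + 6", n_samples=1))    # one sample: usually wrong
print(solve("7 + 6", n_samples=200))  # more test-time compute: 13 is nearly certain
```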
“It turned out that having a bot think for just 20 seconds in a hand of poker got the same boosting performance as scaling up the model by 100,000x and training it for 100,000 times longer,” said Noam Brown, a researcher at OpenAI who worked on o1, at the TED AI conference in San Francisco last month. OpenAI has embraced this technique in its newly released model known as “o1,” formerly known as Q* and Strawberry, which Reuters first reported in July. The o1 model can “think” through problems in a multi-step manner, similar to human reasoning. It also involves using data and feedback curated from PhDs and industry experts. The secret sauce of the o1 series is another set of training carried out on top of ‘base’ models like GPT-4, and the company says it plans to apply this technique with more and bigger base models.

Competition ramps up

At the same time, researchers at other top AI labs, from Anthropic, xAI, and Google DeepMind, have also been working to develop their own versions of the technique, according to five people familiar with the efforts. “We see a lot of low-hanging fruit that we can go pluck to make these models better very quickly,” said Kevin Weil, chief product officer at OpenAI, at a tech conference in October. “By the time people do catch up, we’re going to try and be three more steps ahead.” Google and xAI did not respond to requests for comment and Anthropic had no immediate comment.

The implications could alter the competitive landscape for AI hardware, thus far dominated by insatiable demand for Nvidia’s AI chips. Prominent venture capital investors, from Sequoia to Andreessen Horowitz, who have poured billions into funding expensive development of AI models at multiple AI labs including OpenAI and xAI, are taking notice of the transition and weighing the impact on their expensive bets. “This shift will move us from a world of massive pre-training clusters toward inference clouds, which are distributed, cloud-based servers for inference,” Sonya Huang, a partner at Sequoia Capital, told Reuters.

Demand for Nvidia’s AI chips, which are the most cutting edge, has fueled its rise to becoming the world’s most valuable company, surpassing Apple in October. Unlike training chips, where Nvidia dominates, the chip giant could face more competition in the inference market. Asked about the possible impact on demand for its products, Nvidia pointed to recent company presentations on the importance of the technique behind the o1 model. Its CEO, Jensen Huang, has talked about increasing demand for using its chips for inference. “We’ve now discovered a second scaling law, and this is the scaling law at a time of inference … All of these factors have led to the demand for Blackwell being incredibly high,” Huang said last month at a conference in India, referring to the company’s latest AI chip.

Sunday, November 17, 2024

Crypto’s Year of Capitulation Is a Joke

Thank God for the crypto visionaries, who heroically declared “We’re leaving banking and finance in the dust” and are now huddled in a panic room, clutching a single American flag. Listen closely and you’ll hear them whispering: “We simply cannot innovate without first being kissed on the forehead by the new president.” To put it another way, crypto’s capitulation to trad markets and politics over the past year is an absolute joke.

When Bitcoin emerged in 2009, the concept was revolutionary. It promised a decentralized currency that operated without the oversight of banks or governments. In Satoshi Nakamoto’s groundbreaking whitepaper, Bitcoin was described as a “peer-to-peer electronic cash system.” This ambition was radical in its simplicity. Bitcoin offered a way to bypass intermediaries entirely. It would grant people the ability to control their financial interactions and assets. From day one, Bitcoin was all about decentralization, sticking it to the banks and tearing down the financial establishment. Cut out the middlemen, they said. Liberate the masses, they said. It was a vision of freedom with a side of chaos.

Crypto’s Promise Versus Reality

Then came the gold rush. Bitcoin’s value exploded, altcoins multiplied like weeds, and DeFi platforms popped up. Each one claimed it was about to overthrow Wall Street any day now. True believers swore that crypto could make banks obsolete, that it was building a utopian financial playground where everyone—especially the people the banks ignored—could finally get ahead. Since then, the same banks and corporations that once sneered at crypto as a scam are now jumping on the bandwagon, especially through shiny new Bitcoin exchange-traded funds. With the U.S. Securities and Exchange Commission’s blessing earlier this year, Wall Street can now offer “crypto exposure” without anyone having to get an actual coin. Such heavyweights as BlackRock and Fidelity wasted no time launching their own ETFs. Institutional money is flooding in. Crypto firms that once swore to disrupt the system are bending over backward to join it. In the U.K., where the Financial Conduct Authority barely approves a fraction of crypto applications, companies are eagerly adopting know-your-customer and anti-money-laundering protocols. Just to get a foot in the door. The “movement” that should have been finance’s punk rock is now happily cozying up to traditional finance, trading rebellion for respectability.

From Crypto Visionaries to Sell-Outs

In 2024 alone, crypto firms and influencers have shelled out millions in political contributions, with Coinbase and their crew leading the charge, all to butter up the right people and lock down favorable regulations. Lobbying, schmoozing, and campaign donations. It’s a long way from decentralization and “power to the people.” Companies are now openly aligning with politicians who wave the pro-crypto flag—such as former President Donald Trump, who’s been cheerleading for Bitcoin and the whole digital currency circus. The anti-establishment rebellion is another talking point for politicians who smell votes and dollar signs. By hitching themselves to politicians and pushing agendas, crypto leaders risk turning the whole industry into just another lobby group clawing for a slice of influence in the swamp of power games. The more idealistic crowd—myself included—see this as a total betrayal of what crypto was supposed to stand for. Crypto’s got itself a civil war, and it’s as messy as you’d expect.
On one side, you’ve got the pragmatists, grumbling about how “mainstream adoption” might require a little soul-selling. Or a lot of soul-selling. Or a damned fire sale. As the debate rages across Twitter threads, Warpcast, and Discord servers, the real question looms: Can crypto stay true to its anti-establishment roots—or did it already sell out the minute someone printed a whitepaper in Helvetica? Maybe that’s just the natural life cycle of any “revolution.” Sooner or later, everything goes Hot Topic. First, you’re the scrappy underdog, shaking your fist at the establishment, shouting about freedom and autonomy. Then you get a taste of the good life—private jets, Davos invites, a little pat on the head from your friendly neighborhood investment banker. Suddenly, you’re not so different from the suits you swore to dethrone. At some point, the righteous battle cry of “decentralize everything” turns into “well, maybe just a little centralization… for regulatory purposes.”

Too Late for a Revolution?

Do I still think crypto matters? In some ways, yes. I know, I know. It’s a lonely hill to die on. But somewhere under all the jargon, lobbying dollars, and Wall Street handshakes, I still believe there’s a spark left, a shot at reclaiming crypto’s anarchic roots. A system that empowers the individual, shakes off the leeches, and actually challenges the entrenched power structures instead of just asking to sit with them. If you dig deep enough, there’s still a chance to resurrect that original spark—to build something that truly stands outside the walls of power, rather than bending a knee to get inside them. Because if crypto’s going to mean anything at all, it has to remember what it set out to destroy. Before it becomes just another face in the crowd. Otherwise, the “decentralized revolution” that spent a decade screaming about autonomy will keep showing up to the big leagues begging for a seat at the same rotten table it swore to flip. EXPERT OPINION BY JOAN WESTENBERG, FOUNDER AND CEO, STUDIO SELF @JOANWESTENBERG

Wednesday, November 13, 2024

Forget the Nanny, Check the Chatbot. AI May Soon Help With Parenting

As AI technology advances, it’s natural that startups and big tech names want to profit off the revolution by finding ways to put it into more corners of everyday life. Current examples include applications that help you out at the office, tools that assist in fighting employee burnout, and more intimate, subtle uses in areas like health care. Now, according to Andreessen Horowitz partner Justine Moore, AI is set to help out with something very “human” indeed: the complex, stressful, heartfelt, wonderful job of being a parent.

In a post on X yesterday, reported by news site TechCrunch, Moore posited an interesting question: “What if parents could tap into 24/7 support that was much more personal and efficient?” The idea is simple, on its face—we’ve been busy loading up all these super-smart AI systems with megatons of real-world data, tapping into it for help making decisions like, “Which marketing campaign should our startup use?” Within all that data is lots of very practical material, too, including advice that may help a stressed-out parent trying to solve a tricky moment with the kids. Unlike friends and family and even many sources of professional human help, an AI assistant is also always available … even when it’s 3 a.m. and your infant has just thrown up all over the nursery.

Moore went a step further, TechCrunch noted, highlighting what she called a new “wave of ‘parenting co-pilots’ built with LLMs and agents.” Moore touted the opportunity to develop dedicated family-focused AI tools with specialist knowledge and expertise—specific variants of the large language model (LLM) chatbot tech that we’re all getting used to. She suggested that the upcoming wave of AI agents, which are small AI-powered tools that can perform actions all by themselves in a digital environment, could help too. It’s easy to imagine the usefulness of an AI agent that almost instantly finds a deal on the brand of disposable diapers you like and then has them delivered when you need them. But Moore also highlighted several startups with innovative tech to help with parenting, including Cradlewise, which uses AI connected to a baby monitor to help analyze a baby’s sleep pattern—and even rock the crib. There’s also the opportunity for this sort of AI system to be “always in your corner,” Moore said, ready to just listen to your emotional outbursts, even if they happen just after 3 a.m. while your partner is sleeping and you’re cleaning up baby vomit.

Moore’s words may evoke memories of the Eliza program among tech-savvy readers. It’s a bit of a deep cut, but this was developed way back at the end of the 1960s, and was one of the very first chatbots. Primitive as it seems now, Eliza paved the way for lots of much smarter tech that followed it, not least because it was thought by some medical professionals to offer benefits to patients who chatted with it. A 21st-century, parenting-focused AI Eliza could play a role in helping new parents navigate all the challenges of rearing kids. It’s certainly an idea that may be having its moment. In a post on self-described parenting platform Motherly in April, writer Sarah Boland described what she said was an “unpopular opinion,” and noted that she was using AI to help her parent, including for simple things like task planning. And, in May, popular site Lifehacker set out a list of ways AI can help you with parenting jobs. But why should we care specifically about Moore’s social media musings? Firstly, because of whom she works for.
Venture capital firm Andreessen Horowitz is one of the biggest names in the business, and it’s recently been heralding a “new era” in venture funding with a $7.2 billion fund it’s drawn together. If a partner at a firm like this, which has already shown its positive thinking about AI technology, takes time to highlight a whole new area that a buzzy tech may be set to exploit, it’s worth paying attention. The parenting business is already lucrative—analysis site Statista pegs the parenting mobile app global market alone as likely growing to $900 million by 2030. Though it may seem a “soft” market that’s more about human feelings than high tech, technology has been becoming a part of child-raising for years. If your AI startup is looking for unexpected ways to leverage your innovation, perhaps it’s time to consider how you could help raise the next generation of kids. They’ll be the first to be born into a world where AI is normal. Just be thoughtful and perhaps a little wary. AI tech is not without some risks, especially when it comes to younger or more vulnerable users. BY KIT EATON @KITEATON

Monday, November 11, 2024

Why Gen-Z Workers Are Consciously ‘Unbossing’

Company leaders aren’t happy with their Gen-Z employees. Sixty percent said in a recent survey they’ve fired Gen-Z team members whom they hired this year. But these leaders could have another problem on their hands: The Gen-Z employees who are sticking around might not be interested in stepping up within the organization — otherwise known as unbossing.

That’s according to recent data from Robert Walters, a global recruitment company, in a trend the company has dubbed conscious unbossing. Fifty-seven percent of the U.S. Gen-Z workers it surveyed said they weren’t interested in becoming middle managers. Rather, 60 percent are opting for an “individual route to career progression over managing others.” Why? According to 67 percent of the Gen-Z respondents, middle management roles “are too high stress with low reward.”

Managers have indeed had plenty on their plates in recent years. Seventy-six percent of HR leaders surveyed by Gartner in 2023 said their managers were “overwhelmed by the growth of their job responsibilities” — which, according to experts who previously spoke with Inc., include managing return-to-office policies, AI developments, and more. Perhaps it’s no surprise, then, that in a LinkedIn survey this year, 47 percent of managers said they felt burned out — more so than directors or individual contributors. Adding to this is the fact that many Gen-Zers are already unhappy with their roles. According to data from the workforce management platform Deputy, shared exclusively with Inc., hourly Gen-Z employees experienced twice as many frustrating shifts in the year’s third quarter as they did in the first.

Despite their reluctance, 43 percent of Gen-Z workers do expect that they will need to move into a middle management role at some point, according to the Robert Walters report. And that ascent to management is already underway: According to ADP, Gen-Zers now make up 3 percent of the managerial workforce compared with just over 1 percent in 2020, though their share remains small overall. Nevertheless, 40 percent of surveyed Gen-Z workers remain resolute in their “unbossing” and “adamant” that they will “avoid middle management altogether,” instead set on taking a career route more focused on “personal growth and skills accumulation.”

This could have serious repercussions for the companies that employ these workers, says Sean Puddle, managing director of Robert Walters New York, as middle managers are often the “driving force” behind an organization’s growth: “If you’ve got a load of people who aren’t interested in moving up into that middle management function, it can actually end up limiting your growth and/or stretching the managers that you have got really, really thin.”

But there are ways that companies can boost their Gen-Z workers’ excitement about their work and thus discourage unbossing. According to Deloitte’s latest Gen-Z and Millennial Survey, 86 percent of Gen-Z respondents say a sense of purpose at work is “important to their overall job satisfaction,” and work-life balance ranks as the top priority when choosing an employer. There are also ways that companies can better support and prioritize their managers, Puddle says, and in turn make the role of middle managers more appealing to younger workers. “How are they going to give more autonomy and more decision-making power to that group of people? Are they able to make sure they’re assessing workload regularly so that people aren’t just getting overburdened?” he says.
“And what are some of the mechanisms that they could deploy to try and ease that overburden, if that happens?” BY SARAH LYNCH

Friday, November 8, 2024

The No. 1 Business Skill You’ll Need in 2025

It doesn’t matter if you’re an entrepreneur or technologist or just someone trying to innovate at your company. If you’re doing new things to keep yourself on the cutting edge rather than getting sliced up and left behind in a million pieces, the rules are changing drastically. Allow me to connect the dots.

Innovation Died This Year and No One Told You

2024 is going to be remembered as the year that promises around artificial intelligence became the go-to substitute for real technical innovation. You can lump the marketing machines for generative AI, machine learning, and artificial general intelligence into the same bucket. And then throw that bucket at a canvas and sell it for billions of dollars. Now, for those unaware, I’m not anti-AI. I was on the AI commercialization train over a decade ago, we sold to a private equity firm, and then, like any good entrepreneur, I went and did other things. But as the generative AI bubble was inflating—and all the venture capital money was being sucked into that bubble, and the venture and innovation arms of Google, Microsoft, and Apple leaped into the age of miracles—it quickly became obvious that the scraps left over for normie entrepreneurs and technologists solving complex problems with advancements in technology that weren’t so artificial, those scraps just weren’t going to be enough to compete with “AI for… whatever, it doesn’t matter, just write me a check… LLMs!!!”

The money consolidated quickly, and it helped accelerate a decline in the overall VC investor ranks, as a crop of new venture capitalists that had sprouted during the post-pandemic, pre-inflation era of cheap money suddenly decided that there were better, safer, saner options, and those vests were kinda stupid-looking anyway—and so they quit. But as it turned out, those corporate options weren’t so safe after all. See, the corporate world was innovating too. And so here we are, where your entrepreneurial and innovation skills are withering on the vine, because those jobs have all been cut while the new jobs are being created by AI and filled by AI, skipping over your proven, real-world experience while it scans your résumé for AI skills to help AI build more AI.

Here’s the Counterplay

We already know it’s unwise to fight AI with AI, and I don’t want to be the millionth person telling you that you need to learn AI to compete with AI. Nah, I’m going to get a little ahead of the curve on this, I’ll speculate recklessly, and so some of it might seem like nonsense. Oh, also, if you’re of a certain age, I’m going to slam a Billy Joel song into your brain for a few days, so I apologize in advance. Seriously, stop reading now if you’re over 40 and you’d like to be able to think clearly over the next 48 hours.

Honesty Is Such a Lonely Word

You were warned. This is the skill you need. I’m serious. I’m not talking about not lying. I’m not your mom. I’m talking about how to approach everything that you do on the innovation and technology fronts, from building to positioning to communication to marketing and selling. I’m talking about intellectual honesty. Let me go to Wikipedia, because sometimes when I put those words together people think I’m calling them stupid liars and they want to punch me even more than usual.
Scrolling down the definition a bit: “Within the realm of business, intellectual honesty entails basing decisions on factual evidence, consistently pursuing truth in problem-solving, and setting aside personal aspirations.” Now, in future articles, I’ll cover some of the more rubber-meets-road applications of intellectual honesty, but since our time together today is short, I’m just gonna high-level it.

Ideas Are Back

I actually like this definition better, it’s from something called Wikiversity: “Intellectual honesty is honesty in the acquisition, analysis, and transmission of ideas.” Yeah, ideas are valuable again. When people give business advice, one of their go-to moves is to talk about how ideas are cheap, plentiful, and how everyone has loads of them so they’re worth nothing. This is kinda true, but kinda true is never actually true. That advice is usually given in the context of the best business idea in the universe being useless if you don’t execute on it properly. But the reverse is true too. The best minds with the most experience doing everything right cannot save a terrible idea from being terrible. In a post-AI world—and let’s face it, we’re already there—the proper “acquisition, analysis, and transmission of ideas” is the critical business skill to keep oneself, one’s product, and one’s company on the cutting edge, AI or not. That means talk is cheap, not the idea itself.

Show, Don’t Tell

You need to be able to show—show the value of the idea, as well as the manifestation of that idea into reality and the level of execution required to get there, and finally the coming-to-fruition that results in an exponential return on the resources and investment poured into that idea. Show, don’t tell. Because anything you can just say, AI can do, at least in the minds of the people you need backing from. You can’t just slam an idea and a plan into a pitch deck or business proposal anymore. The first question in their minds, whether they ask it or not, is “Why can’t we just have AI do this?” You’ll need to be able to counter that, with intellectual honesty. Let’s get honest.

Where Did You Get That Idea?

On the idea acquisition front, bandwagoning is out. And with that, first-mover advantage is probably a thing of the past as well. Ideas that are simple, unoriginal, or just a twist on an existing success are going to be far more likely to be sniped by an AI-driven entity cashing in on any company’s initial traction. If an idea has weak intellectual property prospects or you can’t develop a strong competitive moat, it’s copyable. And if there’s one thing that no one can argue AI doesn’t do well, it’s copy, at speed.

Is Your Idea Any Good?

I do get a dozen ideas a day, and on initial analysis, almost all of them are immediately identifiable as crap. It’s the ones that aren’t immediately identifiable that get me in trouble. I can spend years stuck on a bad idea. And I only do this because I also have the experience of taking years to discover one little tweak that turns a bad idea into a brilliant idea. I can’t do that without the idea being on the market. But AI can determine a million of those little tweaks, try them, and maybe find the right one before you finish breakfast. You need to know and be sure of the value of your idea, and fast.

Can You Communicate the Value of Your Idea?

As the traditional pitch deck heads to the dustbin, so goes the traditional elevator pitch.
And while I’ll have more on the demise of the entrepreneurial pitch play in future posts, the only thing that speaks louder than words is numbers. And those numbers only come with action. It has always been important to focus primarily on those tasks that lead to growth—in customers, revenue, and profit—but now it’s mandatory. It’s time to drop all pretense of traditional business models and metrics that don’t contribute to growth, and preferably rapid growth. The competition is doing it. They’re cutting everything that isn’t directly responsible for revenue and replacing it with… AI. You have to do it with… you. Can you make that happen? Be honest. And please follow along as I head down that path. EXPERT OPINION BY JOE PROCOPIO, FOUNDER, JOEPROCOPIO.COM @JPROCO

Wednesday, November 6, 2024

Physical Intelligence, now valued at about $2 billion, aims to create AI models that can power robots.

The artificial intelligence start-up Physical Intelligence, which aims to build general-purpose AI models and algorithms for real-world robots, is set to announce a new $400 million funding round today with backing from Jeff Bezos and other big-name investors. The financing—which has not yet been disclosed on Physical Intelligence’s website but was reported by The New York Times and will reportedly go live today—was led by Bezos, the founder and executive chairman of Amazon, as well as two venture capital firms, Thrive Capital and Lux Capital. The Times reports that the AI giant OpenAI as well as the investment firms Redpoint Ventures and Bond also participated. A spokesperson for Thrive Capital confirmed the details of the Times story to Inc. Physical Intelligence, which has not yet posted about the round on its blog, directed Inc. to that same spokesperson.

“The fund-raising valued the company at about $2 billion, not including the new investments,” Times reporter Michael J. de la Merced wrote. “That’s significantly more than the $70 million that the start-up, which was founded this year, had raised in seed financing.” Physical Intelligence, also known as Pi and sometimes stylized as π, currently lists five investors on its website: three of the participants in this latest round (Lux Capital, Thrive Capital and OpenAI) and two not named in the Times story (Khosla Ventures and Sequoia Capital). Data on Crunchbase indicates that those five firms, along with Outset Capital and Greenoaks, constituted Physical Intelligence’s March 2024 seed round. Thrive confirmed to Inc. that they participated in the seed round.

Physical Intelligence aims to bring general-use artificial intelligence out into the physical world, according to its website, which features videos of robots folding laundry and assembling cardboard boxes. “What we’re doing is not just a brain for any particular robot,” co-founder and chief executive Karol Hausman told the Times. “It’s a single generalist brain that can control any robot.” Lux Capital, Redpoint Ventures and Bond did not immediately respond to a request for comment. Bezos’ family investment office Bezos Expeditions could not be reached, but it does list Physical Intelligence among its portfolio companies. An OpenAI spokesperson confirmed their participation in the round. Last month, The Information reported that Physical Intelligence was pursuing a $300 million round at a valuation of about $2 billion.

Monday, November 4, 2024

Unexpected Lessons From 4 Years of Remote Work

It’s been more than four years since the Covid-19 pandemic changed how and where knowledge workers interact with their workplaces. Today, we see a dynamic mix of remote, hybrid, and in-person attendance models, proving that 2020 permanently changed our collective outlook on work, flexibility, and talent. But if there’s anything the past four years have taught us, it’s that the idea that people need to be at an office from 9 to 5, Monday through Friday, to be productive is an outdated way of thinking. In fact, recent data reveals only 16 percent of white-collar workers feel they’re more productive in the office, compared to 46 percent who said they’re more productive at home. The shift to remote work proved that geographical differences are not the limitations they once were. Broader geography allows companies to access a wider range of talent, all while meeting talent exactly where they’re at. As the CEO of Oyster, a fully remote company of more than 500 employees in 70 countries, I like to think that we’ve truly mastered the art of remote work—but that didn’t come without learning lessons along the way. Here are five of the best lessons to come out of those years.

1. Remote work is not just about the tools—it’s about the rules, too.

In early 2020, at the onset of the pandemic, my co-founder, Jack Mardack, and I had just started our new business: a global employment platform to enable cross-border hiring. While the world was on lockdown, we had to launch, fundraise, and go from no product to a minimum viable product in a matter of months. All during the pandemic; all across two continents; and all over Slack and Zoom. Collaborative tools like Slack, Zoom, Loom, and Notion helped us stay connected during our company’s infancy. But building the “tools and rules” early on allowed us to create necessary structures and internal cultural norms that are still in place today. As we’ve grown across many time zones, the tools and rules have never been more necessary. For example, we branded our way of working as “follow the sun,” meaning that employees should not feel pressured to work “global hours” just because they work for a “global company.” Because we prioritize asynchronous workflows and communication, team members ending their days can hand off work to those starting their days in other parts of the world and trust that it’s being carried forward. This enables teams to be productive no matter the location of their members, helping us collaborate across time zones more effectively, and making it possible for our company to have a 24-hour workday for ultimate customer service around the globe.

2. When it comes to remote, track results, not hours.

“It’s easier to trust people are working if we can physically see them working.” How many times have we read headlines, RTO policies, and quotes from business leaders in the past few years that boiled down to this outdated approach to management? Before I became a remote CEO, I’d spent the majority of my working life in offices with leaders who strongly believed that the more employees were physically in seats, the more productive they were. This overreliance on superficial measures of productivity often resulted in missed opportunities, inefficiencies, and work/life imbalances for employees. Remote work has effectively shifted the focus from hours worked to results achieved. This priority shift is a driving force behind a more logical and satisfying approach to work that’s good for your business and good for your people.
As a CEO, it’s not my job to ensure that people are at their desks by 9 a.m. It’s my job to set the vision and overarching business goals for the company. Instead of focusing on how many hours an employee spends idle in the office messaging app or their number of keystrokes, connect their work to high-level objectives and key results. For example, if you set a company goal of “increasing the share of new business from enterprise companies by 15 percent,” individual teams and departments have a North Star objective that they can track, measure, and work towards. This hasn’t always been my approach, even as a remote CEO. But one of the most important lessons I’ve learned is that when you prioritize alignment, productivity tends to follow.

3. Don’t be afraid to adapt your leadership style.

Your leadership style isn’t set in stone, even if you’ve been an executive for half of your career. When it came to leading a remote company, adapting my leadership style for a virtual environment was necessary. Early on, I learned that we always need to assume good intent from our people. In a remote setting, it’s easy to become paranoid and feel the need to over-communicate to gain a greater sense of control. The right approach is the opposite: hire talent that aligns with your values and mission and trust them to do their best at all times. To counteract the instinct to micromanage in a remote or hybrid environment, leaders should adopt more supportive and coaching-oriented management styles. They should focus more on providing clear objectives and removing obstacles for their teams, rather than hovering and following up with non-stop Slack messages. Manage 1:1s and updates in a flexible but results-oriented way. Get into the habit of giving continuous feedback so your reports know where they stand and how they’re performing.

4. Remote-first doesn’t mean remote only.

There is a popular misconception that remote-first companies have to miss out on the human touchpoints that come with in-person work. One of the most surprising lessons I learned as a remote CEO is how organic these touchpoints can be. While a work environment can be remote-first, it doesn’t—and shouldn’t—mean your organizational culture exists only virtually. At Oyster, meeting people in person is a part of our culture. With colleagues and friends all around the world, you may be surprised by how often our people get together—co-working with colleagues who live in the same city or who are vacationing, or just passing through. Leaders in a fully remote or very distributed company may also find it especially helpful to meet once a year (at least) for executive offsites—both to have the most complex strategic conversations and to build trust within the team. While Oyster will always be a fully remote, very distributed company, none of us are limited by the idea that every interaction, connection, and collaboration must be either entirely virtual or entirely in person.

5. Self-discovery meets personal growth.

One of the most impactful things I’ve seen come out of the remote work revolution is the opportunity to become the most authentic version of ourselves. Gone are the days of keeping up with office dress codes and social pressures. Remote workers can show up every day exactly as they are, allowing them to explore their personal style and interests more freely. Over the years I’ve worked remotely, I saw the most personal growth—both in and outside of my work.
I’ve gleaned a better understanding of my work habits, like when I do my best work (at night, after my kids have gone to sleep), as well as my personal strengths and weaknesses. If we can all reach a deeper level of self-knowledge, we can optimize our work schedules and improve our overall performance and well-being. It’s 2024, and it’s time we all make work work for us. EXPERT OPINION BY TONY JAMOUS, CEO AND CO-FOUNDER, OYSTER @JAMINGO

Friday, November 1, 2024

Why Tech Employees Are Ready to Revolt

The last couple of years have been a shitshow for anyone who works in tech or at a tech-adjacent company. And it’s coming to a head. But before the revolution gets underway, we should look at how we got here. This will be short, but feel free to dive down any of the associated rabbit holes.

Where to Begin?
If I have to pick a starting point—the match on the dumpster fire, so to speak—I’d pick the period just after the period after the pandemic, so let’s call that mid-2022. That was when the free and cheap money started drying up, putting both the consumer and the venture crowd in an immediate pinch. This hit tech startups the hardest, and the earlier the stage of the startup, the harder you were hit. Here’s why. Startups, especially tech startups, have always existed in two different worlds, and they might as well be on two entirely different planets. One is Jupiter, a big gas giant with a storm that’s been raging for centuries. This is the world of unicorns, West Coast VCs, and people leaving Google or Amazon only to fail quickly and then go back to Google or Amazon. The other is Pluto, which is small and dark and icy. You hardly hear about what happens on Pluto, but a lot of entrepreneurs are happy there and making a decent living, sometimes becoming quite successful, but quietly. Now, I know it’s like this because I’ve built or been a part of several startups on both planets. And I can tell you, they have always been far apart, but when the cheap money dried up in 2022, they drifted even further apart. Once AI entered the mainstream in 2023, Jupiter got bigger and closer to the sun, and everyone wanted to live there. Those of us on Pluto shrugged. More space for us. But then the gulf between the two became wider and wider, and suddenly Pluto wasn’t technically even a planet anymore. It happened fast. I stopped writing because things were changing so fast that I couldn’t keep up. When I started writing again, my once-optimistic take on the future had become a cautionary tale, and reading those pieces again now, I was kind of spot-on about the reasons why: the tech industry taking the customer for granted, the infatuation with AI, and the money—across the tech landscape—losing its taste for innovation and instead hopping on any bandwagon it could find. As always, change, good and bad, impacts the startup world(s) first; then it comes for big tech.

The Artificial Elephant in the Room
Yes, I could blame AI for 99 percent of the problems in the tech industry right now. But I also can’t help feeling that AI is a symptom here, not a cause. When I said that scared money was “hopping on any bandwagon” in the last section, I probably could have said “hopping on the AI bandwagon.” Now look, do not get me wrong. I helped start this fire, and I am not a Luddite or a reformed nerd in any sense. I just know that when these major technical tectonic shifts happen, well, people go nuts. I’m old enough to have lived through a couple of these, starting with the capital-I Internet, and we react the same way every time, so, ultimately, we are the problem. By that, I mean that AI is neither savior nor villain. It’s not the answer to everything, and it’s not going to have us bowing to our AI overlords any time soon. But opportunists on either side won’t let nuance stop them from profiting. And the rewards and risks of AI are so big that the natural amplification of every advancement in the tech is deafening. This is crushing tech workers in two ways. It’s turning them into villains.
And, oddly enough, the vilification is bouncing off of the Sam Altmans and the Elon Musks and landing squarely onto the unsuspecting head of Jane Programmer. Jane Programmer is increasingly being seen as expendable, regardless of what she does or how much experience or talent she has. We hate you now. And also, you’re fired.

Doomsday Prepping for a Day That May Never Come
So… tech companies are cutting tech employees across the board, and it’s almost like a fire sale in reverse. It’s so cheap, and no one is going to know. So why not just pull the trigger? Did rampant over-hiring happen in the cheap-money era? Absolutely. Will those folks be missed? Some of them. Most of them? Eh. Can AI do the work of all kinds of tech workers, maybe hundreds of them at once? Not now. Could it someday? People are saying there’s a chance. Hey, can we do this layoff thing surreptitiously by requiring everyone to come back to the office, even if it means our newly hired CEO will have to get on a private jet every week? Pregnant pause. And then you wake up one day and realize there is a consensus building that AI can do everything. Everything from your job to your co-worker’s job to the job of the person who is supposed to be buying the thing you work on. Then AI can hire all the replacements, and only the ones it needs. You realize this is not true; it’s just another greater fool theory. But it takes so much nuance to explain why that you just shrug. You don’t have the time. You need to figure out what’s next for you. And quick. I have the time. Well, I don’t, but I write fast. And now that we’re on the same page, please follow me as I document what happens next, starting with the Great Tech Worker Revolution of… 2025? EXPERT OPINION BY JOE PROCOPIO, FOUNDER, JOEPROCOPIO.COM @JPROCO