Wednesday, July 31, 2024

AI Fears Are Fueling Gen-Z's Doubts About Office Jobs

Gen-Zers don't see white-collar careers as a safe bet. More than half of prospective Gen-Z employees in the U.S. worry about being replaced by artificial intelligence if they pursue office jobs, according to a recent study by the Edmonton, Canada-based business management platform Jobber. Their fears aren't unfounded--44 percent of companies surveyed in a Resume Builder report last year said they'd lay off workers because of AI in 2024. "In the face of advancing AI technology, Gen-Z no longer equates a white-collar profession with job security," the Jobber study said. The study's 1,000 respondents, who were all American students ages 18 to 20, also pointed to economic challenges and outsourcing as some of their top concerns about office jobs. Jobber CEO and co-founder Sam Pillar says that for generations, opportunity used to lie in getting a college degree and climbing the corporate ladder. But now, "that path is not yielding the outcome that was promised," he says. "This ... hangover opinion from past generations just doesn't reflect the reality on the ground today," says Pillar. Young people concerned about AI should consider blue-collar work instead, he says, as skilled workers are less vulnerable to being replaced by the technology. In fact, AI will benefit those who want to create small businesses in those industries, he notes, because the technology will democratize access to entrepreneurship. "You can use the tools and the automation and the AI to handle a lot of that back office administration for you," Pillar says. "[It gives] you a leg up so that you have an opportunity to be successful." But while Gen-Zers are more likely than older cohorts to start a company after graduating, few Gen-Z students currently consider this as an option, according to the Jobber study. Of those surveyed who said they have already decided on a career, 68 percent said they have chosen to pursue an office job.

Monday, July 29, 2024

What Is CrowdStrike, the Cybersecurity Company Behind the Global Tech Chaos?

Late last Thursday night on the U.S. East Coast, reports began trickling out that PC-based systems were not functioning. Flights were grounded, the U.K. health system had to pause certain operations, and emergency services were cut off. Around the globe, people experienced what's known as the blue screen of death, a dreaded error message against a blue background indicating the system was not functioning. It soon became clear that there was an issue with an update to CrowdStrike cybersecurity software for Windows users. CrowdStrike co-founder and CEO George Kurtz posted on X early Friday morning that it was not a cyberattack and that "the issue has been identified, isolated and a fix has been deployed." Soon after, a visibly tired Kurtz appeared on Today to say he was "deeply sorry" for the disruptions and that the company was working with clients to get systems back online. Host Hoda Kotb noted that computers at NBC's studios had been affected. Austin-based CrowdStrike was founded in Sunnyvale, California, in 2012 by Kurtz, Gregg Marston, and Dmitri Alperovitch. Kurtz and Alperovitch had previously worked together at antivirus software company McAfee; Marston had been CFO of Foundstone, an IT company Kurtz co-founded that McAfee acquired. At the time, cybersecurity software was focused on detecting viruses and malware, but CrowdStrike's founders took the then-novel approach of tracking the hackers behind the intrusions. Their system was "based on robust machine-learning infrastructure and artificial intelligence that looks for behavioral attack patterns and indicators of attack to identify bad actors," Kurtz told Inc. in 2016. Systems like McAfee's were also slow because the software scanned a person's machine each time they turned on the computer -- a process that could take 15 minutes. CrowdStrike's system was cloud-based, meaning it was "lightweight and nimble" and didn't slow down a user's computer, Kurtz said. Today, CrowdStrike's signature product is the cloud-based Falcon platform that works across a company's IT systems and continuously monitors for threats such as malware or unauthorized access. "Always staying ahead of the adversary is a tall task," Kurtz said on Today. To respond to new threats, CrowdStrike regularly sends out software updates. Clearly, something went awry in the most recent update -- it was a "weird interaction" with Windows systems, as Kurtz called it. Mac and Linux users were not affected. CrowdStrike was No. 144 on the Inc. 5000 list of the fastest-growing companies in America in 2016, and appeared on Inc.'s list of the best-led companies in America in 2021. It went public on Nasdaq in 2019. Major corporations and governments often call in CrowdStrike for incident response after they've been hacked. The company made headlines when it was tapped to investigate the hacks of Sony Pictures in 2014 and the Democratic National Committee in 2016. By mid-morning on Friday, systems were coming back online, but the reputational damage to CrowdStrike may be hard to shake. The incident raises questions about how a routine software update could cause so much havoc. "This is a very, very uncomfortable illustration of the fragility of the world's core internet infrastructure," Ciaran Martin, the former chief executive of Britain's National Cyber Security Centre, told The New York Times.

Friday, July 26, 2024

5 Steps That OpenAI Thinks Will Lead to Artificial Intelligence Running a Company

Earlier this month, Bloomberg reported that OpenAI had defined five distinct stages of innovation in AI, from rudimentary chatbots to advanced systems capable of doing the work of an entire organization. These stages could inform OpenAI's future plans as it works toward its ultimate goal of building artificial general intelligence, an AI smart and capable enough to perform all of the same work as a human. According to the Bloomberg report, OpenAI's leaders shared the following five stages internally with employees in early July during an all-hands meeting: Stage 1: "Chatbots, AI with conversational language" Stage 2: "Reasoners, human-level problem solving" Stage 3: "Agents, systems that can take actions" Stage 4: "Innovators, AI that can aid in invention" Stage 5: "Organizations, AI that can do the work of an organization" On July 23, the company posted briefly about the topic on X: "We are developing levels to help us and stakeholders categorize and track AI progress. This is a work in progress and we'll share more soon." Olivier Toubia, the Glaubinger Professor of Business at Columbia Business School, believes the five steps more closely resemble a plan to make human workers obsolete than a roadmap to artificial general intelligence. With the exception of reasoning, he says, all of the outlined stages are more focused on business uses than they are on the actual science. Toubia broke down what entrepreneurs need to know about OpenAI's five stages: Stage 1: Chatbots Bloomberg reported that OpenAI told employees that the company is still currently on the first stage, dubbed Chatbots. This stage is best exemplified by OpenAI's own ChatGPT, which shocked the world with its ability to converse in natural language when it was released in late 2022. Many organizations are using chatbots to enhance their internal productivity, Toubia says, while others are using the tech to power outward-facing customer service bots. While these chatbots may seem superhumanly smart at first glance, they're smoother talkers than they are operators. Chatbots will often make up and present false information with full confidence, and unless they've been set up to retrieve info from a business's data center, they don't have much commercial utility. Even Sam Altman has referred to the current iteration of ChatGPT as "incredibly dumb." Stage 2: Reasoners OpenAI told employees that it is close to creating AI models that could be classified in its second stage: Reasoners. According to Bloomberg, Reasoners are systems "that can do basic problem-solving tasks as well as a human with a doctorate-level education who doesn't have access to any tools." Last week, Reuters reported that OpenAI is currently at work on a new "reasoning" AI model, code-named Strawberry, focused on capabilities like being able to plan ahead and work through difficult problems with multiple steps. Reuters reported that leaders in the AI space believe that by improving reasoning, their models will be empowered to handle a wide variety of tasks, "from making major scientific discoveries to planning and building new software applications." Stage 3: Agents OpenAI doesn't believe that innovation in artificial intelligence has reached the Agent stage, which it refers to as "systems that can take actions on a user's behalf." Outside of OpenAI, much has been made of the potential value of digital workers who can operate autonomously, but few companies have wholeheartedly embraced the concept of AI Agents.
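For readers wondering what separates an "Agent" from a chatbot in practice, the pattern most agent frameworks share is a loop: the model proposes an action, ordinary software executes it, and the result is fed back until the task is done. Below is a minimal, hypothetical sketch of that loop in Python -- it is not OpenAI's design, and the tool names and the scripted stand-in model are invented for illustration.

    # Hypothetical sketch of an "agent" loop: an AI model chooses actions
    # (tool calls) on a user's behalf instead of only answering in text.
    # The "model" here is a scripted stub so the example runs on its own.
    def stub_model(task, history):
        # A real system would send the task and history to an AI model and
        # parse its reply; this stub simply scripts two steps.
        if not history:
            return {"action": "look_up_flight", "args": {"person": "mom"}}
        return {"action": "finish", "args": {"answer": "Flight AA123 lands at 6:40 p.m."}}

    TOOLS = {
        "look_up_flight": lambda person: f"{person}'s flight: AA123, lands 6:40 p.m.",
    }

    def run_agent(task):
        history = []
        while True:
            step = stub_model(task, history)
            if step["action"] == "finish":
                return step["args"]["answer"]
            observation = TOOLS[step["action"]](**step["args"])
            history.append((step, observation))  # feed the result back to the model

    print(run_agent("When does my mom's flight arrive?"))

The point of the sketch is the division of labor: the model only decides what to do next, while the surrounding code actually does it -- which is also why agents raise new questions about what they are allowed to touch.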
Lattice, a popular HR software provider, recently announced plans to onboard AI Agents directly into a company's org chart, but scuttled the idea after online backlash. "From what I understand," says Toubia, AI Agents "could replace you for a few days when you go on vacation." Such an Agent would act as a proxy for vacationing employees, picking up the slack, completing simple tasks, and keeping vacationers updated on what happened while they were away. "I have a bit of a cynical view on this one," Toubia says, "people will welcome an Agent that's going to let them go on vacation more often, but based on the next steps, the goal is not just to replace you when you go on vacation, it's to replace you altogether." Stage 4: Innovators According to Bloomberg, Innovators refers to "AI that can aid in invention." In some ways, Toubia says, AI Innovators are already here. They're helping people generate ideas, write code, and create art. "With a bit of guidance," he says, "you can get ChatGPT to come up with ideas for a new app or a new digital product, and then create code and promotional materials." Because of this, Toubia predicts that Innovators, as defined by OpenAI, will mostly come in the form of AI systems specifically developed to help prototype, build, and manufacture physical products. Stage 5: Organizations In OpenAI's proposed final stage of artificial intelligence innovation, AI systems will become advanced and smart enough to do the work of an entire organization. Toubia says this should be a wake-up call for managers, who may have previously considered themselves safe from being replaced by AI, adding that even a company's founders could be considered expendable if the system finds that they're standing in the way of true efficiency. Toubia worries that by classifying Organizations as the final step in its "roadmap to intelligence," OpenAI may be tipping its hand regarding its ambitions. "This really seems to be a roadmap toward taking over the world," he says, "replacing complete organizations and making humans obsolete in the process." Going forward, he says, it may be CEOs who need to justify their paychecks.

Thursday, July 25, 2024

AI in Business: Maximizing Gains and Minimizing Risks

Over half of CEOs recently surveyed by Fortune and Deloitte said that they have already implemented generative artificial intelligence in their business to increase efficiency. And many are now looking to generative AI to help them find new insights, reduce operational costs, and accelerate innovation. There can be a lot of relatively quick wins with AI when it comes to efficiency and automation. However, as you seek to embed AI more deeply within your operations, it becomes even more important to understand the downside risk. In part, that's because security has always been an afterthought. Security as an afterthought In the early days of technology innovation, as business moved from standalone personal computers to sharing files to enterprise networks and the internet, threat actors moved from viruses to worms to spyware and rootkits to take advantage of new attack vectors. The industrialization of hacking accelerated the trajectory by making it possible to exploit information technology infrastructure and connectivity using automation and evasion techniques. Further, it launched a criminal economy that flourishes today. In each of these phases, security technologies and best practices emerged to address new types of threats. Organizations added new layers of defense, often only after some inevitable and painful fallout. More recently, internet of things devices and operational technology environments are expanding the attack surface as they become connected to IT systems, out to the cloud, and even to mobile phones. For example, water systems, medical devices, smart light bulbs, and connected cars are under attack. What's more, the "computing as you are" movement, which is now the norm, has further fueled this hyperconnectivity trend. Organizations are still trying to understand their exposure to risk and how to build resilience as pathways for attackers continue to multiply and create opportunities for compromise. Risk versus reward The use of AI adds another layer of complexity to defending your enterprise. Threat actors are using AI capabilities to manipulate users into circumventing security configurations and best practices. The result is fraud, credential abuse, and data breaches. On the flip side, AI adoption within enterprises also brings its own inherent and potentially significant risks. Users can unintentionally leak sensitive information as they use AI tools to help get their jobs done. For instance, they might upload proprietary code to an AI-enabled tool to help identify bugs and fixes, or company-confidential information for help summarizing meeting notes. The root of the problem is that AI is a "black box," meaning there's a lack of visibility into how it works, how it was trained, and what you are going to get out of it and why. The black box problem is so challenging that even the people developing tools using AI may not fully understand all that it is doing, why it is doing things a certain way, and the tradeoffs. Business leaders are in the tough position of trying to decide what role AI should play in their business and how to balance the risk with the reward. Here are three best practices that can help. 1. Be careful what data you expose to an AI-enabled tool. Uploading your quarterly financial spreadsheet and asking questions to do some analysis might sound innocuous. But think about the implications if that information were to get into the wrong hands. Don't give anything to an AI tool that you don't want an unauthorized user accessing. 2. Validate the tool's output.
AI hallucinates, meaning it confidently produces inaccurate responses. There have been numerous media reports and academic articles on the subject. I can point to dozens of examples personally as I've experimented with AI tools. When you ask an AI tool a question, it behooves you to have a notion of what the answer should be. If it's not at all what you expected, ask the question another way and, as an extra precaution, go to another source for validation (a brief sketch of how the first two practices might look in code appears below). 3. Be mindful of which systems your AI-enabled tool can hook up to. The opposite side of the first point is that if you have AI-enabled tools operating within your environment you need to be aware of what other systems you're hooking those tools up to, and, in turn, of what those systems have access to. Since AI is a black box, you may not know what is going on behind the scenes, including what the tool is connecting to as it performs its functions. There's a lot of optimism and excitement about the potential upside for enterprises that embrace AI. Fortunately, the past has shown that security is integral to reaping the positive impact of new technologies and processes that are brought into the enterprise. In the rush to capitalize on AI, get ahead of the security risks by committing yourself to understanding the tradeoffs and making informed decisions. EXPERT OPINION BY MARTIN ROESCH, CEO, NETOGRAPHY @MROESCH
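To make the first two practices above concrete, here is a minimal sketch in Python. It assumes nothing about any particular vendor's API: the redaction patterns are deliberately simple examples, and the stand-in query function exists only so the sketch runs on its own.

    import re

    # Practice 1: redact obvious secrets before text leaves your environment.
    PATTERNS = [
        (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
        (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[REDACTED-EMAIL]"),
    ]

    def redact(text):
        for pattern, label in PATTERNS:
            text = pattern.sub(label, text)
        return text

    # Practice 2: check the answer against what you expect before relying on it.
    def validated_answer(ask_ai, prompt, expected_terms):
        answer = ask_ai(redact(prompt))
        missing = [t for t in expected_terms if t.lower() not in answer.lower()]
        if missing:
            raise ValueError(f"Answer is missing {missing}; verify with another source.")
        return answer

    # Stand-in for a real AI call, so the sketch is self-contained.
    stub_ai = lambda prompt: "Q3 revenue grew 12 percent year over year."
    print(validated_answer(stub_ai, "Summarize Q3 results for jane.doe@example.com", ["revenue"]))

Real deployments would use far more thorough redaction and review, but the shape of the check is the same: nothing sensitive goes out, and nothing unverified comes back in.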

Monday, July 22, 2024

A recent AI pivot at an accounting giant has some people thinking that AI's invasion of the job market is well underway.

Fears that AI will steal jobs were given fresh life on Wednesday, when accounting giant Intuit announced it would lay off 1,800 employees as part of an AI-centered reorganization. The cuts will affect 10 percent of workers at the company, which owns tax software TurboTax and accounting software QuickBooks. In a memo sent to staff, Intuit CEO Sasan Goodarzi noted that aligning the business with AI will make it competitive as technological change sweeps the economy. "Companies that aren't prepared to take advantage of this AI revolution will fall behind and, over time, will no longer exist," he wrote. The layoffs, which will be completed in September, are not a result of economic hardship, according to Goodarzi, who maintained that Intuit is "in a position of strength" financially. (Laid-off employees will receive at least 16 weeks' severance and a minimum of six months' health insurance coverage.) Rather, Goodarzi cited poor performance as the motivating factor in laying off 1,050 of the company's 1,800 employees. "We've significantly raised the bar on our expectations," he wrote. Goodarzi added that the company would replace departing staff at a rate of 1:1 by creating new roles bolstered by generative AI tools. "We will hire approximately 1,800 new people primarily in engineering, product, and customer-facing roles such as sales, customer success, and marketing," the CEO said in the memo. In an email to Inc., a company spokesperson said that the layoffs are "about increasing investment in key growth areas: Gen AI, money movement, mid-market expansion, and international growth." Intuit's shift to AI-oriented labor is happening amid fears that the technology could displace droves of workers. According to a June survey conducted by Duke University and the Federal Reserve Banks of Richmond and Atlanta, two-thirds of the American CFOs who responded said their companies are looking to replace human workers with some kind of automation. Over the last year, 60 percent of the 450 companies surveyed said they have "implemented software, equipment, or technology to automate tasks previously completed by employees." AI software, which is often used to produce text, audio, and images on demand, is increasingly viewed by company leaders as essential to competitiveness. Last August, Erik Brynjolfsson, a professor at the Stanford Institute for Human-Centered AI, spoke to The New York Times about a shift in thinking regarding AI capabilities. "To be brutally honest, we had a hierarchy of things that technology could do, and we felt comfortable saying things like creative work, professional work, emotional intelligence would be hard for machines to ever do," he said. "Now that's all been upended." Additional data shared with Inc. indicates that startups are turning to OpenAI's text generation tool, ChatGPT, in lieu of hiring freelancers on gig work sites, such as Fiverr, Upwork, and Toptal. According to an unpublished survey by the accounting firm Kruze Consulting, the number of startups paying for enterprise versions of ChatGPT has exploded since 2023. Nearly two-thirds of the companies on Kruze's client list of 550 VC-backed startups are paying for the service, Kruze reports, whereas the average spend on freelance copywriters has plunged by 83 percent since November 2022. "Basically, startups aren't spending money on outsourced marketing -- mainly writing -- now that they can use AI," Healy Jones, VP of financial strategy at Kruze, said in an email to Inc.
While recent news and figures make for grim reading for freelance copywriters, data indicate that AI's incursion into other roles has been less dramatic, at least for now. A recent survey by researchers at the Massachusetts Institute of Technology found that companies could only replace 23 percent of wages paid to human workers with AI tools performing the same jobs. Researchers determined this by assessing the current cost of using AI models to perform certain tasks, and then comparing that cost with compensation for human workers. "This is not something where all of the jobs are replaced immediately," Neil Thompson, director of MIT's FutureTech research project, said in a press release last month. Nonetheless, layoffs at high-profile companies like Intuit will signal that a technological tipping point is near -- and along with it, more fallout for workers.

Friday, July 19, 2024

We All Know AI Can't Code, Right?

If anyone is telling you that AI can code what you need coded and build what you need built, they are lying to you. This is not speculation. This is not bombast. This is not a threat. We know enough now about how AI works, and especially GenAI, to be able to say this with confidence. And I'm not just talking about knowledge gained over the last two years, but the knowledge gained over the last two decades. I was there at the beginning. I know. For a lot of you, I'm telling you something you already know as well. But your work here is far from over. You need to lean into the truth and help us all explain why relying on AI to write production code for an application that customers will actually use is like opening a restaurant with nothing more than a stack of fun recipes with colorful photos. They look great on paper, but paper doesn't taste very good. The Boring Structural Work Matters To put this into a perspective that everyone can understand, let me ask you a question: Q: How would you know if this article was written by AI? A: Because it would suck. Yeah, maybe the bots could imitate my vibe, adopt my writing tics, and lean into the rule of threes as I often do, but even then, the jury is still out on how closely they can replicate my style beyond a sentence or two. Banana. Screw you, AI. The thing I'm 100 percent sure AI can't do is take my decades of experience in the topics I choose -- topics that need to be timely across an ever-changing technical and entrepreneurial landscape -- and use my snarky words and questionable turns of phrase to put insightful, actionable thoughts into the heads of the maximum number of people who would appreciate those thoughts. That's structure. It's foundational. It's boring. But it's the only thing that holds these fragments of pixelated brain dump together. Look, if you want to write about a technical or entrepreneurial topic, you either need to a) spend a lifetime doggedly nerding down those paths with real-world, real-life stakes and consequences, or b) read a bunch of articles written by people who have done just that and then summarize those articles as best you can without understanding half of what those people are actually talking about. Which one sounds more like AI, a) or b)? Now let's talk about how that relates to code, because hopefully you can already see the connection. AI Is Not an Existential Threat Real coders know. The threat that AI presents to your average software developer is not new. Raise your hand if you've ever used GitHub or Stack Overflow or any other kind of example code or library or whatever to help you get started on the foundational solution to the business problem that your code needs to solve. Now, put your hand down if you've never once had to spend hours, sometimes days, tweaking and modifying that sample code a million times over to make it work like you need it to work to solve your unique problem. OK. All of you who put your hands down. Get out of the room. Seriously. Go. We can't have a serious discussion about this. Cheap, flawed, technical-debt-inducing, easily breakable code has been a threat to software developers since they first started letting us kids bang on Basic -- let alone the threat of any technology solution that ends with the word "-shoring". The AI threat just seems existential because of the constant repetition of a few exaggerated truths. That it's "free," that it's "original," and that it "works." Here's why that's going to be a race to failure. Position yourself.
"AI" "Can" "Code" That's the most judgy, snarky, douchey section header I've ever written. But in my defense, there's a reason why every word is in quotes. Because this is how the lie propagates. Yes, what we're calling AI today makes an admirable attempt at slapping syntax together in a way that compiles and runs. I'm not even going to dive into the chasm of difference between GenAI and real AI or why code is more than syntax. But I will point to the fact that -- even beyond those quibbles -- we're not at anything I'd call viable yet. Damning words from an IEEE study follow: [ChatGPT has] a success rate ranging from anywhere as poor as 0.66 percent and as good as 89 percent -- depending on the difficulty of the task, the programming language, and a number of other factors. I'll let you determine how "difficulty," "programming language," and "other factors" impacts the success rate. Quotes again. Sorry. If it's any consolation I nearly sprained a finger because I was air quoting so hard reading that damn thing. A conclusion of the study (italics are mine): "ChatGPT has not been exposed yet to new problems and solutions. It lacks the critical thinking skills of a human and can only address problems it has previously encountered." So much like my example of why AI-generated articles suck, if you're trying to solve new problems by inventing new solutions, AI has zero experience with this. OK, all you "ChatGPT-4o-is-Neo" bros can come at me now. But it isn't just the syntax where AI has problems. Aw, AI Came Up With This All by Itself Code in a vacuum is worthless. Every software developer reading this just went, "Yup." Beyond all the limitations that AI exposes when it creates syntax out of "thin air" (or to use the technical term, "other people's code"), deeper problems start to expose themselves when we try to get the results of that code into a customer's hands. Code without design, UI, UX, functional requirements, and business requirements is a classroom exercise in futility. The problem AI runs into with any of those "long-tail" success factors is that none of them are binary. Zero. So, for example, Figma had to temporarily pull back on its AI design feature when it was alleged that its AI is just copying someone else's design. "Just describe what you need, and the feature will provide you with a first draft," is how the company explained it when the feature launched. I can do that without AI. I can do that with cut and paste. Figma blamed poor QA. Which one sounds more true? AI Is Great at a Lot of Things But not elegance. If your code is not infused with a chain of elegance that connects the boring structural-solution work to the customer-facing design and UX, you can still call it "code" if you want to, but it will have all the value of an AI-generated avatar reading aloud AI-generated content over AI-generated images. Have you ever seen that? It'll stab you in the soul. There's a right way to do things and there's a way to do things well, and I'm not naive enough to rail against the notion that sometimes you just can't do both. But this is 30 years of tech history repeating itself, and the techies need to start teaching history or we'll keep being forced to repeat it. So I'd ask my software developer friends to raise your hand if you've ever had to come in and fix someone's poorly structured, often broken, debt-laden, and thoroughly inelegant code. OK. Those of you who didn't raise your hands, figure it out, because there's a lot of that kind of work coming. 
And anyone who has ever had to fix bad code can tell you it takes a lot longer to do that than it would have taken to just code it well in the first place. EXPERT OPINION BY JOE PROCOPIO, FOUNDER, TEACHINGSTARTUP.COM @JPROCO

Wednesday, July 17, 2024

Google's Gemini AI Is Making Robots Smarter

While skeptics may say the AI revolution looks like it's all about chatbots and amazing, if weird, digital artwork creation, there's actually a lot more going on behind the scenes. Google just delightfully demonstrated the depth of the technology's promise with its AI-powered robots. As part of a research project, some distinctly nonhuman-looking robots are roaming the corridors of Google's DeepMind AI division, busily learning how to navigate and interact with the Googler employees they encounter. The experiment gives us a tantalizing glimpse of what the machines that might make up a looming robot revolution will be capable of when they're put to work in our homes, factories and offices--in their millions and billions, if you believe some prominent futurists. In a research paper, Google explains it's been examining "Multimodal Instruction Navigation with Long-Context VLMs and Topological Graphs." Behind the tech jargon, it all boils down to using AI to move around the spaces in Google's offices and interact with humans using "long context" prompts. The long context bit is very important--it relates to how much information the AI model, Gemini 1.5 Pro, can take in and process in one input session using natural language. Essentially it's about giving the robot a sense of context, so it can remember lots of details about its interaction with people and what they've said or asked the robot to do. Think about how you ask a very simple AI like Amazon's Alexa a question, only to realize that a moment later she's "forgotten" it, and can't carry on a human-like conversation--this is part of what the Google experiment is tackling. In videos documenting the project, Google shows some examples of how the AI-powered robots function in the workplace, website TechCrunch notes. One example shows the robots being asked by a user to take him somewhere he can draw something--after a moment the robot matches up this request with what it knows about objects that can be drawn on, and where they are, and it leads the Googler to a whiteboard. Though it sounds simple, this is actually a higher level of reasoning that's much more human-like than many earlier AI/robot systems have been capable of. The Alexa example is good here again: Alexa is clever, but only understands very specific commands, and if you've used her natural language system you'll have encountered Alexa's very limited reasoning when she complains she doesn't understand, until you tweak your wording. Another part of the Google project involved teaching the robots about the environment they were going to be navigating. While earlier robot systems may have been trained using very precisely input maps of office or factory floors, or even being initially tele-operated around the space by a human so their sensors learn the layout of their surroundings, the new Google robots were trained by having their AI "watch" a walkthrough video made on a smartphone. The video showed that the AI bot could identify objects, like furniture or electrical sockets, remember where they are, and then reason about what a user meant when they asked the robot to, for example, help them charge their smartphone. Or, demonstrating even more smarts, they knew what to do when a user asked for more of "this," pointing to soda cans on the person's desk, and knowing it should go and check the office fridge for supplies.
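As a rough illustration of what the paper describes, the robot can be thought of as pairing a long-context request with a topological map of labeled places and picking the node that best satisfies it. The Python sketch below is a simplification, not DeepMind's code: the places, labels, and keyword matching stand in for what the Gemini model actually does with video and language.

    # Simplified sketch: a topological map of an office as a graph whose nodes
    # carry labels gleaned from a walkthrough video. A real system uses a
    # long-context multimodal model; here a simple label match stands in.
    PLACES = {
        "desk_area":  {"labels": {"desk", "soda can", "laptop"}, "neighbors": ["hallway"]},
        "hallway":    {"labels": set(), "neighbors": ["desk_area", "whiteboard", "kitchen"]},
        "whiteboard": {"labels": {"whiteboard", "markers"}, "neighbors": ["hallway"]},
        "kitchen":    {"labels": {"fridge", "soda cans"}, "neighbors": ["hallway"]},
    }

    def choose_goal(request_keywords):
        # Stand-in for the model's reasoning: pick the place whose labels
        # overlap most with what the user asked about.
        return max(PLACES, key=lambda p: len(PLACES[p]["labels"] & request_keywords))

    def shortest_path(start, goal):
        # Breadth-first search over the topological graph.
        frontier, seen = [[start]], {start}
        while frontier:
            path = frontier.pop(0)
            if path[-1] == goal:
                return path
            for nxt in PLACES[path[-1]]["neighbors"]:
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])

    # "Take me somewhere I can draw something."
    goal = choose_goal({"whiteboard", "draw"})
    print(goal, shortest_path("desk_area", goal))

The hard part, and the reason the long context matters, is everything this sketch waves away: turning raw video and an open-ended request into those labels and that choice in the first place.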
While Google's robots in the video are very artificial looking (the machines themselves were actually left over from an earlier research project, TechCrunch noted) and there is a definite delay issue, with up to a minute of "thinking time" between the robot receiving the request and acting on it, Google's project is still an exciting potential preview of what's to come. It tracks with recent news that a different startup, Skild, has raised $300 million in funding to help build a universal AI brain for all sorts of robots. And it supports thinking by robot tech enthusiasts like Bill Gates, Jeff Bezos and Elon Musk who are certain that we'll all be buying AI-powered humanoid robots pretty soon, ready to welcome them into our homes and workspaces. That has been a promise made every year since the mid-20th century, though. Remember Robby the Robot? He'd have some pithy things to say about Google's spindly, slow-thinking robots. BY KIT EATON @KITEATON

Monday, July 15, 2024

Warnings About an AI Bubble Are Growing. When Could It Burst?

As long as the AI goldrush, or arms race, or revolution -- whatever you'd like to call it -- has been surging, so too has speculation that the billions in investment are fueling a massive bubble on par with the dot-com bust. Those warnings are growing louder. On Tuesday, James Ferguson, founding partner of the MacroStrategy Partnership, a macroeconomic research firm based in the U.K., offered a grim assessment of where the biggest thing in tech is headed: "Anyone who's sort of a bit long in the tooth and has seen this sort of thing before is tempted to believe it'll end badly," he said on Bloomberg's Merryn Talks Money podcast. Sequoia Capital, bullish on AI since its breathless early days, is also sounding the alarm. Last month, one of the firm's partners, David Cahn, wrote that the industry needs to generate $600 billion in annual revenue to sustain itself. Last September, he estimated the number at $200 billion. (OpenAI, by far the biggest player in the sector, had annualized revenues of $3.4 billion, an Information report found in June.) Goldman Sachs has also cast doubts on generative AI: Recently, the investment bank published a report called "Gen AI: Too Much Spend, Too Little Benefit?" Ironically, the report followed the rollout of generative AI tools across the company's workforce. The stock price of chipmaker Nvidia has surged more than 200 percent in the past year, boosting the value of the company to over $3 trillion. Tech stocks also rallied in 2023, largely based on the AI hype wave. Total venture investment in AI startups neared $50 billion in 2023, even as broader investment slumped to its lowest level in five years, at $285 billion globally. As Greg Hill, managing partner at Parkway Ventures told Inc. earlier this year: "The majority of companies are incorporating AI into their pitch decks" in an effort to attract venture dollars, even if AI isn't their core product. If AI is in a bubble, then the obvious next questions are how the bubble will burst and how many casualties there will be. While definitive answers are in short supply, Gayle Jennings-O'Bryne, CEO and general partner at VC firm Wocstar, offers an assessment on how the AI market got here and the issues pointing to inevitable fallout. Observing the cash thrown at various AI startups, Jennings-O'Bryne believes that some venture capitalists don't have "a real appreciation of the capital intensive nature of the AI technology that is being built right now." Large Language Model development, which is made possible by energy-draining server farms, is massively expensive. So far, many of the startups dependent on the process are short of viable business models. "The mindset of VCs, versus the reality of what these business models and companies are going to look like, [is] just going to propel what we're calling a bubble," she explains to Inc. The financial disconnect was further put in concrete terms by Ferguson of the MacroStrategy Partnership. He notes that Nvidia can't sustain the entire industry's growth on its own, especially when generative AI is still prone to hallucination: "Forget Nvidia charging more and more and more for its chips, you also have to pay more and more and more to run those chips on your servers. And therefore, you end up with something that is very expensive and has yet to prove [itself] anywhere really, outside of some narrow applications." The AI space is saturated with many startups that don't actually build their own AI technologies. 
"People are jumping on the AI bandwagon thinking that money will come because they somehow have incorporated AI into their business model. But in reality, what they may have done is just put a bit of AI functionality as a wrapper to a more traditional business model," Jennings-O'Bryne argues. While there is funding for AI startups, many of the recently developed tools and chatbots seem redundant. "What's the last [AI] product that wasn't a Q&A, or a chatbot, or coding based?" Phil Calçado, founder of the NYC-based AI coding startup Outropy, recently asked Inc. As VCs race to fund AI startups, Jennings-O'Bryne argues that other companies with even more compelling technologies could be ignored, and therefore languish. "What's happening is that all the other non-AI companies... they're not the darling of the market right now. So they're not getting the attention or the capital to grow. But there's some really good technology and really good businesses being built," she explains. So when could the bubble burst and what might it look like? You can start with the death of many startups and investors losing out on their bets. Jennings-O'Bryne believes the picture will only become crystal clear in around four to five years. "Within two years, we'll start to see some pressure, because investors have gotten more comfortable with asking for profits and returns and revenue and seeing those traction metrics," she says. Eventually, investors will want to see sustainable business models that yield profits. Jennings-O'Bryne says, "I'm thinking that [the] bust, if you will, is probably going to be four or five years out."

Friday, July 12, 2024

AI Can Do Much More Than Automate Resume Review for HR

As a CEO who interacts with artificial intelligence daily, I've experienced firsthand how, when thoughtfully implemented, AI amplifies human strengths. For me, AI unlocks new perspectives, improves communication, and increases productivity. Sometimes it feels like a collaborative partner that helps me consider angles I'd miss alone. The promise of AI is vast, but so are the apprehensions. And, as an HR tech founder, I understand the concerns on a human level. While AI helps my work, its value for leaders and HR professionals depends wholly on implementation. The "how" is always more important than the "what." AI will revolutionize the way HR professionals do their jobs--but perhaps not in the way you think. Filling in the knowledge gaps My company, Oyster, helps businesses access talent across borders, and the biggest questions we get are centered around a need for data and insights. Customers often ask questions like, "What should we pay this senior engineer in Morocco?" or "I'm a U.S. company, what do I need to know about payroll in France?" or even "What should my contract include to stay compliant when hiring in Brazil?" There's a lot to think about when engaging a multinational workforce, with compensation and compliance-related questions likely to be at the top of that list. Getting this right used to mean in-house professionals compiling data from many sources. Now, AI can effectively complement human intelligence by filling in knowledge gaps through things like data analysis and aggregation, information retrieval, and natural language processing. AI technology can cull through vast amounts of information quickly and efficiently, identifying patterns, trends, and insights that may not be immediately apparent to humans. This capability helps inspire new approaches and understanding of complex phenomena from region to region by empowering humans with the time to think strategically, unencumbered by the burden of rote data entry and analysis. Think average time off, bonus pay, and other region-specific nuances. Benchmarking salary, for example, is the next era in something Oyster calls compensation intelligence--the ability to tap into salary-related data sets from all over the world to create fair and competitive compensation packages based on market compensation data and job level. Scalable compliance AI will also increase efficiency and unlock a new frontier of scalable, HR compliance. Compliance at scale can be tricky. With each country and jurisdiction having unique laws and regulations, the best compliance use case for AI is one that helps companies navigate labor policies, employee benefits, taxes, insurance, and more. Companies with a cross-border workforce will benefit from technology and partners that leverage technology to thoughtfully analyze up-to-date employment rules and regulations and apply that intelligence to processes like contract review and compliance validation. Offering a level of protection for organizations that might not have the funds to staff large in-house counsel teams or the bandwidth to engage external firms, AI can enable HR teams to focus more on the human side of their work. Making more informed decisions It's all too easy for workers to assume that the future of AI in HR will be largely based on making the important decisions of which applicants get interviews and which interviews turn into hires. HR pros are already deploying AI to help with job descriptions, document generation, interview note-taking, and employee-facing chatbots. 
While these use cases demonstrate some key benefits for improving process and efficiency, they're not quite revolutionary. The potential of AI in HR will be much more than automating applicant tracking systems and putting robots in the seats of hiring managers to review resumes. The future of HR is one that's powered by data and insights that allow leaders to make better, more informed employment and management decisions. Because when it comes to the business of people, every decision matters, and AI can help innovators keep people at the front of every decision armed with more strategic intelligence. EXPERT OPINION BY TONY JAMOUS, CEO AND CO-FOUNDER, OYSTER @JAMINGO

Wednesday, July 10, 2024

Need a Coder? ChatGPT Can Do the Job. Mostly

If you're not already using AI chatbots at work, there's a ton of evidence to show you're missing out on increased efficiency and productivity. One of the smartest ways AI can help your company is with coding tasks. Trained on billions of lines of existing software code, AIs like ChatGPT can cover gaps in your developer team's experience, or help them solve really tricky problems. Now researchers find that ChatGPT really is successful at producing working code--but not 100 percent reliably. And it helps if that thorny coding problem your dev team is wrestling with has been tackled by other developers a few years ago. ChatGPT can code, just not as reliably as some human coders The new study examined how well ChatGPT could write code, and measured its functionality, complexity and security, reports the IEEE Spectrum news site, run by the Institute of Electrical and Electronics Engineers. Researchers found that when it came to functionality, ChatGPT could spit out working code with success rates as low as 0.66 percent or as high as 89 percent. This is a massive range of success, perhaps more than you might have thought reasonable, but as you may expect, the difficulty of the problem at hand, programming language, and other factors played a part in its success--just as is the case with human coders. That's not surprising, since generative AIs like ChatGPT work off of data that's put into them. Typically that means the AI algorithm has seen billions of lines of existing human-written code--a data repository that was built up over decades. To explain some of the variability of ChatGPT's results, researchers showed that when the AI faced "hard" coding problems it succeeded about 40 percent of the time, but it was much better at medium and easy problems--scoring 71 percent and then 89 percent reliability. In particular, the study says ChatGPT is really good at solving coding problems if they appeared on the LeetCode software platform before 2021. LeetCode is a service that helps developers prepare for coding job interviews by providing coding and algorithm problems and their solutions. A researcher involved in the study, Yutian Tang, explained to Spectrum that if coders asked ChatGPT for help on an algorithm problem set after 2021, it struggled more to produce working code, and sometimes even failed to "understand the meaning of questions, even for easy level problems." This 2021 date isn't rooted in trickiness of code problems however. Developers continually encounter coding difficulties, and it's just that some will have already been encountered and solved by people before. So the AI's coding expertise is influenced by time: A long-solved coding issue will have appeared more often in the AI's training database. Even more interestingly, the study found that when ChatGPT was asked to fix errors in its own code, it was generally pretty bad at correcting itself. In some cases, that shortcoming included putting security vulnerabilities in the code the AI model spewed out. Yet again, this is a reminder that while AIs are incredibly exciting, and can definitely provide a big boost for small companies whose coding teams may lack diverse expertise, ChatGPT isn't going to replace them anytime soon, simply because its results can't be relied on every time. Rather, an AI assist is best used as a tool that developers can consult to help their output. 
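One practical way to treat AI-suggested code as a draft rather than a finished product is to put it behind the same tests you would apply to any other patch. The short Python sketch below is hypothetical -- the function stands in for an AI suggestion rather than the output of any particular model.

    # A hypothetical function as an AI assistant might draft it.
    def median(values):
        ordered = sorted(values)
        mid = len(ordered) // 2
        if len(ordered) % 2:        # odd number of items
            return ordered[mid]
        return (ordered[mid - 1] + ordered[mid]) / 2

    # Tests written by the humans who own the code, run before anything ships.
    def test_median():
        assert median([3, 1, 2]) == 2
        assert median([4, 1, 2, 3]) == 2.5
        try:
            median([])              # an edge case the draft may not handle
        except (ValueError, IndexError):
            pass
        else:
            raise AssertionError("empty input should raise an error, not return quietly")

    test_median()
    print("AI-suggested median passed the human-written tests")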
And all AI-generated output probably should be double-checked by human experts before it's run live--to make sure it hasn't left any security loopholes open, for example. Coders condemning ChatGPT come across a copyright snag Meanwhile, coders who sued OpenAI, Microsoft, and GitHub over the issue of AI training data suffered a setback Friday when a judge overseeing their $1 billion class-action suit dismissed their claims. The coders alleged the AI companies had "scraped" their code to train the AI algorithms without permission, violating open-source licensing agreements. They were trying to leverage the Digital Millennium Copyright Act, a law you might know about from so-called takedown notices against user-uploaded content on sites like YouTube. It's invoked when a music publisher says a publisher of web content shouldn't have used a particular track without proper permission, for example. But the ruling said their claims were without merit since they failed to show that Copilot, Microsoft's AI coding assistant, could replicate the code "identically," Bloomberg Law reports. AI critics will note a subtle issue here. Other content creators, ranging from recording labels to big-name newspapers, have pursued legal action against AI companies like OpenAI on broadly the same grounds, but generative AIs tend not to 100 percent "reproduce" data they've been trained on, simply because of the statistical nature of the way their algorithms work. Contrary to how it may sometimes appear, coding is more like an art mixed with a science, and code doesn't have to be "exact"--it can be as creative as a painted artwork or a hand-written newspaper article. Developers can use different techniques to solve the same problem, and, having been trained on lots of this sort of different code, it seems like now AIs are churning out their own solutions based on the original material. And with Microsoft's AI chief showing his cards last week, alleging that your content is fair game for AI scraping if it's ever been uploaded to the open web, it seems that this sort of AI intellectual property issue, and the lawsuits that then follow, is only going to get more complicated. Your big takeaway from this tussle: Keep your company's secrets well hidden from the internet and its hungry AI data bots. BY KIT EATON @KITEATON

Monday, July 8, 2024

Is That AI Safe? Startup Anthropic Will Pay to Check

As the battle between the AI giants heats up, the topic of AI safety is always hovering around in the background--because these ever-smarter tools can be both powerful and incredibly dangerous. It's for this reason that one of the leading AI makers, Anthropic, which makes the AI system Claude, is starting a program that will fund the creation of AI benchmarks, so that we'll all be able to more accurately measure both the smarts and the potential impact of AI systems. Making sure AIs are safe In a blog post, Anthropic explains that "developing high-quality, safety-relevant" evaluations of AI quality and impact "remains challenging, and the demand is outpacing the supply." Essentially as more and more AI systems come online, and the pressure to measure them so that we understand their value and riskiness rises, there aren't enough tools available. To help solve this, Anthropic believes its investment could "elevate the entire field of AI safety, providing valuable tools that benefit the whole ecosystem." Anthropic's post goes into great detail about the exact qualities it's trying to encourage third-party evaluators to measure. It mentions specifics like the risk AI may pose to cybersecurity, social manipulation (critically important in an election year), national security risks like "defense, and intelligence operations of both state and non-state actors," and even the chance that AIs could "enhance the abilities of non-experts or experts" to create chemical, biological, radiological and nuclear threats. It also says it wants the ability to measure "misalignment," a situation where AIs "can learn dangerous goals and motivations, retain them even after safety training, and deceive human users about actions taken in their pursuit." AI safety is a tricky problem This is high-level stuff, addressing a very difficult problem that has troubled even OpenAI, the industry's current market leader. To keep its own AIs safe, OpenAI formed a superalignment team in mid-2023; later that year, a brief scandal saw CEO Sam Altman temporarily removed as some board members worried about the direction he was taking the company. However, the leaders of that superalignment team recently left the organization--sparking fresh concerns. One of those executives, Ilya Sutskever, subsequently launched his own startup with the express goal of building safe AIs in a development environment insulated from the financial pressures faced by other AI startups. Anthropic's program to tackle AI safety will involve third parties who've submitted their plans to the company and been selected to develop the relevant AI-measuring tools. Anthropic will then "offer a range of funding options tailored to the needs and stage of each project." As news site TechCrunch points out, the expectation is that these third parties will be building whole AI-assessing platforms that should allow experts to craft their own AI safety assessments, and also involve "large-scale trials of models involving 'thousands' of users." Safety first! But are AIs really that much of a threat? TechCrunch also points out that some of the scenarios Anthropic illustrates in its blog post are a little far-fetched, especially since some high-profile experts, including futurist guru Ray Kurzweil, have suggested that fears that AI represents an existential threat to humans are somewhat overblown.
Clearly the better move is to err on the side of caution, though, especially when players with skin in the game--like OpenAI's Altman and entrepreneurial AI-maker Elon Musk--are loudly voicing their concerns about the potential risks from the same AIs they're spending billions to make. The news that even a leading AI maker is worried about the threat the technology poses should give business users pause. We know how useful AI can be to a company--but it's worth reminding your staff that it poses certain risks too, and its output shouldn't be trusted without at least a double-check. Meanwhile, when Inc. asked OpenAI's ChatGPT how easy it was to measure AI safety, it was pretty candid: It admitted it was a tricky job, but then it added "as for me, I'm designed to be a helpful tool. I operate under strict guidelines to ensure I provide accurate, safe, and useful information." It also noted that it was supposed to be an assistant, "not to pose any threat." We're not entirely sure how we feel about the fact it said "me" in that statement. BY KIT EATON @KITEATON

Friday, July 5, 2024

The Biggest Problem With Apple Intelligence Is That It Won't Run on the Best AI Device Ever Made

I've spent a lot of time thinking about Apple Intelligence over the past few weeks. I say thinking because you can't actually use any of the features Apple demonstrated during its WWDC keynote earlier this month. You can't create your own Genmoji or ask Siri to remind you when your mom's flight is supposed to arrive. You also can't look at a PDF and have Siri send it to ChatGPT to answer questions about whether you're allowed to have a pet lizard. But, all of those things are impressive demos, and I'm excited to try them once they are available. If they work the way Apple promises, your next iPhone is going to be a lot more interesting. Of course, we've seen a lot of impressive demos over the past year. What we haven't seen are any impressive products. The Humane Ai Pin is basically a flop. The Rabbit R1 is not just a failure, it's not even really an AI gadget, it turns out. To be fair, there have been some impressive features announced by Microsoft and Google, and ChatGPT is obviously a thing. Google's Magic Erase feature in its Photos app is both cool and practical. As far as devices that use AI to dramatically improve the way we interact with computers, however, there's basically nothing. That's a shame, because a wearable device that you use to interact with a smart assistant capable of doing more than just setting timers or showing you "results from the web" would be a step change in personal computing. That's the entire premise behind the Ai Pin and R1--create a device that serves as a way of accessing an always-present assistant that can interact with your own personal information and apps. The problem is, none of them work. They don't have access to your personal information, they don't have apps, and the hardware isn't up to the task. For example, the Ai Pin gets only a few hours of battery life--at best--and that's when it doesn't shut down because it's too hot. Do you know what gets incredible battery life and has very capable hardware? An Apple Watch. Look, I've been saying for a while now that the perfect AI wearable gadget is the Apple Watch. If I'm going to wear a device that I can interact with, the Apple Watch is already the right form factor--it just needs to be smarter. That's the entire premise of Apple Intelligence--make Siri smarter. The problem is, Apple Intelligence doesn't run on the Apple Watch. For that matter, it won't run on any but the most high-end recent iPhone. Even if you bought an iPhone 15 in the last year, you're out of luck. Presumably, anything with an iPhone 16 in the name will be capable of running Apple Intelligence, but there are like 1.5 billion iOS devices in the world, and most of them are not going to be able to run Apple Intelligence. For Apple's effort to be successful, that needs to change. One way is for the company to get a version of it running on the Watch. Or, at least, make it possible for your Watch to interact with a capable iPhone. After all, it's basically an accessory for your iPhone. It's a very capable accessory, but for most people, it's a way to get notifications or information from your iPhone, without having to actually use your iPhone. Which, to be honest, is great. But, it would be better if you could ask Siri a question on your Watch and it would either send the query to Apple's cloud service, or it would just feed the query to your iPhone. It's great that you'll be able to do all kinds of AI things on your iPhone, but the reason a wearable device seems so appealing is because it's more accessible.
You don't have to pull out your iPhone just to ask a question or to get information. Imagine if you could just ask Siri, via your Watch, the question about, "What time does my mom's flight arrive?" Or "Will we have enough time to get from the airport to our dinner reservation?" Your Watch would interact with the services on your iPhone to find out when the flight is supposed to arrive, whether it's still on time or delayed, where and when you made a dinner reservation, and whether you'll be able to get there based on directions and current traffic conditions. Presumably, you'll be able to do most of that at some point on your iPhone, but Apple's real killer move would be to make all of this possible on the best AI device form factor--the Watch. EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN

Wednesday, July 3, 2024

Microsoft's AI Chief Says Your Content Is Fair Game If It's on the Open Web

The warning used to be that anything you put on the internet stays there--somewhere--forever. The advent of artificial intelligence models has put a twist on this, and now it's that anything you post online will end up in an AI, whether you want it to or not. That is, at least, what Mustafa Suleyman, co-founder of Google's DeepMind AI unit and now CEO of AI at Microsoft, thinks. According to Suleyman, it's fine for AI companies to scour every corner of the open web--which is, arguably, anything on any website that's not protected behind a paywall or login interface--and to use what they find to train their algorithms. In an era of rapid growth by data-hungry generative AI services, it's a stark reminder that you or your company should never publish anything on your website or a social media service that you wouldn't want swept up and reused, including as AI training data.

Fair use or abuse?

Speaking to CNBC at the Aspen Ideas Festival last week, Suleyman referred to the widely accepted idea that whatever content is "on the open web" is freely available to be used by someone else under fair use principles, news site Windows Central reports. This is the notion that if you quote, review, reimagine, or remix small parts of someone else's content, that's OK. Specifically, Suleyman said that for fair use content, "anyone can copy it, recreate with it, reproduce it." Note his use of the word "copy," rather than simply "reference" or "remix." Suleyman is implying that if someone has published text, imagery, or any other material on a website, it's fine for companies like his to access it wholesale. This is already somewhat questionable: Fair use isn't designed to enable outright copying, and one of the big no-nos is copying someone else's work for your own financial gain. But Suleyman's next words will worry critics who think big AI's powers are already too vast. Suleyman acknowledged that a publisher or a news organization can explicitly mark its content so that systems like Google's web-indexing bots--the automated crawlers that tell Google's algorithm where everything is online--either can or cannot access the info. He also noted that some publishers mark content as OK for bots to access for search indexing, but not "for any other reason," such as AI training. He said this is a gray area, and one that he thinks is "going to work its way through the courts." Suleyman may be hinting that he thinks sites simply shouldn't be able to bar their content from being looked at by AI. This is timely, since the high-profile AI firm Perplexity is in the spotlight for allegedly ignoring exactly this sort of online content marking, and then scraping websites' data without permission.

Data, data everywhere, for any AI to "drink"

The big problem is that generative AI needs tons and tons of data to work; without data, it's just a big box of complicated math. With data, AI algorithms are shaped to reply with real-world information when you type in a query. In the hunt for this data, some companies, like Apple, are partnering with content publishing services, like Reddit, to gain access to billions of pieces of text, photos, and more that users have uploaded over the years. But other companies have been accused of questionably or even illegally snaffling up data that they really shouldn't take. The New York Times and other newspapers have launched lawsuits based on this, and three big record labels just sued some music-generating AI companies that they claim have illegally copied their music archives.
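In practice, the content marking Suleyman is talking about usually lives in a plain text file called robots.txt at the root of a website, where a publisher lists which bots may read which pages. As a rough illustration, here's a short Python sketch, using only the standard library's urllib.robotparser, of rules that let a search crawler in while telling two crawlers associated with AI training, OpenAI's GPTBot and Google's Google-Extended token, to stay out. The sample rules and URL are made up, and whether a given crawler actually honors them is exactly the gray area Suleyman expects the courts to sort out.

    # Sketch: how a well-behaved crawler would check a site's robots.txt rules.
    # The rules and URL below are invented for illustration.
    from urllib import robotparser

    RULES = """
    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: *
    Allow: /
    """

    parser = robotparser.RobotFileParser()
    parser.parse(RULES.splitlines())

    for agent in ("Googlebot", "GPTBot", "Google-Extended"):
        allowed = parser.can_fetch(agent, "https://example.com/article.html")
        print(f"{agent:16} may fetch the page: {allowed}")

Nothing in that file is enforced by technology; it's a request, which is why ignoring it, as Perplexity allegedly has, causes so much friction.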
As more and more companies, from one-person businesses to giant enterprises, embrace AI technology, this is yet another reminder that you need to be careful with the answers spit out by a chatbot: Someone needs to check that they don't violate another company's intellectual property rights before you use them. It's also a stark warning that you should be very careful that your website, social media posts, or any other content you publish online isn't giving away proprietary data you'd prefer to keep private, because it could simply end up being used to train someone else's AI model.

BY KIT EATON @KITEATON

Monday, July 1, 2024

We Should Talk About Why the Gen-Z Workforce Is So Angry

Hey Google, Apple, Amazon. All you tech companies. Got a second? We need to have a little sidebar about Gen Z because you've got a major problem bubbling. Here's the thing. The way you've always approached work -- the 9-to-5, the org structure, the career ladder, the assumed authority -- they don't want any of it. I wrote a post last week about the deterioration in tech company culture, in which I said: "We're seeing a generational shift in how and where we work. I feel like maybe the boomers established 'modern-day' work, Gen X rebelled against it, Millennials are borderline revolting against it, and Gen Z sees it as completely foreign." Holy smoke, that line resonated. You're essentially asking Gen Z to do what you've done your whole life, but do it wearing clown shoes. Man, I love a good/terrible metaphor. Look. I'm not talking to Gen Z in this article, a group that's catching a lot of strays these days. I'm also not talking for them. I did not appoint myself the spokesman for a generation. I tried that with Gen X and I couldn't get them to rebel, so I know that won't work this time either. Honestly, I don't even want to be here. This time, I'm talking directly to the companies themselves. And I'm not here to cause a scene, I'm just giving you a heads-up. Gen Z is pissed. At you. Not "them." You.

It Runs Deeper Than Generational Differences

This is not just an issue of sitcom-ready generational differences between age groups. That's the first mistake the tech companies make when thinking about it, which is why I'm not surprised that you don't have answers. This is more than malaise. This is not about anarchy. It's not a lack of generational understanding on either side. But this is the first place where Gen Z is catching strays. You're blaming natural generational differences, which skims over the real issues and just makes it worse. It's not about quiet quitting or presenteeism or creative loafing or -- honestly, I can't keep up with the buzzwords anymore. All those articles with the splashy headlines are wrong. It's deeper than that. Gen Z fully knows what you're "about." They're just rejecting it.

What the Tech Companies Think Gen Z Is About Is Irrelevant

I'm not a big believer in generational dynamics. I mean, everyone says "Remember when Saturday Night Live was actually funny?" I get it. There are some obvious generation-defining touchstones: references, trends, and especially technical advancements. I'm more inclined to point to the PC generation, the internet generation, the mobile generation, and the social generation, because those labels have more of an impact on group behavior and dynamics than a somewhat arbitrary milestone like age. But this still isn't that. Gen Z is not angry because you're keeping them out of their TikToks and their text chats. And this is the second place where Gen Z is getting maligned. You're blaming the victim here. And what's more, you're doing it without actually doing it. You're not blaming Jane, you're blaming a person roughly the same age as Jane who isn't Jane, but then also connecting every misdiagnosed personality trait from that unspecified person to Jane. Which, obviously, just makes it worse. OK, so what is their problem then?

When You Started Work, It Was Work

With every generation that came before Gen Z, when we became a part of the tech workforce, you gave us actual work to do. It had meaning. Gen X didn't invent the internet, but we worked with it to make it into something that no business, and eventually no person, could live without.
Apologies. But even then, I saw the technical evolution start to create a divide. The generations that came before us needed us to work with the technology, because they didn't really understand it. But make no mistake, they decided how that technology would be harnessed to build things and run businesses. They still do. And then, when our jobs didn't feel like jobs, which was like 25 percent of the time, they came to us and said "Well, I know you're bored and unsatisfied, here's a program. Here's a policy. Let's get you out of your cubicle and give you something else that isn't typing commands into beige boxes." They didn't say exactly that. They weren't that clever. But this is when our workdays started filling up with TPS reports and other garbage that kept us occupied but didn't let us accomplish anything. There's an entire movie about it called Office Space. Highly recommend. Oh, also, this is what pushed me into entrepreneurship.

Each Generation Has Had Less to Do

The Millennials didn't invent mobile or its platform, but you gave them enough work to do with it to occupy, let's say, 40 percent of their time, and the other 60 percent was just filled with fluff. This could be anything from "running HubSpot" to "Slack moderator" to ... literally nothing. Speaking of art imitating life, one of the funnier aspects of a workday at Hooli, the stand-in for Google or Yahoo on HBO's Silicon Valley, was not how little most employees did, but how accepted it was. Same guy that did Office Space. Mike Judge. Visionary. Now here we are with Gen Z, and work isn't work anymore. It's all fluff. Man, it's not that they don't want to come back to the office. They don't want any part of what they do when they get there. The reason there isn't any work-life balance isn't because life has gone away, it's because work has gone away. It all just blends now. Now work is all programs and policies and TPS reports. Work is literally connect-the-dots and paint-by-numbers and fill-in-the-(Agile)-blanks and check-the-(Jira)-boxes. The only place it isn't like this is in the very early days of tech startups, which is why so many Gen-Zers want to be entrepreneurs. So the anger gets aimed at BS programs, office culture, corporate culture, tech culture, jobs, careers, capitalism, and democracy, and then you get your anarchy. Ultimately, Gen Z wants satisfaction, not just participation. They see through the fluff because it's all fluff. And by the way, you're also telling them that AI is already making them redundant anyway. Not helping. Also not true. As I mentioned earlier, when I saw this devolution of work begin to happen, I lucked into startups and entrepreneurship, and I never looked back. I'm not rich. I don't have a vacation home or a boat or a country club membership. But I don't wake up angry every morning. Ultimately, that's the thing. That's all Gen Z wants. I think. Anyway, I'm sure they'll tell me if I'm wrong.

EXPERT OPINION BY JOE PROCOPIO, FOUNDER, TEACHINGSTARTUP.COM @JPROCO