Wednesday, July 17, 2024

Google's Gemini AI Is Making Robots Smarter

While skeptics may say the AI revolution looks like it's all about chatbots and amazing, if weird, digital artwork creation, there's actually a lot more going on behind the scenes. Google has just demonstrated the depth of the technology's promise with its AI-powered robots. As part of a research project, some distinctly nonhuman-looking robots are roaming the corridors of Google's DeepMind AI division, busily learning how to navigate and interact with the Googler employees they encounter. The experiment gives us a tantalizing glimpse of what the machines of a looming robot revolution will be capable of when they're put to work in our homes, factories, and offices--in their millions and billions, if you believe some prominent futurists.
In a research paper, Google explains it's been examining "Multimodal Instruction Navigation with Long-Context VLMs and Topological Graphs." Behind the tech jargon, it all boils down to using AI to move around the spaces in Google's offices and interact with humans using "long context" prompts. The long-context bit is very important: It relates to how much information the AI model, Gemini 1.5 Pro, can take in and process in a single input session using natural language. Essentially it's about giving the robot a sense of context, so it can remember lots of details about its interactions with people and what they've said or asked the robot to do. Think about how you ask a very simple AI like Amazon's Alexa a question, only to realize a moment later that she's "forgotten" it and can't carry on a human-like conversation--this is part of what the Google experiment is tackling.
In videos documenting the project, Google shows some examples of how the AI-powered robots function in the workplace, the website TechCrunch notes. One example shows a robot being asked by a user to take him somewhere he can draw something. After a moment, the robot matches this request with what it knows about objects that can be drawn on, and where they are, and it leads the Googler to a whiteboard. Though it sounds simple, this is a higher level of reasoning that's much more human-like than many earlier AI and robot systems have been capable of. The Alexa example is useful here again: Alexa is clever, but she only understands very specific commands, and if you've used her natural language system you'll have encountered her very limited reasoning when she complains she doesn't understand, until you tweak your wording.
Another part of the Google project involved teaching the robots about the environment they were going to be navigating. While earlier robot systems may have been trained using precisely entered maps of office or factory floors, or by being tele-operated around the space by a human so their sensors could learn the layout of their surroundings, the new Google robots were trained by having their AI "watch" a walkthrough video made on a smartphone. The demonstrations showed that the AI could identify objects in the video, like furniture or electrical sockets, remember where they are, and then reason about what a user meant when asking the robot to, for example, help them charge their smartphone. Demonstrating even more smarts, a robot knew what to do when a user asked for more of "this" while pointing to soda cans on their desk: It reasoned that it should go and check the office fridge for supplies.
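To make the long-context idea more concrete, here is a rough sketch, written as an illustration rather than Google's actual robot code, of how a developer might use the publicly available Gemini API to perform the core trick the paper describes: hand the model an entire smartphone walkthrough video plus a natural-language request, and let it name the place the robot should head for. The file name, prompt, and API-key handling are assumptions, and a real system like Google's would then plan a route over a topological graph of the space rather than stopping at a text answer.

```python
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumes you have a Gemini API key

# Upload the smartphone walkthrough video of the office (hypothetical file name).
tour = genai.upload_file(path="office_tour.mp4")
while tour.state.name == "PROCESSING":  # wait until the video is ready to use
    time.sleep(5)
    tour = genai.get_file(tour.name)

model = genai.GenerativeModel("gemini-1.5-pro")

def find_goal(instruction: str) -> str:
    """Put the whole tour plus the user's request into one long-context prompt
    and ask the model where the robot should go."""
    prompt = (
        "The video is a walkthrough of our office. A person asks the robot: "
        f"'{instruction}'. Name the landmark from the video the robot should "
        "navigate to, for example 'the whiteboard by the kitchen'."
    )
    return model.generate_content([tour, prompt]).text

# One of the requests from Google's demos:
print(find_goal("Take me somewhere I can draw something"))
```

The point of the long context window is that the entire walkthrough fits in a single prompt, so there is no separate map-building or fine-tuning step before the model can start answering requests about the space.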
While Google's robots in the video are very artificial looking (the machines themselves were actually left over from an earlier research project, TechCrunch noted) and there is a definite delay issue, with up to a minute of "thinking time" between the robot receiving a request and acting on it, Google's project is still an exciting preview of what's to come. It tracks with recent news that a different startup, Skild, has raised $300 million in funding to help build a universal AI brain for all sorts of robots. And it supports the thinking of robot tech enthusiasts like Bill Gates, Jeff Bezos, and Elon Musk, who are certain that we'll all soon be buying AI-powered humanoid robots and welcoming them into our homes and workspaces. That has been a promise made every year since the mid-20th century, though. Remember Robby the Robot? He'd have some pithy things to say about Google's spindly, slow-thinking robots. BY KIT EATON @KITEATON

Monday, July 15, 2024

Warnings About an AI Bubble Are Growing. When Could It Burst?

As long as the AI gold rush, or arms race, or revolution -- whatever you'd like to call it -- has been surging, so too has speculation that the billions in investment are fueling a massive bubble on par with the dot-com bust. Those warnings are growing louder. On Tuesday, James Ferguson, founding partner of the MacroStrategy Partnership, a macroeconomic research firm based in the U.K., offered a grim assessment of where the biggest thing in tech is headed: "Anyone who's sort of a bit long in the tooth and has seen this sort of thing before is tempted to believe it'll end badly," he said on Bloomberg's Merryn Talks Money podcast.
Sequoia Capital, bullish on AI since its breathless early days, is also sounding the alarm. Last month, one of the firm's partners, David Cahn, wrote that the industry needs to generate $600 billion in annual revenue to sustain itself. Last September, he estimated the number at $200 billion. (OpenAI, by far the biggest player in the sector, had annualized revenues of $3.4 billion, an Information report found in June.) Goldman Sachs has also cast doubt on generative AI: The investment bank recently published a report called "Gen AI: Too Much Spend, Too Little Benefit?" Ironically, the report followed the rollout of generative AI tools across the company's workforce.
The stock price of chipmaker Nvidia has surged more than 200 percent in the past year, boosting the value of the company to over $3 trillion. Tech stocks also rallied in 2023, largely on the AI hype wave. Total venture investment in AI startups neared $50 billion in 2023, even as broader investment slumped to its lowest level in five years, at $285 billion globally. As Greg Hill, managing partner at Parkway Ventures, told Inc. earlier this year: "The majority of companies are incorporating AI into their pitch decks" in an effort to attract venture dollars, even if AI isn't their core product.
If AI is in a bubble, then the obvious next questions are how the bubble will burst and how many casualties there will be. While definitive answers are in short supply, Gayle Jennings-O'Bryne, CEO and general partner at VC firm Wocstar, offers an assessment of how the AI market got here and the issues pointing to inevitable fallout. Observing the cash thrown at various AI startups, Jennings-O'Bryne believes that some venture capitalists don't have "a real appreciation of the capital intensive nature of the AI technology that is being built right now." Large language model development, which is made possible by energy-draining server farms, is massively expensive. So far, many of the startups dependent on the process are short of viable business models. "The mindset of VCs, versus the reality of what these business models and companies are going to look like, [is] just going to propel what we're calling a bubble," she explains to Inc.
The financial disconnect was put in concrete terms by Ferguson of the MacroStrategy Partnership. He notes that Nvidia can't sustain the entire industry's growth on its own, especially when generative AI is still prone to hallucination: "Forget Nvidia charging more and more and more for its chips, you also have to pay more and more and more to run those chips on your servers. And therefore, you end up with something that is very expensive and has yet to prove [itself] anywhere really, outside of some narrow applications." The AI space is saturated with many startups that don't actually build their own AI technologies.
"People are jumping on the AI bandwagon thinking that money will come because they somehow have incorporated AI into their business model. But in reality, what they may have done is just put a bit of AI functionality as a wrapper to a more traditional business model," Jennings-O'Bryne argues. While there is funding for AI startups, many of the recently developed tools and chatbots seem redundant. "What's the last [AI] product that wasn't a Q&A, or a chatbot, or coding based?" Phil Calçado, founder of the NYC-based AI coding startup Outropy, recently asked Inc. As VCs race to fund AI startups, Jennings-O'Bryne argues that other companies with even more compelling technologies could be ignored, and therefore languish. "What's happening is that all the other non-AI companies... they're not the darling of the market right now. So they're not getting the attention or the capital to grow. But there's some really good technology and really good businesses being built," she explains. So when could the bubble burst and what might it look like? You can start with the death of many startups and investors losing out on their bets. Jennings-O'Bryne believes the picture will only become crystal clear in around four to five years. "Within two years, we'll start to see some pressure, because investors have gotten more comfortable with asking for profits and returns and revenue and seeing those traction metrics," she says. Eventually, investors will want to see sustainable business models that yield profits. Jennings-O'Bryne says, "I'm thinking that [the] bust, if you will, is probably going to be four or five years out."

Friday, July 12, 2024

AI Can Do Much More Than Automate Resume Review for HR

As a CEO who interacts with artificial intelligence daily, I've experienced firsthand how, when thoughtfully implemented, AI amplifies human strengths. For me, AI unlocks new perspectives, improves communication, and increases productivity. Sometimes it feels like a collaborative partner that helps me consider angles I'd miss alone. The promise of AI is vast, but so are the apprehensions. And, as an HR tech founder, I understand the concerns on a human level. While AI helps my work, its value for leaders and HR professionals depends wholly on implementation. The "how" is always more important than the "what." AI will revolutionize the way HR professionals do their jobs--but perhaps not in the way you think.
Filling in the knowledge gaps
My company, Oyster, helps businesses access talent across borders, and the biggest questions we get center on a need for data and insights. Customers often ask questions like, "What should we pay this senior engineer in Morocco?" or "I'm a U.S. company, what do I need to know about payroll in France?" or even "What should my contract include to stay compliant when hiring in Brazil?" There's a lot to think about when engaging a multinational workforce, with compensation and compliance-related questions likely to be at the top of that list. Getting this right used to mean in-house professionals compiling data from many sources. Now, AI can effectively complement human intelligence by filling in knowledge gaps through data analysis and aggregation, information retrieval, and natural language processing. AI technology can cull through vast amounts of information quickly and efficiently, identifying patterns, trends, and insights that may not be immediately apparent to humans. This capability inspires new approaches to understanding complex phenomena from region to region, and it frees people to think strategically, unencumbered by the burden of rote data entry and analysis. Think average time off, bonus pay, and other region-specific nuances. Benchmarking salary, for example, is the next era in something Oyster calls compensation intelligence: the ability to tap into salary-related data sets from all over the world to create fair and competitive compensation packages based on market compensation data and job level (a rough sketch of that kind of aggregation appears below).
Scalable compliance
AI will also increase efficiency and unlock a new frontier of scalable HR compliance. Compliance at scale can be tricky. With each country and jurisdiction having unique laws and regulations, the best compliance use case for AI is one that helps companies navigate labor policies, employee benefits, taxes, insurance, and more. Companies with a cross-border workforce will benefit from technology, and from partners that leverage technology, to thoughtfully analyze up-to-date employment rules and regulations and apply that intelligence to processes like contract review and compliance validation. By offering a level of protection for organizations that might not have the funds to staff large in-house counsel teams or the bandwidth to engage external firms, AI can enable HR teams to focus more on the human side of their work.
Making more informed decisions
It's all too easy for workers to assume that the future of AI in HR will be largely about making the important decisions of which applicants get interviews and which interviews turn into hires. HR pros are already deploying AI to help with job descriptions, document generation, interview note-taking, and employee-facing chatbots.
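As a concrete aside on the data-and-insights point: the compensation benchmarking described above essentially boils down to aggregating market salary data by country and job level into percentile bands. The following is a minimal sketch of that aggregation, assuming a pandas environment; the column names and figures are invented for illustration and are not Oyster's data or product.

```python
import pandas as pd

# Invented sample of currency-normalized market salary data points.
market_data = pd.DataFrame([
    {"country": "Morocco", "role": "Senior Engineer", "annual_salary_usd": 48_000},
    {"country": "Morocco", "role": "Senior Engineer", "annual_salary_usd": 55_000},
    {"country": "France",  "role": "Senior Engineer", "annual_salary_usd": 78_000},
    {"country": "France",  "role": "Senior Engineer", "annual_salary_usd": 84_000},
    {"country": "Brazil",  "role": "Senior Engineer", "annual_salary_usd": 52_000},
])

# Benchmark: percentile bands per country and role, the raw material
# for a fair and competitive offer range.
benchmark = (
    market_data
    .groupby(["country", "role"])["annual_salary_usd"]
    .quantile([0.25, 0.50, 0.75])
    .unstack()
    .rename(columns={0.25: "p25", 0.50: "median", 0.75: "p75"})
)
print(benchmark)
```

A real compensation-intelligence product would draw on far larger, continuously updated data sets and account for benefits, bonuses, and local employment rules, but the aggregation idea is the same.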
Use cases like AI-drafted job descriptions, automated note-taking, and employee-facing chatbots demonstrate some key benefits for improving process and efficiency, but they're not quite revolutionary. The potential of AI in HR will be much more than automating applicant tracking systems and putting robots in the seats of hiring managers to review resumes. The future of HR is one that's powered by data and insights that allow leaders to make better, more informed employment and management decisions. Because when it comes to the business of people, every decision matters, and AI can help innovators keep people at the center of every decision, armed with more strategic intelligence. EXPERT OPINION BY TONY JAMOUS, CEO AND CO-FOUNDER, OYSTER @JAMINGO

Wednesday, July 10, 2024

Need a Coder? ChatGPT Can Do the Job. Mostly

If you're not already using AI chatbots at work, there's a ton of evidence to show you're missing out on increased efficiency and productivity. One of the smartest ways AI can help your company is with coding tasks. Trained on billions of lines of existing software code, AIs like ChatGPT can cover gaps in your developer team's experience, or help them solve really tricky problems. Now researchers find that ChatGPT really is successful at producing working code--but not 100 percent reliably. And it helps if the thorny coding problem your dev team is wrestling with was already tackled by other developers a few years ago.
ChatGPT can code, just not as reliably as some human coders
The new study examined how well ChatGPT could write code, measuring its functionality, complexity, and security, reports the IEEE Spectrum news site, run by the Institute of Electrical and Electronics Engineers. Researchers found that when it came to functionality, ChatGPT could spit out working code with success rates as low as 0.66 percent or as high as 89 percent. That's a massive range, but as you might expect, the difficulty of the problem at hand, the programming language, and other factors all played a part in its success--just as is the case with human coders. That's not surprising, since generative AIs like ChatGPT work off of the data that's put into them. Typically that means the AI algorithm has seen billions of lines of existing human-written code--a data repository built up over decades.
To explain some of the variability in ChatGPT's results, the researchers showed that when the AI faced "hard" coding problems it succeeded about 40 percent of the time, but it was much better at medium and easy problems, scoring 71 percent and 89 percent reliability, respectively. In particular, the study says ChatGPT is really good at solving coding problems that appeared on the LeetCode platform before 2021. LeetCode is a service that helps developers prepare for coding job interviews by providing coding and algorithm problems and their solutions. A researcher involved in the study, Yutian Tang, explained to Spectrum that if coders asked ChatGPT for help on an algorithm problem posted after 2021, it struggled more to produce working code, and sometimes even failed to "understand the meaning of questions, even for easy level problems." This 2021 cutoff isn't about the trickiness of the problems, however. Developers continually run into new coding difficulties; it's just that some will already have been encountered and solved by other people. So the AI's coding expertise is shaped by time: A long-solved coding issue will have appeared more often in the AI's training data.
Even more interestingly, the study found that when ChatGPT was asked to fix errors in its own code, it was generally pretty bad at correcting itself. In some cases, that shortcoming included introducing security vulnerabilities into the code the AI model spewed out. Yet again, this is a reminder that while AIs are incredibly exciting, and can definitely provide a big boost for small companies whose coding teams may lack diverse expertise, ChatGPT isn't going to replace those teams anytime soon, simply because its results can't be relied on every time. Rather, an AI assist is best used as a tool that developers can consult to help their output.
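To see what functional-correctness testing, and that human double-checking, might look like in practice, here is a minimal sketch of a harness that asks a model for a function and only accepts the reply if it passes a few known test cases. It illustrates the general technique rather than the study's actual methodology; the model name, prompt, and example task are assumptions, and it relies on the official OpenAI Python SDK.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_function(task: str) -> str:
    """Ask the model for a self-contained Python function named solve."""
    response = client.chat.completions.create(
        model="gpt-4o",  # model choice is an assumption, not from the study
        messages=[{
            "role": "user",
            "content": f"Write a Python function named solve. {task} "
                       "Return only code, with no explanation.",
        }],
    )
    return response.choices[0].message.content

def strip_fences(reply: str) -> str:
    """Model replies often arrive wrapped in markdown code fences; drop those lines."""
    return "\n".join(l for l in reply.strip().splitlines()
                     if not l.lstrip().startswith("`"))

def passes_tests(code: str, cases: list[tuple]) -> bool:
    """Run the generated code against known input/output pairs.

    exec() on model output is acceptable in a sandboxed experiment, but
    untrusted code should never be run like this in production.
    """
    namespace: dict = {}
    try:
        exec(code, namespace)
        return all(namespace["solve"](*args) == expected
                   for args, expected in cases)
    except Exception:
        return False

# Example: a LeetCode-style task with a couple of checks.
code = strip_fences(generate_function(
    "Given a list of ints, return the largest sum of any two of them."))
print(passes_tests(code, [(([1, 5, 3, 9],), 14), (([2, 2],), 4)]))
```

In a workflow like this, generated code that fails the tests simply gets rejected or retried, which is roughly the posture the study's numbers suggest you should take with AI-written code.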
All AI-generated output should also be double-checked by human experts before it runs live--to make sure it hasn't left any security loopholes open, for example.
Coders condemning ChatGPT come across a copyright snag
Meanwhile, coders who sued OpenAI, Microsoft, and GitHub over the issue of AI training data suffered a setback Friday when a judge overseeing their $1 billion class-action suit dismissed their claims. The coders alleged the AI companies had "scraped" their code to train the AI algorithms without permission, violating open-source licensing agreements. They were trying to leverage the Digital Millennium Copyright Act, a law you might know from so-called takedown notices against user-uploaded content on sites like YouTube; it's invoked when a music publisher says a web publisher shouldn't have used a particular track without proper permission, for example. But the ruling said their claims were without merit because they failed to show that GitHub Copilot, Microsoft's AI coding assistant, could replicate their code "identically," Bloomberg Law reports.
AI critics will note a subtle issue here. Other content creators, ranging from record labels to big-name newspapers, have pursued legal action against AI companies like OpenAI on broadly the same grounds, but generative AIs tend not to reproduce their training data 100 percent verbatim, simply because of the statistical nature of the way their algorithms work. Contrary to how it may sometimes appear, coding is an art mixed with a science, and code doesn't have to be "exact"--it can be as creative as a painted artwork or a hand-written newspaper article. Developers can use different techniques to solve the same problem, and, having been trained on lots of this varied code, AIs now seem to be churning out their own solutions based on the original material. And with Microsoft's AI chief showing his cards last week, arguing that your content is fair game for AI scraping if it's ever been uploaded to the open web, this sort of AI intellectual property issue, and the lawsuits that follow, is only going to get more complicated. Your big takeaway from this tussle: Keep your company's secrets well hidden from the internet and its hungry AI data bots. BY KIT EATON @KITEATON