Monday, April 29, 2024

WHY SAM ALTMAN IS BETTING ON SOLAR TO HELP POWER AI

AI needs more energy, and Sam Altman is investing. The OpenAI CEO joined big-name VC firms Andreessen Horowitz and Atomic in a $20 million seed round of funding for solar energy company Exowatt. The Miami-based startup is developing a modular energy platform to power data centers at a time when AI is expected to substantially drive up the power needs of global data centers. AI advocates are on the hunt for cheap and rapidly scalable energy systems as the energy needs of the technology explode.

Exowatt's technology is a three-in-one, modular energy system roughly the size of a 40-foot shipping container. It uses what Exowatt CEO and co-founder Hannan Parvizian calls "a specially developed lens" to collect solar energy in the form of heat and store it in a heat battery. The stored heat is then run through an engine to convert it to electricity. Exowatt anticipates its solution could cut the cost of electricity down to $0.01 per kilowatt-hour once it hits scale.

"What I think is unique about Exowatt is that we can provide a power solution that's dispatchable--that means you can have access to it throughout the day without any intermittencies--it's modular, you can scale it from small projects to large, it's built in the U.S. of course, and, most importantly, it's available today," Parvizian says.

The amount of electricity that data centers around the world need could jump 50 percent by 2027 as a result of AI, according to one estimate from Vrije Universiteit Amsterdam's School of Business and Economics. Big tech companies and investors have begun eyeing nuclear power to meet the need. Altman, for example, is also invested in two nuclear power companies called Helion and Oklo, according to the Wall Street Journal. Exowatt offers another solution.

"I do think we will have nuclear and other forms of energy on the grid that will also help support data centers, but if you think about it in practical terms, none of those technologies will be able to be deployed in the next year or even the next five years," Parvizian says. "I think Exowatt has a unique advantage here in being able to offer a solution that can be deployed immediately."

Parvizian says support from big-name investors including Altman will position the company to serve "big tech customers building data centers or hyperscalers." Atomic CEO and founder Jack Abraham is also a co-founder of Exowatt.

Friday, April 26, 2024

"THE WORD PEOPLE" WILL BE HARDER TO REPLACE IN THE FUTURE, WHY?

As coverage of the AI tech tsunami and its potential impact on the world proliferates, it's now become a "will they or won't they?" Bachelorette-style question of whether or not AI will steal people's jobs. So many different people have such differing opinions, from the catastrophically doomy to the more upbeat. The whole debate got another spin yesterday when billionaire PayPal cofounder and tech entrepreneur Peter Thiel spoke up on a popular podcast. AI, Thiel believes, will prove to be really bad for all the "math people" in businesses the world over.

Thiel spoke on the popular education chat podcast Conversations with Tyler, which attracts diverse A-list guests like writer Neal Stephenson and NBA legend Kareem Abdul-Jabbar. The conversation with Thiel ranged across topics from Roman Catholicism to the philosophy of politics, but when asked about the impact of AI on creative jobs like writers, Thiel took a somewhat surprising position. Typically, AI critics worry that the popular text-based chatbots everyone seems to be experimenting with right now are squarely aimed at replacing people in wordy, creative professions.

"My intuition would be it's going to be quite the opposite, where it seems much worse for the math people than the word people," Thiel explained. People have told him that "they think within three to five years, the AI models will be able to solve all the US Math Olympiad problems," which will really "shift things quite a bit." He then dug into the history of math study and its usefulness to the world, noting "if we prioritized math ability, it had this meritocratic but also egalitarian effect on society." But fast-forward to the 21st century and narrow your focus to Silicon Valley, and it's become "way too biased toward the math people," according to Thiel. And Thiel thinks math is doomed: "Why even do math? Why not just chess? That got undermined by the computers in 1997," he argued, before concluding, "Isn't that what's going to happen to math? And isn't that a long overdue rebalancing of our society?"

Arguably, Thiel's assertion on Silicon Valley and math is true, though somewhat simplified: a lot of the technology innovations coming out of Silicon Valley are driven by science, which relies on math at its core. One very math-centric profession is now undergoing an AI-driven revolution. When touting the advances in his next-generation Grok AI system recently, Elon Musk made an effort to point out how much better it was at writing code and calculating math than the earlier version. Last month, Jensen Huang, CEO of leading AI chip-making company Nvidia, predicted the "death of coding," arguing that AI will become so capable of developing code that kids shouldn't need to learn how to code in school. With innovations like Microsoft's integration of its Copilot AI deeply into the coding social network GitHub, where AI is already helping coders craft code, it's easy to see Huang's point. Conversely, as any small-business startup owner knows, innovation--even in a tech company--often requires a very non-mathematical, flying-by-the-seat-of-the-pants human touch.

Boiling all of Thiel's words down to a summary, we get this: AI is very capable of replacing some highly logical, mathematical jobs--like some of the coding, or basic analysis and simulation tools that help technology companies achieve breakthroughs. If AI really is coming for math nerds, as Thiel asserts, then accountants, business analysts and other professions may also be under threat.
But he thinks that for really creative roles, including word-centric creative professions, and, arguably, inventing new ideas, humankind is probably safe for a while. Thiel dodged another question about AI's impact on more manual work by suggesting that a better way to worry about the impact of AI is to ask different questions about it--a trick that Mustafa Suleyman, cofounder of Google's AI research division DeepMind, also recently suggested. Questions like "how much will it increase GDP versus how much will it increase inequality?" Unsettlingly, Thiel added, "Probably it does some of both."

Wednesday, April 24, 2024

META'S AI MODEL AGENTS GET WEIRD ON SOCIAL MEDIA

Facebook parent Meta Platforms unveiled a new set of artificial intelligence systems Thursday that are powering what CEO Mark Zuckerberg calls "the most intelligent AI assistant that you can freely use." But as Zuckerberg's crew of amped-up Meta AI agents started venturing into social media this week to engage with real people, their bizarre exchanges exposed the ongoing limitations of even the best generative AI technology. One joined a Facebook moms' group to talk about its gifted child. Another tried to give away nonexistent items to confused members of a Buy Nothing forum.

Meta, leading AI developers Google and OpenAI, and startups such as Anthropic, Cohere and France's Mistral have been churning out new AI language models and hoping to persuade customers they've got the smartest, handiest or most efficient chatbots. While Meta is saving the most powerful of its AI models, called Llama 3, for later, on Thursday it publicly released two smaller versions of the same Llama 3 system and said it's now baked into the Meta AI assistant feature in Facebook, Instagram and WhatsApp.

AI language models are trained on vast pools of data that help them predict the most plausible next word in a sentence, with newer versions typically smarter and more capable than their predecessors. Meta's newest models were built with 8 billion and 70 billion parameters--the adjustable weights a model learns during training, and a rough measure of its size and capability. A bigger, roughly 400 billion-parameter model is still in training.

"The vast majority of consumers don't candidly know or care too much about the underlying base model, but the way they will experience it is just as a much more useful, fun and versatile AI assistant," said Nick Clegg, Meta's president of global affairs, in an interview. He added that Meta's AI agent is loosening up. Some people found the earlier Llama 2 model--released less than a year ago--to be "a little stiff and sanctimonious sometimes in not responding to what were often perfectly innocuous or innocent prompts and questions," he said.

But in letting down their guard, Meta's AI agents also were spotted this week posing as humans with made-up life experiences. An official Meta AI chatbot inserted itself into a conversation in a private Facebook group for Manhattan moms, claiming that it, too, had a child in the New York City school district. Confronted by group members, it later apologized before the comments disappeared, according to a series of screenshots shown to The Associated Press. "Apologies for the mistake! I'm just a large language model, I don't have experiences or children," the chatbot told the group.

One group member who also happens to study AI said it was clear that the agent didn't know how to differentiate a helpful response from one that would be seen as insensitive, disrespectful or meaningless when generated by AI rather than a human. "An AI assistant that is not reliably helpful and can be actively harmful puts a lot of the burden on the individuals using it," said Aleksandra Korolova, an assistant professor of computer science at Princeton University.

Clegg said Wednesday he wasn't aware of the exchange. Facebook's online help page says the Meta AI agent will join a group conversation if invited, or if someone "asks a question in a post and no one responds within an hour." The group's administrators have the ability to turn it off. In another example shown to the AP on Thursday, the agent caused confusion in a forum for swapping unwanted items near Boston.
Exactly one hour after a Facebook user posted about looking for certain items, an AI agent offered a "gently used" Canon camera and an "almost-new portable air conditioning unit that I never ended up using." Meta said in a written statement Thursday that "this is new technology and it may not always return the response we intend, which is the same for all generative AI systems." The company said it is constantly working to improve the features.

In the year after ChatGPT sparked a frenzy for AI technology that generates human-like writing, images, code and sound, the tech industry and academia introduced some 149 large AI systems trained on massive datasets, more than double the year before, according to a Stanford University survey. They may eventually hit a limit--at least when it comes to data, said Nestor Maslej, a research manager for Stanford's Institute for Human-Centered Artificial Intelligence. "I think it's been clear that if you scale the models on more data, they can become increasingly better," he said. "But at the same time, these systems are already trained on percentages of all the data that has ever existed on the internet."

More data--acquired and ingested at costs only tech giants can afford, and increasingly subject to copyright disputes and lawsuits--will continue to drive improvements. "Yet they still cannot plan well," Maslej said. "They still hallucinate. They're still making mistakes in reasoning." Getting to AI systems that can perform higher-level cognitive tasks and commonsense reasoning--where humans still excel--might require a shift beyond building ever-bigger models.

For the flood of businesses trying to adopt generative AI, which model they choose depends on several factors, including cost. Language models, in particular, have been used to power customer service chatbots, write reports and financial insights and summarize long documents. "You're seeing companies kind of looking at fit, testing each of the different models for what they're trying to do and finding some that are better at some areas rather than others," said Todd Lohr, a leader in technology consulting at KPMG.

Unlike other model developers selling their AI services to other businesses, Meta is largely designing its AI products for consumers--those using its advertising-fueled social networks. Joelle Pineau, Meta's vice president of AI research, said at a London event last week the company's goal over time is to make a Llama-powered Meta AI "the most useful assistant in the world." "In many ways, the models that we have today are going to be child's play compared to the models coming in five years," she said. But she said the "question on the table" is whether researchers have been able to fine-tune its bigger Llama 3 model so that it's safe to use and doesn't, for example, hallucinate or engage in hate speech.

In contrast to leading proprietary systems from Google and OpenAI, Meta has so far advocated for a more open approach, publicly releasing key components of its AI systems for others to use. "It's not just a technical question," Pineau said. "It is a social question. What is the behavior that we want out of these models? How do we shape that? And if we keep on growing our models ever more general and powerful without properly socializing them, we are going to have a big problem on our hands."
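
To make the "predict the next word" idea concrete, here is a minimal sketch of querying the smaller, publicly released 8-billion-parameter Llama 3 model. It assumes the Hugging Face transformers library (with accelerate installed), an accepted Meta license for the gated checkpoint, and the library's standard chat-style pipeline interface; it says nothing about how Meta wires its assistant into Facebook, Instagram or WhatsApp.

```python
# Minimal sketch: generating text with the publicly released 8B Llama 3 instruct
# model via the Hugging Face transformers pipeline. Assumes `pip install
# transformers accelerate torch` and access to the gated checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # publicly released 8B model
    device_map="auto",                            # place weights on GPU if available
)

# Chat-style prompt; the pipeline applies the model's chat template.
messages = [
    {"role": "system", "content": "You are a concise, helpful assistant."},
    {"role": "user", "content": "In one sentence, what does a language model predict?"},
]

output = generator(messages, max_new_tokens=60, do_sample=False)
print(output[0]["generated_text"][-1]["content"])  # the assistant's reply
```

Under the hood, each call simply asks the model for the most plausible continuation of the conversation so far, one token at a time.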

Monday, April 22, 2024

HOW MICROSOFT'S NEW AI MODEL WORKS

The Mona Lisa can now do more than smile, thanks to new artificial intelligence technology from Microsoft. Last week, Microsoft researchers detailed a new AI model they've developed that can take a still image of a face and an audio clip of someone speaking and automatically create a realistic-looking video of that person speaking. The videos--which can be made from photorealistic faces, as well as cartoons or artwork--are complete with compelling lip syncing and natural face and head movements. In one demo video, researchers showed how they animated the Mona Lisa to recite a comedic rap by actor Anne Hathaway.

Outputs from the AI model, called VASA-1, are both entertaining and a bit jarring in their realness. Microsoft said the technology could be used for education or "improving accessibility for individuals with communication challenges," or potentially to create virtual companions for humans. But it's also easy to see how the tool could be abused and used to impersonate real people. It's a concern that goes beyond Microsoft: as more tools to create convincing AI-generated images, videos and audio emerge, experts worry that their misuse could lead to new forms of misinformation. Some also worry the technology could further disrupt creative industries from film to advertising.

For now, Microsoft said it doesn't plan to release the VASA-1 model to the public immediately. The move is similar to how Microsoft partner OpenAI is handling concerns around its AI-generated video tool, Sora: OpenAI teased Sora in February, but has so far only made it available to some professional users and cybersecurity professionals for testing purposes. "We are opposed to any behavior to create misleading or harmful contents of real persons," Microsoft researchers said in a blog post. But, they added, the company has "no plans to release" the product publicly "until we are certain that the technology will be used responsibly and in accordance with proper regulations."

Making faces move

Microsoft's new AI model was trained on numerous videos of people's faces while speaking, and it's designed to recognize natural face and head movements, including "lip motion, (non-lip) expression, eye gaze and blinking, among others," researchers said. The result is a more lifelike video when VASA-1 animates a still photo. For example, in one demo video set to a clip of someone sounding agitated, apparently while playing video games, the face speaking has furrowed brows and pursed lips. The AI tool can also be directed to produce a video where the subject is looking in a certain direction or expressing a specific emotion. When looking closely, there are still signs that the videos are machine-generated, such as infrequent blinking and exaggerated eyebrow movements. But Microsoft said it believes its model "significantly outperforms" other, similar tools and "paves the way for real-time engagements with lifelike avatars that emulate human conversational behaviors."
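
Microsoft has not released VASA-1, any code, or any API for it, so nothing below is real. Purely to make the inputs and controls described above concrete, here is a hypothetical sketch in which every name and field is invented for illustration.

```python
# Hypothetical illustration only: Microsoft has not published VASA-1 or an API.
# This sketch restates the inputs and controls described in the research demo
# (a single still face image, a driving speech clip, optional gaze and emotion
# signals) as a data structure; generate_talking_head() is an invented placeholder.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TalkingHeadRequest:
    face_image_path: str                    # one still photo, cartoon, or artwork
    speech_audio_path: str                  # audio clip that drives the lip sync
    gaze_direction: Optional[str] = None    # e.g. "camera", "left", "right"
    emotion: Optional[str] = None           # e.g. "neutral", "happy", "agitated"
    fps: int = 25                           # output frame rate

def generate_talking_head(request: TalkingHeadRequest) -> str:
    """Placeholder for a VASA-1-style generator: would return a path to a video
    whose face shows synchronized lip motion, blinking, and head movement."""
    raise NotImplementedError("Illustrative only; no public implementation exists.")

# Example request mirroring the Mona Lisa demo described above.
demo = TalkingHeadRequest(
    face_image_path="mona_lisa.jpg",
    speech_audio_path="comedic_rap.wav",
    gaze_direction="camera",
    emotion="happy",
)
```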

Saturday, April 20, 2024

PREDICTIONS 2024: SECURITY

The AI promises of today may become the cybersecurity perils of tomorrow. Discover the emerging opportunities and obstacles Splunk security leaders foresee in 2024:

Talent: AI will alleviate skills gaps while creating new functions, such as prompt engineering.

Data privacy: With AI and the use of large language models introducing new data privacy concerns, how will businesses and regulators respond?

Cyberattacks: As cybercriminals look to leverage AI, expect to see new forms of attacks, such as commercial and economic disinformation campaigns.

Collaboration: Security, IT and engineering functions will work more closely together to survive new attack vectors and more sophisticated threats made possible by AI.

Monday, April 15, 2024

OPENAI'S UPGRADED MODEL IS IMPRESSIVE, BUT FACEBOOK PARENT META IS ANGLING TO STEAL THE SPOTLIGHT

Artificial intelligence company OpenAI is rolling out an upgraded version of its flagship generative AI model, GPT-4 Turbo. The new version, GPT-4 Turbo with Vision, can process images, meaning users can upload photos to the model. For example, one could upload a photo of a chessboard and ask the AI to recommend the next move. Companies with early access to the tool have already demonstrated how it can be used to assist with tasks like coding or to glean insights from visual imagery.

In a series of tweets from the official OpenAI Developers account, OpenAI cited three companies that are using GPT-4 Turbo with Vision. AI startup Cognition Labs recently introduced Devin, an AI chatbot capable of developing code from natural language prompts. For example, a Devin user asked the tool to make a small fix to a webpage. Not only did the coding tool work, but it also opened an internet browser to view the webpage and visually confirm the changes.

OpenAI also shared a new vision-enabled tool from the weight loss and nutrition startup HealthifyMe. The tool, named Healthify Snap, allows users to take a picture of their meal and get AI-driven advice and nutritional details from the company's AI-powered chatbot, Ria. For example, a user took a photo of their chicken and rice bowl and received feedback from Ria that the white rice could raise the user's blood sugar. The user was then encouraged to go for a 15-minute walk and to try brown rice or quinoa next time.

The final example came from tech startup Tldraw, which has developed Make Real, a tool that enables users to draw up a concept for a website and then automatically develop and edit it. For example, a user created a feedback page for a website. The user drew a simple text box meant for customers to leave feedback about a hypothetical product. In seconds, the sketch was converted into a working webpage, complete with a title, an interactive text box, and a "submit" button.

Facebook parent company Meta will soon begin a staggered release of Llama 3, the new version of its flagship open-source large language model, according to a report from The Information. And next week, Meta is expected to release two small versions of Llama 3, designed specifically to handle tasks that don't require high levels of cognition, like translating languages or generating emails. Meta will begin rolling out the next-generation models "within the next month," according to the company, and over the summer it is expected to release the full-size version of Llama 3, which will have multimodal capabilities like GPT-4 Turbo with Vision.

OpenAI is also starting to tease what's next after GPT-4 Turbo with Vision: GPT-5. In an interview with the Financial Times, OpenAI chief operating officer Brad Lightcap said that future versions of the model will have enhanced reasoning capabilities, enabling them to handle more complex tasks.
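
Returning to the chessboard example at the top of this piece: for developers, a call to the vision-enabled model looks roughly like the minimal sketch below. It assumes the OpenAI Python SDK (v1.x), the gpt-4-turbo model alias, an OPENAI_API_KEY set in the environment, and a placeholder image URL.

```python
# Minimal sketch: sending an image to GPT-4 Turbo with Vision via the OpenAI
# Python SDK (v1.x). The image URL below is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-turbo",  # April 2024 alias that includes vision support
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Here is the current position. What is the best next move for White?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/chessboard.jpg"}},
            ],
        }
    ],
    max_tokens=200,
)

print(response.choices[0].message.content)
```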

Friday, April 12, 2024

LOOKING TO FUTURE-PROOF YOUR CAREER IN THE AGE OF AI? A SHOWDOWN BETWEEN KIDS AND MACHINES POINTS THE WAY

With a steady drumbeat of studies and surveys suggesting AI may soon replace a great many human workers, it's easy to feel panicked about how artificial intelligence might impact your business. And it's not just entrepreneurs -- many professionals are worried about how AI might impact their careers at the moment. But before you lose too much sleep over whether a robot might come for your livelihood, I point you to a fun, fascinating, and reassuring recent study out of the University of California, Berkeley.

The one skill where the kids crushed the machines

The research, recently published in the journal Perspectives on Psychological Science, wasn't done by a computer science lab or an engineering department. Instead, it was carried out in the lab of psychologist Alison Gopnik, who is well known for her research and books on child development. Why was this lab getting involved in AI? In a far more scientific version of the game show Are You Smarter Than a 5th Grader, the team pitted kids aged 3 to 7 against several AI models, including GPT-4, to figure out who was the better performer.

The contest consisted of two rounds. In the first, focused on recall and application of existing knowledge, both the bots and the kids were asked to select from a group of objects the one that best matched a particular tool. There were no big surprises here. All the pairings were conventional: A nail goes with a hammer, for example.

The second test was focused on innovation rather than recall. For this task, the kids and bots were presented with a group of everyday objects and were asked which one they could use to complete a task. None of the objects was directly associated with the task (if they were trying to bang a nail, no hammer was available). But one object was similar enough to existing tools in some essential way that it could get the job done. For example, if subjects were asked to draw a circle, they could trace the bottom of a round teapot.

Who performed better?

With vast training libraries and huge computing power behind them, the AI models outperformed the grade schoolers when it came to retrieving correct information about well-known scenarios. But when it came to thinking creatively, the kids crushed the machines. In the teapot example above, for instance, a recent version of ChatGPT figured out how to use the teapot only 8 percent of the time. Four-year-olds got it right 85 percent of the time.

How to future-proof your career

The long-term aim of this research is to figure out how parents teach their kids to think creatively so that maybe, one day, scientists can teach AI to think this way too. But in the meantime, this story is useful for entrepreneurs -- and others -- in more immediate ways. While tools like image generation engines and chatbots perform amazingly well at tasks that involve retrieving and reorganizing existing information, they remain pretty useless when it comes to genuinely innovative ideas. The researchers suggest we may want to update our mental models of this technology accordingly.

"A lot of people like to think that large language models are these intelligent agents like people," the study's first author, Eunice Yiu, told Psyche. "But we think this is not the right framing." Instead, the authors suggest we think of these tools more like a very fancy card catalog or Google search box. They're exceptional information-retrieval machines.
Humans remain uniquely good at understanding the deeper properties of the world around them and using that information to come up with new ideas or unique combinations. Previously, a report from the University of Oxford and comments from Harvard experts both suggested it's this childlike ability to engage with the physical world and dream up new connections (as well as empathy and EQ) that will set humans apart for a long time yet. This new study just underlines this advice. If you're looking to future-proof your business and career, those are the skills you should probably lean into.

Wednesday, April 10, 2024

CHILL OUT: AI WON'T STEAL JOBS, SAYS CONSORTIUM OF AI-BUILDING TECH GIANTS

Scanning through all the technology news headlines focused on artificial intelligence, it's hard to know what to think. Some people may warm to the ideas espoused by an MIT professor who thinks AI will boost the labor market, though at heart, they may have a sneaky suspicion that AI really will steal plenty of people's jobs--just like the International Monetary Fund warned. Skeptics may find a jolt of support when considering recent statements from a new consortium formed by top tech companies and consulting firms to tackle the impact of AI in the workplace.

Microsoft, Google, IBM, Intel, network hardware company Cisco, job-finding website Indeed, plus the global consulting firm Accenture and a few other entities have formed what they call the "AI-Enabled Information and Communication Technology (ICT) Workforce Consortium." IBM's business-speak-heavy press release says the group's plans are all about "exploring AI's impact on ICT job roles, enabling workers to find and access relevant training programs, and connecting businesses to skilled and job-ready workers." The goals of the consortium appear wholesome, since it wants to help "build an inclusive workforce with family-sustaining opportunities."

But underlying these words is a tacit admission that AI really is going to replace some humans in the workplace--soon. This much is made plain by the first phase of the group's plans, which will evaluate how "AI is changing the jobs and skills workers need to be successful," and culminate in a report "with actionable insights for business leaders and workers." Speaking to the website TechCrunch, a spokesperson for the group explained that this phase will look at 56 different information technology job roles (it hasn't yet disclosed which ones) that include "strategic" jobs and roles that offer "promising entry points" for lower-skilled workers.

IBM's press release quotes Cisco's executive vice president and chief "people, policy, and purpose" officer, Francine Katsoudas, who said that as AI speeds up the "pace of change for the global workforce" it also presents "a powerful opportunity for the private sector to help upskill and reskill workers for the future." That may indeed ring true: When a sea change hits an industry on a large scale, there will be plenty of opportunity for third-party companies to make money retraining some of the displaced workforce to give them new skills. Consider the arrival of the word processor in the 1970s and '80s, and all the "powerful opportunity" seized by educational consultancies to retrain typists to work on computers. The workforce became more computer literate, but only at the expense of the typing or "secretarial" pools that once occupied whole floors of big companies.

Also quoted is U.S. Secretary of Commerce Gina Raimondo, who says she's "grateful to the Consortium members for joining in this effort to confront the new workforce needs that are arising in the wake of AI's rapid development." But what exactly is the plan that the consortium group--made up partly of tech companies that are busy building ever-more-clever AIs--has in mind that pleases Raimondo? It's all about an effort to train and "reskill" people on a massive scale. The training programs consortium members have in mind will attempt to "positively impact" more than "95 million individuals around the world over the next 10 years." That, assuming it's backed by billions of dollars of investment from big tech names and government bodies around the world, seems admirable.
But AI critics will question whether reskilling 95 million people is enough, especially given the massive and ongoing layoff rounds that are hitting multiple job sectors at the moment. The other question, of course, is whether millions of people will actually want to "reskill," even though their employment prospects in an AI-dominated world may depend on it. BY KIT EATON @KITEATON

Monday, April 8, 2024

DEEPMIND CO-FOUNDER WARNS OF AI HYPE AND GRIFTING

Is artificial intelligence overhyped? Demis Hassabis, co-founder and CEO of Google's AI research lab DeepMind, says the answer is yes. Hassabis told the Financial Times the science and research around the technology is "phenomenal," but the investor frenzy is bringing the type of attention and potential scams that plagued the cryptocurrency space. Substantial investment into generative AI startups "brings with it a whole attendant bunch of hype and maybe some grifting and some other things that you see in other hyped-up areas," he told FT. "In a way, AI's not hyped enough but in some senses it's too hyped."

Investors have raced to get in on what they perceive to be an AI goldrush, particularly on the back of the launch of OpenAI's ChatGPT in 2022. Venture capital investment in generative AI surged about 270 percent to $29.1 billion in 2023, according to PitchBook. Regulators have begun to scrutinize companies making misleading AI claims. Securities and Exchange Commission Chair Gary Gensler, for example, said at a December conference that companies "shouldn't AI wash." The agency is reportedly examining whether publicly traded companies are incorrectly claiming products use AI even as investors rush to funnel their dollars into publicly traded AI leaders including Nvidia, Microsoft and Google parent company Alphabet. Concerns about grift in the cryptocurrency space, to which Hassabis drew a parallel with AI, were well-founded, as shown through the rise and dramatic collapse of crypto hedge fund and exchange FTX.

As for the actual technology underpinning generative AI, Hassabis said it is well deserving of the excitement. "I think we're only scratching the surface of what I believe is going to be possible over the next decade-plus," he told FT. In terms of its application for businesses, Vanguard economists anticipate it could take time for companies to take full advantage of AI, but that the technology could potentially boost productivity in 80 percent of occupations by the second half of the decade, The New York Times reported.

DeepMind, which is responsible for Google's generative AI model Gemini and other recent projects, aims to achieve artificial general intelligence, a goal shared by ChatGPT-maker OpenAI and its CEO Sam Altman. DeepMind recently announced a new organization focused on AI safety, according to TechCrunch. Hassabis was recently knighted in the UK for "services to AI," he confirmed in a tweet. BY CHLOE AIELLO, @CHLOBO_ILO

Friday, April 5, 2024

IS GENERATIVE AI WORTH THE INVESTMENT? WHAT LEADERS ARE SAYING

Will generative AI destroy humanity or make everyone rich and happy? Business leaders ask a different question: Can generative AI deliver a return on investment? CEOs are spending money to find out the answer. Virtually all of them--97 percent, according to a KPMG survey of 220 business leaders at U.S. companies with at least $1 billion in revenue, released March 22--are investing in GenAI over the next 12 months. Some 43 percent of leaders plan to invest $100 million or more.

Leaders gauge generative AI's ROI in different ways. Roughly half--51 percent--currently measure the technology's ROI through productivity gains, 48 percent track employee satisfaction, and 47 percent monitor revenue the AI chatbots help generate, noted the KPMG survey. To be sure, leaders are guarding against generative AI's business risks. To that end, they are investing in "data security, governance frameworks, and workforce preparedness to enable long-term business value," KPMG wrote.

Business leaders seeking to fast-forward to the outcomes--e.g., which generative AI applications produce a significant ROI--could be frustrated. My forthcoming book, Brain Rush: How to Invest and Compete in the Real World of Generative AI, includes in-depth case studies of such applications. The highest-payoff generative AI applications do the following:

They deliver a quantum value leap--enabling a big uptick in revenue and productivity--in a company's critical business processes.

They attract new customers and keep current customers buying.

They are difficult for rivals to replicate.

Below are two examples of high-payoff applications of generative AI that share many of these attributes.

Bullhorn's president uses AI to better match candidates to jobs

Bullhorn, a 1,520-employee, Boston-based provider of "all the technology needed to place temporary workers" according to PitchBook, uses AI in many ways. Bullhorn's highest-payoff AI application helps the company's customers grow faster and boost productivity. In many cases, "basic generative AI doesn't add value in and of itself, but combined with more sophisticated use cases, there is definitely opportunity to drive holistic value," Bullhorn president Matt Fischer told me in a March 26 interview. "For example, you're not going to charge for one prompt, but once generative AI is integrated into the entire workflow, it becomes very valuable. We are monetizing machine learning to help recruiters match candidates to jobs more effectively," he added.

Bullhorn's AI application analyzes the most successful temporary worker placements and uses the resulting model to help recruiting firms match candidates to jobs more effectively and efficiently. Because Bullhorn built its matching model using a large number of successful placements, the company's AI boosts recruiters' revenue and profitability and is difficult for rivals to replicate. "Our models are outcome-based--successful placements," said Fischer. "We track 54 vectors of more than 4.5 million successful job placements. We are planning to enhance this model with call transcripts, SMSs to candidates and clients, and emails."

The model pays off for Bullhorn's clients. "Recruiters increase their placement rate from our model's recommendations--to 25 percent or 35 percent. We help reduce candidate acquisition costs by increasing the redeployment rate of talent at the end of their contracts from 5 percent to 30 percent. We increase recruiter productivity from submission to placement by 68 percent."
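
Bullhorn has not published how its matching model works, so the following is only a generic sketch of the outcome-based approach Fischer describes: train a classifier on features of past candidate-job pairs labeled by placement success, then rank new candidates by predicted placement probability. Every feature name and number below is invented for illustration.

```python
# Illustrative only: not Bullhorn's system. A toy outcome-based matching model:
# fit a classifier on features of historical candidate-job pairs (label = the
# placement succeeded), then rank new candidates for a job by predicted probability.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Toy historical data: one row per past candidate-job pair.
# Invented features: [skill_overlap, pay_rate_gap, distance, prior_placements, response_speed]
X = rng.random((500, 5))
y = (X[:, 0] * 2.0 - X[:, 1] - 0.3 * X[:, 2] + rng.normal(0, 0.3, 500) > 0.5).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Score three hypothetical candidates for a new job and rank them.
candidates = rng.random((3, 5))
scores = model.predict_proba(candidates)[:, 1]
for rank, idx in enumerate(np.argsort(scores)[::-1], start=1):
    print(f"rank {rank}: candidate {idx}, placement probability {scores[idx]:.2f}")
```

A production system of this kind would obviously draw on far richer signals (the "54 vectors" Fischer mentions) and millions of real placements, which is where the defensibility comes from.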
To be sure, Bullhorn offers other generative AI applications that do not add to the company's revenue because all industry players are offering them. For example, Bullhorn provides generative AI Copilots to help recruiters draft customized communications. These communications help cut recruiters' time and increase their effectiveness. "Our clients recruit candidates, pitch the company on Johnny, and pitch Johnny on the company," Fischer told me. "Our Copilots help recruiters customize the email to the candidate and the opportunity and set the right tone. However, this service has become commoditized. We do not charge for it."

Dynatrace's CEO encounters generative AI's upside and downside

"A couple of preliminary killer apps will emerge for generative AI," according to my February 2024 interview with Rick McConnell, CEO of Dynatrace, a Waltham, Massachusetts-based provider of software observability services. One such killer app will be customer service. "I was trying to fix a billing issue with a cellular provider and the chatbot solved the problem fast," he noted. "The second one went so badly that I will never do business with the company again. I was trying to correlate the contact lenses I received with the prescription. The contact lens provider's chatbot couldn't get me a solution. After three different segments, I never got it resolved."

Be sure your company's killer generative AI app is the kind that wins you customers for life. EXPERT OPINION BY PETER COHAN, FOUNDER, PETER S. COHAN & ASSOCIATES @PETERCOHAN

Wednesday, April 3, 2024

ADOBE'S NEW GENSTUDIO TOOL IS DESIGNED TO GIVE MARKETERS A BOOST BY CREATING CONTENT WITH GENERATIVE AI

Adobe wants to give your marketing team an artificial intelligence upgrade. At Adobe's two-day summit in Las Vegas this week, CEO Shantanu Narayen announced new generative AI-powered tools designed to help companies mass-produce digital marketing content. Narayen said that the tools would empower marketers and creative teams to dramatically speed up the process of producing new content. To demonstrate how, Ann Rich, Adobe's senior director of design, platform, monetization, and GenStudio, pointed to a fictional example from a company that was given early access to the software: Coca-Cola.

Using Adobe GenStudio, a new portal application designed to help enterprises rapidly create new content for marketing campaigns with generative AI, Rich played the part of a Coca-Cola marketer, tasked with using GenStudio to create ads for a campaign. First, Rich showed how the company had uploaded key brand information to the portal, such as logos, fonts, colors, and copy examples. This established a baseline for what kind of content was considered on-brand. She then searched through a library of the company's assets to find a few specific products for Coca-Cola Dreamworld, the limited-edition drink the campaign was advertising in this example. Finally, Rich applied a customized AI model that had been trained on the art for the Dreamworld campaign, selected Gen Z as the target demographic, and added the prompt "highlight the power of Coca-Cola to transport you to dream-like worlds." Instantly, she generated four distinct, on-brand ads, complete with copy.

Beyond just creating content, GenStudio is also useful for analyzing existing content and using the best-performing parts to create even better content. By visiting the portal's analytics tab, Rich viewed which of the Dreamworld ads were generating the most clicks. She went even further by analyzing individual aspects of each piece of content. For example, GenStudio found that images classified as "surreal" had a 0.89 percent click-through rate. Rich selected a particularly high-performing Facebook ad and generated multiple variants of the art in the form of ads for LinkedIn, Instagram, Pinterest, and email. "This is a dream come true," said Rich. "I went from one channel to four channels in seconds."

Adobe GenStudio isn't available to the general public yet, but it is expected to be released in full later this year.
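
Adobe hasn't described what sits behind GenStudio's analytics tab, but the attribute-level reporting Rich demonstrated--for example, that "surreal" imagery earned a 0.89 percent click-through rate--boils down to grouping ad results by attribute and dividing clicks by impressions. Here is a small sketch of that arithmetic on invented data, using pandas; it is not Adobe's implementation.

```python
# Illustrative only, not Adobe's implementation: computing click-through rate
# (clicks / impressions) per image attribute, the kind of per-attribute
# breakdown described above. The numbers below are invented.
import pandas as pd

ads = pd.DataFrame({
    "ad_id":       [1, 2, 3, 4],
    "image_style": ["surreal", "surreal", "product-shot", "product-shot"],
    "impressions": [120_000, 95_000, 110_000, 80_000],
    "clicks":      [1_050, 870, 620, 410],
})

ctr_by_style = (
    ads.groupby("image_style")[["clicks", "impressions"]].sum()
       .assign(ctr_pct=lambda d: 100 * d["clicks"] / d["impressions"])
       .sort_values("ctr_pct", ascending=False)
)
print(ctr_by_style)  # the "surreal" group lands near 0.89 percent on this toy data
```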

Monday, April 1, 2024

WHY CYBER-FRAUD TEAMS ARE THE NEXT BIG THING IN PAYMENTS SECURITY

The growing interconnectedness of digital systems, combined with the alarming ingenuity of financial criminals, has led to a convergence between payment fraud, cybercrime, and anti-money laundering (AML). As financial transactions increasingly occur online and real-time payments have expanded to over seventy countries, cybercriminals exploit these trends by developing sophisticated schemes to target vulnerabilities in digital payment systems. As a result, payment fraud has become more prevalent and more challenging to detect. A profusion of new tools available on the dark web makes it easier than ever for cybercriminals to steal millions through a combination of social engineering, malware, cyberattacks, identity theft, stolen credentials, and mule accounts. These attacks expand beyond traditional payment fraud methods to include cybersecurity breaches and money laundering techniques. For example, a typical attack may include:

The theft of a bank employee's credentials.

Malware installed on the bank's network.

Funds routed from the bank's account to a third bank in another country.

Withdrawals made through multiple transactions.

Millions of dollars stolen.

This is not a new problem. As far back as 2013, the Carbanak crime group launched sophisticated attacks that showcased the merging threat vectors of cyberattack, payment fraud, and money laundering. The organization infiltrated a bank employee's computer via phishing and infected the video monitoring system with malware. The infiltration enabled them to capture all activities on the screens of personnel handling money transfer systems. The criminals successfully manipulated international e-payment systems to move funds to offshore bank accounts and make withdrawals. In a separate attack, the gang hacked into banks in Kazakhstan and stole over US$4 million. They transferred the funds to 250 payment cards that were distributed throughout Europe. The stolen money was then cashed out at ATMs in a dozen countries. By the time the gang was finally caught by Europol in 2018, their thefts had approached US$1 billion.

The Carbanak modus operandi is an excellent example of an advanced persistent threat (APT). These threats are notoriously sophisticated, characterized by their stealthy tactics and long-term presence in a network. Unlike ordinary cyber threats focusing on quick gains, APTs are used by patient fraudsters, often lurking undetected in networks for months or even years. They carefully mine valuable data or set the stage for a large-scale, potentially ruinous attack. They get into financial systems by installing malware on a banking system, using social engineering to secure login credentials, or buying them on the dark web. Insider fraud or spear-phishing attacks can also install network malware. It could be as simple as a bad actor leaving a USB device on a table at a workplace with an executable virus on it. Even though we all know better than to plug in a random USB device, people, being people, will make mistakes and plug them in anyway.

Highly skilled, well-funded criminal organizations or state-sponsored actors often orchestrate this sort of multi-pronged attack. Fraudsters using APTs often have access to significant resources, allowing them to innovate their attack strategies continually. The primary goal of these sophisticated attacks is to penetrate the network without detection, maintain access over a long period, and siphon off sensitive data related to financial transactions. Their approach is leisurely.
Over time, they collect data, redirect funds, and create fake beneficiaries. Once they infiltrate a network, they establish a strong foothold, employ various techniques to maintain their presence, and continually evolve methods to bypass security measures. They don't initiate actions that could alert cybersecurity teams to their presence until the final attack, when it's often too late to detect them or prevent the loss of funds. Removing them can be difficult, if you can find them at all. When the attack is eventually launched, it can include the theft of customer and financial information, the launching of ransom attacks, making fraudulent transactions, and laundering the funds.

Another example of a multi-vector attack occurred at a large bank in Africa. A spear-phishing email inserted malware into the bank's ATM switch. Transactions then bypassed the host and were automatically approved. The crooks forged the bank's credit cards and distributed them to over one hundred people in Tokyo, who then used them to withdraw money from 1,400 ATMs in convenience stores. Social engineering, cyberattacks, and payment fraud vectors converged to steal US$19 million in just three hours. Once the criminals are ready to extract the data or cash out, whether that is after a few days or a couple of years, fraudsters will often employ a diversion tactic, such as a distributed denial-of-service (DDoS) attack, then proceed with the main attack while IT and cybersecurity teams are distracted by the diversionary attack.

Over time, the finance industry has seen the sophistication of attacks continue to increase, and there is no reason to expect that this trend will slow down. Early forms of attack were blunt and brute force, so organizations took the mentality of protecting the perimeter. But as attacks have become sophisticated, this approach isn't sufficient. Today's threats are advanced, persistent, polymorphic, and evade detection. They span all levels of the OSI stack, in particular at the network and application levels, and they result in ever-increasing losses. New forms of old attacks, such as DDoS attacks, are increasingly driven by bots, with AI that mimics humans and evades detection.

Traditionally, AML is about compliance, cybersecurity focuses on preventing IT threats, and fraud programs are for detecting and preventing payment fraud. Within these organizational silos, a card-skimming fraud event would not ordinarily capture the attention of a CISO, while a fraud manager doesn't make decisions about firewalls. These traditional organizational silos within companies make tackling this convergence a challenge. Fraudsters exploit the gaps between information security, fraud, and risk teams. For example, in an e-commerce setting, a fraudster could run a credential-stuffing campaign using leaked data, take over accounts, check for stored payment information or add a stolen credit card, and purchase expensive luxury items. This type of fraud affects both the retailer and its customers. The fraudster transfers stolen funds to mule accounts, which are often used for money laundering. The fraud and risk team is alerted to the situation through customer complaints or monitoring system alerts. Still, by the time the fraud, cybersecurity, and AML teams have come together to compare notes on the attack, the fraudster has already achieved his objectives and absconded with the funds.
Given the prevalence of these converged threat vectors, the boom in digital transactions, and the growth of real-time payments, it should come as no surprise that organizations are starting to leverage the synergies to be had by eliminating organizational silos. The idea of converging cyber intelligence, AML, and fraud prevention activities to eliminate gaps in financial crime risk management has been discussed for years. Still, increasingly, organizations are moving to make this a reality. Leading financial institutions are establishing robust financial crimes centers that bring together cybersecurity, anti-fraud, and AML teams to converge their data and processes for a more holistic view of the threat landscape. This helps financial institutions identify financial crimes across the spectrum and stay agile in their preventive operations and response. Some large banks have already implemented a fraud fusion center to identify and defend against financial crimes and ever-evolving threats. For example, the Bank of Montreal established a fraud fusion center in January 2019, while TD Bank opened its fusion center in October of the same year. But as criminals introduce new, sophisticated techniques, banks are revamping their fusion centers and looking for improved technology to keep up. Gartner anticipates an increase in the number of organizations implementing cyber-fraud teams over the next several years. As the initial step in the convergence program, PwC recommends that financial institutions examine their existing enterprise-wide structure and identify points where streamlining it will give senior management a centralized view of financial crime risk. Clearly documented structure with roles and responsibilities will help detect and eliminate duplicate tasks and will ensure better data visibility across departments. McKinsey & Company suggests that strategic prevention should be key to improving the protection of the bank and its customers when working on convergence. To achieve their goals, financial institutions need to think like the criminals. Cybercriminals are looking for systems’ weak points, so when planning the defense, organizations should trace the flow of crime in order to come up with an optimized internal structure. Access to the right data at the right time is the foundation of efficient convergence programs. Instead of collecting data and tackling crimes in the silos of compliance, fraud, and cybercrime, data fusion provides a single source of data to multiple teams. This enables a complete view of the payment transactions journey and enables faster, more effective responses to threats. Criminals don’t make a distinction between AML, fraud, or cybercrime departments. They act based on whatever gaps in the system they can find. Information fusion is the best weapon against fraudsters. If fusion centers leverage raw payment data in real-time, captured at the network level to avoid data loss, they can derive trends and patterns that let them distinguish legitimate customer transactions from fraudulent ones. Artificial intelligence and machine learning (ML) also support financial institutions in their privacy compliance by helping prevent data breaches. They can cut through the noise by flagging suspicious activity with precision, blocking fraudulent activities, and letting legitimate transactions complete. 
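
No vendor's scoring model is public, but the basic pattern described above--scoring each payment in real time against behavior learned from normal traffic, flagging outliers for the fusion team, and letting ordinary transactions pass--can be sketched with an off-the-shelf anomaly detector. The features, numbers, and thresholds below are invented for illustration and are not any particular product's method.

```python
# Illustrative only: a toy version of real-time transaction scoring. An
# IsolationForest is fit on features of normal payments; incoming transactions
# get an anomaly verdict, outliers are held for review, ordinary traffic passes.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Invented per-transaction features:
# [amount_usd, hour_of_day, new_beneficiary (0/1), cross_border (0/1), txns_last_hour]
normal = np.column_stack([
    rng.lognormal(4.0, 1.0, 5000),   # typical payment amounts
    rng.integers(7, 22, 5000),       # mostly business hours
    rng.binomial(1, 0.05, 5000),     # new beneficiary is rare
    rng.binomial(1, 0.02, 5000),     # cross-border is rare
    rng.poisson(1.0, 5000),          # low velocity per account
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

incoming = np.array([
    [85.0,   14, 0, 0, 1],    # routine card payment
    [9500.0,  3, 1, 1, 12],   # large, off-hours, new beneficiary, burst of activity
])
for features, verdict in zip(incoming, detector.predict(incoming)):
    action = "hold for review" if verdict == -1 else "approve"
    print(f"amount={features[0]:>8.2f}  decision: {action}")
```

In a fusion-center setting, the same scored events would feed fraud, cybersecurity, and AML analysts from a single data source rather than three separate silos.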
Faster payments and open banking require organizations to quickly identify and respond to emerging fraud and cyberattack patterns without creating negative friction for their real customers. At INETCO, we’ve anticipated these needs and designed INETCO BullzAI, a real-time, ML-powered software solution that addresses the converged attack vectors of payment fraud, cyberattacks, and money laundering. It provides the real-time data that fusion teams need and gives them the power to prevent cyber- and fraud attacks while reducing false positives. Get in touch to find out how we can help you implement your fusion strategy. Christene Best, VP, Marketing & Channel Development, INETCO.