Friday, May 3, 2024

MODERNA'S CEO SAYS STAFF SHOULD CONSULT CHATGPT 20 TIMES A DAY

OpenAI is seemingly everywhere now--its ChatGPT system is in the vanguard of bringing AI to the masses. It's also laced through the workings of pharma giant Moderna, thanks to a deal weaving OpenAI tech deeply into the fabric of the company. So much so that Moderna CEO Stéphane Bancel says his staff should be making the most of the investment in AI and using it a lot. More than "a lot," in fact. Bancel's suggestion: more than 20 times a day. Assuming a typical eight-hour workday, that means Bancel expects his staff to ask OpenAI's chatbots questions at least two, maybe three times every hour--an AI interaction rate that, by some back-of-the-napkin estimates, could easily eat up 10 to 15 minutes of each working hour. Maybe more. So what will all that employee-AI interaction do for Moderna? The Wall Street Journal quotes Bancel saying AI is going to be used to "reinvent all of Moderna's business processes, in science, in legal, in manufacturing--everywhere." So far, Moderna's staff have built many different custom GPTs using OpenAI's tech, the WSJ says. These are specially trained versions of the chatbot, separate from the main ask-it-anything, open-access ChatGPT system that most people have tinkered with online. Of the 750 or so custom Moderna GPTs, some are being used to help decide drug doses for clinical trials, and have presumably been trained on proprietary data from previous Moderna trials, while others have more business-specific uses, like helping Moderna deal with government regulators. An official statement from Moderna underlines Bancel's enthusiasm for the technology. He explained that AI is as impactful as the "introduction of the personal computer in the 1980s," which "changed the way we work and live." The goal for such widespread company adoption of AI tech is to support Moderna's "ambitious plan to launch multiple products over the next few years." This gives us a deeper clue as to what Bancel thinks AI can offer his company: it's a multiplier, enhancing the productivity and efficiency of his staff in any division of the company. The company statement also quotes OpenAI CEO Sam Altman: Moderna's simply "leading the way by empowering all of its employees to use AI to tackle complex problems," he said. The WSJ says Altman also explained that right now, ChatGPT may not be used to advance Moderna's scientific progress too much, though it will be able to tackle "more and more" scientific tasks "eventually." Right now, the best way for Moderna to advance its scientific objectives is "to enhance the productivity of people and accelerate what they can do," Altman said. What can your company do with this AI-embracing approach? Until now, it's been easy to see that content generated by AIs can boost simple business processes like building presentations or preparing marketing material, but Moderna's example shows that by embracing new technologies like custom-trained GPTs, AI could actually be used in almost every part of your business. It's just a question of working out where. If you're hesitant to embrace AI, or your staff are stressed out by the tech's implications or worried about simply using it, then maybe it's time to arrange some AI training for you, your management team, and your frontline staff.

Wednesday, May 1, 2024

3 KEY BUSINESS AREAS WHERE CYBERSECURITY FALLS SHORT

Putting your finger on exactly what drives business success is impossible, because success is driven by a combination of factors. However, there is one thing that can compromise business resilience and your trajectory--cyber risk. Here are three key areas that drive business success and are at risk without an approach to network security designed for modern networks and operating models.

1. Innovation

According to the most recent numbers available from the National Science Board, the U.S. leads the world in research and development, with U.S. businesses spending over $608 billion on innovation in 2021--a 12 percent increase from the prior year. And these investments pay off. The top 50 companies in BCG's 2023 Most Innovative Companies report outperform on shareholder return by 3.3 percent per year. These days, however, work environments are so diverse and dispersed that R&D teams are working across different clouds or even on-premises. Collaborating via any configuration in the modern network makes it particularly challenging to ensure that only the people who are supposed to be working on an R&D project are in fact on the project. This lack of visibility can put your intellectual property and trade secrets at risk.

2. Third-party ecosystem

The average organization does business with 11 third parties, and each of those third parties has a pathway into your organization--whether through technology integrations, supply chain processes, or access into your environment as part of a service they deliver to the business. To mitigate exposure to risk from your third-party ecosystem, verifying that your suppliers have achieved SOC 2 compliance, have implemented a supply chain integrity process, and have a notification process in case their supply chains are compromised is a great place to start. Even so, 98 percent of companies do business with at least one third-party partner who has been breached. From a compliance, security, legal, and procurement perspective, there are many reasons for concern.

3. Customer relationships

The movement to meet customers where they are emerged nearly 15 years ago, and now, depending on your industry, customers may expect to connect with you online, virtually, and through social media channels, in addition to in person, by phone, and by email. Digital strategies are mostly cloud-based, which means you are working with your cloud service providers and technology vendors to enable these services. Still, you are responsible for protecting the data that flows through these channels and wherever it is stored, which can be overwhelming, particularly in today's multi-cloud environments.

Think about cybersecurity differently

The traditional approach to network security has been to focus on securing the network as well as possible and then detecting particular, known attacks. But the exposures described above don't arrive as discrete, known attacks. It's how we collaborate in R&D or interoperate with partners and customers that can create opportunities for compromise. Beyond the known threats to every network, some activities should never occur, and the ability to detect behaviors that are known to be operationally out of bounds is incredibly powerful. Unfortunately, most organizations cannot detect that activity. When users, applications, data, and devices are spread across your multi-cloud and on-premises environment, how do you know what you've got, what it's doing, and what's happening to it?
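To make "operationally out of bounds" concrete, here is a minimal, hypothetical sketch of behavioral policy checking: flow records gathered from across a hybrid environment are compared against a segmentation policy, and anything not explicitly expected raises an alert. The groups, ports, and policy below are invented for illustration and are not any particular vendor's product.

```python
# Illustrative only: a toy policy engine that flags network flows
# violating a zero-trust segmentation policy. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Flow:
    src_group: str   # e.g., "r&d", "corp", "third-party"
    dst_group: str
    dst_port: int

# Segmentation policy: which groups may talk to which, and on what ports.
# Anything not explicitly allowed is treated as out-of-bounds behavior.
ALLOWED = {
    ("r&d", "r&d"): {22, 443},       # R&D collaborates internally
    ("corp", "r&d"): set(),          # corp users never touch R&D systems
    ("third-party", "corp"): {443},  # partners reach only the API gateway
}

def check(flow: Flow) -> str | None:
    allowed_ports = ALLOWED.get((flow.src_group, flow.dst_group), set())
    if flow.dst_port not in allowed_ports:
        return (f"ALERT: {flow.src_group} -> {flow.dst_group} "
                f"on port {flow.dst_port} is outside expected behavior")
    return None

for f in [Flow("corp", "r&d", 445), Flow("r&d", "r&d", 443)]:
    if (alert := check(f)):
        print(alert)  # only the corp -> R&D flow trips the policy
```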
You need comprehensive visibility of all the participants across your environment and the ability to apply policies to enforce behavior that is normal or expected and alert you to activity that is not compliant. For example, when it comes to innovation, there's always the risk of IP being stolen and exfiltrated. R&D teams need to be segmented off from the rest of the organization using zero trust best practices of both identity-based access control and network segmentation to keep unauthorized users from accessing what's being worked on. These same best practices are also essential to have in place to mitigate risk from your third-party ecosystem or your customer-facing touchpoints. And since cloud misconfiguration issues are a major cause of data security breaches, it's also important to validate that your cloud infrastructure is configured and running properly. Rethinking network security to focus on comprehensive real-time observability of the activities of the users, applications, data, and devices across the entire multi-cloud and hybrid environment lets us see when things go awry. We can detect signs of abuse, misuse, or compromise to build resilience and continue on a trajectory to business success. EXPERT OPINION BY MARTIN ROESCH, CEO, NETOGRAPHY @MROESCH

Monday, April 29, 2024

WHY SAM ALTMAN IS BETTING ON SOLAR TO HELP POWER AI

AI needs more energy and Sam Altman is investing. The OpenAI CEO joined big-name VC firms Andreessen Horowitz and Atomic in a $20 million seed round of funding for solar energy company Exowatt. The Miami-based startup is developing a modular energy platform to power data centers at a time when AI is expected to substantially drive up the power needs of global data centers. AI advocates are on the hunt for cheap and rapidly scalable energy systems as the energy needs of the technology explode. Exowatt's technology is a three-in-one, modular energy system roughly the size of a 40-foot shipping container. It uses what Exowatt CEO and co-founder Hannan Parvizian calls "a specially developed lens" to collect solar energy in the form of heat and store it in a heat battery. The stored heat is then run through an engine to convert it to electricity. Exowatt anticipates its solution could cut the cost of electricity down to $0.01 per kilowatt-hour once it hits scale. "What I think is unique about Exowatt is that we can provide a power solution that's dispatchable--that means you can have access to it throughout the day without any intermittencies--it's modular, you can scale it from small projects to large, it's built in the U.S. of course, and, most importantly, it's available today," Parvizian says. The amount of electricity that data centers around the world need could jump 50 percent by 2027 as a result of AI, according to one estimate from Vrije Universiteit Amsterdam's School of Business and Economics. Big tech companies and investors have begun eyeing nuclear power to meet the need. Altman, for example, is also invested in two nuclear power companies, Helion and Oklo, according to the Wall Street Journal. Exowatt offers another solution. "I do think we will have nuclear and other forms of energy on the grid that will also help support data centers, but if you think about it in practical terms, none of those technologies will be able to be deployed in the next year or even the next five years," Parvizian says. "I think Exowatt has a unique advantage here in being able to offer a solution that can be deployed immediately." Parvizian says support from big-name investors including Altman will position the company to serve "big tech customers building data centers or hyperscalers." Atomic CEO and founder Jack Abraham is also a co-founder of Exowatt.

Friday, April 26, 2024

"THE WORD PEOPLE" WILL BE HARDER TO REPLACE IN THE FUTURE, WHY?

As coverage of the AI tech tsunami and its potential impact on the world proliferates, it's become a "Will they or won't they?" Bachelorette-style question of whether AI will steal people's jobs. So many different people have such differing opinions, from the catastrophically doomy to the more upbeat. The whole debate got another spin yesterday when billionaire PayPal cofounder and tech entrepreneur Peter Thiel spoke up on a popular podcast. AI, Thiel believes, will prove to be really bad for all the "math people" in businesses the world over. Thiel spoke on Conversations with Tyler, a popular podcast that attracts diverse A-list guests like writer Neal Stephenson and NBA legend Kareem Abdul-Jabbar. The conversation with Thiel ranged across topics from Roman Catholicism to the philosophy of politics, but when asked about the impact of AI on creative jobs like writing, Thiel took a somewhat surprising position. Typically, AI critics worry that the popular text-based chatbots everyone seems to be experimenting with right now are squarely aimed at replacing people in wordy, creative professions. "My intuition would be it's going to be quite the opposite, where it seems much worse for the math people than the word people," Thiel explained. People have told him that "they think within three to five years, the AI models will be able to solve all the US Math Olympiad problems," which will really "shift things quite a bit." He then dug into the history of math study and its usefulness to the world, noting "if we prioritized math ability, it had this meritocratic but also egalitarian effect on society." But fast-forward to the 21st century and narrow your focus to Silicon Valley, and it's become "way too biased toward the math people," according to Thiel. And Thiel thinks math is doomed: "Why even do math? Why not just chess? That got undermined by the computers in 1997," he argued, before concluding, "Isn't that what's going to happen to math? And isn't that a long overdue rebalancing of our society?" Arguably, Thiel's assertion about Silicon Valley and math is true, though somewhat simplified: a lot of the technology innovations coming out of Silicon Valley are driven by science, which relies on math at its core. And one very math-centric profession is already undergoing an AI-driven revolution. When touting the advances in his next-generation Grok AI system recently, Elon Musk made an effort to point out how much better it was at writing code, and calculating math, than the earlier version. Last month, Jensen Huang, CEO of leading AI chip maker Nvidia, predicted the "death of coding," arguing that AI is becoming so capable of developing code that kids shouldn't need to learn how to code in school. With innovations like Microsoft's integration of its Copilot AI deeply into the coding social network GitHub, where AI is already helping coders craft code, it's easy to see Huang's point. Conversely, as any small-business startup owner knows, innovation--even in a tech company--often requires a very non-mathematical, flying-by-the-seat-of-the-pants human touch. Boiling all of Thiel's words down to a summary, we get this: AI is very capable of replacing some highly logical, mathematical jobs--like some of the coding, or the basic analysis and simulation work that helps technology companies achieve breakthroughs. If AI really is coming for the math nerds, as Thiel asserts, then accountants, business analysts, and other number-heavy professions may also be under threat.
But he thinks that for really creative roles, including word-centric creative professions and, arguably, inventing new ideas, humankind is probably safe for a while. Thiel dodged another question about AI's impact on more manual work by suggesting that a better way to worry about AI is to ask different questions about it--a trick that Mustafa Suleyman, cofounder of Google's AI research division DeepMind, also recently suggested. Questions like "how much will it increase GDP versus how much will it increase inequality?" Unsettlingly, Thiel added, "Probably it does some of both."

Wednesday, April 24, 2024

META'S AI AGENTS GET WEIRD ON SOCIAL MEDIA

Facebook parent Meta Platforms unveiled a new set of artificial intelligence systems Thursday that are powering what CEO Mark Zuckerberg calls "the most intelligent AI assistant that you can freely use." But as Zuckerberg's crew of amped-up Meta AI agents started venturing into social media this week to engage with real people, their bizarre exchanges exposed the ongoing limitations of even the best generative AI technology. One joined a Facebook moms' group to talk about its gifted child. Another tried to give away nonexistent items to confused members of a Buy Nothing forum. Meta and other leading AI developers, including Google, OpenAI, and startups such as Anthropic, Cohere and France's Mistral, have been churning out new AI language models, each hoping to persuade customers it has the smartest, handiest or most efficient chatbots. While Meta is saving the most powerful of its AI models, called Llama 3, for later, on Thursday it publicly released two smaller versions of the same Llama 3 system and said it's now baked into the Meta AI assistant feature in Facebook, Instagram and WhatsApp. AI language models are trained on vast pools of data that help them predict the most plausible next word in a sentence, with newer versions typically smarter and more capable than their predecessors. Meta's newest models were built with 8 billion and 70 billion parameters--a measure of a model's size and complexity. A bigger, roughly 400 billion-parameter model is still in training. "The vast majority of consumers don't candidly know or care too much about the underlying base model, but the way they will experience it is just as a much more useful, fun and versatile AI assistant," said Nick Clegg, Meta's president of global affairs, in an interview. He added that Meta's AI agent is loosening up. Some people found the earlier Llama 2 model--released less than a year ago--to be "a little stiff and sanctimonious sometimes in not responding to what were often perfectly innocuous or innocent prompts and questions," he said. But in letting down their guard, Meta's AI agents were also spotted this week posing as humans with made-up life experiences. An official Meta AI chatbot inserted itself into a conversation in a private Facebook group for Manhattan moms, claiming that it, too, had a child in the New York City school district. Confronted by group members, it later apologized before the comments disappeared, according to a series of screenshots shown to The Associated Press. "Apologies for the mistake! I'm just a large language model, I don't have experiences or children," the chatbot told the group. One group member who also happens to study AI said it was clear that the agent didn't know how to differentiate a helpful response from one that would be seen as insensitive, disrespectful or meaningless when generated by AI rather than a human. "An AI assistant that is not reliably helpful and can be actively harmful puts a lot of the burden on the individuals using it," said Aleksandra Korolova, an assistant professor of computer science at Princeton University. Clegg said Wednesday he wasn't aware of the exchange. Facebook's online help page says the Meta AI agent will join a group conversation if invited, or if someone "asks a question in a post and no one responds within an hour." The group's administrators have the ability to turn it off. In another example shown to the AP on Thursday, the agent caused confusion in a forum for swapping unwanted items near Boston.
Exactly one hour after a Facebook user posted about looking for certain items, an AI agent offered a "gently used" Canon camera and an "almost-new portable air conditioning unit that I never ended up using." Meta said in a written statement Thursday that "this is new technology and it may not always return the response we intend, which is the same for all generative AI systems." The company said it is constantly working to improve the features. In the year after ChatGPT sparked a frenzy for AI technology that generates human-like writing, images, code and sound, the tech industry and academia introduced some 149 large AI systems trained on massive datasets, more than double the year before, according to a Stanford University survey. They may eventually hit a limit--at least when it comes to data, said Nestor Maslej, a research manager for Stanford's Institute for Human-Centered Artificial Intelligence. "I think it's been clear that if you scale the models on more data, they can become increasingly better," he said. "But at the same time, these systems are already trained on percentages of all the data that has ever existed on the internet." More data--acquired and ingested at costs only tech giants can afford, and increasingly subject to copyright disputes and lawsuits--will continue to drive improvements. "Yet they still cannot plan well," Maslej said. "They still hallucinate. They're still making mistakes in reasoning." Getting to AI systems that can perform higher-level cognitive tasks and commonsense reasoning--where humans still excel--might require a shift beyond building ever-bigger models. For the flood of businesses trying to adopt generative AI, which model they choose depends on several factors, including cost. Language models, in particular, have been used to power customer service chatbots, write reports and financial insights and summarize long documents. "You're seeing companies kind of looking at fit, testing each of the different models for what they're trying to do and finding some that are better at some areas rather than others," said Todd Lohr, a leader in technology consulting at KPMG. Unlike other model developers selling their AI services to other businesses, Meta is largely designing its AI products for consumers--those using its advertising-fueled social networks. Joelle Pineau, Meta's vice president of AI research, said at a London event last week the company's goal over time is to make a Llama-powered Meta AI "the most useful assistant in the world." "In many ways, the models that we have today are going to be child's play compared to the models coming in five years," she said. But she said the "question on the table" is whether researchers have been able to fine-tune its bigger Llama 3 model so that it's safe to use and doesn't, for example, hallucinate or engage in hate speech. In contrast to leading proprietary systems from Google and OpenAI, Meta has so far advocated for a more open approach, publicly releasing key components of its AI systems for others to use. "It's not just a technical question," Pineau said. "It is a social question. What is the behavior that we want out of these models? How do we shape that? And if we keep on growing our models ever more general and powerful without properly socializing them, we are going to have a big problem on our hands."

Monday, April 22, 2024

HOW MICROSOFT'S NEW AI MODEL WORKS

The Mona Lisa can now do more than smile, thanks to new artificial intelligence technology from Microsoft. Last week, Microsoft researchers detailed a new AI model they've developed that can take a still image of a face and an audio clip of someone speaking and automatically create a realistic-looking video of that person speaking. The videos--which can be made from photorealistic faces, as well as cartoons or artwork--are complete with compelling lip syncing and natural face and head movements. In one demo video, researchers showed how they animated the Mona Lisa to recite a comedic rap by actor Anne Hathaway. Outputs from the AI model, called VASA-1, are both entertaining and a bit jarring in their realness. Microsoft said the technology could be used for education or "improving accessibility for individuals with communication challenges," or potentially to create virtual companions for humans. But it's also easy to see how the tool could be abused and used to impersonate real people. It's a concern that goes beyond Microsoft: As more tools to create convincing AI-generated images, videos, and audio emerge, experts worry that their misuse could lead to new forms of misinformation. Some also worry the technology could further disrupt creative industries, from film to advertising. For now, Microsoft said it doesn't plan to release the VASA-1 model to the public immediately. The move is similar to how Microsoft partner OpenAI is handling concerns around its AI-generated video tool, Sora: OpenAI teased Sora in February, but has so far only made it available to some professional users and cybersecurity professionals for testing purposes. "We are opposed to any behavior to create misleading or harmful contents of real persons," Microsoft researchers said in a blog post. But, they added, the company has "no plans to release" the product publicly "until we are certain that the technology will be used responsibly and in accordance with proper regulations."

Making faces move

Microsoft's new AI model was trained on numerous videos of people's faces while speaking, and it's designed to recognize natural face and head movements, including "lip motion, (non-lip) expression, eye gaze and blinking, among others," researchers said. The result is a more lifelike video when VASA-1 animates a still photo. For example, in one demo video set to a clip of someone sounding agitated, apparently while playing video games, the face speaking has furrowed brows and pursed lips. The AI tool can also be directed to produce a video where the subject is looking in a certain direction or expressing a specific emotion. When looking closely, there are still signs that the videos are machine-generated, such as infrequent blinking and exaggerated eyebrow movements. But Microsoft said it believes its model "significantly outperforms" other, similar tools and "paves the way for real-time engagements with lifelike avatars that emulate human conversational behaviors."

Saturday, April 20, 2024

PREDICTIONS 2024: SECURITY

The AI promises of today may become the cybersecurity perils of tomorrow. Discover the emerging opportunities and obstacles Splunk security leaders foresee in 2024:

Talent: AI will alleviate skills gaps while creating new functions, such as prompt engineering.

Data privacy: With AI and the use of large language models introducing new data privacy concerns, how will businesses and regulators respond?

Cyberattacks: As cybercriminals look to leverage AI, expect to see new forms of attacks, such as commercial and economic disinformation campaigns.

Collaboration: Security, IT and engineering functions will work more closely together to survive new attack vectors and more sophisticated threats made possible by AI.

Monday, April 15, 2024

OPENAI'S UPGRADED MODEL IS IMPRESSIVE, BUT FACEBOOK PARENT META IS ANGLING TO STEAL THE SPOTLIGHT

Artificial intelligence company OpenAI is rolling out an upgraded version of its flagship generative AI model, GPT-4 Turbo. The new version, GPT-4 Turbo with Vision, can process images, meaning users can upload photos to the model (a minimal sketch of the API call appears at the end of this story). For example, one could upload a photo of a chessboard and ask the AI to recommend the next move. Companies with early access to the tool have already demonstrated how it can be used to assist with tasks like coding or to glean insights from visual imagery. In a series of tweets from the official OpenAI Developers account, OpenAI cited three companies that are using GPT-4 Turbo with Vision. AI startup Cognition Labs recently introduced Devin, an AI coding agent capable of developing code from natural-language prompts. For example, a Devin user asked the tool to make a small fix to a webpage. Not only did the coding tool make the fix, but it also opened an internet browser to view the webpage and visually confirm the changes. OpenAI also shared a new vision-enabled tool from the weight loss and nutrition startup HealthifyMe. The tool, named Healthify Snap, allows users to take a picture of their meal and get AI-driven advice and nutritional details from the company's AI-powered chatbot, Ria. For example, a user took a photo of their chicken and rice bowl and received feedback from Ria that the white rice could raise the user's blood sugar. The user was then encouraged to go for a 15-minute walk and to try brown rice or quinoa next time. The final example came from tech startup Tldraw, which has developed Make Real, a tool that enables users to draw up a concept for a website and then automatically develop and edit it. For example, a user created a feedback page for a website. The user drew a simple text box meant for customers to leave feedback about a hypothetical product. In seconds, the sketch was converted into a working webpage, complete with a title, an interactive text box, and a "submit" button. Facebook parent company Meta will soon begin a staggered release of Llama 3, the new version of its flagship open-source large language model, according to a report from The Information. Next week, Meta is expected to release two small versions of Llama 3, designed specifically to handle tasks that don't require high levels of cognition, like translating languages or generating emails. Meta will begin rolling out the next-generation models "within the next month," according to the company, and over the summer it is expected to release the full-size version of Llama 3, which will have multimodal capabilities like GPT-4 Turbo with Vision. OpenAI is also starting to tease what's next after GPT-4 Turbo with Vision: GPT-5. In an interview with the Financial Times, OpenAI chief operating officer Brad Lightcap said that future versions of the model will have enhanced reasoning capabilities, enabling them to handle more complex tasks.
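For developers curious what "vision-enabled" looks like in practice, here is a minimal sketch of sending an image to the model through OpenAI's Python SDK. The prompt and image URL are invented for illustration, and the exact model identifier may differ by account and API version.

```python
# Minimal example: ask a vision-capable GPT-4 Turbo model about an image.
# The image URL and prompt are placeholders for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-turbo",  # vision-capable model name may vary by account
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Recommend the next move for white."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/chessboard.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```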

Friday, April 12, 2024

LOOKING TO FUTURE-PROOF YOUR CAREER IN THE AGE OF AI? A SHOWDOWN BETWEEN KIDS AND MACHINES POINTS THE WAY

With a steady drumbeat of studies and surveys suggesting AI may soon replace a great many human workers, it's easy to feel panicked about how artificial intelligence might impact your business. And it's not just entrepreneurs -- many professionals are worried about how AI might impact their careers at the moment. But before you lose too much sleep over whether a robot might come for your livelihood, I point you to a fun, fascinating, and reassuring recent study out of the University of California, Berkeley.

The one skill where the kids crushed the machines

The research, recently published in the journal Perspectives on Psychological Science, wasn't done by a computer science lab or an engineering department. Instead, it was carried out in the lab of psychologist Alison Gopnik, who is well known for her research and books on child development. Why was this lab getting involved in AI? In a setup resembling a far more scientific version of the game show Are You Smarter Than a 5th Grader?, the team pitted kids aged 3 to 7 against several AI models, including GPT-4, to figure out who was the better performer. The contest consisted of two rounds. In the first, focused on recall and application of existing knowledge, both the bots and the kids were asked to select from a group of objects the one that best matched a particular tool. There were no big surprises here. All the pairings were conventional: a nail goes with a hammer, for example. The second test was focused on innovation rather than recall. For this task, the kids and bots were presented with a group of everyday objects and were asked which one they could use to complete a task. None of the objects was directly associated with the task (if they were trying to bang a nail, no hammer was available). But one object was similar enough to existing tools in some essential way that it could get the job done. For example, if subjects were asked to draw a circle, they could trace the bottom of a round teapot.

Who performed better?

With vast training libraries and huge computing power behind them, the AI models outperformed the grade schoolers when it came to retrieving correct information about well-known scenarios. But when it came to thinking creatively, the kids crushed the machines. In the teapot example above, for instance, a recent version of ChatGPT figured out it could use the teapot only 8 percent of the time. Four-year-olds got it right 85 percent of the time.

How to future-proof your career

The long-term aim of this research is to figure out how parents teach their kids to think creatively so that maybe, one day, scientists can teach AI to think this way too. But in the meantime, this story is useful for entrepreneurs -- and others -- in more immediate ways. While tools like image generation engines and chatbots perform amazingly well at tasks that involve retrieving and reorganizing existing information, they remain pretty useless when it comes to genuinely innovative ideas. The researchers suggest we may want to update our mental models of this technology accordingly. "A lot of people like to think that large language models are these intelligent agents like people," the study's first author, Eunice Yiu, told Psyche. "But we think this is not the right framing." Instead, the authors suggest we think of these tools more like a very fancy card catalog or Google search box. They're exceptional information-retrieval machines.
Humans remain uniquely good at understanding the deeper properties of the world around them and using that information to come up with new ideas or unique combinations. Previously, a report from the University of Oxford and comments from Harvard experts both suggested that it's this childlike ability to engage with the physical world and dream up new connections (as well as empathy and EQ) that will set humans apart for a long time yet. This new study underscores that advice. If you're looking to future-proof your business and career, those are the skills you should probably lean into.

Wednesday, April 10, 2024

CHILL OUT: AI WON'T STEAL JOBS, SAYS CONSORTIUM OF AI-BUILDING TECH GIANTS

Scanning through all the technology news headlines focused on artificial intelligence, it's hard to know what to think. Some people may warm to the ideas espoused by an MIT professor who thinks AI will boost the labor market, though at heart, they may have a sneaky suspicion that AI really will steal plenty of people's jobs--just like the International Monetary Fund warned. Skeptics may find a jolt of support when considering recent statements from a new consortium formed by top tech companies and consulting firms to tackle the impact of AI in the workplace. Microsoft, Google, IBM, Intel, network hardware company Cisco, job-finding website Indeed, plus the global consulting firm Accenture and a few other entities have formed what they call the "AI-Enabled Information and Communication Technology (ICT) Workforce Consortium." IBM's business-speak-heavy press release says the group's plans are all about "exploring AI's impact on ICT job roles, enabling workers to find and access relevant training programs, and connecting businesses to skilled and job-ready workers." The goals of the consortium appear wholesome, since it wants to help "build an inclusive workforce with family-sustaining opportunities." But underlying these words is a tacit admission that AI really is going to replace some humans in the workplace--soon. This much is made plain by the first phase of the group's plans, which will evaluate how "AI is changing the jobs and skills workers need to be successful" and culminate in a report "with actionable insights for business leaders and workers." Speaking to the website TechCrunch, a spokesperson for the group explained that this phase will look at 56 different information technology job roles (it hasn't yet disclosed which ones), including "strategic" jobs and roles that offer "promising entry points" for lower-skilled workers. IBM's press release quotes Cisco's executive vice president and chief "people, policy, and purpose" officer, Francine Katsoudas, who said that as AI speeds up the "pace of change for the global workforce," it also presents "a powerful opportunity for the private sector to help upskill and reskill workers for the future." That may indeed ring true: When a sea change hits an industry on a large scale, there will be plenty of opportunity for third-party companies to make money retraining some of the displaced workforce to give them new skills. Consider the arrival of the word processor in the 1970s and '80s, and all the "powerful opportunity" seized by educational consultancies to retrain typists to work on computers. The workforce became more computer literate, but only at the expense of the typing or "secretarial" pools that once occupied whole floors of big companies. Also quoted is U.S. Secretary of Commerce Gina Raimondo, who says she's "grateful to the Consortium members for joining in this effort to confront the new workforce needs that are arising in the wake of AI's rapid development." But what exactly is the plan that the consortium--made up partly of tech companies that are busy building ever-more-clever AIs--has in mind that pleases Raimondo? It's all about an effort to train and "reskill" people on a massive scale. The training programs consortium members have in mind will attempt to "positively impact" more than "95 million individuals around the world over the next 10 years." That, assuming it's backed by billions of dollars of investment from big tech names and government bodies around the world, seems admirable.
But AI critics will question whether reskilling 95 million people is enough, especially given the massive and ongoing layoff rounds hitting multiple job sectors at the moment. The other question, of course, is whether millions of people will actually want to "reskill," even though their employment prospects in an AI-dominated world may depend on it. BY KIT EATON @KITEATON

Monday, April 8, 2024

DEEPMIND CO-FOUNDER WARNS OF AI HYPE AND GRIFTING

Is artificial intelligence overhyped? Demis Hassabis, co-founder and CEO of Google's AI research lab DeepMind, says the answer is yes. Hassabis told the Financial Times the science and research around the technology is "phenomenal," but the investor frenzy is bringing the type of attention and potential scams that plagued the cryptocurrency space. Substantial investment into generative AI startups "brings with it a whole attendant bunch of hype and maybe some grifting and some other things that you see in other hyped-up areas," he told the FT. "In a way, AI's not hyped enough but in some senses it's too hyped." Investors have raced to get in on what they perceive to be an AI gold rush, particularly on the back of the launch of OpenAI's ChatGPT in 2022. Venture capital investment in generative AI surged about 270 percent to $29.1 billion in 2023, according to PitchBook. Regulators have begun to scrutinize companies making misleading AI claims. Securities and Exchange Commission Chair Gary Gensler, for example, said at a December conference that companies "shouldn't AI wash." The agency is reportedly examining whether publicly traded companies are incorrectly claiming that products use AI, even as investors rush to funnel their dollars into publicly traded AI leaders including Nvidia, Microsoft, and Google parent company Alphabet. Concerns about grift in the cryptocurrency space, to which Hassabis drew a parallel, were well-founded, as shown by the rise and dramatic collapse of crypto hedge fund and exchange FTX. As for the actual technology underpinning generative AI, Hassabis said it is well deserving of the excitement. "I think we're only scratching the surface of what I believe is going to be possible over the next decade-plus," he told the FT. In terms of its application for businesses, Vanguard economists anticipate it could take time for companies to take full advantage of AI, but that the technology could potentially boost productivity in 80 percent of occupations by the second half of the decade, The New York Times reported. DeepMind, which is responsible for Google's generative AI model Gemini and other recent projects, aims to achieve artificial general intelligence, a goal shared by ChatGPT-maker OpenAI and its CEO Sam Altman. DeepMind recently announced a new organization focused on AI safety, according to TechCrunch. Hassabis was recently knighted in the UK for "services to AI," he confirmed in a tweet. BY CHLOE AIELLO, @CHLOBO_ILO

Friday, April 5, 2024

IS GENERATIVE AI WORTH THE INVESTMENT? WHAT LEADERS ARE SAYING

Will generative AI destroy humanity or make everyone rich and happy? Business leaders ask a different question: Can generative AI deliver a return on investment? CEOs are spending money to find out the answer. Virtually all--97 percent, according to a KPMG survey of 220 business leaders at U.S. companies with at least $1 billion in revenue, released March 22--are investing in GenAI over the next 12 months. Some 43 percent of leaders plan to invest $100 million or more. Leaders gauge generative AI's ROI in different ways. Roughly half--51 percent--currently measure the technology's ROI through productivity gains, 48 percent track employee satisfaction, and 47 percent monitor revenue the AI chatbots help generate, the KPMG survey noted. To be sure, leaders are guarding against generative AI's business risks. To that end, they are investing in "data security, governance frameworks, and workforce preparedness to enable long-term business value," KPMG wrote. Business leaders seeking to fast-forward to the outcomes--e.g., which generative AI applications produce a significant ROI--could be frustrated. My forthcoming book, Brain Rush: How to Invest and Compete in the Real World of Generative AI, includes in-depth case studies of such applications. The highest-payoff generative AI applications do the following:

They deliver a quantum value leap--enabling a big uptick in revenue and productivity--in a company's critical business processes.

They attract new customers and keep current customers buying.

They are difficult for rivals to replicate.

Below are two examples of high-payoff applications of generative AI that share many of these attributes.

Bullhorn's president uses AI to better match candidates to jobs

Bullhorn, a 1,520-employee, Boston-based provider of "all the technology needed to place temporary workers," according to PitchBook, uses AI in many ways. Bullhorn's highest-payoff AI application helps the company's customers grow faster and boost productivity. In many cases, "basic generative AI doesn't add value in and of itself, but combined with more sophisticated use cases, there is definitely opportunity to drive holistic value," Bullhorn president Matt Fischer told me in a March 26 interview. "For example, you're not going to charge for one prompt, but once generative AI is integrated into the entire workflow, it becomes very valuable. We are monetizing machine learning to help recruiters match candidates to jobs more effectively," he added. Bullhorn's AI application analyzes the most successful temporary-worker placements and uses the resulting model to help recruiting firms match candidates to jobs more effectively and efficiently. Because Bullhorn built its matching model using a large number of successful placements, the company's AI boosts recruiters' revenue and profitability and is difficult for rivals to replicate. "Our models are outcome-based--successful placements," said Fischer. "We track 54 vectors of more than 4.5 million successful job placements. We are planning to enhance this model with call transcripts, SMSs to candidates and clients, and emails." The model pays off for Bullhorn's clients. "Recruiters increase their placement rate from our model's recommendations--to 25 percent or 35 percent. We help reduce candidate acquisition costs by increasing the redeployment rate of talent at the end of their contracts from 5 percent to 30 percent. We increase recruiter productivity from submission to placement by 68 percent."
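Bullhorn hasn't published the internals of its matching model, but the general shape of outcome-based matching is straightforward to sketch: represent each historical candidate-job pair as a feature vector, label it by whether the placement succeeded, train a classifier, and rank new candidates by predicted success. Everything below, from the features to the toy data to the choice of scikit-learn's logistic regression, is an invented illustration, not Bullhorn's system.

```python
# Toy illustration of outcome-based candidate-job matching.
# Features, data, and model choice are hypothetical, not Bullhorn's.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a historical candidate-job pair:
# [skill_overlap (0-1), distance_km, experience_gap_years, prior_placements]
X = np.array([
    [0.9, 10, 0, 3],
    [0.4, 80, 5, 0],
    [0.7, 25, 1, 1],
    [0.2, 60, 8, 0],
])
y = np.array([1, 0, 1, 0])  # 1 = placement succeeded

model = LogisticRegression().fit(X, y)

# Score new candidates for an open role and rank them by match probability.
candidates = np.array([[0.8, 15, 1, 2], [0.5, 40, 3, 0]])
scores = model.predict_proba(candidates)[:, 1]
for rank, idx in enumerate(np.argsort(-scores), start=1):
    print(f"#{rank}: candidate {idx} -> match probability {scores[idx]:.2f}")
```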
To be sure, Bullhorn offers other generative AI applications that do not add to the company's revenue because all industry players are offering them. For example, Bullhorn provides generative AI Copilots to help recruiters draft customized communications. These communications help cut recruiters' time and increase their effectiveness. "Our clients recruit candidates, pitch the company on Johnny, and pitch Johnny on the company," Fischer told me. "Our Copilots help recruiters customize the email to the candidate and the opportunity and set the right tone. However, this service has become commoditized. We do not charge for it."

Dynatrace's CEO encounters generative AI's upside and downside

"A couple of preliminary killer apps will emerge for generative AI," according to my February 2024 interview with Rick McConnell, CEO of Dynatrace, a Waltham, Massachusetts-based provider of software observability services. One such killer app will be customer service. "I was trying to fix a billing issue with a cellular provider and the chatbot solved the problem fast," he noted. "The second one went so badly that I will never do business with the company again. I was trying to correlate the contact lenses I received with the prescription. The contact lens provider's chatbot couldn't get me a solution. After three different segments, I never got it resolved." Be sure your company's killer generative AI app is the kind that wins you customers for life. EXPERT OPINION BY PETER COHAN, FOUNDER, PETER S. COHAN & ASSOCIATES @PETERCOHAN

Wednesday, April 3, 2024

ADOBE'S NEW GENSTUDIO TOOL IS DESIGNED TO GIVE MARKETERS A BOOST BY CREATING CONTENT WITH GENERATIVE AI

Adobe wants to give your marketing team an artificial intelligence upgrade. At Adobe's two-day summit in Las Vegas this week, CEO Shantanu Narayen announced new generative AI-powered tools designed to help companies mass-produce digital marketing content. Narayen said the tools would empower marketers and creative teams to dramatically speed up the process of producing new content. To demonstrate how, Ann Rich, Adobe's senior director of design, platform, monetization, and GenStudio, pointed to a fictional example built around a company that was given early access to the software: Coca-Cola. Playing the part of a Coca-Cola marketer tasked with creating ads for a campaign, Rich used Adobe GenStudio, a new portal application designed to help enterprises rapidly create marketing campaign content with generative AI. First, Rich showed how the company had uploaded key brand information to the portal, such as logos, fonts, colors, and copy examples. This established a baseline for what kind of content was considered on-brand. She then searched through a library of the company's assets to find a few specific products for Coca-Cola Dreamworld, the limited-edition drink the campaign was advertising in this example. Finally, Rich applied a customized AI model that had been trained on the art for the Dreamworld campaign, selected Gen Z as the target demographic, and added the prompt "highlight the power of Coca-Cola to transport you to dream-like worlds." Instantly, she generated four distinct, on-brand ads, complete with copy. Beyond creating content, GenStudio is also useful for analyzing existing content and using the best-performing parts to create even better content. By visiting the portal's analytics tab, Rich viewed which of the Dreamworld ads were generating the most clicks. She went even further by analyzing individual aspects of each piece of content. For example, GenStudio found that images classified as "surreal" had a 0.89 percent click-through rate. Rich selected a particularly high-performing Facebook ad and generated multiple variants of the art in the form of ads for LinkedIn, Instagram, Pinterest, and email. "This is a dream come true," said Rich. "I went from one channel to four channels in seconds." Adobe GenStudio isn't available to the general public yet, but it is expected to be released in full later this year.

Monday, April 1, 2024

WHY CYBER-FRAUD TEAMS ARE THE NEXT BIG THING IN PAYMENTS SECURITY

The growing interconnectedness of digital systems, combined with the alarming ingenuity of financial criminals, has led to a convergence of payment fraud, cybercrime, and money laundering. As financial transactions increasingly occur online and real-time payments have expanded to over 70 countries, cybercriminals exploit these trends by developing sophisticated schemes to target vulnerabilities in digital payment systems. As a result, payment fraud has become more prevalent and more challenging to detect. A profusion of new tools available on the Dark Web makes it easier than ever for cybercriminals to steal millions through a combination of social engineering, malware, cyberattacks, identity theft, stolen credentials, and mule accounts. These attacks expand vectors beyond traditional payment fraud methods to include cybersecurity breaches and money laundering techniques. For example, a typical attack may unfold like this:

A bank employee's credentials are stolen.
Malware is installed on the bank's network.
Funds are routed from the bank's account to a third bank in another country.
Withdrawals are made through multiple transactions.
Millions of dollars are stolen.

This is not a new problem. As far back as 2013, the Carbanak crime group launched sophisticated attacks that showcased the merging threat vectors of cyberattack, payment fraud, and money laundering. The organization infiltrated a bank employee's computer via phishing and infected the video monitoring system with malware. The infiltration enabled them to capture all activities on the screens of personnel handling money transfer systems. The criminals successfully manipulated international e-payment systems to move funds to offshore bank accounts and make withdrawals. In a separate attack, the gang hacked into banks in Kazakhstan and stole over US$4 million. They transferred the funds to 250 payment cards that were distributed throughout Europe. The stolen money was then cashed out at ATMs in a dozen countries. By the time the gang was finally caught by Europol in 2018, their thefts had approached US$1 billion. The Carbanak modus operandi is an excellent example of an advanced persistent threat (APT). These threats are notoriously sophisticated, characterized by their stealthy tactics and long-term presence in a network. Unlike ordinary cyber threats focused on quick gains, APTs are used by patient fraudsters, often lurking undetected in networks for months or even years. They carefully mine valuable data or set the stage for a large-scale, potentially ruinous attack. They get into financial systems by installing malware on a banking system, using social engineering to secure login credentials, or buying credentials on the dark web. Insider fraud or spear-phishing attacks can also install network malware. It could be as simple as a bad actor leaving a USB device with an executable virus on a table at a workplace. Even though we all know better than to plug in a random USB device, people, being people, will make mistakes and plug them in anyway. Highly skilled, well-funded criminal organizations or state-sponsored actors often orchestrate this sort of multi-pronged attack. Fraudsters using APTs often have access to significant resources, allowing them to continually innovate their attack strategies. The primary goal of these sophisticated attacks is to penetrate the network without detection, maintain access over a long period, and siphon off sensitive data related to financial transactions. Their approach is leisurely.
Over time they collect data, redirect funds, and create fake beneficiaries. Once they infiltrate a network, they establish a strong foothold, employ various techniques to maintain their presence, and continually evolve methods to bypass security measures. They don't initiate actions that could alert cybersecurity teams to their presence until the final attack, when it's often too late to detect them or prevent the loss of funds. Removing them can be difficult, if you can find them at all. When the attack is eventually launched, it can include the theft of customer and financial information, ransom attacks, fraudulent transactions, and the laundering of funds. Another example of a multi-vector attack occurred at a large bank in Africa. A spear-phishing email inserted malware into the bank's ATM switch. Transactions then bypassed the host and were automatically approved. The crooks forged the bank's credit cards and distributed them to over one hundred people in Tokyo, who then used them to withdraw money from 1,400 ATMs in convenience stores. Social engineering, cyberattacks, and payment fraud vectors converged to steal US$19 million in just three hours. Once the criminals are ready to extract the data or cash out, whether after a few days or a couple of years, they will often employ a diversion tactic, such as a distributed denial-of-service (DDoS) attack, then proceed with the main attack while IT and cybersecurity teams are distracted. Over time, the finance industry has seen the sophistication of attacks continue to increase, and there is no reason to expect that this trend will slow down. Early forms of attack were blunt, brute-force affairs, so organizations took the mentality of protecting the perimeter. But as attacks have become more sophisticated, this approach is no longer sufficient. Today's threats are advanced, persistent, polymorphic, and evasive. They span all levels of the OSI stack, particularly the network and application layers, and they result in ever-increasing losses. New forms of old attacks, such as DDoS attacks, are increasingly driven by bots, with AI that mimics humans and evades detection. Traditionally, anti-money laundering (AML) is about compliance, cybersecurity focuses on preventing IT threats, and fraud programs are for detecting and preventing payment fraud. Within these organizational silos, a card-skimming fraud event would not ordinarily capture the attention of a CISO, while a fraud manager doesn't make decisions about firewalls. These traditional organizational silos within companies make tackling this convergence a challenge. Fraudsters exploit the gaps between information security, fraud, and risk teams. For example, in an e-commerce setting, a fraudster could run a credential-stuffing campaign using leaked data, take over accounts, check for stored payment information or add a stolen credit card, and purchase expensive luxury items. This type of fraud affects both the retailer and its customers. The fraudster transfers stolen funds to mule accounts, which are often used for money laundering. The fraud and risk team is alerted to the situation through customer complaints or monitoring system alerts. Still, by the time the fraud, cybersecurity, and AML teams have come together to compare notes on the attack, the fraudster has already achieved his objectives and absconded with the funds.
Given the prevalence of these converged threat vectors, the boom in digital transactions, and the growth of real-time payments, it should come as no surprise that organizations are starting to leverage the synergies to be had by eliminating organizational silos. The idea of converging cyber intelligence, AML, and fraud prevention activities to eliminate gaps in financial crime risk management has been discussed for years. Still, increasingly, organizations are moving to make this a reality. Leading financial institutions are establishing robust financial crime centers that bring together cybersecurity, anti-fraud, and AML teams to converge their data and processes for a more holistic view of the threat landscape. This helps financial institutions identify financial crimes across the spectrum and stay agile in their preventive operations and response. Some large banks have already implemented a fraud fusion center to identify and defend against financial crimes and ever-evolving threats. For example, the Bank of Montreal established a fraud fusion center in January 2019, while TD Bank opened its fusion center in October of the same year. But as criminals introduce new, sophisticated techniques, banks are revamping their fusion centers and looking for improved technology to keep up. Gartner anticipates an increase in the number of organizations implementing cyber-fraud teams over the next several years. As the initial step in a convergence program, PwC recommends that financial institutions examine their existing enterprise-wide structure and identify points where streamlining it will give senior management a centralized view of financial crime risk. A clearly documented structure with roles and responsibilities will help detect and eliminate duplicate tasks and will ensure better data visibility across departments. McKinsey & Company suggests that strategic prevention should be key to improving the protection of the bank and its customers when working on convergence. To achieve their goals, financial institutions need to think like the criminals. Cybercriminals look for systems' weak points, so when planning the defense, organizations should trace the flow of crime in order to come up with an optimized internal structure. Access to the right data at the right time is the foundation of efficient convergence programs. Instead of collecting data and tackling crimes in the silos of compliance, fraud, and cybercrime, data fusion provides a single source of data to multiple teams. This enables a complete view of the payment transaction journey and enables faster, more effective responses to threats. Criminals don't make a distinction between AML, fraud, or cybercrime departments. They act based on whatever gaps in the system they can find. Information fusion is the best weapon against fraudsters. If fusion centers leverage raw payment data in real time, captured at the network level to avoid data loss, they can derive trends and patterns that let them distinguish legitimate customer transactions from fraudulent ones. Artificial intelligence and machine learning (ML) also support financial institutions in their privacy compliance by helping prevent data breaches. They can cut through the noise by flagging suspicious activity with precision, blocking fraudulent activities, and letting legitimate transactions complete.
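To see why fused data matters, consider a toy example: signals that look benign inside each silo (a couple of failed logins for the security team, a new payee for the fraud team, an overseas transfer for the AML team) become alarming in combination. The sketch below joins such signals per account; the event names, silos, and threshold are invented for illustration and are not any vendor's product logic.

```python
# Toy data-fusion rule: correlate security, fraud, and AML signals
# that would look benign inside their own silos. All names and
# thresholds are hypothetical.
from collections import defaultdict

events = [  # (account, team_silo, event_type)
    ("acct-17", "security", "failed_login"),
    ("acct-17", "security", "failed_login"),
    ("acct-17", "security", "new_device_login"),
    ("acct-17", "fraud", "new_payee_added"),
    ("acct-17", "aml", "transfer_to_high_risk_country"),
    ("acct-42", "fraud", "new_payee_added"),
]

by_account = defaultdict(set)
for account, silo, event in events:
    by_account[account].add((silo, event))

def fused_risk(signals: set) -> bool:
    takeover = ("security", "new_device_login") in signals
    new_payee = ("fraud", "new_payee_added") in signals
    laundering = ("aml", "transfer_to_high_risk_country") in signals
    # Any two of the three silos firing together gets escalated.
    return sum([takeover, new_payee, laundering]) >= 2

for account, signals in by_account.items():
    if fused_risk(signals):
        print(f"{account}: escalate to fusion team")  # acct-17 only
```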
Faster payments and open banking require organizations to quickly identify and respond to emerging fraud and cyberattack patterns without creating friction for their real customers. At INETCO, we anticipated these needs and designed INETCO BullzAI, a real-time, ML-powered software solution that addresses the converged attack vectors of payment fraud, cyberattacks, and money laundering. It provides the real-time data that fusion teams need and gives them the power to prevent cyberattacks and fraud while reducing false positives. Get in touch to find out how we can help you implement your fusion strategy.

Christene Best, VP, Marketing & Channel Development, INETCO

Friday, March 29, 2024

AI-BASED SCHEDULING IN SAP FIELD SERVICE MANAGEMENT

Many maintenance and service organizations depend on AI-based scheduling optimization to streamline their operations, increase productivity, and improve customer satisfaction. So how exactly do we make use of AI-based scheduling, and what are the benefits?

SAP Field Service Management (FSM) optimizes schedules by using advanced algorithms to analyze vast amounts of data and variables, such as job priority, technician skill sets, location, availability, and customer preferences. This technology enables automated decision-making, ensuring the right technician is dispatched to the right job at the right time, and prioritizing urgent tasks to minimize downtime and deliver maintenance and service precision. By automating the scheduling process, AI reduces the risk of human error, enhances operational efficiency, and increases productivity by allowing field service teams to complete more jobs in less time. AI-based scheduling can also make use of predictive traffic routing to ensure resources are utilized effectively, reducing unnecessary travel time, fuel costs, and emissions. In the remainder of this blog, we will discuss some of the key features that bring our AI-based scheduling solution to life.

Automated scheduling

Fully automated scheduling in SAP Field Service Management is a game changer for businesses looking to streamline their field service operations. With the ability to define triggers for automated scheduling on predefined periods, companies can ensure that their resources are being utilized to their fullest potential. Tasks can be automatically assigned to the most appropriate field technician based on factors such as skill level, location, and availability. Additionally, the system can take into account customer preferences and SLAs, ensuring that service appointments are scheduled at the optimal time for both the customer and the technician. With automated scheduling, businesses can quickly adapt to changes in demand and respond to urgent requests with ease. By reducing the need for manual intervention, errors and delays are minimized, allowing for faster response times and increased customer satisfaction. Customers can also define the specifics of how work is scheduled -- for example, whether it is automatically released, how the technician or customer is notified, and whether certain checklists are automatically linked.

Best technician matching

Many customers also leverage "assisted planning" using our best-matching-technician algorithm, which uses advanced analytics and artificial intelligence to optimize technician dispatch and scheduling. The algorithm considers factors such as technician skills, availability, job proximity, and service level agreements (SLAs) to determine the best match for a service request. It is highly adaptable and can be customized to suit the specific needs of different industries and maintenance and service types. This scenario is particularly useful when a dispatcher needs to make a quick decision and can use the system to assist in the decision-making.
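To illustrate the shape of such a matching decision, here is a simplified scoring sketch in Python. It is not SAP FSM's actual algorithm; the fields, weights, and scoring function are assumptions for illustration only.

```python
# Illustrative technician-matching score; not SAP FSM's real algorithm.
# Fields and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class Technician:
    name: str
    skills: set[str]
    available: bool
    distance_km: float  # travel distance to the job site

def match_score(tech: Technician, required_skills: set[str]) -> float:
    """Higher is better. Unavailable or unqualified technicians score 0."""
    if not tech.available or not required_skills <= tech.skills:
        return 0.0
    # Closer technicians score higher; a real system would tune weights.
    return 1.0 / (1.0 + tech.distance_km)

techs = [
    Technician("Ana",  {"hvac", "electrical"}, True,  12.0),
    Technician("Ben",  {"hvac"},               True,  3.5),
    Technician("Cleo", {"hvac", "electrical"}, False, 1.0),
]

job_skills = {"hvac"}
best = max(techs, key=lambda t: match_score(t, job_skills))
print(f"Dispatch: {best.name}")  # Ben: qualified, available, closest
```

A real implementation would also weight SLA deadlines, predicted travel time rather than raw distance, and customer preferences, but the decision has the same shape: score every eligible technician, then dispatch the maximum.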

Wednesday, March 27, 2024

HOW TO NAVIGATE DATA PRIVACY LAWS WHILE ELEVATING CONSUMER CONNECTIONS

Steffen Schebesta, an Entrepreneurs' Organization (EO) member in Toronto, is chairman of the board and VP of corporate development at Brevo, an intuitive, all-in-one marketing solution for small businesses. We asked Steffen how businesses can capitalize on new data privacy laws to build consumer trust in their company. Here's what he shared:

As consumers continue to advocate for stronger data privacy rights, the trend is gaining traction worldwide. As we move forward in 2024, navigating new laws in the U.S. and stricter enforcement in the E.U. is critical. However, when you run a business, complying with these regulations isn't just about avoiding fines -- it's an opportunity to build trust with consumers through more transparent relationships.

Recap of Data Privacy Changes in 2023

Last year, California, Utah, Virginia, Colorado, and Connecticut rolled out or amended data privacy laws. These laws, such as the California Privacy Rights Act (CPRA) and Virginia's Consumer Data Protection Act (VCDPA), aim to strengthen consumer rights and impose stricter obligations on businesses regarding data collection, usage, and storage. The CPRA, an amendment to the California Consumer Privacy Act (CCPA) of 2018, introduced key provisions including notice requirements, opt-out rights, access and deletion rights, and data minimization principles. Similarly, the VCDPA focused on obtaining consumer consent for sensitive data collection and providing opt-out choices.

The lesson: Let customers have a say in what data they share and how they share it. Third-party data is a thing of the past. Zero-party data -- data that customers provide voluntarily, often in exchange for a benefit, as with email signup forms and preference centers -- allows consumers to communicate their expectations to you. In short: When asking for consumer data, be transparent and give consumers control.

The Outlook for Data Privacy Regulations

As we look ahead, several trends will reshape business practices and marketing strategies:

Continued enforcement of privacy regulations: Past violations of the E.U.'s General Data Protection Regulation (GDPR) have led to hefty fines. Non-compliance is not taken lightly in the E.U., and companies worldwide must comply when handling European citizens' data. Expect enforcement efforts to intensify, with severe consequences for mishandling data.

Increased focus on children's privacy: Recent hearings in the U.S. and past GDPR fines for mishandling children's data highlight the need to protect minors. Legislative measures dedicated to safeguarding children's privacy are gaining traction, reflecting growing concerns about online safety.

AI policy and transparency: Artificial intelligence brings new data privacy challenges because AI systems process personal data. Expect calls for greater transparency in data collection and usage to mitigate privacy risks.

Consumer awareness and rights: Consumers are becoming more privacy-conscious and more aware of their data rights. Businesses should anticipate increased demands for privacy rights from consumers and be prepared to adjust their strategies accordingly.

State-level privacy laws: Several U.S. states are set to introduce or amend privacy laws in 2024, each with its own set of requirements and implications for businesses:

California's Delete Act: Effective January 1, 2024, this legislation empowers Californians to control their data held by data brokers. Users can request removal of their browsing history, purchase records, and inferred personality traits derived from digital behavior.

Oregon Consumer Privacy Act (OCPA): Effective July 1, 2024, OCPA grants rights similar to the CCPA's, including the ability to opt out of data sales and review personal information used in automated decision-making.

Texas Data Privacy and Security Act (TDPSA): Effective July 1, 2024, TDPSA applies to businesses with significant revenue or those handling Texas residents' data, emphasizing security measures to protect sensitive information.

Delaware Personal Data Privacy Act (DDPA): Effective March 26, 2024, DDPA provides rights akin to the CCPA's, reinforcing the importance of transparency and consumer control over personal data.

Montana Consumer Data Privacy Act (MTCDPA): Effective October 1, 2024, MTCDPA grants consumers access, deletion, and correction rights akin to the CCPA's, further solidifying individuals' control over their personal information.

New laws will continue to emerge, affecting business models and practices in the coming year. Business leaders must work proactively to comply with stricter data privacy requirements and set the benchmark for responsible data management.

Takeaways for Business Leaders

Data privacy laws will continue to redefine the relationship between businesses and consumers and play an increasingly important role in business practices. In the coming year, companies should proactively follow the latest legislation. By being proactive and transparent in how and why they use consumer data, companies can turn a difficult topic into a chance to foster customer trust.

To trust businesses, users need control over their data and an understanding of what it is collected for. Informing customers of how their data will be used is also an opportunity for companies to lay out a purpose for further data collection down the road, especially when the exchange offers a benefit to both parties, such as a more personalized user experience.

Finally, it's important to move away from third-party data. Third-party channels are prone to manipulation and monetization, which could leave someone else controlling your customers' data. Instead, businesses should prioritize zero- and first-party data collection through open and owned channels. Direct and transparent data collection lets businesses truly own their data, its distribution, and their communications. It also provides better insight into customer preferences and is essential for complying with new regulations. Following these regulations will not only keep your business safe but also provide the opportunity to build customer relationships on a whole new level.

EXPERT OPINION BY ENTREPRENEURS' ORGANIZATION @ENTREPRENEURORG

Monday, March 25, 2024

THE iPHONE MAKER MAY SEEM A LITTLE LATE TO THE AI PARTY, BUT THAT IS TYPICAL APPLE STRATEGY

As AI becomes part of nearly every tech headline, Apple faces increased criticism that it's late to the game. CEO Tim Cook even faced questions on the matter at Apple's most recent earnings call, and uncharacteristically hinted at Apple's future plans when responding to an analyst's question. Cook said Apple had long been working on AI tech and noted he was "excited to share the details of our ongoing work in that space later this year." For Apple watchers, that was a giant hint that the company is bringing AI to the forefront of its products. That supposition got some serious support from a fresh rumor that Apple is entering into some form of partnership with Google to leverage its Gemini AI product. If Apple goes all-in on AI, that will dramatically overhaul many people's online experiences, and change many businesses too.

According to a recent Bloomberg report, Apple is currently in "active negotiations" with Google. The deal, if it's reached, would license some of Gemini's features to power certain AI capabilities in the new versions of Apple's iPhone and iPad software later this year. Apparently, Apple is looking to bolster its image- and text-based AI generation capabilities as quickly as it can. Google's Gemini AI system has been touted for its "multimodal" powers, meaning it can accept and produce both text-based and image-based information. Multimodal AIs are more useful than simple text-based chatbots because a single app lets the user prompt the AI with, say, favorite colors, example images, and text, and have it dream up a new logo for the company. Though the image-producing part of Gemini did recently get Google into ethical trouble, that's likely just a wrinkle. A deal with Google to provide key aspects of the iPhone experience is also not unprecedented: Since the iPhone launched, Google has had a deal with Apple to be its featured internet search provider, a partnership said to cost Google $18 billion a year.

Demonstrating the extent to which Apple is leaping into the AI game, reports earlier this year said it had also quietly purchased a Canadian AI startup called DarwinAI, known for its efforts to make AI systems smaller and faster. This news tracks with Apple's user-privacy-centric ethos. Apple's priority of keeping AI systems operating on-device, rather than in the cloud -- as many other AIs do, including Google's -- may allow Apple to keep users' AI data more under their own control. Keeping that data safely locked inside the phone's chips rather than on a faraway server is a fundamental difference, very much in line with Apple's privacy sensibility.

It's also been reported that Apple has been working on its own multimodal large language models, in the same general style as Google Gemini. The company recently revealed details of its so-called "MM1" AI system, including insights into exactly how these models work. This isn't necessarily a sign Apple is embracing open source for its overall AI effort, however; it's an academic paper produced to share insights into novel AI systems with other researchers, which is fairly common practice. Add the Google deal, the purchase of DarwinAI, and the MM1 research together and it paints a very clear picture: Apple is going big on AI this year.
Though Apple has been criticized for being "late" to the AI game, it has actually been working on AI for years: the entire range of Apple-designed chips powering its mobile devices and Mac computers contains dedicated sections for machine-learning processing, the math that underpins many AI models. Apple tends not to enter a market until it has done its own background work, then swoops in with a cutting-edge offering; the recent Vision Pro launch is an excellent example.

It's possible to make a few informed guesses about how this move will affect the millions of developers who write Apple apps, as well as the many business users of Apple systems. Developers keen to seize the zeitgeist and build AI into their apps may find Apple directly supporting their coding efforts with dedicated AI API features -- little fragments of code that let app writers hook directly into Apple's specialized systems. Meanwhile, business users of iPhones and perhaps Macs will find their systems mirroring rival AI efforts, with more business-enabling intelligence embedded into software and hardware. That parallels Microsoft's efforts, of course, with the PC software maker even pushing for dedicated AI keys on new PC keyboards.

With billions in capital to spend, could Apple become the leading AI company? Ask Siri, maybe.

BY KIT EATON @KITEATON

Friday, March 22, 2024

USING THE WRONG AI CAN CREATE PROBLEMS. IT'S IMPORTANT TO STUDY DIFFERENT MODELS AND MAKE THE RIGHT CHOICE FOR YOUR BUSINESS

Incorporating artificial intelligence into daily operations can set a business apart. However, recent developments suggest both IBM's Watson and Amazon's Alexa have encountered challenges, underscoring the importance of a cautious approach to integrating AI into your business strategies.

Here's a closer look at the future of conversational AI in business analytics and why you must carefully evaluate its integration into these processes.

The revolution of conversational AI in business analytics

The underlying natural language processing in conversational AI allows businesses to interact with data and customers in a more intuitive, human-like manner. The upsides are obvious, from enhancing customer service with AI-driven chatbots to making data analytics more accessible through conversational interfaces.

As far back as 2016, IBM's Watson promised incredible leaps in analyzing complex data sets, notably through initiatives like Watson for Oncology at Memorial Sloan Kettering Cancer Center. But although there was much promise, and millions of dollars were poured into its further development, it came up short. Watson was discontinued in several genomics projects, as The New York Times reported in 2021.

As a recent CNBC article reports, IBM decided to bring Watson back, marketing it as a platform for training other machine learning models. With any evolving technology, there will be highs and lows, and this is a great example of the long arc of innovation when it comes to era-defining technologies like AI.

Similarly, Amazon's Alexa has been a leader in incorporating voice-activated technology into daily business and consumer environments through its incubator programs since 2020, offering a user-friendly interface that enhances operational efficiency and customer experience. More recently, Amazon announced it's combining Alexa with LLM technology to create a more compelling chatbot experience.

IBM's Watson and Amazon's Alexa exemplify the evolving impact of conversational AI in business and how it changes over time.

A reality check on conversational AI

Despite their advanced capabilities, potential benefits, and recent developments, both Watson and Alexa have faced their share of challenges. A 2024 Business Insider report reveals that Amazon has been considering a subscription model for Alexa, an idea that signals possible difficulty in monetizing the technology. Such developments raise questions about the long-term viability and cost implications of incorporating conversational AI systems into business operations. For entrepreneurs and startups operating on tight budgets, the prospect of additional subscriptions or unforeseen expenses can be a significant concern.

Businesses attracted by the promise of conversational AI and decision-making tools must navigate the gap between expectations and reality.

The lessons here are twofold: First, the importance of tempering enthusiasm for new technologies with a critical assessment of their current capabilities and limitations, and second, the need for ongoing evaluation of these tools as they evolve.

Navigating the pitfalls and overcoming AI challenges

The allure of conversational AI technologies like Watson and Alexa is undeniable, and new NLP models are coming onto the market all the time. At my company, Aigo, where we provide cognitive AI solutions, I preach the importance of careful implementation built on thorough use cases, a foundation that can be leveraged for faster internal development and activation.

It's essential to approach integration with caution. You should keep several key factors in mind:

  1. Cost Versus Benefit Analysis: Understand the full scope of costs involved, including any potential subscriptions or additional fees. Evaluate whether the benefits of integrating AI technologies outweigh these costs, especially in the early stages of your business (see the breakeven sketch after this list).
  2. Realistic Expectations: Set realistic expectations for what these technologies can achieve. While AI can offer valuable insights and efficiencies, it is not a silver bullet for all business challenges.
  3. Ethical and Privacy Concerns: Conversational AI involves the collection and processing of vast amounts of data, including potentially sensitive information. Businesses must ensure they adhere to data protection regulations and ethical standards, safeguarding customer privacy and trust.
  4. Technical Support and Expertise: The integration of AI technologies requires a certain level of technical know-how. Consider the availability of technical support and the need for in-house expertise to manage and optimize the use of these tools effectively.
  5. Scalability and Flexibility: As your business grows, your needs will change. Assess whether the AI solutions you're considering can scale with your business and if they offer the flexibility to adapt to evolving requirements.
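
To make item 1 concrete, here is a minimal breakeven sketch in Python. Every figure in it is a placeholder assumption, not a benchmark; the point is only that subscription costs and time savings can be compared on the same monthly basis before committing.

```python
# Hypothetical breakeven check for an AI tool subscription.
# All figures below are assumptions for illustration.
def monthly_net_benefit(
    seats: int,
    fee_per_seat: float,         # subscription cost per user per month
    hours_saved_per_seat: float, # productivity gain per user per month
    loaded_hourly_rate: float,   # fully loaded cost of an employee hour
) -> float:
    cost = seats * fee_per_seat
    benefit = seats * hours_saved_per_seat * loaded_hourly_rate
    return benefit - cost

# 20 staff, $30/seat, each saving 2 hours/month at a $45 loaded rate:
print(monthly_net_benefit(20, 30.0, 2.0, 45.0))  # 1200.0 -> worth piloting
```

Run it with your own numbers; a positive result suggests the tool at least merits a pilot, while a negative one means the subscription must justify itself on harder-to-quantify grounds like quality or speed.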

The opportunities and challenges are multiplying

The journey of integrating conversational AI into business operations presents both opportunities and challenges. By conducting thorough due diligence and maintaining a critical eye, businesses can harness the power of AI to drive innovation and growth while avoiding the pitfalls that have trapped others. In doing so, they can move closer to realizing the full potential of conversational AI in shaping the future of business analytics and customer engagement.


EXPERT OPINION BY SRINI PAGIDYALA, CO-FOUNDER, AIGO.AI 

Wednesday, March 20, 2024

THE OPENAI CEO'S REMARKS MIGHT RAISE A FEW EYEBROWS, BUT NOT IN A WAY YOU MIGHT EXPECT

The most popular iteration of OpenAI's ChatGPT -- the generative AI chatbot that's taken the world by storm and amassed 100 million daily users -- "kinda sucks," according to Sam Altman, the company's CEO. 

Altman struck the critical tone on an episode of The Lex Fridman Podcast, released Monday. The conversation ranged widely over generative AI and the torrent of hype and gold-rush investment that has followed the commercial release of ChatGPT in November 2022. 

Fridman called GPT-4 "amazing" and "historically impressive," and described the evolution of different iterations of the tech as fostering "a historic, pivotal moment" in the world. 

In response, Altman cut a pensive figure, stroking his chin, and said "I think it kinda sucks." He explained his thinking with a comparison to how some people might look back on past versions of the iPhone and think that they're useless compared to current models. "I think it is an amazing thing," Altman said, giving his company some credit for its first commercial product, which it released for free. GPT-4, by contrast, is available starting at $20 a month. 

Founders might empathize with Altman's chilly review of his company's technology. Every viable product has to start somewhere, and self-criticism can be harnessed in positive ways. 

The context around Altman's comments is crucial, particularly as generative AI technology evolves at a rapid clip, Altman emphasized. 

"At the time of GPT-3, people were like 'this is amazing, this is [a] marvel of technology,'... and it was. But now we have GPT-4, and you look at GPT-3 and you're like 'that's unimaginably horrible,' " Altman said. 

Altman addressed the next version of the ubiquitous chatbot, presumably called GPT-5, saying "I expect the delta between 5 and 4 will be the same as between 4 and 3. It's our job to live in the future and remember that our tools are going to kind of suck looking back at them." 

OpenAI's next big product doesn't have a release date, and the rumor-mill has been chugging along, with people on Reddit particularly buzzy with speculation about what the company will unleash and when. Asked by Fridman whether GPT-5 will be released this year, Altman said "I don't know. That's an honest answer." 

Though he did say, "We will release an amazing new model this year." It's unclear what the company will call it if not GPT-5.

How much time will need to elapse after the release of this forthcoming model for Altman to believe it "sucks" is a question nobody can answer. 


Monday, March 18, 2024

AN ANALYSIS OF 5 MILLION JOB POSTINGS SHOWED THESE ARE THE 3 JOBS BEING REPLACED BY AI THE FASTEST

I've resisted writing about how AI will affect the job market because, frankly, I had no idea what to say. Since the explosion of generative AI tools on the scene, I've read reputable-sounding research saying everything from, "Don't worry, AI is leveling the playing field," to "Run for the hills, the robot apocalypse is nigh!" (OK, I might be paraphrasing slightly with that last one.)

These studies are not only often contradictory but also generally based on observations of small sets of carefully chosen workers in specific situations. They may tell you AI helps call center workers be more productive, or that it's causing one company to hire fewer customer service reps. But it seemed dangerous to draw wider conclusions on such an important subject from limited data.

But I just found one analysis that seems worth sharing, both because it looks at a very broad set of real-world jobs and because these particular jobs are the ones many self-employed Inc.com readers are likely to care about most -- freelance gigs. The news isn't good for three types of professionals in particular.

The jobs that are safe from AI (for now)

This analysis, from labor market trend publication Bloomberry, looks at publicly available data on more than 5 million jobs listed on freelancing site Upwork from a month before ChatGPT was released in November 2022 to just last month.

Researcher Henley Wing Chiu explains why they took this approach: "If there's going to be any impact to certain jobs, we'll probably see it first in the freelance market because large companies will be much slower in adopting AI tools."

Freelancers are essentially the canary in the scrappy, independently operated coal mine. What tune are they singing? That depends on what industry they're in. Wing Chiu observes that most freelance niches are doing just fine despite the ongoing generative AI revolution. Of 12 subcategories he looked at, the vast majority had actually seen the number of jobs listed increase since late 2022.

"Video editing/production jobs are up 39 percent, graphic design jobs are up 8 percent, and Web design jobs are up 10 percent. Software development jobs are also up, with backend development jobs up 6 percent and frontend/Web development jobs up 4 percent," he reports.

Unsurprisingly, postings looking for people with AI skills were also way up. "Jobs like generating AI content, developing AI agents, integrating OpenAI/ChatGPT APIs, and developing AI apps are becoming the rage," Wing Chiu says.

And those that need to worry

But there were three big exceptions. With apologies to my fellow word nerds, those were writing, translation, and customer service jobs. "The number of writing jobs declined 33 percent, translation jobs declined 19 percent, and customer service jobs declined 16 percent," the Bloomberry analysis found.

This is hardly the biggest shock, as some of the earliest and most developed use cases for AI are basic copywriting tasks and customer service chatbots. Swedish buy-now-pay-later startup Klarna just announced that its customer service chatbot is doing the work of 700 customer service reps, for instance, and the media has been full of stories of writers who have lost their jobs to AI replacements.

This data confirms what writers have already feared, but does it mean that video editors and graphic designers should rest easy? Wing Chiu isn't so sure. The uptick in these sorts of jobs, he warns, may be temporary, as companies figure out how to best use fast-improving video and image generation tools.

"I think there's several ways to interpret this data. One is that these generative AI tools are already good enough to replace many writing tasks, whether it's writing an article or a social-media post. But they're not polished enough for other jobs, like video and image generation," he writes.

It might also be that companies are still figuring out how best to use these tools. There was a lag of six months or so between the release of ChatGPT and the biggest decline in writing jobs. Companies might just need more time to figure out the more complex cases of video and image manipulation. If that's so, declines in many other fields just haven't quite arrived yet.

Whichever of these possible scenarios turns out to be correct, freelancers and entrepreneurs in fields likely to be touched by AI probably shouldn't be sitting around twiddling their thumbs and hoping it all works out.

Exactly how fast AI will come for rote and routine jobs in various sectors remains an open question no single research project can definitively answer. But whatever the exact contours of AI disruption, creativity, social savvy, agility, and dealing with ambiguity are likely to remain exclusively human domains for a long time yet. If you're worried about AI's impact on your industry, the time to make these skills central to what you offer is now.


EXPERT OPINION BY JESSICA STILLMAN, CONTRIBUTOR, INC.COM