Friday, December 29, 2023

TIM COOK WAS ASKED ABOUT AI. HIS 3-WORD RESPONSE WAS THE BEST I'VE HEARD YET

On Apple's earnings call on Thursday, the company reported mixed results. The short version is that the world's largest company is doing fine. While revenue was down from the same period last year, the company managed to beat expectations largely because of better iPhone performance, and a record quarter for services revenue. 

During the question and answer period, however, Tim Cook was asked about something far more interesting, and his response was maybe the best I've heard yet on the subject. Responding to a question from analyst Shannon Krauss from Credit Suisse, Cook talked about how he thinks about the power and potential of A.I.

"Can you talk a bit about A.I.?" Krauss asked. "It's, you know, obviously, the topic of the day--it seems like the topic of the year. Just how do you think about it through your products and services?"

"I do think it's very important to be deliberate and thoughtful in how you approach these things," Cook responded. "And there are several issues that need to be sorted ... but the potential is certainly very interesting." 

You had to hear Cook's response to really grasp the emphasis he put on the word very in that second sentence. Cook is acutely aware that many of Apple's peers seem to be in a race to figure out how to incorporate generative A.I. into their products. 

I think we can all agree that generative A.I. is interesting, but Cook's response stands out because it goes against the grain of what almost every other tech company is doing right now. Three words really say it all. Cook says it's important to be "deliberate and thoughtful." 

Those words, as much as anything, are a hallmark of Cook's approach. Apple doesn't do things impulsively to respond to whatever happens to be the next big trend. It takes its time figuring out the best possible version of a given feature, and only then does it unleash it on more than two billion active devices.

By contrast, here's what Zuckerberg said just a few days earlier when talking about how Meta is thinking about adding A.I. features to Facebook, Instagram, and WhatsApp:

The specifics are going to come into focus as we ship more of these things. So, these are just themes for now. But first for our product, we're always focused on connection and expression. And I expect that our A.I. work will reflect that. I think that there's an opportunity to introduce A.I. agents to billions of people in ways that will be useful and meaningful.

Basically, Zuckerberg is telling investors that he's not really sure how A.I. fits into Meta's products, but the company will figure it out as "we ship more of these things." It sounds a lot like more of the classic "move fast and break things." Except this time you're breaking something with the potential to do a lot more damage. 

That doesn't mean Apple isn't thinking about what A.I. means for its products and, more importantly, for its business. Apple has been working on integrating A.I. in some form into its products for a while. The most obvious example is Siri, which uses machine learning to handle a range of tasks. Cook pointed out that the technology also has other uses. 

"We've obviously made enormous progress, integrating A.I. and machine learning throughout our ecosystem," Cook said. "And we've weaved it into products and features. For many years, as you probably know, you can see that in things like fall detection and crash detection, and ECG. These things are not only great features, they're saving people's lives out there. And so it's, it's absolutely remarkable. And so we view A.I. as huge. And, you know, we'll continue weaving it in our products on a very thoughtful basis."

Of course, Cook didn't specifically answer Krauss's question about generative A.I., which isn't surprising. Apple is notoriously cautious about saying anything about its plans for future products. However, Apple has already optimized its Core ML framework for Stable Diffusion, an A.I. image generator. It's clear Apple is continuing to think about what makes the most sense for its users and its products. 

What Cook didn't say is that Apple is making major changes to its core products the way Microsoft has done with Bing and its Office suite, or the way Google is with Bard. Apple is taking a more cautious and measured approach--which is what you would expect. It also happens to be the best way to think about A.I. that I've heard yet. 

Wednesday, December 27, 2023

COMCAST BREACH AND NSA REPORT HIGHLIGHT CYBERSECURITY THREATS FOR BUSINESSES

On the same day that Comcast disclosed a data breach affecting 36 million Xfinity accounts, the National Security Agency (NSA) issued its annual cybersecurity report, which reviews high-level efforts to improve A.I. security, Chinese cyberattacks on critical U.S. infrastructure, and how government, industry, and academia can coordinate to improve advanced data protection. 

The Comcast breach, which the company first discovered on October 25, exposed personal data, including usernames and scrambled passwords, through a vulnerability in third-party software from Citrix. The cable giant made customers reset their passwords and urged them to use two-factor authentication to protect their accounts. While the company said it received no reports of problems from customers and the security loophole has been fixed, breaches like this can snowball into problems serious enough to force a business to close.

In fact, cybersecurity remains a vital issue for businesses of all sizes, from sole proprietors worried about data breaches and identity theft to giant defense contractors that enjoy the best protections the National Security Agency can provide.

At a macro level, cybersecurity issues remain a leading national security concern, as Chinese and Russian efforts to break into government and business networks continue unabated.

"Recently, we've seen the nature of conflict evolve: Cyberspace is contested space," NSA Director Gen. Paul Nakasone wrote. "It's become clear that the shift from competition to crisis to conflict can now occur in weeks, days, or even minutes."

Rapid changes in technology also remain a force in cybersecurity concerns, particularly the dizzying rise of artificial intelligence applications, which pose new forms of threat. 

"The global landscape becomes ever more complex as the technology we use in cyberspace continues to advance," Nakasone's introduction read. "One such example is Artificial Intelligence (AI), which has the capacity to upend multiple sectors of society simultaneously. We must stay ahead of our global competitors in the race to understand and harness its potential, as well as protect ourselves from adversarial use."

Rob Joyce, the agency's director of cybersecurity, cited alerts about the discovery of Russian-backed Snake spyware and the agency's alert on how to respond to the security vulnerability found in Citrix, the third-party software linked to the Comcast breach.

"When we know something, it only provides value when net defenders can take real action with it. By sharing information bi-directionally in an unclassified environment with our partners, we improve both cybersecurity and national security. The combined talent of our partnerships is the greatest competitive advantage we have to confront the increasingly sophisticated threats we see today."

The NSA is also charged with protecting the Defense Industrial Base (DIB), a list of companies whose work on military equipment and technology requires added vigilance. 

"Although many people associate the DIB with large defense contractors, more than 70 percent of the DIB is made up of small businesses," the report said. "Upon signing a contract with DoD, these companies often become targets for nation state actors. Small businesses generally do not have the resources to defend against nation-state activity alone."

The agency said it has worked hard to add DIB companies to its protective cyber-net, quadrupling the number of businesses it protects. 

"NSA now provides cybersecurity assistance to more than 600 companies within the DoD supply chain, including suppliers who may lack adequate cybersecurity resources of their own."

Monday, December 25, 2023

MICROSOFT'S CO-FOUNDER FIGURES FUTURE BUSINESS SUCCESS DEPENDS ON THREE THINGS: AI, AI, AND AI

Bill Gates is still a dominant figure in technology, nearly a half-century after co-founding Microsoft. And when a foundational industry figure speaks--albeit one who shared ties with the now-deceased sex criminal Jeffrey Epstein--it's worth paying attention. A new blog post summarizes Gates's thoughts on "the road ahead" in 2024, an influential commentary with insights on artificial intelligence and innovation.

What makes Gates's thoughts important for tech entrepreneurs is that these posts provide peeks into the mind of someone with a finger on the pulse of bleeding-edge technology who can also still influence the entire industry.

AI

Gates thinks that the AI explosion of 2023 is merely the beginning of a tech trend that will shape the near future to an even greater extent. The acronym appears 28 times across the six-page post.

He predicts AI will move from its current somewhat geeky corner niche into the limelight.

"In high-income countries like the United States, I would guess that we are 18-24 months away from significant levels of AI use by the general population," he writes.

Gates expects AI use in less-developed nations to soar, too, with a shorter lag time than had been seen for many other tech innovations. Gates admits that "2023 marked the first time I used artificial intelligence for work and other serious reasons, not just to mess around and create parody song lyrics for my friends." 

For business leaders directly involved in developing AI solutions, this will sound familiar, even if they haven't played at being Weird Al Yankovic with a chatbot. Small and midsize businesses are still exploring the many ways AI applications might help their companies. If a business isn't already considering incorporating AI into customer-facing uses (perhaps as a chatbot) or business decision-making, Gates's enthusiasm may be a nudge to embrace this tech.

Innovation

Though "innovation" runs as a theme through Gates's note, he also ties it to AI. That's because artificial intelligence is set to "supercharge the innovation pipeline." Citing advances in electric power, cars, planes, and the digital realm, Gates says "innovation is the reason our lives have improved so much over the last century." He also blows his own trumpet a little, noting "We are far more productive because of the IT revolution."

Gates, who retired as Microsoft CEO in 2000 to focus on his work with the Gates Foundation, suggests the next step in innovation will be the acceleration of development projects that impact people's lives materially, financially, and socially. 

The Gates Foundation focuses its work on improving health and reducing poverty globally, so the examples used in Gates's note are tailored toward these topics. The drug discovery process already uses AI applications to sift through massive amounts of data almost instantly, slashing new medicines' time to market. Gates muses that AI could help treat high-risk pregnancies. He also suggests that AI could bring personalized tutors to every learner: "The AI education tools being piloted today are mind-blowing because they are tailored to each individual." He also thinks AI could make it easy for every health worker to access medical info.

Some of these lessons resonate in the business world too. AI can already help companies learn more about their customers, reinforce their supply chains and help process raw data to understand business dynamics better. It can even help founders put together a business plan to start a new enterprise. And its use in content creation for business cases is already well known.

Essentially, Gates's note is a primer for how entrepreneurs should be thinking about how AI can help businesses innovate, particularly if it can "get game-changing technologies out to the people who need them faster than ever before."

It's Not All Easy

All of this sounds exciting and might be a shot in the arm for AI startups or leaders planning to use AI to boost their businesses. 

But Gates says figuring out AI isn't easy. The world is just starting to get an idea about how AI can replace some jobs, and act as a co-pilot for others. And though the tech is exciting, he concedes it's also confusing. He even admits that though he thought he'd use AI to help him sift through "hundreds of pages of briefing materials" to help craft strategy reviews for the Gates Foundation, expecting it could "accurately summarize" the data for him, he ended up doing it all himself.


Friday, December 22, 2023

JOBS ARE CHANGING IN 2024: HOW EXPERTS PREDICT ROLES WILL FADE, SHIFT, AND GROW IN THE NEW YEAR

Roles are rapidly shifting--and company leaders will need to keep up in 2024.  

Changes in demand and technology have created and killed jobs throughout history, says Carl Benedikt Frey, a professor of AI and work at the Oxford Internet Institute, a research hub at the University of Oxford. For instance, jobs like sommeliers and hot yoga instructors arose from developing consumer desires, he says, while roles in the automobile and biotech industries sprouted from breakthrough innovations.  

But now, there's more movement in the labor market than usual, according to a McKinsey report released earlier this year. From 2019 to 2022, the labor market experienced 8.6 million occupational shifts. That's 50 percent more than in the prior three-year period, and the report projects that another 12 million shifts could come by 2030, thanks to increasing health care demand, digitization, and other factors.  

Here's what leaders and their teams might expect to see in the evolving job landscape in the year ahead, and how they can best prepare to meet those changes:  

New technology will make some roles redundant  

While technology and digitization are helping some jobs grow, they're expediting the decline of others, according to this year's Future of Jobs Report from the World Economic Forum. By 2027, the report forecasts, there could be 26 million fewer jobs in bookkeeping, executive secretarial work, administration, and similar areas of work.  

AI--and generative AI, in particular--is expected to play a key role here. In April, a Goldman Sachs report estimated that 300 million jobs could be "exposed to automation" thanks to recent AI advancements.  

But jobs don't disappear overnight, Frey cautions: Switchboard operators and elevator operators, for instance, faded over the course of many years. And while AI requires less physical infrastructure--which could speed up the uptake--Frey's "hunch is that it is not going to improve quite as fast as some people expect."  

So, employers should think in terms of tasks, not jobs  

Indeed, while entire occupations may not become obsolete in 2024, parts of roles might. The McKinsey report estimates that, because of AI and other factors, "activities that account for up to 30 percent of hours currently worked across the U.S. economy could be automated" by 2030.

As a result, company leaders should start to think about the jobs within their companies as "a combination of different tasks" to determine the best path forward, says Stephan Meier, a professor of business strategy at Columbia Business School.  

For instance, an analysis from Accenture earlier this year broke down a customer service role into 13 tasks and determined how each might be impacted by AI. It found that four tasks would continue to be performed mainly by humans, another four could be fully automated, and five could be augmented by AI technology.  

Separating a job into tasks "makes it a little bit less zero-one," Meier says. "It's more: What tasks are going to remain the same? And what are not?" 

Meanwhile, new roles will grow  

And yet, thanks to technology and other trends, other jobs will likely become even more crucial this year.  

The WEF identified "AI and machine learning specialists" as the fastest-growing job in this year's report. Sustainability is also driving job growth, with roles like sustainability specialists and renewable energy engineers gaining traction, pointing to the green transition as "both a significant and developing labor-market trend" heading into 2024, per the report. 

Specific changes in company workflows could also spark new roles next year. For instance, if a company introduces more AI technology, its leadership may need fewer junior software developers, but could realize a new need for someone to create prompts for generative AI or review AI output for errors, says Gartner analyst Emily Rose McRae. 

But along with new job titles, new skills will emerge as especially important in 2024, says Kweilin Ellingrud, senior partner and director of the McKinsey Global Institute. For instance, "as you look across different jobs, we're all going to need more social and emotional skills," she says, as in-person connection could become a bigger differentiator when other tasks are automated.  

How to meet the changes coming in the year ahead   

With the sheer amount of change that employers could start seeing in 2024, the path forward could be bumpy, Meier says: "There are going to be hard decisions. There [are] also going to be some employees who have to change." This could mean moving away from their current job or upskilling in their current role, Meier says, "and that's hard to communicate and facilitate."  

But developing a plan for shifting, upskilling, and reskilling is critical, especially as companies face a still-tight labor market, Ellingrud adds. And on this front, companies are underprepared: While six in 10 workers will need training before 2027, only half "are seen to have access to adequate training opportunities today," per the WEF report.  

To start, company leaders should take a hard look at their own workflows and roles, McRae says. On the AI front, for instance, there won't be a singular, universal impact on the workforce, she says: "Instead ... what is this going to mean for our workforce? How do we want to think through how we're going to use these tools?" 

From there, company leaders can create a plan for how to move employees from shrinking tasks to roles designed for the future, Ellingrud says. Instead of training administrative assistants at scale--or another job that could drastically change--employers should keep their eyes on "where the puck is going," she says, "so that we're training for the right jobs of the future and building those skills."  


Wednesday, December 20, 2023

A.I. FOR BRAND MARKETING: 4 TRUSTED SOURCES OF INFO

Just a year ago, AI wasn't really even on the radar for marketers or business owners. Today, the landscape of artificial intelligence is changing at a truly breakneck pace. 

Considering the firehose of information we're all encountering and the number of AI solutions launching daily, it's no wonder many of us are feeling a little (or a lot!) bewildered by it all.

Though 97 percent of business owners believe that ChatGPT and other AI tools will help their company (the market size of AI is projected to reach $407 billion by 2027), many are struggling to figure out precisely what they need to know--and which tools to try first.

At the start of 2023, I was definitely in that camp. I loved the idea of using AI to improve practices at our content marketing agency, but I wasn't quite sure where to get accurate, unbiased information that could actually help us--and our clients. That's all changed now that my team and I have done the research and found several high-quality resources that we trust.

These options have enabled us to embrace AI in a whole new way--and could do the same for you. 

The Marketing AI Institute's Weekly Newsletter

As you probably already know, newsletters are a great way to ease yourself into any new industry landscape. This free weekly send from the Marketing AI Institute is dedicated to the intersection of both industries. With expert input, real use cases, interviews with industry leaders, and case studies of brands that are already trying out different AI tools, this publication gives you tips you can use in your day-to-day operations. Tune into Marketing AI Institute's podcast for additional insights.

Jeff Wilser's Weekly AI Podcast

Speaking of podcasts, if you're a more audio-driven learner, what better way to stay up to date with all things AI than a weekly show to add to your listening repertoire? Host Jeff Wilser goes beyond just marketing, giving you a more holistic understanding of AI--such as the moral implications of using it--and his conversations might even spark novel ideas for how you can best use AI in your business. 

"How Content Marketers Are (Actually) Using AI" Webinar from the Women in Content Marketing Awards

Remember how I said my team has embraced AI? We recently dove in headfirst and hosted a webinar on the topic. We spoke to leading content marketers about how they are harnessing the power of AI to create compelling, targeted, and data-driven content. Our experts gave real examples and tips on how you can begin creating AI-supported content for your brand. Check out the recording of the webinar. And sign up for our newsletter so you can be sure to tune in to the next one, as we'll continue to provide programming around this ever-evolving tool.

Coursera and edX Courses 

If you want to become a veritable AI expert, tons of courses are already available that can help you get there. Check out Coursera, which offers free and paid long- and short-term courses for learners of all levels on subjects like AI for Marketing or AI for Business. edX is another great resource for learn-at-your-own-pace options from some of the top schools in the United States.


BY AMANDA PRESSNER KREUSER, CO-FOUNDER AND MANAGING PARTNER, MASTHEAD MEDIA (@MASTHEADMEDIA)

Monday, December 18, 2023

CHATGPT IS SHOWING SIGNS OF LAZINESS. OPENAI SAYS A.I. MIGHT NEED A FIX

Can artificial intelligence be artificially lazy?

One of the most promising aspects of A.I. is its ability to handle dull, repetitive tasks, but some ChatGPT users say the chatbot isn't finishing the job like it used to, prompting questions about whether the technology is mirroring human laziness. In November, members of the subreddit r/ChatGPT began posting that the chatbot had become "unusably lazy," with one Redditor complaining that after asking ChatGPT to create a spreadsheet of 15 entries with eight columns each, the bot responded with this message: 

"Due to the extensive nature of the data, the full extraction of all products would be quite lengthy. However, I can provide the file with this single entry as a template, and you can fill in the rest of the data as needed." 

In the post, the Redditor asked "Is this what AI is supposed to be? An overbearing lazy robot that tells me to do the job myself?"

Hundreds of Redditors chimed in with similar issues, including one who asked ChatGPT for the answer to a multiple-choice question, only for the bot to claim that four of the five possible answers were correct. Other users commented that the chatbot, which they'd been using just weeks earlier to write full code files, now would generate just a snippet of code before asking the user to do the rest themselves.

On December 7, the official ChatGPT account on X (formerly Twitter) acknowledged the complaints, saying the behavior wasn't intentional and that the company was looking into it.

The account went on to say that "differences in model behavior can be subtle -- only a subset of prompts may be degraded, and it may take a long time for customers and employees to notice and fix these patterns." 

So what explains these "differences in model behavior"? Across the internet, people have theories. Some think that, in an effort to save money, OpenAI has altered the model to use fewer tokens (the chunks of text, roughly words or word fragments, that language models read and generate) when responding to a prompt. By keeping the chatbot from generating lengthy responses, the company could theoretically spend less on computing power.
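The cost-saving theory is easy to see in back-of-the-envelope terms. The sketch below is a hypothetical illustration with made-up prices, not OpenAI's actual rates: under flat per-token billing, a truncated "lazy" answer is proportionally cheaper to serve than a complete one.

```python
# Hypothetical flat per-token pricing -- illustrative numbers only.
def response_cost(output_tokens: int, price_per_1k: float = 0.002) -> float:
    """Cost of generating a response under flat per-token billing."""
    return output_tokens / 1000 * price_per_1k

full_answer = response_cost(800)   # e.g., a complete 15-row spreadsheet
lazy_answer = response_cost(120)   # e.g., "here's one row as a template"
print(f"full: ${full_answer:.4f}, lazy: ${lazy_answer:.4f}")
```

On a single request the difference is a fraction of a cent, but multiplied across millions of prompts a day, shorter responses translate directly into lower compute spend.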

Another theory, deemed the "winter break hypothesis," is that ChatGPT's training data taught it that humans slow down in productivity around December because of the holiday season. Rob Lynch, head of product for court records SaaS service UniCourt, posted about an experiment he ran where he created two system prompts: one which tricked the model into thinking it was currently the month of May, and another that made the model believe it was December. He found that when the model believed it was answering questions in May, it delivered longer answers on average. Theia Vogel, a software developer for the SecureDNA Foundation, posted that they had recreated the experiment and also found that the May responses were on average longer than the December responses.  
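The experiment Lynch describes can be sketched in a few lines of code. This is a minimal, self-contained mock: `fake_model` is a stand-in for a real chat-completion call, rigged here so that "May" prompts yield longer replies, mimicking the pattern Lynch reported. The real experiment would send each system prompt to the API many times and compare mean response lengths.

```python
import random
import statistics

def fake_model(system_prompt: str) -> str:
    """Stand-in for a chat API call, rigged to mimic the reported pattern."""
    base = 600 if "May" in system_prompt else 480
    return "x" * random.randint(base - 50, base + 50)

def mean_response_length(system_prompt: str, trials: int = 100) -> float:
    """Average response length over repeated calls with the same prompt."""
    return statistics.mean(len(fake_model(system_prompt)) for _ in range(trials))

random.seed(0)
may_avg = mean_response_length("Assume the current date is May 5.")
dec_avg = mean_response_length("Assume the current date is December 5.")
print(f"May: {may_avg:.0f} chars, December: {dec_avg:.0f} chars")
```

Running many trials per prompt matters because individual model responses vary widely in length; only the averages reveal a systematic difference.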

As for what you can do if ChatGPT starts acting lazy, try your hand at some light prompt engineering. On Reddit, some have been able to get longer responses by claiming to ChatGPT that coding hurts their eyesight, or saying that they don't have fingers to input data. Vogel posted that they had recently conducted an experiment in which they asked the chatbot to create a simple sequence of code, and then offered to "tip" the chatbot either $20, $200, or not tip at all. Vogel found that responses to the $200 tip offer were 13 percent longer than the responses to the prompt in which Vogel said they wouldn't tip. Get creative, and you might be able to snap ChatGPT out of its end-of-year slump.

Friday, December 15, 2023

LEVERAGE THESE A.I. TECHNIQUES TO STAY AHEAD IN 2024

It's been a year since generative artificial intelligence (Gen AI) hit the mainstream, and public discourse on this AI subset has dominated ever since. Opinions have ranged from claims that the software would liberate us from digital drudge work and boost productivity to doomsday warnings about industries under serious threat. 

Yet whichever side of the fence these opinions sat on, there was a general consensus that Gen AI adoption would be so rapid and transformative that laggards and late adopters would be left in the dust of their competitors.  

For founders of startups and small ventures in particular, the prospect of saving resources and boosting output is undeniably attractive. Yet it would be wise to take note of the cautious approach taken by enterprises where Gen AI still sits below one percent of overall spend on cloud technologies. 

That's because 2023 has given us a deeper insight into both the good and bad realities of Gen AI adoption, helping us to move away from speculative predictions alone. 

For example, Harvard Business School conducted an illuminating study in association with Boston Consulting Group to examine the outcomes of using GPT-4 in the workplace. The study found that specific tasks could be completed more quickly and to a higher quality with AI assistance. 

Yet in contrast, the long-term viability of such tools is being called into question as parent companies like OpenAI face a growing number of lawsuits related to copyright infringement and IP ownership. 

As the situation continues to evolve, here are four significant pitfalls that are already apparent, so founders can weigh the risks and rewards of applying Gen AI technologies.  

How the 'black box' phenomenon affects consistency 

The 'black box' phenomenon is present across the spectrum of AI technologies, and Gen AI is certainly not immune. It's a conundrum in which a model arrives at an action or a response through internal workings that even its builders can't fully explain. 

It's highly important that founders hoping to leverage the benefits of Gen AI are aware of this reality. The ambiguity highlights why placing too much trust in AI tools may be ill-advised. Every aspect of business operations needs to be managed to ensure product consistency is maintained and AI tools are no exception. 

Akin to students being asked to 'show their workings' during exams: even if AI delivers a quality output for one response, its replicability can't be assured without knowing how the model reached that result.

This is a sentiment reflected in comments from data expert Alex Karp where he notes "It is not at all clear -- not even to the scientists and programmers who build them -- how or why the generative language and image models work." In summary, founders need to temper any immediate wins against the reality that the product or client service may not be sustainable when it comes to business deliverables. 

The repercussions of model hallucinations  

It's also well documented that Gen AI is prone to hallucinations, meaning the algorithm provides an inaccurate or totally fabricated response. When ChatGPT was in beta stage the wild responses it sometimes delivered were shared as funny anecdotes, but when it comes to actual business use cases the repercussions of these hallucinations won't be amusing. 

For example, one company endured the wrath of The Hill's chief editor after they published an AI-produced article littered with errors, highlighting the importance of balancing the ability to produce content more quickly versus the actual quality being delivered. 

Yet the problem of model hallucination runs deeper. There have been numerous reports that ChatGPT at times completely fabricates the source materials it uses to build an argument. A particularly worrying example: ChatGPT named a real U.S. law professor as having been accused of sexually harassing a student, citing a Washington Post article that never existed.   

Here, Princeton computer scientist Arvind Narayanan offers an important perspective on why these hallucinations creep in. He argues that the model is trained to produce plausible text: many of its statements happen to be true, but the technology is ultimately trying to persuade, delivering an output that meets the goal of the prompt by whatever means necessary, even if that involves spinning the truth. 

Said Lucas Bonatto of Semantix AI, "People need to understand that the models are not hallucinating on purpose. By the nature of how current large language models work, they are just trying to fill in the blanks by using the most likely words, given the relationships between words learned during training; experts call this the context."
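Bonatto's "fill in the blanks" point can be illustrated with a toy model. The snippet below is a deliberately tiny bigram predictor, nothing like a real LLM in scale, but it shows the same principle: the next word is chosen because it was statistically likely to follow the previous one in the training text, not because it is true.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" -- real models train on trillions of words.
corpus = "the model predicts the next word the model saw most often".split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the word that most often followed `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # prints "model" -- it followed "the" most often
```

Scaled up by many orders of magnitude and given far richer context than a single preceding word, this is the mechanism that makes LLM output plausible by construction, and truthful only incidentally.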

The risk of reputational damage 

Although issues such as hallucinations and consistency are being widely documented, Gen AI is still very appealing, particularly for lower-value content like social media outputs. Yet even if the content outputs are error-free and of a good standard, using AI is likely to tarnish your reputation in the long run. 

Let's imagine you're using AI to produce client deliverables without their knowledge while still charging your usual premium. Tools like ZeroGPT let users check the originality and legitimacy of any content in a matter of seconds, meaning clients are very likely to cotton on. Trust and relationships take years to build, but using AI in this manner can erase those bonds overnight. 

What's more, the way that providers like OpenAI train their algorithms is under increasing fire. 

The reputational damage is likely to go much further than just the clients who caught you out. Editors, collaborators, and stakeholders alike will all begin to rely on tools like ZeroGPT, so whether you're an agency selling content or a staffer trying to save some time in the day, using generative AI to do your work for you will stick to your name. 

A race to the bottom 

The models behind tools like ChatGPT are trained on public online materials. Yet if those same tools become responsible for a growing share of that material, we risk a scenario akin to a snake eating its own tail, technically known as 'model collapse.'

This will be fueled further by the increasing number of online news sites and content providers who are opting to restrict crawler activity from GPTBot to protect the value of their materials. In the future, it's very likely that public users and clients alike will be able to instinctively detect AI-generated content in the same way that blogs jammed with keywords for SEO purposes are.

Generative AI may save time in the short term but the value it delivers for companies is likely to offer increasingly diminished returns. 

Apply Gen AI with caution in 2024 

The benefits of AI, and the ease of its access, are clear. "You no longer need to be a data scientist, an engineer, or even a programmer. It is 1997 again; but instead of the Internet, the technology that is changing industry is AI," said AI entrepreneur Michael Puscar.

At the same time, while a defamation lawsuit may not be the most immediate threat for founders hoping to leverage the benefits of AI, the pressing need to measure the risk vs reward has become increasingly apparent over the course of 2023. 


BY KATIE KONYN, CATALINA CARVAJAL, AND FABIO RICHTER