Friday, December 15, 2023

LEVERAGE THESE A.I. TECHNIQUES TO STAY AHEAD IN 2024

It's been a year since generative artificial intelligence (Gen AI) hit the mainstream, and this subset of AI has dominated public discourse ever since. Opinions have ranged from promises that the software would liberate us from digital drudge work and boost productivity to doomsday warnings about industries under serious threat.

Yet whichever side of the fence these opinions sat on, there was a general consensus that Gen AI adoption would be so rapid and transformative that laggards and late adopters would be left in the dust of their competitors.

For founders of startups and small ventures in particular, the prospect of saving resources and boosting output is undeniably attractive. Yet it would be wise to note the cautious approach taken by enterprises, where Gen AI still accounts for less than one percent of overall spend on cloud technologies.

That's because 2023 has given us a deeper insight into both the good and bad realities of Gen AI adoption, helping us to move away from speculative predictions alone. 

For example, Harvard Business School conducted an illuminating study in association with Boston Consulting Group to examine the outcomes of using GPT-4 in the workplace. The study found that certain tasks were completed more quickly and to a higher quality with AI assistance.

Yet in contrast, the long-term viability of such tools is being called into question as providers like OpenAI face a growing number of lawsuits over copyright infringement and IP ownership.

As the situation continues to evolve, here are four significant pitfalls that are already apparent, to help founders weigh the risks and rewards of applying Gen AI technologies.

How the 'black box' phenomenon affects consistency 

The 'black box' phenomenon is present across the spectrum of AI technologies, and Gen AI is certainly not immune. It's a conundrum in which an algorithm performs an action or produces a response in a way that can't be explained, even by the people who built it.

Founders hoping to leverage the benefits of Gen AI need to be aware of this reality. The ambiguity highlights why placing too much trust in AI tools may be ill-advised: every aspect of business operations needs to be managed to ensure product consistency is maintained, and AI tools are no exception.

Much as students are asked to 'show their workings' during exams, even if AI delivers a quality output for one response, the replicability of that output can't be assured without knowing how the model reached its result.
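
In practice, that means consistency has to be tested rather than assumed. As a minimal sketch, assuming the official OpenAI Python client (v1.x), with the model name and prompt as placeholders, you can run the same prompt repeatedly and count how many distinct answers come back:

    from openai import OpenAI  # assumes the official OpenAI Python client, v1.x

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = "Summarize our refund policy in one sentence."  # placeholder prompt

    # Run the identical prompt several times; the number of distinct answers
    # shows how much output variability a client-facing workflow would inherit.
    answers = set()
    for _ in range(5):
        response = client.chat.completions.create(
            model="gpt-4",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
            temperature=0.7,
        )
        answers.add(response.choices[0].message.content.strip())

    print(f"{len(answers)} distinct answers across 5 identical runs")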

This is a sentiment reflected in comments from Palantir CEO Alex Karp, who notes: "It is not at all clear -- not even to the scientists and programmers who build them -- how or why the generative language and image models work." In short, founders need to temper any immediate wins against the reality that the quality of a product or client service built on these tools may not be sustainable when it comes to business deliverables.

The repercussions of model hallucinations  

It's also well documented that Gen AI is prone to hallucinations, meaning the algorithm provides an inaccurate or totally fabricated response. When ChatGPT was in its beta stage, the wild responses it sometimes delivered were shared as funny anecdotes, but in actual business use cases the repercussions of these hallucinations are far from amusing.

For example, one company endured the wrath of The Hill's editor-in-chief after it published an AI-produced article littered with errors, highlighting the tension between producing content more quickly and the actual quality being delivered.

Yet the problem of model hallucination runs deeper. There have been numerous reports of ChatGPT completely fabricating the source materials it uses to build an argument. In one particularly worrying case, ChatGPT named a real US law professor as having been accused of sexually harassing a student, citing a Washington Post article that never existed.

Here, Princeton computer scientist Arvind Narayanan offers an important perspective on why these hallucinations creep in. He argues that the model is trained to produce plausible text. Many of its statements are true by default, but ultimately the technology is trying to deliver an output that satisfies the goal of the prompt by whatever means necessary, even if that involves spinning the truth.

Said Lucas Bonatto of Semantix AI: "People need to understand that the models are not hallucinating on purpose. By the nature of how current large language models work, they are just trying to fill in the blanks with the most likely words, given the relationships between words learned during training, which experts call the context."
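
To illustrate Bonatto's point, here is a toy sketch of that 'fill in the blanks' mechanism. The vocabulary and scores below are invented for illustration, not taken from any real model:

    import numpy as np

    # Hypothetical next-token scores for the prompt "The capital of France is"
    vocab = ["Paris", "Lyon", "London", "beautiful"]
    logits = np.array([6.0, 2.0, 1.5, 1.0])  # invented numbers, for illustration

    probs = np.exp(logits) / np.exp(logits).sum()  # softmax: scores -> probabilities
    for token, p in zip(vocab, probs):
        print(f"{token:>10}: {p:.3f}")

    # The model emits a *likely* continuation given the context, not a fact it
    # has verified; sampling can therefore surface a plausible-sounding error.
    rng = np.random.default_rng()
    print("sampled:", rng.choice(vocab, p=probs))

The model picks whatever continuation is most probable given the surrounding words; truth is only a frequent by-product of that process, never a guarantee.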

The risk of reputational damage 

Although issues such as hallucinations and inconsistency are widely documented, Gen AI remains very appealing, particularly for lower-value content like social media posts. Yet even if the outputs are error-free and of a good standard, undisclosed use of AI is likely to tarnish your reputation in the long run.

Let's imagine you're using AI to produce client deliverables without their knowledge while still charging your normal premium. Tools like ZeroGPT let users check the originality and legitimacy of any content in a matter of seconds, meaning clients are very likely to cotton on. Trust and relationships take years to build, but using AI in this manner can erase those bonds overnight.
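
To show how low that barrier is, a client-side check might look something like the sketch below. Note that the endpoint, header, and field names here are assumptions for illustration and may not match ZeroGPT's actual API; consult its documentation for the real interface:

    import requests

    # NOTE: endpoint, auth header, and payload shape are assumed for
    # illustration only; they are not ZeroGPT's documented interface.
    def ai_likelihood(text: str, api_key: str) -> float:
        response = requests.post(
            "https://api.zerogpt.com/api/detect/detectText",  # assumed endpoint
            headers={"ApiKey": api_key},                      # assumed auth header
            json={"input_text": text},                        # assumed payload
            timeout=30,
        )
        response.raise_for_status()
        # Assumed response shape: an estimated percentage of AI-generated text
        return response.json()["data"]["fakePercentage"]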

What's more, the way that providers like OpenAI train their algorithms is under increasing fire. 

The reputational damage is likely to go much further than the clients who caught you out. Editors, collaborators, and stakeholders will all begin to rely on tools like ZeroGPT, so whether you're an agency selling content or a staffer trying to save some time in the day, using generative AI to do your work for you will stick to your name.

A race to the bottom 

The models behind tools like ChatGPT are trained on public online materials. Yet if those same tools become responsible for a growing share of that material, we risk a scenario akin to a snake eating its own tail: each new model trains partly on the output of the last, and quality degrades with every cycle. Researchers call this 'model collapse.'
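
A toy simulation makes the compounding effect visible. The sketch below stands in for a model that oversamples "typical" outputs: each generation refits a simple distribution to the previous generation's output and discards the rare tails, and the diversity of the training data collapses within a few rounds. The numbers are illustrative, not a claim about any real model:

    import numpy as np

    rng = np.random.default_rng(42)
    data = rng.normal(0.0, 1.0, size=5000)  # generation 0: human-written material

    for gen in range(1, 7):
        mu, sigma = data.mean(), data.std()
        samples = rng.normal(mu, sigma, size=5000)  # "model" fit to the last generation
        # Generative models over-produce typical outputs; rare, tail-end
        # material gradually disappears from the next training set.
        data = samples[np.abs(samples - mu) < 1.5 * sigma]
        print(f"generation {gen}: spread of training data = {data.std():.3f}")

Run it and the spread shrinks every generation: the pool of "content" becomes steadily narrower and more repetitive, which is the essence of the collapse worry.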

This will be fueled further by the increasing number of online news sites and content providers opting to restrict crawler activity from GPTBot to protect the value of their materials. In the future, it's very likely that public users and clients alike will instinctively detect AI-generated content, in the same way they now spot blogs jammed with keywords for SEO purposes.
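
On the crawler point, opting out takes just two lines in a site's robots.txt file; GPTBot is the user agent OpenAI publishes for its web crawler:

    User-agent: GPTBot
    Disallow: /

The low cost of that opt-out is exactly why so many publishers have already adopted it, shrinking the pool of fresh human-written training material.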

Generative AI may save time in the short term, but the value it delivers for companies is likely to yield diminishing returns.

Apply Gen AI with caution in 2024 

The benefits of AI, and the ease of its access, are clear. "You no longer need to be a data scientist, an engineer, or even a programmer. It is 1997 again; but instead of the Internet, the technology that is changing industry is AI," said AI entrepreneur Michael Puscar.

At the same time, while a defamation lawsuit may not be the most immediate threat for founders hoping to leverage the benefits of AI, the pressing need to weigh risk against reward has become increasingly apparent over the course of 2023.


BY KATIE KONYN, CATALINA CARVAJAL, AND FABIO RICHTER