Monday, October 23, 2023

THE A.I. TERMS EVERY BUSINESS OWNER SHOULD KNOW

In just the past year, advancements in artificial intelligence have introduced transformative new ways to automate complicated tasks. Your company won't be able to take full advantage of this new technology, however, if you don't understand how it works. 

To help you make sense of all things A.I., we're building a living glossary of the field's hard-to-understand terms, and we'll continue to update it as the technology evolves. These definitions were written with the help of Tiago Cardoso, principal product manager at digital transformation firm Hyland.

1. A.I.

A field of science dedicated to creating machines and computer programs capable of recreating the cognitive functions of the human brain, such as making decisions through logical reasoning, recognizing and categorizing objects, and learning new things.

Think of it this way: A.I. is an umbrella term used to describe a wide range of technologies. Broadly, any program that mimics a human cognitive ability, such as reasoning, perception, or learning, to perform a task can be considered A.I.

2. Computer algorithm

A set of instructions that a computer follows to perform tasks and process data. Social media companies like Facebook use algorithms to analyze the type of content you interact with most often, and then use that information to score every post, video, and ad on the platform by how statistically likely you are to click on it. The top-scoring posts get pushed to the top of your feed.

Think of it this way: Any time you use an Excel formula to perform data analysis, like calculating a total from hundreds of data points, you are creating a basic algorithm: a set of instructions for how a computer program should process specific data.
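If you're curious what those instructions look like in code, here's a minimal Python sketch of the same idea, using made-up sales figures. It's simply a step-by-step recipe for turning data into a result.

```python
# A basic algorithm: explicit steps for processing data, similar to an Excel SUM formula.

def total_sales(data_points):
    """Add individual sales figures into one combined total."""
    running_total = 0
    for value in data_points:    # Step 1: look at each data point in order
        running_total += value   # Step 2: add it to the running total
    return running_total         # Step 3: report the result

monthly_sales = [1200.50, 875.25, 430.00, 990.75]  # hypothetical numbers
print(total_sales(monthly_sales))  # 3496.5
```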

3. Machine learning

A branch of A.I. in which an algorithm is altered or enhanced by processing a dataset and identifying the underlying patterns and relationships hidden within the data. For example: A machine learning algorithm trained on thousands of images of your company's product would be able to identify how often it appears in social media posts. 

Think of it this way: Your email's spam filter uses machine learning to identify keywords and patterns that often appear in unwanted messages. When you receive an email, the algorithm draws on its training to determine whether the text is statistically closer to the spam emails it has seen or the safe ones, and sorts the message accordingly.
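Here's a rough sketch of that spam-filter idea in Python, using the scikit-learn library and a handful of made-up emails; a real filter would be trained on millions of messages.

```python
# A minimal sketch of a spam filter built with scikit-learn.
# The tiny "training data" below is invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "Win a free prize now",           # spam
    "Claim your free gift card",      # spam
    "Meeting moved to 3pm today",     # safe
    "Here are the quarterly numbers", # safe
]
labels = ["spam", "spam", "safe", "safe"]

# Turn each email into word counts the algorithm can process
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)

# Train the model on the labeled examples
model = MultinomialNB()
model.fit(features, labels)

# Classify a new, unseen email
new_email = vectorizer.transform(["Free prize waiting for you"])
print(model.predict(new_email))  # expected: ['spam']
```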

4. Model

A computer program that's been trained by a machine learning algorithm to perform a specific task. After being trained, the program is left with a "model" for how to process new input data, like a text prompt or a voice recording, into predictions and insights based on the patterns it has learned from the training data.

Think of it this way: ChatGPT is built on a large language model. Your text prompts serve as the input data, which the model processes and converts into the chatbot's response.

5. Generative A.I.

Artificial intelligence programs capable of creating "original" content. Recent advancements in A.I. have led to breakthroughs for image-generation models like DALL-E and large language models like ChatGPT, but the tech is also being used to create original music, video, and code. 

Think of it this way: Generative A.I. is an extremely new technology, and the rules around its use are still being debated, so be careful about how you implement it in your business. The U.S. Copyright Review Board recently determined that A.I.-generated art cannot be copyrighted, for example. 

6. Training Data

Sets of data that are processed by machine learning algorithms to improve their functionality.

Think of it this way: Datasets, which are often extremely large, are fed into machine learning algorithms to teach them how to respond to inputs. Once the data has been processed, the result is a trained model. There are two main types of training for machine learning algorithms: supervised and unsupervised. 

7. Supervised learning

Training in which each piece of data is paired with a label, which helps the machine learning algorithm understand the meaning of the data. An algorithm being trained to make a diagnosis based on X-ray scans, for example, would be trained on images labeled with the correct diagnosis.

Think of it this way:  An object detection model designed to identify fruits would be trained with many different pictures of those fruits, all paired with the correct labels. Through training, the algorithm would learn to identify the unique characteristics that define each fruit. 
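Here's a minimal sketch of supervised learning in Python, using scikit-learn and a few invented measurements in place of real fruit photos. The key point is that every training example comes paired with the correct label.

```python
# A minimal sketch of supervised learning with scikit-learn.
# Each fruit is described by two made-up features and paired with a label.
from sklearn.tree import DecisionTreeClassifier

# [weight in grams, redness score from 0 to 1] -- illustrative values only
features = [
    [150, 0.9],  # apple
    [160, 0.8],  # apple
    [120, 0.1],  # banana
    [115, 0.2],  # banana
]
labels = ["apple", "apple", "banana", "banana"]

model = DecisionTreeClassifier()
model.fit(features, labels)          # learn from the labeled examples

print(model.predict([[155, 0.85]]))  # expected: ['apple']
```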

8. Unsupervised Learning

In unsupervised learning, the training data doesn't come paired with any descriptive labels. Rather, machine learning algorithms process large amounts of data, which are then grouped into "clusters" based on their similarities or differences. This style of learning is what allows ChatGPT to do all kinds of tasks, like holding conversations, writing stories, and answering questions. It wasn't trained to do any one thing specifically; it's been loaded up with a massive collection of text. 

Think of it this way: AlphaGo, the A.I. model that beat a world champion at the classic board game Go, wasn't trained on labeled information about winning strategies; it learned largely by playing millions of games and discovering winning patterns on its own.
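Here's a rough sketch of the clustering idea, using scikit-learn's KMeans and some invented customer data. Notice that the algorithm is never told which group any customer belongs to; it finds the groups itself.

```python
# A minimal sketch of unsupervised learning: no labels, just raw data
# that the algorithm groups into clusters on its own.
from sklearn.cluster import KMeans

# Hypothetical customer data: [purchases per month, average order size]
customers = [
    [2, 20], [3, 25], [2, 22],       # light spenders
    [15, 200], [14, 180], [16, 210], # heavy spenders
]

model = KMeans(n_clusters=2, n_init=10, random_state=0)
clusters = model.fit_predict(customers)

print(clusters)  # e.g. [0 0 0 1 1 1] -- two groups found without any labels
```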

9. Neural networks/Deep learning

One of the oldest, and for the last decade most dominant, designs for A.I. programs, loosely modeled on the organization of neurons in the brain. A neural network consists of several layers of interconnected nodes, which act as the network's "neurons." Each node processes input data, performs calculations, and passes the result to the next layer of nodes. Deep learning refers to especially large neural networks with many layers (sometimes hundreds), which allows for far more connections.

Think of it this way: Most generative A.I. models are built with deep learning, with the largest neural networks being large language models like ChatGPT, which have billions of "neurons." 
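For the technically curious, here's a toy sketch in Python (using numpy) of how data flows through layers of nodes. Real networks work the same way, just at a vastly larger scale and with parameters learned from data rather than chosen at random.

```python
# A toy neural network forward pass: each layer of "neurons" takes numbers in,
# transforms them, and hands the result to the next layer.
import numpy as np

def layer(inputs, weights, biases):
    """One layer of nodes: weigh the inputs, add a bias, apply an activation."""
    return np.maximum(0, inputs @ weights + biases)  # ReLU activation

rng = np.random.default_rng(0)
x = rng.random(4)                            # 4 input values

w1, b1 = rng.random((4, 8)), rng.random(8)   # layer 1: 4 inputs -> 8 nodes
w2, b2 = rng.random((8, 2)), rng.random(2)   # layer 2: 8 nodes -> 2 outputs

hidden = layer(x, w1, b1)    # first layer processes the raw input
output = layer(hidden, w2, b2)  # second layer reprocesses the result
print(output)
```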

10. Parameters

In a neural network, parameters are the settings and weights that control how each "neuron" or node processes and transforms input data. You can imagine parameters as knobs on an old radio. Just like you'd adjust the knobs to tune the radio's frequency, volume, treble, and bass, parameters are automatically fine-tuned during training to produce the best possible output. 

Think of it this way: Imagine an A.I. model built to read license plates in images taken by a red-light camera. Each node has parameters that determine how much weight it gives to different parts of the image as the pixels are converted, layer by layer, into the letters and numbers on the plate. 
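In code, parameters are just the numbers stored inside each layer. Here's a minimal numpy sketch that counts the "knobs" in one small layer like the ones in the network above.

```python
# Parameters are the weights and biases inside each layer: the knobs that
# training adjusts automatically to improve the model's output.
import numpy as np

inputs, nodes = 4, 8
weights = np.zeros((inputs, nodes))  # one "knob" per input-to-node connection
biases = np.zeros(nodes)             # one extra "knob" per node

total_parameters = weights.size + biases.size
print(total_parameters)  # 40 -- large language models have billions of these
```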

11. Natural language processing (NLP)

A specific type of A.I. designed to understand and interpret everyday language. NLP models are trained to break down a piece of language, either written or spoken, into machine-readable data. 

Think of it this way:  NLP models can be used to analyze documents, turn speech into text, translate between languages, and create advanced chatbots. 
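As a quick illustration, the open-source Hugging Face transformers library wraps pretrained NLP models behind a one-line interface. This sketch assumes you have the library (and a backend like PyTorch) installed; it downloads a default sentiment model the first time it runs.

```python
# A minimal NLP example: classifying the sentiment of a sentence
# with a pretrained model from the Hugging Face transformers library.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("Our customers love the new checkout flow.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```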

12. Transformer

A highly advanced type of A.I. architecture that has hastened the revolution in generative A.I., and in particular the field of natural language processing, since being introduced by Google in 2017. Transformers use a process called "tokenization" to convert a string of symbols like this sentence into data, and then analyze that data to identify patterns. 

Think of it this way:  Nearly all modern natural language processing models, like OpenAI's GPT (Generative Pre-trained Transformer) family of models, are built using transformers.  

13. Tokens

Chunks of text, such as words, pieces of words, or punctuation marks, that have been converted into data by a transformer. When you submit a query to ChatGPT, for example, the transformer takes your sentence and turns it into a series of tokens. The transformer processes all the tokens at the same time and is able to call upon its training to understand the semantic relationships between them. According to OpenAI, one token generally corresponds to around four characters of text, but tokens are often slightly shorter or longer, and special characters like punctuation marks are usually counted as their own token.

Think of it this way:  The sentence "Nowadays, I feel goodish." would be tokenized into eight tokens: "Now-adays-,-I-feel-good-ish-."
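You can inspect this yourself with OpenAI's open-source tiktoken library. Note that the exact splits depend on which tokenizer you load, so the pieces may differ slightly from the example above.

```python
# Splitting a sentence into tokens with OpenAI's tiktoken library.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # tokenizer used by newer GPT models
token_ids = encoding.encode("Nowadays, I feel goodish.")

print(len(token_ids))                             # how many tokens the sentence uses
print([encoding.decode([t]) for t in token_ids])  # the text of each individual token
```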

14. Hallucinations

Instances where an A.I., usually a large language model, produces something that sounds plausible but is untrue. The A.I. isn't technically lying, since it doesn't know that what it is saying is false; hence the term "hallucinations."

Think of it this way: New York attorney Steven Schwartz used ChatGPT to find cases to cite in a legal brief. Schwartz didn't realize that the cases ChatGPT generated were hallucinated until he was asked to provide copies of them. 

15. API (Application Programming Interface)

A software component that allows you to integrate someone else's program into your own application without needing to understand the underlying code. A.I. models are often deployed and released through an API so that companies can monetize their technology by giving outside parties access to its services and capabilities.

Think of it this way:  OpenAI has released APIs for nearly all its A.I. models, with users being charged depending on how many tokens are used to process and output a query. 
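As a rough sketch, calling a model through an API is usually just an HTTP request. The example below uses OpenAI's chat completions endpoint with a placeholder API key; the model name is only an example, and available models change over time.

```python
# Calling an A.I. model through an API: a plain HTTP request and a JSON reply.
import requests

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # replace with your own key
    json={
        "model": "gpt-3.5-turbo",  # example model name
        "messages": [
            {"role": "user", "content": "Summarize our Q3 sales report in one sentence."}
        ],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```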
