This post is part of Lifehacker's "Living With AI" series: We investigate the current state of AI, walk through how it can be useful (and how it can't), and evaluate where this revolutionary tech is heading next. Read more here.
Artificial intelligence (AI) is the latest tech revolution. Just as the cryptocurrency boom introduced the world to a whole bunch of new jargon, the AI hype train has brought with it a set of terms that are frequently used, but not always explained. If you're wondering about the difference between a chatbot and an LLM, or between deep learning and machine learning, you're in the right place: Here is a glossary of 20 AI-related terms, along with newbie-friendly explanations of what it all means.
Artificial intelligence (AI)
In simple terms, AI is intelligence in computers or machines, especially that which mimics human intelligence. AI is a broad term that covers many different types of machine intelligence, but the discourse around AI right now mostly centers on tools that create art and content, or summarize and transcribe it. Whether these tools are truly "intelligent" is up for debate, but AI is the term that has stuck.
Algorithm
An algorithm is a set of instructions that a program follows to give you a result. Common examples of algorithms include search engines, which show you a set of results based on your queries, or social media apps, which show content based on your interests. Algorithms allow AI tools to create predictive models, or create content or art based on your inputs.
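To make "a set of instructions that gives you a result" concrete, here's a toy sketch (not how any real search engine works) of a scoring algorithm that ranks items by how many of your query's words they contain:

```python
def rank_results(query, documents):
    """Rank documents by how many query words each one contains."""
    query_words = set(query.lower().split())
    # Score each document by counting the words it shares with the query.
    scored = [(sum(w in doc.lower().split() for w in query_words), doc)
              for doc in documents]
    # Highest score first -- this ordering is the algorithm's "result."
    return [doc for score, doc in sorted(scored, reverse=True)]

docs = ["best coffee shops nearby", "coffee brewing guide", "tax filing tips"]
print(rank_results("coffee shops", docs))
# → ['best coffee shops nearby', 'coffee brewing guide', 'tax filing tips']
```

The same shape of idea, scaled up enormously, is what powers search rankings and recommendation feeds.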
Bias
In the context of AI, bias refers to erroneous results produced because the algorithm makes incorrect assumptions or lacks sufficient data. For example, a speech recognition tool may struggle to understand certain English accents because it was trained primarily on American-accented speech.
Conversational AI
AI tools that you can talk to, such as chatbots or voice assistants, are called conversational AI. If you're typing or speaking a question to an assistant and getting a reply in plain language, you're using conversational AI.
Data mining
The process of combing through large sets of data to find patterns or trends. Some AI tools use data mining to help you understand what makes people buy more items in a store or on a website, or how to optimize a business to cater to increased demand during peak hours.
Deep learning
Deep learning attempts to recreate the way the human brain learns by utilizing three or more neural network "layers" to process large volumes of data and learn by example. These layers each process their own view of the given data and come together to reach a final conclusion.
Software for self-driving cars uses deep learning to identify stop signs, lane markers, and traffic lights through object recognition. This is achieved by showing the AI tool many examples of what a certain object looks like (e.g., a stop sign); through repeated training, the tool eventually learns to identify that object with close to 100% accuracy.
Large language model (LLM)
A large language model (LLM) is a deep-learning algorithm that is trained on a massive data set to generate, translate, and process text. LLMs (like OpenAI's GPT-4) allow AI tools to understand your queries and generate text responses based on them. LLMs also power AI tools that can identify the important parts of a text or video and summarize them for you.
Generative AI
Generative AI tools, often powered by an LLM, can generate art, images, text, or other results from your inputs. Generative AI has become the catch-all term for the current wave of AI tech that many companies are adding to their products. For example, a generative AI model can create an image from a few text prompts, or turn a vertical photo into a wide-screen wallpaper.
Hallucination
When AI presents fiction as fact, we call that hallucinating. Hallucinations can happen when an AI's data set isn't accurate or its training is flawed, so it confidently outputs an answer based on whatever knowledge it has. That said, because AI is based on a complex web of networks, we can't always explain why a particular hallucination occurred. Lifehacker writer Stephen Johnson has great advice for spotting AI hallucinations.
Image recognition
The ability to identify specific subjects in an image. Computer programs can use image recognition to spot flowers in an image and name them, or to identify different species of birds in a photo.
Machine learning
When algorithms can improve themselves by learning from experience or data, it's referred to as machine learning. Machine learning is the general practice that many of the other AI terms we've discussed stem from: Deep learning is a form of machine learning, and large language models are trained through machine learning.
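As an illustration (a toy example, not how production models are trained), here's a program that "learns" the relationship y = 2x from example data by repeatedly nudging a guess in whatever direction shrinks its error:

```python
# Training data: inputs and the outputs we want the model to learn (y = 2x).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

w = 0.0    # the model's single learnable parameter (its "knowledge")
lr = 0.01  # learning rate: how big each correction step is

for _ in range(1000):
    for x, y in zip(xs, ys):
        error = w * x - y    # how far off the current guess is
        w -= lr * error * x  # nudge w to shrink the error (gradient descent)

print(round(w, 2))  # → 2.0: the model has learned the pattern from the data
```

Nobody told the program that the answer was "multiply by 2" -- it improved its parameter purely from examples, which is the essence of machine learning.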
Natural language processing
When a program can understand inputs written in human languages, it falls under natural language processing. It's how your calendar app understands what to do when you write, "I have a meeting at 8 p.m. at the coffee shop on Fifth Avenue tomorrow," or when you ask Siri, "What's the weather like today?"
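A real NLP system is far more sophisticated than this, but a rough sketch of the calendar example might start with simple pattern-matching to pull the time and place out of the sentence (real systems use trained models, not hand-written rules like these):

```python
import re

sentence = "I have a meeting at 8 p.m. at the coffee shop on Fifth Avenue tomorrow"

# Naive hand-written patterns, purely for illustration.
time_match = re.search(r"\b\d{1,2}(:\d{2})?\s*[ap]\.m\.", sentence)
place_match = re.search(r"at the ([\w ]+?)(?: tomorrow|$)", sentence)

print(time_match.group(0))   # → 8 p.m.
print(place_match.group(1))  # → coffee shop on Fifth Avenue
```

The hard part of NLP is that human phrasing varies endlessly ("tomorrow at eight in the evening", "8pm tmrw"), which is why modern tools learn these patterns from data rather than relying on rules.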
Neural networks
The human brain has layers upon layers of neurons constantly processing information and learning from it. An AI neural network mimics this structure of neurons to learn from data sets. A neural network is the system that enables machine learning and deep learning, and ultimately allows machines to perform complex tasks such as image recognition and text generation.
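A vastly simplified sketch of the idea: each artificial "neuron" weighs its inputs and passes its output forward to the next layer. Here the weights are hand-picked for illustration; in a real network, millions of them are learned from data:

```python
def neuron(inputs, weights, bias):
    """One artificial neuron: weigh the inputs, add a bias, then activate."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return max(0.0, total)  # ReLU activation: keep positives, zero out negatives

def tiny_network(inputs):
    # Hidden layer: two neurons, each with its own view of the inputs.
    hidden = [neuron(inputs, [0.5, -0.2], 0.1),
              neuron(inputs, [0.3, 0.8], -0.4)]
    # Output layer combines the hidden layer's conclusions.
    return neuron(hidden, [1.0, 1.0], 0.0)

print(tiny_network([1.0, 2.0]))
```

Stacking many such layers, and learning the weights automatically, is what turns this simple structure into deep learning.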
Optical character recognition (OCR)
The process of extracting text from images is done via OCR. Programs that support OCR can identify handwritten or typed text, and let you copy and paste it as well.
Prompt engineering
A prompt is any series of words that you use to get a response from a program, such as generative AI. In the context of AI, prompt engineering is the art of writing prompts to get chatbots to give you the most useful responses. It's also a field where people are hired to come up with creative prompts to test AI tools and identify their limits and weaknesses.
Reinforcement learning from human feedback (RLHF)
RLHF is the process of training AI with feedback from people. When the AI delivers incorrect results, a human shows it what the correct response should be. This allows the AI to deliver accurate and useful results a lot faster than it would otherwise.
Speech recognition
A programās ability to understand human speech. Speech recognition can be used for conversational AI to understand your queries and deliver responses, or for speech-to-text tools to understand spoken words and convert them to text.
Token
When you feed a text query into an AI tool, it breaks down this text into tokens, common sequences of characters in text, which are then processed by the AI program. If you use a GPT model, for example, the pricing is based on the number of tokens it processes: You can calculate this number using the company's tokenizer tool, which also shows you how words are broken down into tokens. OpenAI says one token is roughly four characters of text.
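Exact counts depend on the model's tokenizer, but OpenAI's rule of thumb (one token is roughly four characters) gives a quick back-of-the-envelope estimate -- a sketch:

```python
def estimate_tokens(text):
    """Rough token estimate using the ~4 characters-per-token rule of thumb."""
    return max(1, round(len(text) / 4))

prompt = "Explain the difference between deep learning and machine learning."
print(estimate_tokens(prompt))  # estimates 16 tokens for this 66-character prompt
```

For billing or context-window questions, the tokenizer tool gives the authoritative count; this heuristic is just for quick mental math.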
Training data
A training set or training data is the information that an algorithm or machine learning tool uses to learn and execute its function. For example, large language models may use training data by scraping some of the world's most popular websites to pick up text, queries, and human expressions.
Turing Test
Alan Turing was the British mathematician known as the "father of theoretical computer science and artificial intelligence." His Turing Test (or "The Imitation Game") is designed to determine whether a computer's intelligence is indistinguishable from that of a human. A computer is said to have passed the Turing Test when a human is tricked into thinking the machine's responses were written by a person.