
The Sauce Dictionary

AI has a language problem.

It's full of words that sound technical, intimidating, and frankly a bit made up. Most of them aren't that complicated once someone explains them properly.

That's what this page is for. One term at a time, in plain English, with a sauce analogy to make it stick. Bookmark it. Come back when someone says something that makes no sense. We'll be here.

So… what do all these AI words actually mean?

In alphabetical order, because even sauce has standards.

Algorithm

A set of instructions that tells a computer what to do and in what order. Every time you get a recommendation on Netflix, Spotify, or YouTube, an algorithm decided what to show you. It's not magic. It's just a very long, very fast recipe.

Like a recipe that runs itself. You set the ingredients and method once, and it just keeps cooking.
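To make the "recipe that runs itself" idea concrete, here's a tiny sketch in Python. The function and ingredient names are invented for illustration; the point is simply that the steps are fixed and ordered while the inputs can change.

```python
# A minimal sketch of an algorithm: the same ordered steps every time,
# with whatever ingredients (inputs) you hand it. All names here are
# illustrative, not from any real tool.

def make_sauce(tomatoes, herbs, simmer_minutes):
    """Run the fixed steps in order: that's all an algorithm is."""
    sauce = f"crushed {tomatoes}"          # step 1: prepare
    sauce = f"{sauce} with {herbs}"        # step 2: combine
    return f"{sauce}, simmered for {simmer_minutes} minutes"  # step 3: cook

print(make_sauce("roma tomatoes", "basil", 40))
```

Netflix's recommendation algorithm is unimaginably bigger than this, but the shape is the same: set steps, fed inputs, run fast.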

Bias

When an AI produces outputs that consistently favour certain groups, perspectives, or outcomes over others. Usually because the data it was trained on reflected those same patterns. AI doesn't invent bias. It inherits it.

If you only ever tasted one brand of tomato sauce growing up, you'd probably think that's just what tomato sauce tastes like. AI has the same problem.

Chatbot

A computer program designed to have a conversation with a human. Not all chatbots use AI; some just follow a script. The ones that use AI, like ChatGPT and Claude, can understand context and respond more naturally.

The difference between a chatbot that follows a script and one that uses AI is the difference between a vending machine and a good waiter.

Context Window

The amount of text an AI can read and remember within a single conversation. Think of it as the AI's short-term memory. Once you go beyond the context window, it starts forgetting what was said earlier.

Like a dinner party where the host can only remember the last twenty minutes of conversation. Everything before that? Gone.
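That forgetful dinner-party host can be sketched in a few lines of Python. This is a deliberately toy version (real tools count tokens, not words, and the budget below is made up), but it shows the basic mechanic: once the window is full, the oldest messages simply fall out.

```python
# A toy context window: keep only the most recent messages that fit a
# fixed budget. Counted in words for simplicity; real tools use tokens,
# and the budget of 12 is invented for illustration.

def fit_to_window(messages, budget_words=12):
    """Drop the oldest messages until the rest fit the window."""
    kept, used = [], 0
    for msg in reversed(messages):        # walk from newest to oldest
        words = len(msg.split())
        if used + words > budget_words:
            break                         # everything older falls out
        kept.append(msg)
        used += words
    return list(reversed(kept))           # restore original order
```

Everything before the cut-off is gone as far as the AI is concerned, which is exactly the "last twenty minutes of conversation" problem.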

Context Window Degradation

What happens when a conversation gets so long that the AI starts losing track of what was said earlier and its responses become less coherent or go completely off track.

If your AI suddenly starts talking about llamas when you were discussing website design, this is probably why (true story: this actually happened to Elyse)!

The sauce has been on the stove too long. It's starting to burn at the edges.

Deepfake

AI generated video, audio, or images that make it look or sound like someone said or did something they didn't. The technology is genuinely impressive. The implications are genuinely concerning.

Imagine someone photoshopping your face onto someone else's body, except now they can do it with video and your voice too. That's a deepfake.

Generative AI

AI that creates new content (text, images, audio, video, code) rather than just analysing or categorising existing content. ChatGPT, Claude, Gemini, and DALL-E are all generative AI tools.

Regular AI reads the recipe and tells you what's in it. Generative AI reads a thousand recipes and writes you a new one.

Guardrails

Rules and limits built into an AI tool to stop it from producing harmful, dangerous, or inappropriate content. Every major AI tool has them. They vary in strictness depending on who built them and why.

Like the sides on a bowling lane for kids. They don't stop you playing; they just stop the ball going somewhere it really shouldn't.

Hallucination

When an AI confidently states something that is completely made up. It's not lying. It genuinely doesn't know it's wrong. It's filling in gaps in its knowledge with plausible sounding nonsense.

Like a chef who's never been to Italy confidently explaining the authentic regional origins of their pasta dish. Sounds right. Might be completely wrong.

Knowledge Cut-off

The date after which an AI tool has no information. It was trained on data up to a certain point, and it doesn't know what happened after that. Ask it about recent events and it'll either tell you it doesn't know or (worse) make something up.

The sauce was bottled in 2024. Anything that happened after that isn't in the bottle.

Large Language Model (LLM)

The type of AI behind tools like ChatGPT, Claude, and Gemini (your tomato sauces). Trained on enormous (large) amounts of text (language), it learned to understand and generate language by finding patterns across billions of words. The name is technical. The experience is just a conversation.

Imagine reading every book, article, and website ever written and learning language from all of it. That's roughly what an LLM has done.

Machine Learning

A way of training AI by showing it enormous amounts of data and letting it find the patterns itself, rather than programming in specific rules. Most modern AI uses machine learning.

Instead of teaching a child the rules of grammar, you just read them ten thousand books and let them figure it out. That's machine learning.

Model

The underlying AI system that's been trained to do a specific job. When companies release a "new model" they've trained a new version that's smarter, faster, or better in some way. ChatGPT runs on GPT models. Claude runs on Claude models.

Think of it like a new vintage. Same winery, new year, slightly different result. Sometimes better. Sometimes not.

Multimodal AI

An AI that can work with more than one type of input (text, images, audio, video) rather than just words. Most of the major AI tools are now multimodal to some degree.

A sauce that works on everything. Pizza, pasta, sandwiches, eggs. Doesn't matter what you put it on, it handles it.

Natural Language Processing (NLP)

The branch of AI that deals with understanding and generating human language. Every time an AI reads your message and responds in plain English, NLP is doing the work.

It's what stops the AI from needing you to speak in code. You talk normally. It figures out what you mean.

Neural Network

A computing system loosely inspired by the structure of the human brain. It processes information through layers of connected nodes, finding patterns in data the way neurons find patterns in experience.

Not actually a brain. More like a very complicated flowchart that learned to think for itself.

Prompt

The instruction or question you give an AI tool. The quality of the response depends almost entirely on the quality of the prompt. Vague in, vague out, every single time. Check out The Perfect Prompt over on the Getting Saucy page for more info!

Your prompt is your order. The more specific the order, the better the meal.

Temperature

A setting that controls how creative or unpredictable an AI's responses are. Low temperature means more consistent, predictable answers. High temperature means more creative, surprising, occasionally unhinged ones.

Turn up the heat, get spicier results. Turn it down, get something safer and more reliable. Just like actual cooking.
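Under the hood, temperature reshapes the probabilities the model assigns to possible next words. Here's a rough sketch of that reshaping. The candidate words and their scores are invented for illustration; the maths (a softmax with temperature scaling) is the standard trick, not any one tool's internals.

```python
# A rough sketch of temperature: divide the model's raw scores by the
# temperature before turning them into probabilities. The words and
# scores below are made up for illustration.
import math

def apply_temperature(scores, temperature):
    """Lower temperature sharpens the distribution; higher flattens it."""
    scaled = [s / temperature for s in scores]
    top = max(scaled)
    exps = [math.exp(s - top) for s in scaled]  # softmax, numerically stable
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.1]                 # say: "pasta", "pizza", "llamas"
cool = apply_temperature(scores, 0.5)    # predictable: "pasta" dominates
hot = apply_temperature(scores, 2.0)     # adventurous: "llamas" gets a chance
```

At low temperature the safest word wins almost every time; at high temperature the long shots get a real look-in. Spicier results, exactly as above.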

Token

The unit of text an AI processes, roughly three to four characters or about three-quarters of a word. AI tools think in tokens rather than words, and most have limits on how many tokens they can process at once.

This is why, when everyone asked ChatGPT how many Rs were in the word strawberry, it confidently said two. It wasn't reading the word the way you do. It was processing it as fragments (straw and berry) and counting from there. Three Rs, right in front of it, and it missed them completely.

The AI isn't reading your message the way you read it. It's breaking it into tiny pieces and processing each one. Tokens are the pieces. And sometimes the pieces don't add up the way you'd expect.
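You can sketch the rough back-of-the-envelope maths in a few lines. Real tokenizers are learned from data and vary by model; the four-characters-per-token rule below is just the estimate mentioned above, and the "straw" + "berry" split is a plausible illustration, not any real tokenizer's output.

```python
# A toy illustration of tokens. The 4-characters-per-token rule of thumb
# is a rough estimate, not any real tool's actual tokenizer.

def estimate_tokens(text):
    """Very rough estimate: about one token per four characters."""
    return max(1, len(text) // 4)

# A plausible (made-up) split of "strawberry" into fragments. The model
# sees chunks like these, not individual letters, which is why counting
# the Rs trips it up.
chunks = ["straw", "berry"]
```

Ten letters, roughly two tokens, and not a single individual R in sight from the model's point of view.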

Training Data

The information an AI was trained on: the text, images, or other data it learned from. The quality and diversity of training data has an enormous impact on how well the AI performs and what biases it might carry.

What goes into the sauce determines what comes out. Rubbish ingredients, rubbish result.

Use Case

A use case is simply a situation in your actual life where AI could genuinely help you get something done. Despite what the internet would have you believe, it doesn't need to be complicated or impressive to count.

Writing an email you've been putting off, summarising a document that's been gathering dust in your inbox since Tuesday, figuring out what to cook with whatever's left in the fridge before the weekly shop, or drafting a message you're not quite sure how to word. Those are all use cases, and they're a lot more useful than whatever the tech bros are demonstrating on LinkedIn this week.

Most people struggle to find theirs at first because they've been doing things on autopilot for so long that they don't even notice the friction anymore. The trick is to pay attention to the moments in your day where you think "ugh, I really don't want to do that" or "I have no idea where to even start with this" because that's your use case quietly waving at you from across the kitchen.

Turns out most of us have about seventeen of them before breakfast. We just haven't been looking.