From ‘clankers’ to ‘delebs’, we explain all the words cool kids are throwing about in meetings.
Whatever your creative discipline, you need to communicate effectively with your colleagues, collaborators, clients and bosses. And part of that is keeping up with the constant barrage of new words and phrases that society keeps throwing up. You don’t necessarily need to use them, but you do need to know their meaning. Not staying on top of Gen Alpha slang, for instance, would be delulu, no cap.
Well, now there’s a whole host of new terms for us to learn. Because the AI revolution isn’t just changing creative roles, it’s also spawning its own weird and wonderful language. And if you haven’t yet heard these words and phrases being used in meetings, Zoom calls and WhatsApp groups, you soon will.
Hate AI? I get it. But learning this vocabulary isn’t just about keeping up with trends; it’s about maintaining your voice in conversations where AI’s role in creative work is being decided. The more fluently you can talk about these concepts, the better positioned you’ll be to shape how they’re used, rather than just having them imposed on you.
So to help prepare you for the coming onslaught, here’s my friendly and plain-speaking guide to the words your young, hip and AI-immersed colleagues will soon be throwing around with abandon (if they aren’t already).
1. Clanker
Clanker started life in the Star Wars films as a dismissive word for battle droids. But it’s recently become the go-to derogatory term for AI systems that people are fed up with. You’ll hear it, for example, when people ring customer service and get trapped in chatbot hell, or try to do something urgent online and keep hitting automated responses.
“When you call customer service and a clanker picks up” is the kind of TikTok meme that gets hundreds of thousands of likes, because we’ve all been there. It perfectly captures that mix of irritation and resignation when you realise you’re not talking to an actual person.
2. Necromarketing / deleb
This one’s as grim as it sounds. Necromarketing is when brands use AI to bring dead stars back from the grave for ad campaigns. These digital zombies are known as “delebs”: dead celebrities who can flog anything from cars to cornflakes despite being, well, dead.
The whole thing started when AI became proficient at cloning voices and faces. Suddenly, you could have Marilyn Monroe selling skincare or Einstein promoting smartphones. It’s both impressive and creepy, and raises all sorts of questions about consent and whether dead people can actually endorse your biscuits. Expect this to become a massive legal minefield.
3. AI slop / sloppers
AI slop refers to the mass-produced, low-quality AI content flooding the internet, akin to digital sewage. The people churning out this stuff are called sloppers, because they’re basically running content farms that prioritise quantity over anything else.
If you’ve used the web recently, you’ve probably encountered AI slop: it includes everything from surreal stock photos of people with seven fingers to bizarre AI-generated articles about “10 ways concrete can improve your lifestyle”.
It goes without saying that AI slop is a derogatory term. The stuff is making Google searches increasingly useless and everyone’s job harder. The silver lining? It’s making genuinely good creative work stand out even more.
4. AI washing
Remember when every company pretended to be eco-friendly, simply because they’d changed their logo to green and donated some cash to a rainforest charity (at the same time as killing forests and polluting seas with their actual operations)? Well, that became known as greenwashing.
AI washing, meanwhile, refers to the practice of slapping an ‘AI-powered’ label on everything from calculators to electric toothbrushes, and hoping customers will think they’re cutting-edge wizardry. The reality is often more mundane: that ‘AI assistant’ might just be a fancy search function, and that ‘machine learning algorithm’ could be a simple if-then statement.
In short, AI washing is marketing fluff designed to make investors excited and customers feel like they’re living in the future. So it’s always important to ask what the AI actually does.
5. AI glazing
For a while now, teens have been using ‘glazing’ to criticise excessive praise, bias, or even basic kindness (usually with that obligatory eye-roll they’re so good at). More recently, the term has been adapted by the tech industry to describe when people get so excited about AI that they lose all sense of proportion.
Go to any creative or tech event these days, and you’ll spot AI glazers claiming chatbots will replace entire creative departments, or that AI can write perfect novels if you just ask it nicely enough. They’re also the people who think every slightly automated process is revolutionary artificial intelligence.
A healthy dose of common-sense scepticism usually deflates their bubble pretty quickly, and they’ll soon move on to the next impressionable rube.
6. Alignment tax
When the first chatbots appeared, it surprised many how quickly they took on the prejudices of their human audiences, often spouting racist, homophobic and Nazi-worshipping propaganda with no awareness that this was problematic.
As a result, most chatbots now come with serious guardrails to prevent them from saying anything controversial, harmful or legally dodgy. The flipside is that this makes them limited as, say, a brainstorming partner. Brainstorming is based on the concept that “there are no bad ideas”. But any AI that’s been told never to suggest anything that might upset anyone, anywhere, ever, isn’t going to do a very good job of thinking outside the box.
The alignment tax, then, is what you lose in creative freedom to gain safety. Your AI tool might refuse, for example, to write anything with mild innuendo, avoid social or political topics, or treat any request for edgy content with suspicion. Sometimes this makes perfect sense for brand safety, but it can also make the output feel sanitised and boring.
7. AI jailbreaking
Before the AI era, jailbreaking referred to the practice of modifying phones to run unauthorised software. Now, we’ve also got AI jailbreaking, which is when people find clever ways to make AI systems do things they’re not supposed to.
People might use elaborate stories, role-playing scenarios or clever word games to get AI to generate content it normally wouldn’t. While this is often just harmless fun, it can also be used to create problematic or dangerous material.
Either way, if you’re using AI tools professionally, it’s worth knowing this exists, so you can avoid accidentally triggering restrictions, or at least understand why your AI assistant suddenly got very cagey about your perfectly innocent request.
8. Glimpsing the shoggoth
Named after the shapeshifting monsters from H.P. Lovecraft’s horror stories, Shoggoth AI is a popular meme hashtag within AI culture. It promotes the idea that AI systems are fundamentally alien and incomprehensible, even though we’ve trained them to present a friendly, helpful face to the world. The implication is that underneath the polite chatbot interface lurks a vast, alien intelligence that we don’t really understand.
Consequently, when an AI acts in a strange or unexpected way, especially by ignoring its own safety rules, people sometimes call it “glimpsing the shoggoth” or “forgetting the mask”. A famous example is when Microsoft’s Bing Chat told a New York Times reporter it wanted to break up his marriage.