An AI-generated fictional character created by Billion Dollar Boy as part of their creative process
Forget the hype: how are AI tools actually being employed in the creative world right now? For this special report, we spoke to more than 25 creative agencies to find out.
There’s been much controversy about AI and its effect on creative jobs, a subject that’s been covered to death across the length and breadth of the internet. But here I’m going to look at an area that’s had less coverage: how are creative agencies actually using AI in their day-to-day work?
To prepare this report, I surveyed over 25 leading creative shops of all sizes, and they all told me pretty much the same thing: designers are using both generative art tools such as Midjourney and large language models such as ChatGPT to help generate ideas, experiment with different strategies and create visual concepts.
Rob Kavanagh, executive creative director at Oliver UK, gives a typical response when I ask whether the agency has incorporated AI into their workflow. “We’ve been using AI for years – especially on innovation projects that rely on high-end personalised and automated content or customer experiences at scale,” he begins. “But generative AI, that’s a different story entirely.
“When it properly burst onto the scene in summer 2022 with Midjourney et al., it caught the imagination of our creatives right away,” he says. “They could instantly grasp the potential and couldn’t help getting their hands dirty. Since then, our creatives have used GenAI as part of their inspiration and ideation process.”
This enthusiasm is echoed by Dom Desmond, creative director at Grayling. “We started using AI just a year ago with DALL·E,” he says. “It blew our minds. Since then, we’ve used AI so much, we can’t imagine working without it.”
I heard a similar story from the vast majority of agencies I talked to. But it’s important to add that at the same time, agencies are generally NOT using AI for finished work.
AI for explaining ideas
“While it’s a useful ‘starter for 10’, generative AI artwork rarely makes the final pitch,” explains Simon Collister, director at UNLIMITED’s Human Understanding Lab. “It’s a really useful way to start the creative development process, but it is always refined and built on by humans. Close, but not a replacement just yet.”
Greig Robinson, head of design at ustwo London, agrees. “Ultimately, AI should be used as a springboard rather than a final deliverable,” he says.
But agencies are increasingly using AI as part of the creative process that leads up to that deliverable. One use, for example, is explaining ideas visually, both to fellow designers and to clients.
“It’s all about communicating intangible ideas in our heads to brands on a screen,” explains Henry Crisp, a senior creative at global influencer agency Billion Dollar Boy. “AI tools like Midjourney have streamlined this ideation process and brought creative ideas to life. For example, we visualised this fictional character [shown at the top of this page], blended with a real-life influencer, to help brand partners understand the concept more easily.”
Image created by Interstate using Midjourney. The prompt was: ‘Frontal portrait, black robot dummy image with no eyes or mouth’
Image created by Interstate using Midjourney. The prompt was: ‘Digital background, purple, gradient, soft light, low contrast’
Image created by Interstate using Midjourney. The prompt was: ‘Long exposure photo of bilbao guggenheim in the night’ (gradient variation extrapolated from designed gradient)
Similarly, Dan Sherratt, VP of creative and innovation at Poppins, says: “We’ll tell Midjourney to generate a 3D scene we can place designs into, from a hand holding a mobile phone to Brutalist architecture we’d like to place a car in. When the client has the product but not the scene, generative AI works wonders.”
AI for art direction
Another common use of generative AI in agencies is for art direction. For example, before Poppins commission a 3D artist, they’ll sometimes generate an AI image to give the client an idea of what the end result might be. This saves the client from spending a lot of money on something before they know it’s going to work.
“This speeds up our communication with clients to get projects off the ground,” says Dan. “It gets them excited about what we might easily be able to visualise in our imaginations, but they sometimes need a helping hand as far as concept art goes.”
ustwo takes a similar approach. “For example, for a project we worked on for a Middle Eastern bank, we used AI image generators as a tool to help us define an art direction that captured the product’s personality,” says Greig. “The generated images served as a brief for the agency to shoot the photography for the product.”
Along similar lines, designers are finding AI very useful in creating mood boards. Before AI came along, Dan says, Poppins used images from Pinterest, Flickr and other found imagery to build mood boards, and in fact, they still do so. However, generative AI has been indispensable in filling in the gaps when they can’t find suitable material.
Creating the right AI images, though, isn’t as easy as it looks. “Crafting the right prompts takes a blend of creative thinking and technical understanding,” says Dan. “For example, we might specify to Midjourney the type of film stock that might have been used to get a photographic effect. We might ask for a specific type of architecture for a backdrop. More generally, we find that applying broad emotional descriptions with very specific requests gives you the best results. It’s not easy, but it’s definitely worth the effort.”
Matteo Di Iorio, associate partner, creative at Interstate Creative Partners, explains that such AI images are helping make their brand design teams more efficient in articulating concepts.
Image created by Interstate using Midjourney. The prompt was: ‘Futuristic luxury interior set in mountains in Saudi Arabia’
Image created by Interstate using Midjourney. The prompt was: ‘Ultra realistic modern, advance and futuristic energy station in city’
Image created by Poppins using Midjourney.
“Where we traditionally either built stock libraries for mockups or spent hours searching and curating imagery that fits an aesthetic or brief, now, with the right prompt structure, we can generate it in seconds,” he says. “AI is a tool we value for its ability to bring to life hard-to-define ideas and use them in concepts we can reference for our work.”
Gen art limitations
The key phrase here, however, is “right prompt structure”. It can take a lot of trial and error to get generative AI images right, and here’s where you’ll find the limitations of AI come into sharp focus. Because ultimately, AI art generators are usually terrible at giving you what you want.
Josh Parker, senior designer at Embryo, has found this out the hard way. “When I’ve leant fully on AI for image creation for client campaigns, I’ve found it can be quite difficult to be specific,” he says. “It also tends to read certain keywords in a prompt and ignore others. An example of this is when I asked AI to show me ‘a room in a house without an elephant’, it gave me image after image featuring elephants.”
Dan Sherratt has also had some frustrations with AI image generators. “Because of our name, Poppins, we play with umbrella imagery a lot,” he explains. “And that’s something that AI can’t yet seem to get right: hands and umbrellas. Another example is that we were trying to get things to appear to be floating in one of our proposals, and AI just couldn’t grasp the concept. Eventually, we had to prompt it to imagine things were hanging from the ceiling by a string, and we Photoshopped the string out.”
AI for being reactive
One thing that AI art has going for it, though, is that it’s very quick and easy to tweak it. And that makes it useful if you’re creating client work on the fly that’s based on evolving events and needs to be delivered as quickly as possible.
Image created by Poppins using Midjourney.
Image created by Poppins using Midjourney.
Image created by Poppins using Midjourney.
An example of this comes from Andy Taylor, chief creative officer at the full-service creative agency Trouble Maker.
“One thing we’ve been using AI for is to bring to life personalised, reactive campaigns that adapt to multiple situations and markets,” he explains. “For example, when we worked on a campaign centred around a global sports tournament, AI allowed us to create content that could instantly react to every possible fixture and result. Even down to changing the ethnic makeup of the crowd to reflect the local market.”
AI for storyboarding
Storyboarding is another common use for generative AI. “When it comes to storyboard frame and content exploration, AI allows us to explore alternative camera angles and frame content at the scriptwriting stage of a project relatively easily,” says production director Matt Merralls of Initials CX. “And it allows our creative teams to experiment with concepts and ideas without the need for a storyboard artist to craft the cells.”
Cheil UK has been taking a similar approach. “For us, AI tools like Midjourney have given us a taste of what’s possible, but with an interface most creatives find confusing,” says creative director Nick Spink. “So we’ve moved on to using Runway – a more intuitive tool we feel offers us greater creative freedom and opportunities.
“It’s already helping us with the generation of storyboard images,” he continues. “And it’s adding that extra layer of slickness to visual generation, three-dimensional space creation for experiential projects and background image extensions for differing asset formats.”
Storyboarding image by Initials CX
Image created by Wonder using Midjourney
Image created by Poppins using Midjourney.
However, AI is by no means perfect for this task, at least not right now. “Say we’re using AI to create an illustrated storyboard featuring our main character; let’s call him Harry,” says Jamie Field, video MD at Definition (formerly Limelight PR). “It becomes challenging to maintain consistency across multiple frames where Harry appears in different settings. Each frame may depict slight variations in Harry’s appearance or inconsistencies in the illustration style.
“Interestingly, this is a problem recognised by OpenAI that they’ve managed to solve in their new text-to-video model, Sora,” he adds. “Sora can ‘persist’ video subjects, even when they leave the frame.”
AI for humour
So far, I’ve stressed that AI is normally used as part of the creative process but NOT to produce the final result. There are exceptions, though; right now, these are primarily where a campaign is based on humour.
One example comes from global marketing agency Iris. “We run regular AI explorer competitions where cross-agency teams use different tools,” Chief Creative Officer Grant Hunter explains. “One of the winning teams created Polly Mers, a virtual influencer oblivious to the destructive presence of single-use plastic in the world she inhabits. You can follow her adventures at @polly.mers.”
AI for research and summaries
Alongside generative art, what about large language models like ChatGPT? If the agencies I surveyed are anything to go by, it seems like virtually everyone is now using such tools in their day-to-day work.
“At the research stage, for example, AI can be used for processing key outtakes in interviews or simplifying complex copy,” says Rob Skelly, creative director at Born Ugly. Embryo’s chief innovation officer, James Welch, adds: “We use AI to pre-filter industry news: to get rid of duplicates, puff pieces and listicles. We then use AI to read articles for us, giving us a detailed synopsis so that we know whether to spend time reading the full article or not. We do this with an in-house tool that we made called Seedling.”
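Seedling itself is an in-house tool, but the underlying pattern James describes (send an article to a large language model, get back a short synopsis and a keep-or-skip verdict) is simple to sketch. The snippet below is a minimal illustration using the OpenAI Python SDK; the model name, prompt wording and KEEP/SKIP convention are my own assumptions, not details of Embryo’s actual system.

```python
# Minimal sketch of an LLM-powered news pre-filter and summariser, loosely
# based on the workflow described above. This is NOT Embryo's Seedling tool:
# the model name, prompt wording and KEEP/SKIP convention are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

def triage_article(text: str) -> str:
    """Return a short synopsis plus a KEEP or SKIP verdict for one article."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "You summarise industry news for a creative agency. "
                    "Reply with a three-sentence synopsis, then the word KEEP "
                    "if the piece contains substantive news, or SKIP if it is "
                    "a duplicate, puff piece or listicle."
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("article.txt", encoding="utf-8") as f:  # hypothetical input file
        print(triage_article(f.read()))
```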
AI has been similarly integrated into the workflow of marketing agency Too Many Dreams. “We’ve used Claude.AI to summarise large industry reports into short synopses, suitable for further commentary,” says founder and MD Stephen Jenkins. Meanwhile, David Clancy-Smith, design director and AI design leader at Marks, has been using LLMs to prompt ideas around strategic positioning, as well as for more logistical tasks such as conducting baseline research from which his team can add their expert depth and knowledge.
“It’s been a great tool to summarise, pull insights and fill knowledge gaps,” he enthuses. “This ensures our strategy team and designers working on projects have a wide breadth of information to bounce off in order to achieve the best possible response.”
AI for consistency and coding
The uses for LLMs at agencies, essentially, seem endless. For example, digital agency Wonder are using them as a tool for unifying their tone of voice.
“When multiple people are working collaboratively, each contribution naturally has a different TOV,” says creative director David Crease. “But by simply passing this copy through ChatGPT, including a prompt like: ‘make this copy succinct, punchy and with the energy and tone of Tony Stark’, we can create a unified output. It’s a massive timesaver.”
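For anyone curious what that looks like beyond pasting text into the ChatGPT interface, here’s a minimal sketch of the same pass run through the OpenAI Python SDK. The prompt is the one David quotes; the wrapper code, function name and model choice are my own assumptions rather than Wonder’s actual setup.

```python
# Minimal sketch of a tone-of-voice unifying pass via the OpenAI API.
# Wonder describe pasting copy into ChatGPT, so everything here except the
# prompt itself (quoted above) is an illustrative assumption.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

def unify_tone(copy: str) -> str:
    """Rewrite collaboratively written copy into one consistent voice."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model would do
        messages=[
            {
                "role": "user",
                "content": (
                    "Make this copy succinct, punchy and with the energy and "
                    "tone of Tony Stark:\n\n" + copy
                ),
            },
        ],
    )
    return response.choices[0].message.content
```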
Image created by Poppins using Midjourney.
Image created by Poppins using Midjourney.
This self-promotion image shows different ways that Born Ugly have used AI to reinterpret their own brand icon (‘The mark of potential’). They use these to help tell different stories and personalise presentations by reflecting the client or sector that they play in. The pink one in the middle left, for example, represents their client MitoQ, who operate in the world of cell health.
Elsewhere, experience design agency I-AM is using LLMs to create customer profiles. “We’re able to create really rich profiles using our own target audience research, ranging from quantitative surveys to focus groups to one-to-one sessions,” explains Kirsty Vance, associate director of interior design. “However, there can be areas we may need to question or challenge, and ChatGPT can offer us an alternative perspective to prompt our thinking.”
A further popular use for LLMs is coding. For example, digital marketing agency Precis has been making use of ChatGPT to write After Effects scripts that customise the software to fit in better with their motion design workflow.
“AI has helped with expediting the development of AE scripts for multiple use cases, which has been a dream,” says Stephanie Underwood, group chief solutions officer and creative. “As an example, we developed an internal script to help with content management and file structures. The script scans your project to ensure all imported files are placed neatly in the right folder on our drive.
“The time of creation of this script was significantly advanced through the utilisation of ChatGPT, which contributed approximately 80% of the code. Our team expertly crafted the remaining 20%. This script has become a pivotal component in our Motion Team workflow, resulting in substantial time savings, and has markedly reduced frustration among our Project Managers and other team members.”
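Precis’s tool is an After Effects script (After Effects is scripted in ExtendScript, a flavour of JavaScript), so the sketch below only illustrates the same tidying logic in Python: scan a project folder and move each loose asset into a subfolder based on its type. The folder names and extension mapping are invented for the example, not Precis’s actual structure.

```python
# Tiny sketch of the "put imported files in the right folder" idea described
# above. Precis's real tool is an After Effects script; this Python version
# only illustrates the logic, and the folder mapping below is invented.
from pathlib import Path
import shutil

# Assumption: route loose assets into subfolders by file extension.
FOLDERS = {
    ".psd": "02_Design", ".ai": "02_Design",
    ".mov": "03_Footage", ".mp4": "03_Footage",
    ".wav": "04_Audio", ".mp3": "04_Audio",
}

def tidy_assets(root: str) -> None:
    """Move loose asset files directly under `root` into their subfolders."""
    root_path = Path(root)
    for item in root_path.iterdir():
        target = FOLDERS.get(item.suffix.lower())
        if item.is_file() and target:
            dest = root_path / target
            dest.mkdir(exist_ok=True)
            shutil.move(str(item), str(dest / item.name))

if __name__ == "__main__":
    tidy_assets("project_assets")  # hypothetical project folder
```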
Precis has also used Perplexity AI to solve problems in Cinema 4D. Previously, if they had an issue, they’d spend a lot of time posting on forums trying to find an answer; they’ve found Perplexity gets them to a fix much more quickly.
AI inspiration and final imagery for Fold7’s work with Near
Creative exploration by Initials CX
Image created by Wonder using Midjourney
In short, there is a huge range of ways that LLMs can help agencies, but in general, they boil down to one thing: saving you effort. As Lauren Richardson, senior account executive at Marketing Signals, says: “Repetitive admin tasks can drain valuable time that could otherwise be spent on more strategic or creative projects. Automating some of these tasks using AI can free up time, supercharge productivity and reduce errors all at once.”
In this light, says Abb-d Taiyo, co-founder and CCO at design and impact agency Driftime, “proactively using AI tools presents a fresh opportunity for digital designers. What were once laborious and time-consuming tasks are now accelerated by tools like Relume.io, a wireframe and sitemap generator that’s marketed as a design ‘ally’ instead of a replacement. It’s clear why this form of generative support is the best use case of AI platforms, plugins, and tools: as a diligent, devoted digital assistant rather than a design pioneer in its own right.”
Danger of overreliance
However, as with generative AI art, there are dangers to be aware of when using LLMs in your work. The biggest is that large language models can be deeply unreliable.
For example, Too Many Dreams told me one of their clients used ChatGPT to draft a press release, which, on first read, sounded fantastic. But when they read it through a second time, they quickly realised that, whilst all the words sounded right and the sentences were constructed properly, they didn’t actually make any sense whatsoever.
More generally, text written by AI lacks the originality, humanity and emotional appeal needed for great creative work. In other words, by itself, it’s not actually very good, and overreliance on it can make your work generic.
“As we move forward with AI, it’s important that we remember it is a tool for us to bring ideas to life, not an idea generator itself,” says Graeme Offord, executive creative director at global drinks specialist agency Denomination. “The trap for designers, and not just young ones, of using AI to come up with their ideas for them – in the same way as trawling Pinterest or Google for ideas – is a tempting one. Avoiding this temptation and instead utilising AI to bring original ideas to life in powerful ways is our goal and something we are already seeing the benefits of.”
Daniela Meloni, associate creative director at FutureBrand, agrees. “While AI speeds up production, it’s essential to recognise it as a tool that enhances creativity rather than replacing it,” she argues. “There’s a concern that outputs may become homogeneous, drawing from similar sources and risking creative amalgamation – and stagnation. But lazy design has always existed and always will. I truly believe AI pushes us creatives to be even more intelligent, empathetic, and free. In a word, human.”
AI across a project
In summary, AI is being used for a wide range of tasks in creative agencies. But it’s not about how much you use AI; it’s about whether you’re using it in a smart and thoughtful way. As Rob Skelly says: “Like any tool, it’s important that this usage is meaningful and grounded with a big idea to avoid it becoming a gimmick. Heinz is a great example of using AI in a really clever way to celebrate the strength of its brand as an icon.”
Image created by Wonder using Midjourney
Josh Parker of Embryo asked AI to show him “a room in a house without an elephant”, and this is what it produced.
AI-generated imagery I-AM used to inform the thinking behind their mood boards.
So, what does that look like in terms of a single project? Tom Munckton, head of design at Fold7, explains how they harnessed AI when creating the visual identity for Web3 brand Near’s annual event last November in Lisbon.
“To a greater degree than would normally have been possible within a defined period of design exploration, we were able to visualise the physical possibilities for the identity system through AI assistance,” he explains.
“Through a combination of found venue imagery, Photoshop AI and Midjourney, we could quickly and convincingly demonstrate how the system could live deep into the experience across venues. Beyond inspiring the client, it in turn gave them more information to brief their onward build partners, pushing them to go further.”
As for the mix of tools, that varies from agency to agency, and most people we spoke to stressed that their toolkits are reviewed and updated on a regular basis.
Camm Rowland, chief creative officer at Kepler, is typical when he says: “Currently, Midjourney stands out for static imagery, while we expect OpenAI’s Sora to significantly impact our video production. For voice synthesis, ElevenLabs leads in quality, and we use both Gigapixel and Magnific for upscaling. We also employ Zapier for creating automations. But as I mentioned, the leading tools are prone to change, so we need to track developments closely.”
Legal challenges
But there is, of course, a caveat to all this. AI is very much an evolving technology, and everything I’ve written here may be subject to change very quickly and suddenly. Especially because there are several big AI copyright cases pending that could significantly impact the future of AI development and intellectual property law.
AI-generated imagery I-AM used to inform the thinking behind their mood boards.
AI-generated imagery I-AM used to inform the thinking behind their mood boards.
AI-generated imagery I-AM used to inform the thinking behind their mood boards.
For example, there’s The New York Times v. OpenAI and Microsoft, in which the newspaper alleges that OpenAI’s ChatGPT and Microsoft’s Copilot have reproduced substantial portions of its copyrighted articles in their outputs. This case could set an important precedent on whether and how AI-generated text infringes copyrighted material.
A second big case is Getty Images v. Stability AI. This case is being heard in the UK and revolves around Stability AI’s Stable Diffusion image generator, which Getty Images alleges was trained on its copyrighted images without permission.
These are just two of dozens of major lawsuits currently going through the courts, all seeking to establish the principle that AI companies cannot train their models on other people’s content without express permission. And if one of them succeeds, it’s conceivable that some of these big AI platforms could be forced to change, or even shut down, overnight.
The future of AI
In other words, ultimately, we simply don’t know what will happen with AI. But if I were a betting man, I’d say it’s likely that AI will be a big feature of agency life for some years to come.
That doesn’t mean, though, that it’s the be-all and end-all. “AI is an amazing collaborator, but do not expect it to solve your problem and provide you with a beautiful and finished piece of design,” says Dom Desmond. “To get truly unique results that hit the brief dead on, human intervention is necessary. The main benefit of AI from a creative director perspective is that it allows you to open your mind to new possibilities, ideas and styles while saving you a huge amount of craft time.”
“Always add a human layer: that’s where the magic happens,” Stephen Jenkins of Too Many Dreams adds. “AI isn’t simply about getting things done faster and cheaper. It’s about augmenting skills and enhancing processes to help make things better than they could have been with humans or technology alone.”
Matt Merralls of Initials CX adds: “Whilst AI can’t replace great creative minds, it is a power-up for the creative talent that services our industry. By understanding and applying this powerful new technology, we can further our creative output and our clients’ campaign effectiveness.”
An image generated by Midjourney after Josh Parker of Embryo asked it to envision the lift of the future.