
I’ve spent nearly two decades building brand systems at enterprise scale. Global rebrands, template governance, design hubs, asset libraries—the infrastructure that keeps a brand coherent when 50,000 people are using it across more than 100 countries.

So when I started building Together Better Books—a personalised children’s publishing platform for neurodivergent kids—I expected the illustration challenge to be familiar. Design the assets, create the rules, build the system, enforce consistency.

What I didn’t expect was that training an AI model called a LoRA would teach me something new about how brand systems work. And that the lesson applies far beyond children’s books.

The problem: one illustrator, infinite variations

Together Better Books personalises every story for the child reading it. Their hair, skin tone, body type, clothing. The lead character looks like them. Across eight stories, each with unique creature characters, environments, and props, the number of possible illustration combinations runs into the thousands.

No illustrator can hand-draw every version. The economics collapse immediately. So the question became: how do you scale one artist’s creative vision across thousands of outputs without losing what makes it theirs?

Enter the LoRA

A LoRA—Low-Rank Adaptation—is a technique for fine-tuning an AI image model on a specific visual style. You take 20 to 50 illustrations from a single artist, feed them into a training pipeline, and the AI learns to generate new images that match that artist’s linework, colour palette, composition instincts, and visual personality.

Training takes about 20 minutes. Costs under two dollars. And the output is a custom model that can generate illustrations in your artist’s exact style, following whatever text instructions you give it.
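Mechanically, the “low-rank” part is what makes training that cheap. A LoRA doesn’t retrain the base model’s weights; it learns two small matrices whose product nudges the frozen weights toward the artist’s style. A minimal sketch of that arithmetic, with illustrative dimensions (none of these numbers come from the article):

```python
import numpy as np

# A hypothetical frozen weight matrix from one layer of the base image model.
d, k = 512, 512
W = np.random.randn(d, k)

# A LoRA trains only two small matrices whose product is a low-rank
# update to W. With rank r = 8, that is 2 * 512 * 8 = 8,192 trainable
# numbers instead of 512 * 512 = 262,144 for this layer -- which is
# why training is fast and cheap.
r, alpha = 8, 16
A = np.random.randn(r, k) * 0.01   # initialised small
B = np.zeros((d, r))               # so the update starts at exactly zero

# The adapted weight the model actually uses at generation time.
W_adapted = W + (alpha / r) * (B @ A)

trainable, frozen = A.size + B.size, W.size
print(f"trainable params: {trainable} ({trainable / frozen:.1%} of the layer)")
```

Because B starts at zero, the adapted model initially behaves exactly like the base model; training then moves only the small matrices, which is why 20 to 50 illustrations are enough to steer the style.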

The moment I understood this, my brain went straight to brand.

What the illustrator draws vs. what the AI generates

We structured the illustration brief around a simple principle: the illustrator designs the creative vocabulary, and the LoRA learns to speak in it.

The illustrator I’m working with, Lani Greener—another local mum and amazing artist from the Sunny Coast—is responsible for the things that require creative judgment. The modular character system: body shapes, head templates, hairstyles, expressions, clothing that form the foundation of every personalised child. The 15 unique creature and metaphor characters across all eight stories—a penguin, a sentient radio, a popcorn kernel, a Stage Goblin, a What-If Beast made of bubble gum. The core environment compositions. The signature props that carry emotional weight in each story.

Everything else—the personalised child variants, backgrounds at different times of day, supporting characters in crowd scenes, recoloured clothing—comes from the LoRA extending her work.

Lani draws each thing once. The AI draws it a thousand different ways, in her style, without drift.


The modularity that makes personalisation possible

The most useful insight from this process was about modularity. Lani doesn’t draw complete characters. She draws bodies, heads, hairstyles, and expressions as separate components. The AI assembles and recolours them. This modularity is what makes personalisation possible at scale—thousands of unique children, each assembled from a governed set of components, each looking like the kid reading the book.
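The component logic can be sketched as plain data. The names below are hypothetical stand-ins, not the real asset library; the point is that a small governed set of hand-drawn options multiplies out combinatorially:

```python
from dataclasses import dataclass
from itertools import product

# Hypothetical governed component sets -- the illustrator draws each
# option once. These names are illustrative, not the actual library.
BODIES = ["round", "tall", "small"]
HEADS = ["oval", "heart"]
HAIRSTYLES = ["curly", "straight", "braids", "buzz"]
EXPRESSIONS = ["smile", "wonder", "determined"]
SKIN_TONES = ["tone_1", "tone_2", "tone_3", "tone_4", "tone_5"]

@dataclass(frozen=True)
class ChildCharacter:
    body: str
    head: str
    hair: str
    expression: str
    skin: str

    def prompt(self) -> str:
        # The text instruction handed to the LoRA-adapted model.
        return (f"a child with a {self.body} body, {self.head} face, "
                f"{self.hair} hair, a {self.expression} expression, {self.skin} skin")

# Every valid combination the system can generate.
variants = [ChildCharacter(*combo) for combo in
            product(BODIES, HEADS, HAIRSTYLES, EXPRESSIONS, SKIN_TONES)]
print(len(variants))  # 3 * 2 * 4 * 3 * 5 = 360 children from 17 drawn components
```

Seventeen drawn components already yield hundreds of distinct characters, and adding one more hairstyle adds 90 new combinations without any new system design. That multiplication is the economics the article describes.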

This is the same principle that makes enterprise brand systems work. The more modular your creative source material—separated layers, isolated components, consistent naming—the more effectively an AI can recombine and extend it.

Brands that have invested in structured, componentised design systems are already positioned for this. Their asset libraries are training-ready. Brands still working from flat files and one-off productions will need to restructure before they can take advantage.

This is the quiet competitive advantage of good design operations. The organisations that built disciplined, modular creative systems over the past decade are the ones that will scale fastest with AI. That infrastructure, it turns out, was preparation for tools that didn’t exist yet.

The brand systems parallel

Every enterprise brand system follows the same fundamental structure: a small set of hand-crafted, high-judgment decisions—positioning, identity, core visual language—that get extended across a much larger set of lower-judgment applications.

The traditional approach uses template governance, brand guidelines, and approval workflows to manage that extension. It works, but it’s slow, expensive, and fragile. The further you get from the original creative intent, the more the brand drifts. Anyone who’s watched a 200-page brand guideline get interpreted by a regional team with no design training knows this.

A LoRA changes the economics of that extension layer entirely. Instead of writing rules about how to apply your visual style and hoping people follow them, you train a model on the style itself. The rules become embedded in the output. The AI doesn’t interpret guidelines—it generates from the source material directly.

The design team’s role shifts from production to curation and quality control. They create the source material, train the model, validate outputs, and refine the training set when the AI drifts. The volume of on-brand creative output scales dramatically without scaling headcount.

The catch

A LoRA can only remix what it’s seen. It cannot invent. It cannot make the strategic decisions that define a brand’s visual identity in the first place. It cannot look at a brief and decide that this particular campaign needs to break the rules to land emotionally.

This is exactly the same limitation as any brand system. Templates don’t make creative decisions. Guidelines don’t have taste. The system extends the thinking—it doesn’t replace the thinker.

What a LoRA does is compress the distance between the creative intent and the final output. In a traditional brand system, that distance is filled with interpretation, approximation, and drift. In a LoRA-enabled system, the AI generates directly from the source, and the distance shrinks to nearly zero.


The part about people

Any conversation about AI-generated imagery eventually arrives at the same question: what happens to the artists?

For Together Better Books, this was straightforward to answer because the relationship is direct. Lani’s illustrations are the sole training data. She knows exactly how her work will be used. She can see the outputs. She has creative oversight. The AI extends her vision—it doesn’t replace her or draw from anyone else’s work.

If an illustrator’s work becomes the training data for a system that generates revenue over time, there’s a strong case for ongoing royalties tied to usage. The artist’s creative labour becomes a generative asset, producing value long after the original drawings are delivered. Consent, ongoing compensation, credit, creative oversight, and defined scope—this is how we’re structuring it, and I think it’s how the industry needs to move.

Where this goes

I’m using a LoRA to personalise children’s books. But the same pipeline—illustrator creates source material, LoRA learns the style, AI generates at scale—applies to any brand producing visual content at volume.

The creative source still needs to be excellent. The strategic decisions still need to be human. The brand still needs a point of view worth scaling. And the artist who creates the visual foundation needs to be treated as a partner, with compensation structures that reflect the long-term value of what they’re enabling.

The production layer between “what the brand should look like” and “what the brand actually looks like in the wild”—that layer is about to get very, very thin. For someone who’s spent 18 years trying to keep brands coherent at scale, and who’s now building a publishing platform that depends entirely on one illustrator’s talent being honoured and extended with care—I think we can build that future well.

The bottom line

Brand infrastructure meets AI generation

Juli Anderson

Founder, Probably Brilliant

Juli Anderson is the founder of Probably Brilliant, a brand strategy and creative systems advisory studio. Former Head of Global Brand Programs at AECOM, where she led brand alignment across 50,000+ employees in 100+ countries.
