How AI Perfection Is Making Us Fall Back in Love With Flaws

You’ve probably noticed something, even if you haven’t said it out loud. Films look the same lately. Not bad, exactly, just samey. The lighting, the colour, even the way scenes are edited together follows a pattern so consistent you could almost set your watch to it. Scroll through AI-generated art on Instagram for ten minutes and the same feeling creeps in. Beautiful stuff. All of it slightly familiar. Like you’ve seen it before, even when you haven’t.

Something is flattening out. And weirdly, people are starting to push back by reaching for things that are a bit broken.

It’s happening across photography, sculpture, illustration, even the way people write. Photographers are reaching for old lenses with scratches and quirks, ones that bloom at the edges and catch light unpredictably. Sculptors working digitally are leaving fingerprints visible in the clay rather than smoothing them away. Illustrators are keeping the wobbly line. Writers are letting sentences run on a little longer than they probably should. None of these are accidents. They’re choices. And the fact that so many people are making them at the same time is worth paying attention to.

The world AI built: smooth, capable, a little soulless

To understand why imperfection is having a moment, it helps to look at what we’ve been surrounded by. Over the last few years, AI-generated images, videos, music and writing have become genuinely hard to avoid. The output is, by almost any technical measure, impressive. Faces are symmetrical. Colours are balanced. Compositions follow every rule.

And yet. There’s something about spending time with a lot of AI-generated content that produces a specific kind of fatigue. It’s not that any single piece is bad. It’s that everything shares a texture, a kind of frictionless competence that starts to feel, after a while, like background noise. The glowy surrealism of AI art has its own visual grammar that you start to recognise. The cadence of AI-written text has a rhythm you can learn to spot. Films and television have their own version of this, a particular colour grade, a particular pacing, as though everything has been optimised toward the same idea of what a good film looks, sounds and feels like.

Technically accomplished. Oddly familiar. Difficult to remember the next day.

How recommendation engines narrowed our world

Part of what created this landscape is structural, baked into the way the internet now works. The platforms that serve us art, music, film and writing, whether that’s Spotify, Netflix, Instagram or TikTok, all use recommendation algorithms built to maximise engagement. They show us more of what we’ve already responded to, gradually building a picture of our preferences and using it to keep us watching, listening, scrolling.

The logic is reasonable enough on its own terms. But the cumulative effect is that our exposure to culture gets funnelled. The weird, the unfamiliar, the difficult, the work that takes twenty minutes to click, tends not to perform well in these systems. It doesn’t get surfaced. It quietly disappears from view, not because anyone decided to suppress it, but because no one decided to show it.

Meanwhile, generative AI tools are trained on the content that performed best online, the images that got the most likes, the music that streamed the most, the writing styles that got the most shares. So the AI learns to reproduce the popular. It gets very good at making more of what already worked. The result is a loop: the algorithm promotes what's popular, the AI makes more of it, the algorithm promotes that too.
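The narrowing that loop produces can be made concrete with a toy simulation. Everything here is invented for illustration, the style labels, the popularity weights, the thresholds, but the mechanism is the one described above: a recommender surfaces items in proportion to their current popularity, a generator adds new items imitating whatever was surfaced most, and the range of styles holding any meaningful share of exposure shrinks.

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the toy run is repeatable

# A "culture" of ten style labels, all starting with equal exposure.
styles = list("ABCDEFGHIJ")
population = Counter({s: 10 for s in styles})

def distinct_share(pop, threshold=0.05):
    """Count styles holding at least `threshold` of total exposure."""
    total = sum(pop.values())
    return sum(1 for c in pop.values() if c / total >= threshold)

for _ in range(30):
    # Recommender: 20 "impressions" sampled in proportion to popularity.
    shown = random.choices(list(population), weights=population.values(), k=20)
    # Generator: new items imitating the three most-shown styles,
    # so the popular gets more popular.
    for style, count in Counter(shown).most_common(3):
        population[style] += count

# After 30 rounds, far fewer than the original ten styles
# still hold a meaningful share of exposure.
print(distinct_share(population))
```

The point of the sketch is the shape of the curve, not the numbers: no one suppresses the minority styles, they simply stop being sampled often enough to be imitated.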

What homogenised culture actually looks like

If you’ve spent any time on social media or streaming platforms recently, you’ve probably felt this without quite naming it. Netflix film posters have become something of a running joke among designers, a floating head, soft lighting, a blue-orange colour grade so ubiquitous that colourists have a name for it: orange and teal. Lo-fi music, that gentle hazy genre of beats-to-study-to, has become so recognisable as an aesthetic that it’s essentially a brand. The music is calm and warm and pleasant, and all of it sounds more or less like all the rest of it.

AI art has its own version of this. A lot of AI-generated imagery shares a hyper-detailed, dreamlike sheen, skin that catches light in a specific way, backgrounds that have depth without quite having weight. It’s beautiful, genuinely. It’s also becoming a genre unto itself, one you can identify at a glance. Even AI-written text has tells, a tendency toward neat three-part structures, a certain kind of confident opener, a summary that wraps things up a little too tidily.

None of this is anyone’s fault, exactly. It’s an emergent property of systems optimising for what works. But the cumulative effect is a kind of cultural narrowing, a shrinking of the range of things we’re likely to encounter.

The body’s response

And then people started reaching for old lenses.

It’s worth taking seriously as a cultural signal, this turn toward texture and imperfection. It’s not nostalgia, exactly, or at least not only that. Photographers choosing lenses with chromatic aberration and soft edges aren’t trying to live in the past. They’re trying to make images that feel like they came from somewhere, that have a trace of the conditions in which they were made.

The same impulse is behind the fingerprints left in digital sculpture. The same thing is happening when illustrators keep the wonky line, or when someone writes a paragraph that meanders a bit before finding its point. There’s a kind of authenticity being signalled, one that has become more legible, more communicative, precisely because the alternative is now so available. When a perfectly smooth, perfectly lit, perfectly composed image can be generated in seconds, an image that isn’t perfectly smooth starts to say something.

What it says, essentially, is: a person was here. A specific person, with a specific hand, on a specific day, making specific choices and also a few mistakes. That trace, which used to be unremarkable, now carries meaning.

Imperfection as a new language

This is the genuinely interesting cultural development, and it’s one that AI inadvertently created. By making flawlessness cheap and abundant, AI has changed the value of imperfection. Grain, irregularity, lens flare, the visible brushstroke, the sentence that runs a bit long, these things now function as a kind of signature. They communicate authorship in a way that technically perfect work no longer can.

It’s a bit like what happened with handwriting after the printing press, or with acoustic instruments after electronic production took over pop music. Once the technology can replicate the surface, the surface stops being enough. What becomes valuable is the trace of a human process, the evidence that something was made rather than generated.

This doesn’t mean imperfect things are automatically better. A wobbly line is only interesting if there’s intention behind it, or at least the possibility of intention. What’s changed is that the wobble now reads differently. It asks the viewer to consider the hand that made it.

The tension worth holding

It would be too easy to frame all of this as a straightforward story about human creativity triumphing over machine efficiency. The reality is murkier and more interesting than that.

AI tools have genuinely opened things up for a lot of people. Filmmakers who couldn’t afford certain visual effects can now achieve them. Musicians who don’t have formal training can produce sophisticated arrangements. Writers who struggle with structure have tools that can help them find it. The democratisation of creative tools is real, and it matters. People are making things now who wouldn’t have been making things five years ago, and some of that work is genuinely surprising and good.

So what we actually have is two things happening at once, a flattening pressure from the homogenising tendency of AI-generated content and recommendation systems, and an expansion of who gets to make things at all. These aren’t in neat opposition. They’re tangled up together, producing a cultural moment that resists simple verdicts.

The algorithm has opinions. So do we.

What the turn toward imperfection suggests, more than anything, is that taste is adaptive. People are not passively receiving what the algorithm serves them. They’re developing responses to it, aesthetic preferences that push back against the frictionless and the optimised, even when they can’t fully articulate why.

Taste has always been a response to its cultural moment. The Romantics were reacting to industrialisation. Punk was reacting to stadium rock. The handmade, the grainy, the visibly human, these are a reaction to something too, a preference forming in real time, in response to an environment that has suddenly become very smooth.

Paying attention to your own reactions is worth something here. When you reach for the album that sounds like it was recorded in a room rather than assembled in software, when you linger on an illustration that has a slight tremble in the line, when you prefer the version of the story that goes sideways before it arrives, you’re doing something. You’re making a choice that the algorithm didn’t make for you. That’s not a small thing, even if it feels like one.

Laura Mills