When Max Met Aisha

In October 2025, Channel 4 aired a documentary about AI and employment. The presenter, Aisha Gaban, delivered her lines with the slightly stilted cadence you expect from documentary narration. She looked professional. She sounded credible. And at the end of the programme, she revealed something viewers hadn’t spotted: “I’m not real. In a British TV first, I’m an AI presenter.”

Most people watching didn’t notice. That’s the point we’ve reached.

A few weeks earlier, Hollywood unveiled Tilly Norwood, an entirely AI-generated actress. The backlash was immediate and fierce. SAG-AFTRA released a statement emphasising that creativity “is, and should remain, human-centred.” Actors condemned the concept. Film directors called it a threat to their craft. Yet beneath the outrage sits an uncomfortable reality: the technology works well enough to provoke this response.

We’ve crossed a threshold. Digital humans are no longer confined to expensive visual effects studios or experimental research labs. They’re reading the news in India, Kuwait, Taiwan and Greece. They’re influencing purchasing decisions on Instagram. They’re appearing in corporate training videos and customer service chat windows. And most significantly, they’re increasingly difficult to distinguish from real people.

This isn’t science fiction. It’s current practice. The question isn’t whether synthetic humans are coming; it’s what happens now they’re here.

From Max Headroom to Photorealism: A Forty-Year Journey

The concept of digital presenters isn’t new. In 1985, Max Headroom became television’s first supposedly computer-generated host. Except he wasn’t. Max was an actor in prosthetics, marketed as an AI creation. The technology didn’t exist yet. The satire did.

By 2000, someone actually tried it. Ananova launched as the world’s first virtual newsreader, recognised by Guinness World Records. She was a 3D-animated character reading news via text-to-speech on a website. Primitive by modern standards, but her creators predicted something prescient: a “population boom in virtual people” for roles like agents, receptionists and sales representatives.

They were right about the future. Wrong about the timeline.

Throughout the 2000s and 2010s, realistic digital humans remained expensive, time-consuming and firmly in the uncanny valley. Hollywood used motion capture for films like The Polar Express and created CGI replicas of actors for particular scenes, but these were bespoke projects requiring months of work and massive budgets. The technology improved gradually. The business case remained weak.

Then, around 2018, something shifted. China’s Xinhua news agency debuted an AI news anchor whose face and voice were synthesised from footage of a real presenter. Not perfect, but functional. Not indistinguishable, but close enough for a news bulletin.

By 2023, AI newsreaders had spread globally. India introduced Sana and Lisa. Greece unveiled Hermes. Kuwait launched Fedha. Taiwan deployed Ni Zhen. These weren’t experimental curiosities. They were production systems delivering actual news content to actual audiences.

The Guardian observed in late 2023 that “country after country debuted their first AI news anchor” that year. What changed wasn’t just the technology. The economics shifted. The practicality improved. The acceptance grew.

The Current State: Better Than You Think, Not as Good as They Claim

Let’s be precise about capabilities. Today’s AI presenters exist on a spectrum.

At one end, you have systems like South Korea’s Zae-In, who reads live news on SBS. Technically impressive, but there’s a detail worth noting: a real human actor drives her body and voice in real time, with Zae-In’s AI-generated face overlaid on that performance via deepfake technology. It’s not autonomous. It’s augmented.

At the other end, you have fully synthetic avatars like Channel 4’s Aisha Gaban, reading from pre-written scripts with no human performance underneath. The visual fidelity has improved to the point where, on first viewing, she looks human. Close inspection reveals tells. Some viewers noticed a “deadness” in the eyes. The mouth sync wasn’t perfect on certain sounds. But these are subtle failures, not obvious ones.

The trajectory matters more than the current state. When NVIDIA revealed in 2021 that part of CEO Jensen Huang’s keynote was secretly delivered by a CGI replica, attendees had no idea until NVIDIA disclosed it afterwards. They’d done a full body and face scan of Huang, used AI to mimic his gestures and expressions, and produced a 14-second segment where the digital version was virtually indistinguishable from the real thing.

Fourteen seconds required a truck full of DSLR cameras and custom AI animation tools. Today, creating a decent-looking talking avatar takes minutes using platforms like HeyGen or Synthesia. You type a script, select an avatar from a library, and receive a video of a lifelike person speaking it with appropriate lip-sync and expressions.
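
As a rough illustration of that workflow, here is a minimal sketch of a script-to-video request to such a service. The endpoint, payload fields and response shape are assumptions made purely for illustration; they are not the documented HeyGen or Synthesia API, which you would consult directly in practice.

```python
import requests

# Illustrative only: the endpoint, payload fields and response format below are
# assumptions, not the documented HeyGen or Synthesia API.
API_URL = "https://api.example-avatar-platform.com/v1/videos"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"

payload = {
    "avatar_id": "presenter_library_042",  # stock avatar chosen from the library
    "script": "Welcome to this quarter's compliance training.",
    "voice": "en-GB-female-1",             # voice preset used to narrate the script
    "resolution": "1080p",
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()

# Most services render asynchronously and return a job ID to poll until the
# finished video is ready to download.
print(response.json().get("job_id"))
```

The point is less the specific fields than the shape of the interaction: a short text payload in, a rendered human-looking presenter out, with no camera or studio involved.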

The gap between “obviously fake” and “difficult to detect” is closing faster than most people realise.

The Technology Stack: What Makes This Possible Now

Two technical advances explain the recent acceleration.

First, generative AI and deepfake techniques have matured. Modern GANs and diffusion models can synthesise high-resolution, photorealistic human faces. These systems learn the complex patterns of real human faces moving, then generate new, realistic motion. The Korean company Pulse9 uses deepfake-style face generation to power Zae-In, ensuring her AI face precisely follows the real actor’s micro-expressions in real time.
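
To give a feel for how accessible this has become, the sketch below uses the open-source Hugging Face diffusers library to generate a single photorealistic portrait. The model ID and prompt are illustrative choices, and this is a general-purpose diffusion pipeline, not the proprietary systems broadcasters or companies like Pulse9 use.

```python
# Minimal sketch: synthesising a photorealistic portrait with an off-the-shelf
# diffusion model via the Hugging Face `diffusers` library. The model ID and
# prompt are illustrative; a CUDA-capable GPU is assumed for reasonable speed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "studio portrait photo of a television news presenter, neutral expression, soft lighting",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

image.save("synthetic_presenter.png")
```

A few lines of code and a consumer GPU now produce the kind of static image that, as noted below, regularly fools audiences at scale.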

Second, real-time animation and game engines have become accessible. Epic Games’ Unreal Engine and its MetaHuman system allow creators to build highly detailed 3D human models with realistic skin, hair and facial rigs, then animate them with motion capture or AI. What once required a Hollywood visual effects studio can now be done by a competent developer with cloud compute access.

The integration point matters. Tools like NVIDIA’s Audio2Face automatically generate facial animation from an audio track. Feed it speech, and it produces the corresponding mouth movements and micro-expressions. This removes one of the traditional bottlenecks in avatar creation: manual animation of facial performance.
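
Audio2Face itself is proprietary, but the underlying idea of audio-driven facial animation can be sketched simply: extract per-frame features from the speech audio, then let a trained model map each frame to blendshape (viseme) weights that drive the face rig. In the sketch below, `load_trained_model` is a hypothetical stand-in for such a model; none of this is NVIDIA’s actual interface.

```python
# Conceptual sketch of audio-driven facial animation (not NVIDIA's Audio2Face API).
# A trained model maps per-frame audio features to blendshape/viseme weights that a
# 3D face rig can consume. `load_trained_model` is a hypothetical stand-in.
import numpy as np
import librosa

def load_trained_model():
    """Hypothetical: returns a model mapping audio features to blendshape weights."""
    class DummyModel:
        def predict(self, features):
            # In reality this would be a neural network; here we return zeros,
            # one weight per blendshape, per audio frame.
            num_blendshapes = 52  # e.g. an ARKit-style blendshape set
            return np.zeros((features.shape[1], num_blendshapes))
    return DummyModel()

audio, sr = librosa.load("narration.wav", sr=16000)

# Per-frame acoustic features (MFCCs), hopped at roughly video frame rate (~30 fps).
features = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20, hop_length=sr // 30)

model = load_trained_model()
blendshape_weights = model.predict(features)  # shape: (frames, blendshapes)

# Each row can now be streamed to the face rig, one set of weights per video frame.
print(blendshape_weights.shape)
```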

For static images, we’re essentially already at undetectable. AI-generated photos of people regularly fool audiences at scale. For video and interactive avatars, the timeline is shorter than most expect. Some developers claim we could see real-time AI avatars that fool most viewers within just a few years. Others are more cautious, pointing to lingering imperfections and the enormous complexity of human behaviour.

The technical challenge isn’t just visual realism. It’s contextually aware behaviour. An AI that looks exactly like a human news anchor but speaks with odd phrasing or lacks true understanding will still feel wrong. Progress in large language models is addressing this on the content side, enabling avatars that can hold natural conversations or generate ad-lib responses.

NVIDIA’s ACE (Avatar Cloud Engine) combines realistic graphics with AI brains for interactive characters. It provides cloud-based AI models for speech recognition, natural language understanding and facial animation. Early demos show NPC avatars that no longer speak pre-scripted lines but generate dialogue on the fly, with matching facial expressions.
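
ACE is a managed cloud service, but the control flow it represents can be sketched as a simple loop: speech comes in, a language model produces a reply, and speech synthesis plus facial animation turn that reply into a performance. Every function below is a placeholder stub standing in for a real speech recognition, language model, text-to-speech or animation service; it illustrates the pipeline, not NVIDIA’s actual API.

```python
# Illustrative control flow for an interactive avatar (not NVIDIA ACE's actual API).
# Each stage is a stub standing in for a real service.

def transcribe(audio_chunk: bytes) -> str:
    """Stub: convert user speech to text (e.g. a hosted ASR service)."""
    return "What time does the branch open tomorrow?"

def generate_reply(user_text: str, persona: str) -> str:
    """Stub: call a large language model with a persona prompt to produce dialogue."""
    return "We open at 9am tomorrow. Is there anything else I can help with?"

def synthesise_speech(text: str) -> bytes:
    """Stub: text-to-speech in the avatar's voice."""
    return b"...audio bytes..."

def animate_face(speech_audio: bytes) -> list:
    """Stub: derive per-frame facial animation (visemes/blendshapes) from the audio."""
    return [{"frame": 0, "jawOpen": 0.3}]

def avatar_turn(user_audio: bytes) -> tuple:
    """One conversational turn: listen, think, speak, animate."""
    user_text = transcribe(user_audio)
    reply_text = generate_reply(user_text, persona="friendly retail-bank assistant")
    reply_audio = synthesise_speech(reply_text)
    animation = animate_face(reply_audio)
    return reply_audio, animation

audio_out, frames = avatar_turn(b"...microphone audio...")
print(len(frames), "animation frames ready to stream to the renderer")
```

Swap real services into each stub and you have, structurally, the same loop an interactive game NPC or customer-service avatar runs.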

As these systems improve, the line between a video game character, a virtual assistant and a “real” human interaction will blur. For many use cases, it already has.

The Hollywood Problem: When Technology Meets Culture

The case of Tilly Norwood illustrates both the technical achievement and the cultural collision.

Norwood was created by London-based AI studio Xicoia and introduced via a short parody film. On screen, she appears as a photoreal young woman, described as resembling a fusion of real actresses like Gal Gadot and Ana de Armas. She “acted” in the skit alongside other AI-generated characters, with all her movements and dialogue synthesised by AI tools.

The creator, Eline van der Velden, touted Norwood as a potential “next Scarlett Johansson” and claimed talent agents were interested in representing this virtual actress. The cost argument was explicit: using an AI actor like Tilly Norwood could cut production costs by 90% for certain roles.

SAG-AFTRA’s response was unequivocal: “Tilly Norwood is not an actor. It’s a character generated by a computer program. It has no life experience to draw from, no emotion.” The union stressed that replacing human performers with AI violates the core of creativity and copyright.

Technically, the demo had problems. Reviewers noted exaggerated mouth movements and occasional visual artefacts, such as blurred teeth, that created an “uncanny valley effect.” Fully digital actors aren’t quite ready for prime time in serious acting roles. A University of Southern California media tech expert was blunt: much of Hollywood has “zero interest” in purely synthetic stars at present.

Real actors carry human depth, spontaneity and audience connection that an AI creation cannot authentically replicate. Not yet, anyway.

But film production already uses AI-enhanced CGI routinely. De-ageing actors is standard. Recreating deceased actors for brief appearances happens. One 2024 film, The Brutalist, used AI voice tools to refine its lead actors’ Hungarian-language dialogue. Studios are eyeing AI for digital stunt doubles and background extras at scale.

The economic pressure is real. An AI character doesn’t require a trailer, salary or time off. It never ages or gets injured. It can be perfectly controlled. Some producers will continue pushing AI performers, especially for marketing or experimental projects, regardless of creative resistance.

The ability to convincingly replicate a feature-length human film performance with AI stand-ins is still seen as far off. Short clips or single scenes can fool the eye. Maintaining the illusion over hours of content, with all the subtle emotional range and spontaneity of a human, remains extremely challenging.

The technology will improve. The cultural resistance won’t disappear.

Who’s Building This: The Vendor Landscape

Multiple companies are driving development and deployment. Understanding the ecosystem matters because these tools are becoming increasingly accessible.

NVIDIA provides core technology through its Omniverse platform and ACE toolkit. The company’s GPUs enable the compute needed for real-time rendering. NVIDIA has partnered with game studios including NetEase, Tencent and Ubisoft to integrate AI characters. Their research in AI-generated facial animation and voice synthesis is widely used in avatar systems.

Epic Games provides Unreal Engine and MetaHuman Creator, dramatically reducing the time to get a convincing digital human on screen. The technology has been used in everything from game cutscenes to live virtual concerts featuring digital pop stars.

Specialised startups focus specifically on AI-driven avatars. Synthesia (London) offers an AI video generation platform with a roster of virtual presenters. Soul Machines (New Zealand) creates digital people with animated faces for customer service and education. UneeQ provides a platform for corporate virtual assistants used by banks and other enterprises. DeepBrain AI (South Korea) produces AI news anchors using deepfake technology.

Big tech isn’t absent. Meta invests in ultra-realistic codec avatars for VR telepresence. Microsoft’s Azure includes avatar SDKs. Apple’s Vision Pro introduced the concept of a realistic personal avatar (the “Persona”) for video calls, using machine learning to create a lifelike representation.

Importantly, many of these technologies are accessible via cloud APIs or software-as-a-service. You don’t need a Hollywood budget to leverage an AI presenter anymore. This democratisation fuels rapid uptake.

Disrupted Sectors: Where This Actually Matters

The march of photoreal AI humans impacts several industries differently.

Broadcasting and media can run overnight or niche bulletins with AI anchors, reducing staffing costs for certain formats. As seen with Channel 4’s experiment and various international examples, AI newsreaders can deliver information around the clock. Most major outlets still value human credibility and will likely limit AI to supplementary roles, but the economic pressure exists.

Gaming and interactive entertainment sits on the cusp of an AI-NPC revolution. Traditionally, game characters follow scripted dialogue trees and pre-animated gestures. With tools like NVIDIA ACE and platforms from Inworld or Convai, developers are giving game characters dynamic AI brains and voices. In future open-world games, every NPC could engage in unscripted conversation with realistic facial animation and speech. This greatly enhances immersion. The gaming industry will likely adopt these AI avatars at scale because they reduce the labour of scripting thousands of lines and recording voiceovers.

Customer service and commerce increasingly use digital humans to staff virtual storefronts, websites and call centres. A bank might have a friendly avatar on its app to answer questions. Retail could deploy virtual shopping assistants. Healthcare is testing AI nurse avatars for patient intake. Education experiments with AI tutors available 24/7 who can personalise instruction and make e-learning more interactive.

Film and television production faces the biggest creative resistance but also the strongest economic incentives. AI could replace extras, stand in for stunt performers, or generate actors for minor roles. Advertising might use AI stand-ins for celebrity endorsers (with permission and licensing). Animation studios might create “virtual actors” for movies or interactive stories. The sector will evolve, but slowly and with significant pushback from creative unions.

The Personal Avatar Question: When Digital You Becomes Practical

Here’s where this gets genuinely interesting for individuals rather than corporations.

We’re approaching a point where creating a digital version of yourself becomes practical. Not as a novelty or experiment, but as a functional interface tool.

Consider the implications. Someone with social anxiety could let their digital avatar handle initial video calls. A person with ADHD might find that a digital representation helps them communicate more clearly because the avatar can be programmed to maintain eye contact and measured speech patterns they struggle with naturally.

For professionals who do repetitive presentations or training, a digital avatar could handle the routine deliveries while they focus on strategic work. Customer-facing roles could use avatars for initial interactions, with humans stepping in for complex situations.

The technology for personal avatars already exists. Apple’s Persona on Vision Pro creates a realistic digital representation for video calls. The quality isn’t perfect yet, but it demonstrates the concept. As the technology improves, we’ll see personal avatars become more common in:

  • Virtual meetings (your avatar attends while you’re genuinely engaged but not on camera)
  • Educational content (record once, your avatar delivers it hundreds of times)
  • Customer service (your avatar handles tier-one queries, you handle escalations)
  • Social media (content creation becomes less about filming yourself and more about directing your digital representative)

Cultural acceptance is a bigger barrier than the technology. We’re not there yet. But the same trajectory that made AI newsreaders unremarkable suggests personal avatars will eventually become normalised.

Think about how video calls went from awkward novelty to daily routine within a decade. The same adoption curve could apply to digital representatives, especially if they solve real problems for people who struggle with traditional communication methods.

What This Means for You: Practical Implications

If you’re a CTO, engineering leader or business executive, several strategic questions emerge.

First, evaluate where synthetic humans might actually add value in your organisation. Customer service and training are the obvious candidates. If you’re spending significant resources on repetitive video content, avatar systems could deliver genuine efficiency gains. But don’t deploy them just because the technology exists. Deploy them where they solve actual problems.

Second, understand the detection and authenticity challenge. If you use AI presenters or avatars in customer-facing roles, be transparent. The Channel 4 documentary worked because it disclosed the deception at the end. Using undisclosed AI in contexts where trust matters (financial advice, healthcare, etc.) creates significant risk. Your customers will eventually notice, and the trust damage may exceed the efficiency gain.

Third, watch the talent and IP implications. If you create digital versions of employees, who owns those representations? What happens when the employee leaves? These legal and ethical questions don’t have settled answers yet. Get ahead of them rather than discovering the problems after implementation.

Fourth, consider accessibility applications. The technology that creates photorealistic avatars can also help people who struggle with traditional communication. If you’re building products or services, think about how digital representatives might improve accessibility for users with social anxiety, communication disorders or other challenges.

Finally, prepare for the normalisation. In five years, seeing AI presenters in training videos or customer service contexts will be unremarkable. The companies that experiment now, learn the limitations and build appropriate guardrails will be better positioned than those who wait for perfection.

The Bottom Line

We’ve moved from Max Headroom’s satirical fake AI presenter to Aisha Gaban’s genuinely convincing one in forty years. The technology has crossed from expensive special effects to accessible cloud services. The quality has progressed from obviously synthetic to difficult to detect.

This isn’t a distant future concern. Digital humans are reading news, influencing purchases, staffing customer service and appearing in entertainment today. They’re not perfect. They’re not indistinguishable in all contexts. But they’re good enough for an increasing number of practical applications.

The pattern is familiar. Technologies that seem remarkable become routine through gradual improvement and cultural acceptance. GPS seemed like magic twenty years ago. Now it’s background infrastructure. AI-generated voices sounded robotic five years ago. Now they pass for human on customer service calls.

Synthetic humans are following the same trajectory. The question isn’t whether they’ll become normal; it’s how quickly and in which contexts.

For organisations, the strategic move isn’t to ban them or rush to adopt them everywhere. It’s to identify where they genuinely add value, implement them thoughtfully, maintain transparency, and prepare for a world where digital and human representatives coexist routinely.

For individuals, the question of personal digital avatars remains open. The technology exists. The use cases are emerging. The cultural acceptance is developing. Whether you want a digital version of yourself handling certain interactions is a choice you’ll likely face sooner than you expect.

The technology has stopped being special effects. What it becomes next depends on how we choose to use it.

What’s your view? Can you imagine contexts where a personal digital avatar would genuinely help you? I’d be curious to hear your perspective in the comments.


Want to explore how emerging technologies like AI avatars might impact your organisation? Connect with me here on LinkedIn for practical insights on navigating technology change without the hype.

