
This is fake news! China’s ‘AI news anchor’ isn’t intelligent at all

November 9, 2018

Take a look at the TV anchor above. At first glance he seems perfectly normal, albeit a bit wooden. Look closer, though, and you’ll notice something off about his voice and the way his lips move.

That’s because the anchor isn’t real at all.

AI mimicry: The digitally synthesized anchor was created by Sogou, a search company based in Beijing, in collaboration with China’s state press agency, Xinhua. Sogou used cutting-edge machine learning to copy and re-create a real person’s likeness and voice. The company fed its algorithms footage of a real anchor, plus the corresponding text, and trained them to reproduce a decent facsimile that will say whatever you want.
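
Sogou hasn’t published the details of its system, but the general recipe described above (paired text, audio, and video of one person, used to train a model that generates speech and lip movements from new text) can be sketched in a few lines. The code below is a hypothetical illustration only: the model, tensor shapes, and data are stand-ins I’ve invented for clarity, not Sogou’s actual pipeline.

```python
# Conceptual sketch of training a "talking head" generator from paired data.
# Every module, shape, and dataset here is a hypothetical stand-in.
import torch
import torch.nn as nn

class TalkingHeadModel(nn.Module):
    """Maps a sequence of text tokens to speech features and mouth-region
    frames of one specific, pre-recorded anchor."""
    def __init__(self, vocab_size=10_000, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        # Two decoder heads: one for audio features, one for face frames.
        self.audio_head = nn.Linear(hidden, 80)       # e.g. mel-spectrogram bins
        self.video_head = nn.Linear(hidden, 64 * 64)  # e.g. flattened mouth crop

    def forward(self, token_ids):
        x = self.embed(token_ids)
        h, _ = self.encoder(x)
        return self.audio_head(h), self.video_head(h)

model = TalkingHeadModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Hypothetical paired data: text read by the real anchor, plus recorded audio
# features and mouth-region frames aligned to each token.
tokens = torch.randint(0, 10_000, (8, 50))            # batch of token sequences
target_audio = torch.randn(8, 50, 80)
target_video = torch.randn(8, 50, 64 * 64)

for step in range(3):  # real training would run far longer, on real footage
    pred_audio, pred_video = model(tokens)
    loss = loss_fn(pred_audio, target_audio) + loss_fn(pred_video, target_video)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss={loss.item():.4f}")
```

The point of the sketch is that the system only learns a mapping from text to one person’s recorded appearance and voice; there is no understanding of the news it reads.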

Anchorman, oh man: Let’s be clear, though. The anchor isn’t intelligent in the slightest. It’s essentially just a digital puppet that reads a script. The “AI” in this case is the software that learns what makes a convincing-looking face and voice. That’s certainly impressive, but it’s a very narrow example of machine learning. You can call it an “AI anchor,” but the label oversells what it actually does.

Face off: This kind of technology will help improve animation, special effects, and video games. But there are reasons to worry about how it might be misused to spread misinformation or besmirch someone’s reputation. A similar approach can be used to stitch one person’s face onto another’s body, and it has already been used to create all sorts of unsafe-for-work clips.

Never-ending news: Two anchors have been created, one that speaks English and another that speaks Mandarin. Both have been put to work by the agency on its WeChat channel. Xinhua claims the anchors “can read texts as naturally as a professional news anchor” and says they will “work 24 hours a day on its official website and various social media platforms, reducing news production costs and improving efficiency.”

Fake future: A couple of months ago, I saw the company’s CEO, Wang Xiaochuan, give a talk at Tsinghua University during which he demoed several AI projects, including one that let people assume the likeness of a famous movie star during video calls. One thing is clear: the future will look (and sound) pretty weird.
