MIT Technology Review
Sponsored by Intel
The Algorithm
Artificial intelligence, demystified
I challenge you to a duel
11.30.18
Hello Algorithm readers,

In the last three weeks, we laid down the basics of AI. To recap:
 
  • Most AI advancements and applications are based on a type of algorithm known as machine learning that finds and re-applies patterns in data.
  • Deep learning, a powerful subset of machine learning, uses neural networks to find and amplify even the smallest patterns.
  • Neural networks are layers of simple computational nodes that work together to analyze data, kind of like neurons in the human brain.
Now we get to the fun part. Using one neural network is really great for learning patterns; using two is really great for creating them. Welcome to the magical, terrifying world of generative adversarial networks, or GANs.
Celebrity faces generated by GANs. (Nvidia)

GANs are having a bit of a cultural moment. They are responsible for the first piece of AI-generated artwork sold at Christie’s, as well as the category of fake digital images known as “deepfakes.”

Their secret lies in the way two neural networks work together—or rather, against each other. You start by feeding both neural networks a whole lot of training data and giving each one a separate task. The first, known as the generator, must produce artificial outputs, like handwriting, videos, or voices, by looking at the training examples and trying to mimic them. The second, known as the discriminator, then determines whether the outputs are real by comparing each one to the same training examples.

Each time the discriminator successfully rejects the generator’s output, the generator goes back to try again. To borrow a metaphor from my colleague Martin Giles, the process “mimics the back-and-forth between a picture forger and an art detective who repeatedly try to outwit one another.” Eventually, the discriminator can’t tell the difference between the generator’s output and the training examples. In other words, the mimicry is indistinguishable from the real thing.
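
If you’re curious what that forger-and-detective loop looks like in code, here’s a minimal sketch in PyTorch. To keep it self-contained, the “real” training data is just a toy Gaussian distribution, and the network sizes, learning rates, and step count are illustrative stand-ins; a real image-generating GAN would use convolutional networks and vastly more compute.

    import torch
    import torch.nn as nn

    LATENT_DIM, DATA_DIM = 8, 1  # noise size and sample size (toy choices)

    # Generator: the "forger." Turns random noise into a fake data sample.
    generator = nn.Sequential(
        nn.Linear(LATENT_DIM, 32), nn.ReLU(),
        nn.Linear(32, DATA_DIM),
    )

    # Discriminator: the "art detective." Scores how likely a sample is real.
    discriminator = nn.Sequential(
        nn.Linear(DATA_DIM, 32), nn.ReLU(),
        nn.Linear(32, 1), nn.Sigmoid(),
    )

    loss_fn = nn.BCELoss()
    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

    for step in range(2000):
        # "Training examples": 64 draws from a Gaussian centered at 4.0.
        real = torch.randn(64, DATA_DIM) * 1.5 + 4.0
        fake = generator(torch.randn(64, LATENT_DIM))

        # Detective's turn: label real samples 1 and generated samples 0.
        opt_d.zero_grad()
        d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
                  loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
        d_loss.backward()
        opt_d.step()

        # Forger's turn: adjust the generator so the detective calls its fakes real.
        opt_g.zero_grad()
        g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
        g_loss.backward()
        opt_g.step()

    # The generator's output mean should now sit near the real data's (4.0).
    print(generator(torch.randn(1000, LATENT_DIM)).mean().item())

After a couple of thousand rounds of this back-and-forth, the generator’s samples cluster around the same distribution as the real data, which is exactly the point: the discriminator can no longer reliably tell them apart.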

Sponsor Message


Bringing Visual Intelligence and AI to the Edge


The new Intel® Neural Compute Stick 2 makes it easier than ever to develop computer vision and AI inference applications for edge and IoT devices. This affordable USB development kit lets you test, tune, and prototype deep neural networks directly on your device, without requiring a network or cloud connection. The Intel® NCS 2 includes more cores and a dedicated AI hardware accelerator for increased performance over the previous generation – all while operating at low power.

Learn more!

You can see why a world with GANs is in equal measure beautiful and ugly. On one hand, the ability to synthesize media and mimic other data patterns can be useful in photo editing, animation, and medicine (for example, to improve the quality of medical images or to overcome the scarcity of patient data). It also brings us joyful creations like this:



On the other hand, GANs can also be used in ethically objectionable and dangerous ways: to overlay celebrity faces onto the bodies of porn stars, to make Obama say whatever you want, or to forge someone’s fingerprints and other biometric data, as researchers at NYU and Michigan State recently showed in a paper.

Fortunately, GANs still have limitations that put some guardrails in place. They need quite a lot of computational power and narrowly scoped training data to produce something truly believable. To generate a realistic image of a frog, for example, a GAN needs hundreds of images of frogs, preferably of a particular species and facing a similar direction. Without those constraints, you get some really wacky results, like this creature from your darkest nightmares:


(You should thank me for not showing you the spiders.)

But experts worry that we’ve only seen the tip of the iceberg for what wonders and troubles GANs will bring us. As the algorithms get more and more refined, glitchy videos and Picasso animals will become a thing of the past. As Hany Farid, a digital image forensics expert, once told me, we’re poorly prepared to solve this problem.

Hopefully, when we figure it out, it won’t be too late.

Deeper

Here’s some more bedtime reading on the technique:

  • The original 2014 paper by Ian Goodfellow that brought GANs into this world
  • A 2018 paper out of DeepMind on the concept of BigGANs—GANs but with way more computational power
  • An excellent technical breakdown of GANs with code for how to implement one
  • A sampling of interesting GAN applications
  • A list of all the named GAN algorithms
  • And my personal favorite: GAN Twitter—it’s just delightful

TR Archives


Martin Giles, San Francisco bureau chief, on the man who invented GANs: “Goodfellow is now a research scientist on the Google Brain team, at the company’s headquarters in Mountain View, California. When I met him there recently, he still seemed surprised by his superstar status, calling it ‘a little surreal.’ Perhaps no less surprising is that, having made his discovery, he now spends much of his time working against those who wish to use it for evil ends.” Read more here.

Rabbit Hole

Here are some tools to make your own GAN-generated images! I’ve shared some of these with you before, but I’m still waiting to receive some artistic creations.

  • The GAN tool in Google’s Seedbank is your basic vanilla GAN
  • Ganbreeder, released earlier this month, lets you mash together as many images as you want to create hybrids that are 46% zebra, 23% jack-o’-lantern, and 31% toilet paper
  • GANpaint, launched by MIT this week, lets you repaint buildings and landscapes with a single stroke of your cursor

Send your masterpiece to algorithm@technologyreview.com.
 

Distributed ledger technology and digital tokens are rewiring commerce...

but lack of trust may stall progress. Discover strategies to navigate this new world at the Business of Blockchain 2019 event. Reserve your seat today.

Research


The machine learning algorithms that let modern smartphones track faces and respond to voice commands can also be used to tell if a person is depressed.

In a study led by Fei-Fei Li, a prominent AI expert, Stanford researchers fed video footage of depressed and non-depressed people into a machine learning model that was trained to learn from a combination of signals: facial expressions, voice tone, and spoken words. The resulting system was able to detect whether someone was depressed more than 80% of the time.

While the work is at an early stage, the researchers suggest that the technology could provide an easier way for people to get diagnosed and helped in the future. But they also caution that the technology should not be used to replace a clinician, and note that further work is needed to ensure it’s not biased toward a particular race or gender.

Bits and Bytes


Chips powered by light could make AI algorithms crazy fast
The rise of deep learning could help optical chips succeed where past attempts have failed. (TR)

Global automakers are feeding data to the Chinese government 
Officials say the data helps improve public safety and infrastructure, but critics worry about the potential for surveillance. (AP)

Cambridge Analytica used fashion tastes to identify right-wing voters
Great American brands, like Wrangler and L.L. Bean, are aligned with conservative traits. (NYT)

Four billion people lack an address. Machine learning could change that
Extracting roads from satellite images could offer a new way to address the unaddressed. (TR)

Appraisal algorithms could buy and sell homes without the help of humans
Proposed regulations could loosen the requirements for an evaluation by a licensed human appraiser. (WSJ)

Quotable

You’re not gifting an Amazon Echo; you’re gifting a relationship with Alexa.

Kaitlyn Tiffany, a technology reporter for Vox, on why smart speakers are a boon for Amazon and Google.

Jackie Snow
Hello! You made it to the bottom. Now that you're here, fancy sending us some feedback? You can also follow me for more AI content and whimsy at @_KarenHao.
Know someone who might appreciate The Algorithm?
Forward this email
Was this forwarded to you, and you’d like to see more?
Sign up for free

Artificial intelligence is here. Own what happens next.

Attend EmTech Digital 2019
March 25-26, 2019, St. Regis Hotel
San Francisco, CA

Register Now
You received this newsletter because you subscribed with the email address: <<Email Address>>
edit preferences   |   unsubscribe   |   follow us     
Facebook      Twitter      Instagram
MIT Technology Review
One Main Street
Cambridge, MA 02142