The Dead-Serious Strategy Behind Google's Silly AI Experiments
It was a happy accident.
Jonas Jongejan, a creative technologist and member of the Google Creative Lab, was jetlagged and heading to an internal hackathon. He was racking his brain for something fun to make that was related to artificial intelligence. Then it hit him: What if he took a machine learning algorithm built to identify objects in photos, and trained it with images of line drawings and doodles instead? He discovered that it worked pretty well, and even when it failed, the results were entertaining, while also offering a small peek into how computers see. "The fact that it was not perfect, that you could play with it," Jongejan says. "That was where Quick, Draw! was born."
Within six months, Quick, Draw! had gone viral. People from all over the world were playing the experimental game, which asks you to doodle an object or animal while the neural network tries to guess what it is. It was on the front page of Reddit. Dozens of publications, Co.Design included, covered the quirky little site.
Quick, Draw! also had an unintended consequence. Anyone who played the game (now millions of people) was contributing their drawings to a new data set full of doodles. Together, they were creating a whole new set of drawings on which to train a neural network more advanced than the intentionally imperfect Quick, Draw! algorithm. The group's next game, AutoDraw, was built from the data collected from the first. It went further than Jongejan's original; while players still doodled objects as the neural net tried to guess, it also provided them with a clip art version of their drawing, whether it was a car or a tree: clearly useful functionality.
Jongejan never imagined that Quick, Draw! would end up with more than 50 million doodles, or that it would spawn research papers, coding tools, and even an analysis of how people are really bad at drawing flamingos. His game is the biggest runaway success of Google's AI Experiments project, and perhaps the best illustration of its value to the company.
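Those doodles are now public: Google has released the Quick, Draw! data set by category. As a minimal sketch of what's in it (assuming the ndjson hosting documented in the dataset's public repository, which may change), here's one way to peek at those flamingo drawings in Python:

```python
import json
import urllib.request

# Minimal sketch: stream one category of the public Quick, Draw! data set.
# Assumes the documented "simplified" ndjson files on Google Cloud Storage;
# the URL pattern and record fields come from the dataset's public docs.
URL = "https://storage.googleapis.com/quickdraw_dataset/full/simplified/flamingo.ndjson"

with urllib.request.urlopen(URL) as f:
    for i, line in enumerate(f):
        doodle = json.loads(line)
        # Each record holds the prompted word, whether the neural net
        # recognized the sketch, and the pen strokes as x/y coordinate lists.
        print(doodle["word"], doodle["recognized"], len(doodle["drawing"]), "strokes")
        if i == 4:  # just peek at the first few doodles
            break
```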
Based in Google's Creative Lab, AI Experiments is the work of a team of coders, designers, and writers whose only job is to make cool stuff using Google's technology. The year-old project is a way of demystifying cutting-edge technology by transforming it into fun experiences and games, but it's also a way to introduce new ideas to Google researchers and engineers. And while the Creative Lab team says its only new intended use for the Quick, Draw! data set is to make a T-shirt out of T-shirt drawings, it's easy to imagine a neural network that recognizes your drawings showing up in a Google keyboard at some point in the future.
One year into the initiative, the 16 or so experiments it's produced offer insights into how the company is positioning itself as a leader in machine learning and AI. By making its technology more understandable to the average person and more accessible to developers through fun, often downright silly experiments, Google reinforces its brand, grows its market share, and teaches people how to think about the brave new world of artificial intelligence.
Google's AI Guinea Pigs
The concept of putting interactive experiments online isn't new for Google. Creative Lab launched its first experiments in 2009, with a focus on Chrome. One of the most successful was "The Wilderness Downtown," an interactive film made in collaboration with the band Arcade Fire. The video was customized to each viewer using images of their childhood home pulled from Google Street View, and it proved to be wildly popular. Since then, Creative Lab has released Android Experiments, AR Experiments, WebVR Experiments, AI Experiments, and most recently, Voice Experiments, all of which are open source and rely on Google technology. (The company's latest voice experiment, Paper Signals, launched this week.)
But AI Experiments itself is more recent. It got its start in November of 2016, when Alex Chen, a creative lead at Google Creative Lab, was talking with two Google researchers about what machine learning is and what it can do. They showed him a visualization of how a neural net works, and Chen brought it back to his colleagues at the Creative Lab. The seed was planted. "I thought, oh, it'd be nice to make this technology more accessible, for more people to understand what it really is, the math behind it and how it works, but also to start playing with it and interacting with it," Chen says.
Jongejan, his colleague at the Creative Lab, was already developing Quick, Draw!, and it would become one of the first few AI Experiments to launch in late 2016.
To build the game, Jongejan worked with a Google research team that focuses on deciphering handwriting with machine learning. This kind of cross-team collaboration is typical for any experiment the team builds. The Creative Lab often works with product teams as well, jumping into the process while the Google technology is still being developed. For instance, the Creative Lab started working on an experiment with Google's AR platform ARCore before it was released.
"We see that something's going to happen, and we jump in and try to work with that medium up front, immediately, and try to iterate on experiments that can push the tech forward and can show some glimpse of what this tech might potentially do in the future on a larger scale," Jongejan says. "And sometimes that helps the product teams. We make ourselves guinea pigs."
AI Experiments have two primary target audiences: the public, obviously, but also the developer community. "This isn't a Google product," Chen says. "It's meant as a giant fun code example so another developer might wonder, 'I wonder how they hacked this together, how they got this working. I didn't know that was something that the Cloud Vision API [which can identify objects within images] could do.'"
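To make that "giant fun code example" role concrete, here's a minimal sketch of the kind of Cloud Vision API call Chen alludes to, assuming the google-cloud-vision Python client (v2 or later) with application credentials configured; "doodle.png" is just a placeholder file name:

```python
# Hedged sketch: ask the Cloud Vision API to label the objects in an image.
# Assumes the google-cloud-vision client is installed and credentials are set.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("doodle.png", "rb") as f:  # placeholder image file
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    # The API returns ranked guesses with confidence scores, not one answer.
    print(f"{label.description}: {label.score:.2f}")
```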
From a business perspective, these developers are some of Google's most prized users, and experiments are a way to pique their interest in Google products like TensorFlow, its machine learning platform, and tools like Cloud Vision API. It's a big reason why Creative Lab makes these kinds of experiments.
"In a very holistic way, just the fact that we're exploring what's possible with these public tools, I think there's value there," says Amit Pitaru, who's a creative lead for the experiments. If you can show a developer the cool stuff they can make with TensorFlow, maybe you can convert them into an active user and increase the number of developers building using Google's services. In essence, I pointed out, it's increasing Google's marketshare of developer tools. "I hope so!" Pitaru tells me.
Marketing (and Normalizing) AI
AI Experiments are a subtle kind of advertising. In fact, Creative Lab is housed inside Google's marketing department. But the company isn't just trying to sell developers on using its new tech. As Google CEO Sundar Pichai said last year, Google is now an AI-first company, focused on finding new and better ways to organize information through AI. Machine learning powers the search function within Google Photos, where you can type in a keyword like "flower" and see all the photos you've ever taken of a flower. Machine learning lets Google Assistant understand what you're saying. It's integrated into dozens of other Google products and services.
The advent of AI has ignited a fierce debate over whether it represents a threat (as Elon Musk fears) or a boon for humanity. Google has a strong incentive to convince people that it is a responsible developer of these technologies, and that they can trust Google not to be evil as the company continues to cook up more and more algorithms to slip into its software. How do you do that? You explain machine learning in a way that's fun, engaging, and simple.
"All of our experiments have a very playful aspect," says Jane Friedhoff, a creative technologist at Creative Lab with a background in indie game development. "When people are playing, that's when they're really engaged with what they're doing. They're poking at the boundaries of a system."
The most recent AI Experiment, called Teachable Machine, does exactly that. It works like this: You press big colored buttons on your screen while making different hand gestures at your camera. Watching your camera feed, the neural network algorithm learns to associate the green button with your hand on your head, the purple one with your hand in front of you, and the orange one with your hand to the side. It's a basic form of teaching an AI to respond to your gestures. After that, you can test it out with other gestures and actions. This process, of feeding an algorithm lots of data so that it learns how to classify it, is the basic premise behind machine learning.
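Here's a toy sketch of that train-then-classify loop: a simple nearest-neighbor classifier in Python, with random vectors standing in for camera frames. It illustrates the premise the article describes, not Google's actual implementation, and all the names are invented.

```python
import numpy as np

# Toy sketch of Teachable Machine's premise: holding a button adds labeled
# examples; classification is a vote among the k nearest stored examples.
# Random vectors stand in for camera frames; this is not Google's code.

examples: list[np.ndarray] = []
labels: list[str] = []

def train(features: np.ndarray, label: str) -> None:
    """Each button press stores one more labeled example."""
    examples.append(features)
    labels.append(label)

def classify(features: np.ndarray, k: int = 5) -> dict[str, float]:
    """Per-class confidence = share of the k nearest examples with that label."""
    dists = [np.linalg.norm(features - e) for e in examples]
    nearest = np.argsort(dists)[:k]
    votes = [labels[i] for i in nearest]
    return {label: votes.count(label) / k for label in set(labels)}

# Simulated training: each "gesture" clusters around a different point.
rng = np.random.default_rng(0)
for _ in range(20):
    train(rng.normal(loc=0.0, size=8), "hand on head")   # the green button
    train(rng.normal(loc=5.0, size=8), "hand in front")  # the purple button

print(classify(rng.normal(loc=5.0, size=8)))  # mostly "hand in front"
```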
"We designed it to be a bit of a lowest common denominator," says Barron Webster, the designer who worked on Teachable Machine. "It's super dumb at this point–we wanted to make it as easy to use as possible."
Teachable Machine is relatively easy to break; some gestures work well while others don't. The team wants to set people's expectations at a reasonable level, so that when they experience real-life AI, they understand why sometimes it works and sometimes it doesn't. "One of the things that we kept going back to in our role is making it as simple for people to understand what's going on behind the hood, in maybe an abstract way," Webster says, "so that when they encounter this technology in the wild, they have a bit more intuition and understanding about how machine learning is processing information, why sometimes it's wrong, why recommendations sometimes feel weird."
Even Teachable Machine's interface reflects that goal. It was crucial to design in an indication of how confident the algorithm is: Above each "training" button, there's a bar that fills in as the algorithm grows more confident. Users need to realize that its internal mechanisms aren't binary; its decisions are based on a best guess calculated from data, not on hard and fast rules. The interface encourages users to play around with the system, but also points out the limitations of the technology. "[Users] can start to feel where things get wiggly and where things get fuzzy and where things break down," Webster says. "By letting people play with that by themselves, in a safe context, with their own image, lets them understand a little more why machine learning is wrong sometimes."
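As a hedged illustration of what such a bar can represent (invented scores and gesture names, and not necessarily how Google computes it): a classifier emits raw scores, a softmax turns them into probabilities, and each bar fills in proportion.

```python
import numpy as np

# Illustrative only: raw classifier scores become probabilities via softmax,
# and each class's "confidence bar" fills in proportion to its probability.
def softmax(scores: np.ndarray) -> np.ndarray:
    exps = np.exp(scores - scores.max())  # subtract the max for stability
    return exps / exps.sum()

scores = np.array([2.1, 0.3, -1.0])  # invented scores for three gestures
for name, p in zip(["green", "purple", "orange"], softmax(scores)):
    bar = "#" * int(p * 20)  # a 20-character text version of the bar
    print(f"{name:7s}{p:6.1%}  {bar}")
```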
Today, the Creative Lab team is developing a physical prototype of Teachable Machine made of plywood with big plastic buttons. They're convinced that having a tactile, tangible interaction will make the concepts behind machine learning even more accessible.
When I first played around with the tool, I tried to train it to recognize different facial expressions, like a smile versus a frown. I had no such luck. But according to the game designer Friedhoff, it is these imperfections that give people the chance to ask questions. "Now your question is, wait–why does this work better for body movements than for faces?" she says. "And that's an actual piece of curiosity that you can dig into."
AI Literacy
Designing algorithms that are fair and unbiased is one of the biggest problems in tech today, including at Google: in 2015, one of its image-classifying algorithms labeled black people as gorillas. Bias in AI also has huge implications in the world at large, from the way loans are approved to the criminal justice system, where predictive policing targets black men.
I asked the Creative Lab team about whether their experiments have a role to play in illustrating some of the problems with AI, given that these issues can impact people's lives in significant ways. They were careful to frame their experiments not as a way to confront problems with AI, but to teach users literacy about what AI is and how it is flawed. The team hopes that their decision to design imperfect algorithms that are easily broken boosts that literacy. Ideally, if you understand how an algorithm works on a basic level, you'll be equipped with the knowledge to be more critical of its conclusions. "We do want to educate people that, don't just take machine learning for granted," Jongejan says. "Machine learning is only what you put into it."
It's a lot for an experiment to do. In all likelihood, the users of something like Teachable Machine are not instigating any kind of deeper investigation or thinking about how algorithms fail us. The Creative Lab's cheerful representation of what machine learning does isn't designed for the task of helping people understand AI on a critical level. Still, having a baseline understanding of a technology that is infiltrating the products we use every day does better equip you to contribute to the debate about ethics and the role of AI in society.
"I think the net positive of putting things out there like this is more people get the technology demystified in their minds so they can all participate in the conversation," Chen says. "When you don't have an inkling as to what's going on, you can't have a meaningful conversation about it, in terms of what we should do and what are the right ethics."
Ultimately, AI Experiments help Google: they're a teaching tool, and a useful way for Google to publicize its AI-first agenda. More than anything, it's worth remembering that even some of Google's official products are still just experiments themselves, and even the AI products thousands of people are already using don't always act the way we expect them to. As I was standing in Creative Lab's space talking to the group, a Google Home that was sitting on the table suddenly interrupted my question. One of the team members quickly muted it.
Source: https://www.fastcompany.com/90152774/the-dead-serious-strategy-behind-googles-silly-ai-experiments