🔬 Original article by Kathryn Hulick, author of Welcome to the Future: Robot Friends, Fusion Energy, Pet Dinosaurs and More (illustrated by Marcin Wolski, published by Quarto, 2021).
Will AI come alive and take over the world? That’s not a scenario AI experts worry about. But I write about science and technology for kids, and the movies and TV shows they watch are packed with storylines about rogue robots and intelligent machines turned evil. In fiction, AI is never a data-crunching algorithm that folds proteins or turns speech into text. It’s either a hero or a villain. It’s C-3PO and R2-D2 in Star Wars. It’s the Terminator and the mechanical spiders controlling the Matrix.
Unfortunately, kids aren’t the only ones who misunderstand AI. Most adults in the developed world probably realize that AI technology lets computers do smart things. But that may be all they know. A 2017 study by Pegasystems asked respondents if they had ever interacted with AI technology. Only 34% said “yes,” while the others said “no” or “not sure.” In fact, based on the devices these respondents reported using, 84% of them had interacted with AI. This type of vague familiarity with AI can lead to some serious misunderstandings about what this technology is and what it can do – now or in the future.
My latest book for kids, Welcome to the Future (Quarto, 2021), looks at science fiction visions of the future and explores what might actually happen – with robots and AI as well as with genetic engineering, virtual reality, and more. The book also asks kids to think about ethical issues related to technology – how can we make sure that future technology benefits all of humanity?
To answer these types of questions, kids (and adults) must understand what AI is actually capable of. To build that understanding, they need clear and accurate science communication about AI. Science communicators must bridge the divide between the AI of science fiction and the real AI that engineers build today.
Here are a few of the misconceptions I’m always careful to confront whenever I write about AI for a general audience.
Deep learning is not deep understanding
Deep learning is a catchy name, but an unfortunate one. Even though I know better, the phrase still conjures up a mental image of a sage meditating and coming to profound understanding. Of course, the word “deep” isn’t meant to refer to profundity of any sort. It refers instead to measurement, like the deep end of a swimming pool. Whenever I write about deep learning, I always make sure to explain that a deep learning model simply has many more layers of artificial neurons stacked between its input and its output than earlier, shallower models. That’s the only reason it’s called deep.
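For readers who code, here’s a minimal sketch in Python (using the PyTorch library; the layer sizes are arbitrary choices of mine, not taken from any real model) of what “deep” actually measures. Both networks below are built from the same ingredients; the “deep” one just stacks more layers:

```python
import torch.nn as nn

# A "shallow" network: input layer -> one hidden layer -> output layer.
shallow = nn.Sequential(
    nn.Linear(784, 32), nn.ReLU(),
    nn.Linear(32, 10),
)

# A "deep" network: the same building blocks, just more of them stacked up.
deep = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

# "Depth" is literally a count of stacked layers -- nothing more profound.
print(len(shallow), "layers vs.", len(deep), "layers")  # 3 vs. 9
```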
A related misunderstanding arises when science communicators compare deep learning (or any other learning model) to the brain. This leads people to believe that AI works the way the brain does and that, given more computing power, it will be able to mimic a brain. This is wrong.
Yes, artificial neural networks were originally inspired by the brain. That’s why they have “neural” in their name. However, scientists still don’t know exactly how the brain and its neurons create thought and intelligence. The virtual “neurons” in an artificial neural network are not like the biological neurons in a brain. Because of this, an artificial neural network (as such networks are built now) would not reproduce the type of thought and intelligence that happens in a biological brain, even if it contained the same number of virtual neurons and connections among them.
Clearly, the “learning” that a deep learning model does is not the same as the learning that happens in a brain. When a brain learns, it understands. AI models learn to match new data as best they can to their training data. They don’t understand anything about these data. They are pattern-matching statistical processes, not brains or minds.
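To make that concrete, here’s a toy example in Python (the data points are invented, and this nearest-neighbor matcher is far simpler than deep learning, but the statistical spirit is the same). The “model” memorizes its training examples and labels anything new by copying the label of the closest stored pattern:

```python
import numpy as np

# Hypothetical training data: four points in 2D, labeled 0 or 1.
train_x = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [4.8, 5.2]])
train_y = np.array([0, 0, 1, 1])

def predict(x):
    # "Prediction" is just finding the nearest stored example
    # and copying its label. No understanding is involved.
    distances = np.linalg.norm(train_x - x, axis=1)
    return train_y[np.argmin(distances)]

print(predict(np.array([1.1, 0.9])))  # 0 -- closest stored patterns are class 0
print(predict(np.array([5.1, 4.9])))  # 1 -- closest stored patterns are class 1
```

The model never knows what the points mean; it only measures distances.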
People fear AI for the wrong reasons
Elon Musk notoriously said, “With artificial intelligence, we are summoning the demon.” Whatever you think of Musk, lots of people share similar fears about AI. They worry that it may somehow come to life and that humans won’t be able to control it. The common misunderstandings about AI’s ability to think or reason that I discussed above are probably the main reason this fear is so pervasive. Even if people understand that AI can’t think like a brain now, they may mistakenly believe that AI is much closer to general intelligence or superintelligence than it actually is.
In 2011, IBM’s Watson defeated two human Jeopardy! champions. Accepting defeat, Ken Jennings wrote next to his final answer: “I for one welcome our new computer overlords.” It was a joke, but jokes like this one feed people’s fears. Watson certainly seemed as though it understood the questions, so it’s easy to imagine it somehow developing desires and goals beyond those programmed into it. Of course, AI experts know this is ridiculous. In my book Welcome to the Future, I wrote, “Watson can’t turn against humans and become an overlord any more than a toaster could suddenly decide to freeze bread instead of heating it.” Watson is far more complex than a toaster, but it’s still a machine that can only do what it’s programmed to do.
Of course, it’s possible that AI could someday gain understanding and intelligence similar to or exceeding that of a typical person. If or when this happens, it’s very important that the AI’s goals and motivations align with ours. This is called the alignment problem, and it’s something a lot of smart people are working on. Brian Christian’s book The Alignment Problem: Machine Learning and Human Values goes through the issue in detail.
In his book A Thousand Brains, neuroscientist Jeff Hawkins argues that intelligence and goals don’t necessarily come bundled together: “Intelligence is the ability to learn a model of the world. Like a map, the model can tell you how to achieve something, but on its own it has no goals or drives. We, the designers of intelligent machines, have to go out of our way to design in motivations.” According to this argument, we’d only design the motivations that we want, and if AI were doing things we didn’t want, we could just turn it off.
How to live with AI
Here’s the real problem, though: What do we want? Every human being has different goals and motivations. Some have motivations that others find abhorrent or evil. AI doesn’t need to have general intelligence or superintelligence for people to come along and use it for horrible purposes. Today’s AI already makes it disturbingly easy to build autonomous weapons and smart surveillance systems.
AI can also lead to unintentional harms. It can perpetuate systemic racism, sexism, and other biases. That’s because all AI can do is match data. If data are biased, AI will be biased, too. Some facial recognition systems are racist or sexist (or both) because their training data sets contained more white faces or more male faces. Another AI model, COMPAS, was used in the US court system to help predict how likely it was that a defendant would commit another crime if released. If the defendant was black, the model was far more likely to incorrectly predict that the person would re-offend. If the defendant was white, the model was far more likely to incorrectly predict that the person would not re-offend. That’s horrifying. But it’s not the model’s fault. The people who build these types of models are responsible for anticipating and correcting for possible biases.
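Here’s a toy simulation in Python that shows how this happens (the groups and numbers are entirely made up, and real systems like COMPAS are far more complicated). A simple score threshold is “trained” on data dominated by group A, so it ends up working well for A and badly for B:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up scores: group A's two classes cluster around 0 and 2, while
# group B's cluster around -1 and 1 (a shifted distribution).
# Group A contributes 95% of the training data.
a_neg, a_pos = rng.normal(0, 0.5, 950), rng.normal(2, 0.5, 950)
b_neg, b_pos = rng.normal(-1, 0.5, 50), rng.normal(1, 0.5, 50)

# A crude "training" step: put the decision threshold at the mean of all
# training scores. Because group A dominates, the threshold lands near 1.0 --
# ideal for A, but right on top of group B's positive class.
threshold = np.concatenate([a_neg, a_pos, b_neg, b_pos]).mean()

err_a = np.mean(a_pos < threshold)  # group A positives wrongly rejected
err_b = np.mean(b_pos < threshold)  # group B positives wrongly rejected
print(f"threshold={threshold:.2f}, error on A={err_a:.0%}, error on B={err_b:.0%}")
```

The model isn’t malicious; it simply mirrors the skew in the data it was given.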
What we should fear is not AI, but the misuse of AI by evil, power-hungry, or ignorant humans.
I asked Ayanna Howard, dean of engineering at Ohio State University and author of the book Sex, Race, and Robots: How to Be Human in the Age of AI, how we can make sure that we build AI that benefits all of humanity. She said, “Whenever you’re designing new technology, you need to have diverse voices contributing to that. You need to have diverse voices talking about what should be done, how to mitigate harms, and how to do things for the good of humanity.”
In other words, we all need to contribute. All of our voices matter when it comes to designing beneficial AI – even kids’ voices. But for non-experts to weigh in, they need to understand what’s really going on. So let’s work together to dispel AI misunderstandings and spread the word about the real issues and concerns we need to overcome.
More about the author
Kathryn Hulick is the author of Welcome to the Future: Robot Friends, Fusion Energy, Pet Dinosaurs and More (illustrated by Marcin Wolski, published by Quarto, 2021). This book for kids and teens explores ten different technologies that could change the world in the future. The book challenges readers to think about the ethics of each technology – how can we use it to benefit all of humanity? As a freelance science journalist, she regularly contributes to Muse magazine, Front Vision, and Science News for Students. Hulick lives in Massachusetts with her husband, son, and dog. In addition to writing and reading, she enjoys hiking, painting, and caring for her many house plants. Her website is kathrynhulick.com. You can follow her on Twitter @khulick or on Instagram and TikTok @kathryn_hulick. This article was inspired by a piece for The Gradient, “A Science Journalist’s Journey to Understand AI.” She also spoke about science communication and ethics in AI on a recent episode of the podcast Adventures in Machine Learning, “How to Teach Kids Science with Kathryn Hulick.”