Don’t Be Afraid of AI
Terminator 3 clip: Warner Bros. Pictures © 2003.
Dan Morrell: I think one of the biggest mainstream crossovers that artificial intelligence has had in the modern era is the Terminator movies. For those of you who haven't seen them, the basic premise is this: Computers get smart, become self-aware, take over, sometimes take on the shape of Arnold Schwarzenegger, and try to eradicate humanity. Here's a refresher, courtesy of Terminator 3:
“The virus has infected Skynet.
Skynet is the virus! It's the reason everything's falling apart!
Skynet has become self-aware. In one hour it will initiate a massive nuclear attack on its enemy.
What enemy?
Us! Humans!”
But Donna Dubinsky (MBA 1981), CEO of Numenta, says all that Hollywood-inspired fear of artificial intelligence is misplaced. Dubinsky is an icon in the tech world. She was the founding CEO of Palm and the cofounder of Handspring, ushering in two of the tech age's biggest leaps—handheld computing and the smartphone. So, she's essentially been famously successful by being right about the future. We sat down with Dubinsky to talk about what the future of AI really looks like and why we don't need to worry about the Terminator.
Morrell: Donna, I want to talk about Numenta, where you are the CEO. It's often shorthanded as an AI company. Talk to me about what Numenta does.
Dubinsky: Sure. Well, first of all Numenta is a company with kind of two missions. One is a very scientific mission and the other is a commercial mission. On the scientific side, our objective is literally to reverse engineer the human neocortex. Figure out how the brain works. That is the core of who we are and why we're here. The second part of the mission is really about figuring out how to commercialize that and how to apply it in a way that can benefit humanity. But the two missions sit side by side in some sense.
Morrell: So, compare what you do to the rest of the AI sector. How is what you do different from everybody else in AI?
Dubinsky: Well, all these companies in AI, they kind of sound the same. They talk about neurons, they talk about neural networks, they have some of those words. But they're actually not very biologically true. They don't have neuroscientists on their staff, they don't read the latest papers about what neuroscientists are discovering, and they're really disassociated from the brain and the brain's neuroanatomy. They are very focused on what are really decades-old algorithms that they have advanced, applying data and technology to solve some very big problems. But they're not really trying to figure out how the brain works and how to apply it. And so, even though they may all sound the same, they really are doing something fundamentally different.
Morrell: There have been a number of big tech names in recent years who have really raised the alarm on AI. I think Bill Gates is one of them, Elon Musk certainly. What is it that they have wrong in their assessment? Why shouldn't we be afraid of AI?
Dubinsky: Well, I don't think they're all wrong. We should certainly use caution and be thoughtful about new technologies. But I'd say a couple things. First of all, there's a presumption that these technologies are much closer to doing humanlike things than is true. They're not. They're not humanlike in any way, they're not likely to be humanlike in the near future. The human brain really—you could think of it as two parts of a very big picture—the old brain and the new brain.
The old brain is about desires and emotions and anger and all those things. And the new brain or the neocortex is the analytical engine, that's the thing that knows what is a cat and how to telephone your mother and how to do a math equation. So, we are really studying the new brain, the analytical engine of the brain. It doesn't have desires, it doesn't have needs. It's basically taking in sensory data and trying to make a model of the world and make predictions and understand the world around it. That's the part of the AI thing that we're doing.
So, you worry a little less about these dangers once you understand that what is being created are not things that want to do evil in the world. That being said, certainly these are powerful tools that can be used for evil. I would say the same is true of many other technologies that have been developed over time. Take guns. I mean, guns have been used for valid, legitimate purposes. Defense, hunting, whatever. And as we know in recent years, they've been used for bad things as well. So, every new technology has things that we can benefit from and things that we have to be careful about. I do feel that we need a good, thoughtful regulatory regime to think about how we regulate AI as it develops, in a way that gets the best out of it while limiting the things that are potentially problematic and where it can be misused.
Morrell: Have you thought about what that regulation might look like?
Dubinsky: I have to say I read an article by somebody else who I thought laid out some great ideas. His name is Oren Etzioni at the Allen Institute and he talked about the idea of several different regulatory concepts. Sort of a construct. And just to give you the first one as an example. He said, "If it's illegal for a human, it should be illegal for an AI machine." And I thought that was a great starting point. So, if peeping into my window is illegal for you to do as a human, a drone shouldn't be allowed to fly up and peer into my window. So, I think that there are things we could do to have smart regulations that say, "Let's be thoughtful about this." I think it would be hard and I think it would take some work, but I think that it's doable and it's important.
Morrell: I think that some of the other fear is less, you know, sort of Skynet computers will take over, but also there's this fear that AI will take away jobs, right? Do you hear that concern and what is your reaction to something like that?
Dubinsky: Of course I hear that concern. I think it's expressed frequently and it's true. AI will eliminate some jobs. It will create others. And this indeed has been the case in technology since it was introduced. I like to use the example of elevator operators. Well, you know, elevators used to all have operators. When we automated them we eliminated those elevator operator jobs. Does that mean we shouldn't have put in automated elevators? No. Of course not. But what it does mean is that we need to pay a lot more attention in this country in retraining and enabling people to adjust their skills as the jobs adjust.
Today, there's an incredible shortage of computer programmers. Well, those people who were the elevator operators are not prepared to be computer programmers 'cause they don't have the training to do that. I think many of them would have the capacity to do it. So, I think what we don't do well as a society, and need to do much better when there's disruptive technology going on, which has been the case as long as I've been alive, is to create the tools and the training for people to help them adjust their skills.
Morrell: You are right in the midst of this. You run an AI company. Do you find yourself getting this question about the fear of AI less often? Where are we with this conversation?
Dubinsky: We get it relatively regularly. We actually wrote a piece about it a few years ago that people can Google, and I think we did a nice job with it. We talked about how the things to really worry about are things that have replication capacity. Our algorithms don't self-replicate in any significant way beyond the control of the engineer creating them. A virus, by contrast, could be a lot more dangerous. The Ebola virus that self-replicates could do enormous damage to human beings. We should really focus on those kinds of distinctions and say, "What are really the things we should fear?" That is something to fear, big time. Something that gets replicated beyond our control. But that's not what we're doing. So, we try to calm those fears with some of these kind of analytical approaches, but I find, generally speaking, people want to sensationalize things and they want the scary stuff to emerge. So, even if you counter it, we don't get quoted as much in those things as the people who are happy to sound the alarms.
But, if it leads to some thoughtful and smart regulation, I'd be happy about that. Today, it's just people arm waving but if there were some legislators out there who said, "Okay, what are the real issues here and how do we get down to the hard work of creating the kind of regulatory regime that we need?" That would be a good outcome of that fear, in my view.
Morrell: Okay so, you just laid out so much of the negative stuff that we hear about artificial intelligence. What is the good vision of the future? What does this look like when AI is used for positive impact?
Dubinsky: There are many problems today that people are frustrated they simply can't solve with existing technology. It's hard for me to even predict those things. I like to say when we created the Palm, we never imagined Uber. And so, we don't know what they all will be, but you know, we have a little bit of an inkling. Certainly things having to do with safety and security. I like to use an example of, we met with a company that makes robots that look like snakes and these snakes have sensors all over them and they send these snakes to try to go into dangerous buildings, say a building on fire or something, to assess the situation. They told us, you know, the snake gets to a set of stairs and it can't climb the set of stairs. So, a human being has to somehow program it for those specific stairs.
Where, you as a human, if you see a set of stairs that are novel, that you've never seen before in your life, you have no problem going up or down those stairs, you know? You don't have to be programmed to learn that particular set of stairs. So, they said to us, "Look, if you could help this snake climb these stairs without us having to program it for those stairs that would be huge. Just that."
Morrell: Right.
Dubinsky: And we get example after example, I just used that one 'cause it's so visual, of ways in which there are things we cannot do today, or it's very dangerous for humans today, that if we had a machine with a little more capability in it, that could do some things by learning from its environment and not having to be programmed for, then that would be a value add.
Morrell: Donna, you were just speaking with Professor Yoffie's class, going over a case where you were the protagonist, on Numenta specifically. And I wonder, what were some of the questions that the students had in regards to artificial intelligence and machine learning? Where are their heads at?
Dubinsky: Well, there's a mix of course. Like anything. One thing I would note is the students today have much more familiarity with it than the students of last year or the year before. I see a definite trend of people who've now spent a couple of years out there working on it. I felt like the students who had actually been working on it were very sane and sober about what it can and can't do. I think the students who hadn't worked on it were more, kind of, starry-eyed about it than the students who'd actually worked on it. So, that was interesting to see.
Morrell: But that's an interesting insight. Students who have some experience in this area are much more rounded, it sounds like. Is it just that AI's something we probably don't have to worry about for another hundred years, really? You know, I mean, is that the thing? Maybe like our children's children's children need to worry about this, but like, it's probably not a problem for the next millennium.
Dubinsky: It depends what you're worried about. If you're worried about us creating robots that do everything you and I do and that risk taking over the world, yes, you can put that one aside and not worry about that for a hundred years, at least. That is not happening, so you can sleep soundly at night. If you're worried that there will be more and more powerful tools to do more and more good and bad things in the world, then I think that is within our lifetimes and that is worth worrying about. So, I think understanding those, and, as I said earlier, how to regulate them, is important. But keep in mind that it could really solve some very important problems, whether in sustainability or in planet exploration or in biology and disease fighting. There are just a million areas where, you know, they're still looking for help on solving problems with a lot of data and we think we can help there. You know, we've had these infrastructure things, bad things happen, bridges collapsing, and I keep imagining, if you had a little AI robot, think about something that fits in your hand. People gotta stop thinking about robots as R2-D2.
Imagine a little robot that has sensors on it that is just put on the bridge, crawling all over the bridge all the time. Taking in all of its sensory data, vibration, noise, you know, visual, whatever it can take in about the bridge. Noticing and building a model of what's normal and then seeing early signs of something that's abnormal and raising it to somebody's attention to say, "Oh, there's this thing here that is just not normal and we really should look at this."
We're so far from doing something that simple. I mean, you would have to program it for that specific bridge, you would have to tell it what to look for. Instead of just letting it loose and saying, "Survey the bridge all the time for us." You would have to program that thing. And that's the sort of functionality where I get very excited, where I say, "Wow. Can we make the world safer?" Because we can have little bots that are doing a specific thing, they don't have to be humanlike, they don't have to be doing everything. They can do just one thing. But if they can do one thing better than we could do that thing today, we would all benefit from that.
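The bridge scenario Dubinsky describes is, at its core, streaming anomaly detection: learn a baseline of what "normal" sensor readings look like, then flag readings that deviate from it. The sketch below is a minimal illustration of that idea only; it is not Numenta's HTM algorithm, and the sensor values, window size, and threshold are hypothetical.

```python
from collections import deque
import math

class StreamingAnomalyDetector:
    """Toy baseline model: flag readings far from the recent rolling mean."""

    def __init__(self, window=100, threshold=4.0):
        self.window = deque(maxlen=window)  # recent readings treated as "normal"
        self.threshold = threshold          # std-devs from the mean that counts as abnormal

    def observe(self, value):
        """Return True if `value` looks abnormal relative to recent history."""
        is_anomaly = False
        if len(self.window) >= 10:  # need a little history before judging
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var) or 1e-9
            is_anomaly = abs(value - mean) / std > self.threshold
        self.window.append(value)
        return is_anomaly

# Hypothetical usage: a vibration sensor on the bridge
detector = StreamingAnomalyDetector()
for reading in [0.50, 0.52, 0.48, 0.51, 0.50, 0.49, 0.50, 0.51, 0.50, 0.52, 3.70]:
    if detector.observe(reading):
        print(f"Abnormal vibration reading: {reading}")
```

The point of Dubinsky's example is that the robot learns the baseline from its own environment rather than being programmed for one specific bridge, which is well beyond this simple rolling-statistics sketch.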
Morrell: But you're saying that we can't even really do that because it's not practical to program that. But, is it possible with Numenta's learning model to eventually do something like that?
Dubinsky: Yes. What I believe is within reach, that we can create application specific bots that are smart, that can find their way through things, that can climb those stairs, that can sort out that bridge, and can help us in ways that it's not replacing the human elevator operator, it's not even done today. Nobody's doing that today. The inspector comes to the bridge once every five years. I mean, how often does the inspector come to the bridge?
Morrell: Right.
Dubinsky: So, you know, it really isn't so much taking away jobs or destroying mankind as it is, "Wow, can we enable whole new things that simply can't be done today that make us healthier, that make us happier, that make us safer?" These are things that are within reach and these are things that are exciting.
Skydeck is produced by the External Relations department at Harvard Business School and edited by Craig McDonald. It is available at iTunes or wherever you get your favorite podcasts. For more information or to find archived episodes, visit alumni.hbs.edu/skydeck.