Good, bad, or biased: AI is learning our actual selves.

Part IV of our series, “Real Perspectives on Artificial Intelligence,” features Dan Huttenlocher, the inaugural dean of the MIT Schwarzman College of Computing.

The Engine
4 min read · Jun 9, 2020

No matter how responsibly developed AI may be, its generality seems at once its greatest asset and its greatest danger. How do you reconcile this duality?

Dan is the inaugural dean of the MIT Schwarzman College of Computing. Previously, he helped found Cornell Tech, the digital-technology-oriented graduate school created by Cornell University in New York City, and served as its first Dean and Vice Provost.

I think it’s a great question. Much of this — and I don’t know if it’s fear of AI, exactly — much of the sense of the potential dangers of AI comes from a misunderstanding of what the technology really is.

There’s often a tendency to anthropomorphize technology, and with AI this tendency is much more extreme. In the end, these are still just algorithms. When we’re using machine learning, for example, they are algorithms that we can teach instead of algorithms that we have to code. But they’re still algorithms. They’re not going to become evil. There’s no rational basis for that worry at the present time. I’m not saying it’s impossible in some future with technologies one cannot foresee today, but for the path we are on, it is science fiction.
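To make that taught-versus-coded distinction concrete, here is a minimal sketch in Python using scikit-learn. The task, toy data, and function names are invented for illustration; it simply contrasts a rule a programmer writes by hand with a rule a model infers from labeled examples:

```python
# A coded algorithm: the programmer writes the decision rule explicitly.
# (is_spam_rule is a hypothetical example, not a real library function.)
def is_spam_rule(num_links: int, num_exclamations: int) -> bool:
    return num_links > 3 and num_exclamations > 5

# A "taught" algorithm: we supply labeled examples and the model
# infers its own decision rule from them. (Toy data, illustration only.)
from sklearn.linear_model import LogisticRegression

X = [[0, 1], [1, 0], [5, 8], [7, 6], [2, 1], [6, 9]]  # [links, exclamations]
y = [0, 0, 1, 1, 0, 1]                                # 0 = not spam, 1 = spam

model = LogisticRegression()
model.fit(X, y)                 # the "teaching" step: learn from examples
print(model.predict([[4, 7]]))  # the learned rule applied to a new input
```

In both cases the result is just an algorithm mapping inputs to outputs; the difference is only in where the rule comes from.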

If these algorithms are used for something bad, it’s because people decided to use them for something bad. They don’t decide to do things that are bad on their own.

I’ve heard similar sentiments expressed throughout the AI community. On one hand, this kind of sentience, like Data from Star Trek or the Terminator robot, is far-fetched, but it’s also so ingrained in popular culture that it’s hard to think about AI any other way.

It’s easy to anthropomorphize AI when you have an Alexa or Google Home or something similar in your house answering your questions. And just imagine when these tools become robots that you’re conversing with — which won’t be long, I’m sure. It’s natural that people will then assume that these machines have a full range of human intelligence. People already anthropomorphize their Roombas.

How these intelligences will evolve, especially considering how machine learning has grown lately, poses some interesting questions. Many of these platforms are now learning from us rather than being programmed — they learn what we do, not what we aspire to do, not what we say we’re going to do. They are not necessarily learning our best attributes; they are learning our actual selves. And that can be embarrassing.

And these attributes can be interpreted as biases.

Yes. Biases we may not even know we have. Or biases we do know we have, some of which we legislate against. So when you use observed human behaviors and decision making to train machine learning systems, all kinds of bias come along with the data, much of it undesirable.

Once we recognize that these platforms are surfacing biases, we can get more systematic about combating them. That’s what good actors are doing now, and what legislation will eventually require. I think we’re in a time period now where people are rightly agitated — and that’s good, because it’s highlighting things that need attention in the world, not only in AI. I think AI will continue to evolve in a positive way — we’ll end up in a much better place than today.
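As one illustration of what getting systematic can look like, here is a minimal sketch in Python (the records, group labels, and metric choice are hypothetical) of a simple fairness audit that compares a model’s positive-decision rates across groups, a common first check for the kind of bias described above:

```python
# Minimal bias-audit sketch: compare a model's positive-decision rate
# across groups. (Toy records; group names are hypothetical.)
from collections import defaultdict

# Each record: (group, model_decision), where 1 = approved, 0 = denied.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    approvals[group] += decision

rates = {g: approvals[g] / totals[g] for g in totals}
print(rates)  # here: {'group_a': 0.75, 'group_b': 0.25}

# Demographic-parity gap: a large gap flags a disparity worth investigating.
gap = max(rates.values()) - min(rates.values())
print(f"parity gap: {gap:.2f}")
```

A check like this doesn’t explain where a disparity comes from, but it turns a vague worry about bias into a number that can be monitored and acted on.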

I like the optimism! A bit of a pivot here — will AI, like calculus or geometry, become a prerequisite for every engineering program?

Yes, I think so. And I would broaden this beyond AI to encompass “computing” in general. Even now it is something that every student needs to learn. As AI gets to be a more and more important part of computing, it too will become required. MIT already has a large percentage of undergraduates studying machine learning. In many senses, computing is already a de facto requirement.

So much has changed since your time as a student at MIT’s AI Lab. What’s surprised you the most since you left?

The big surprise is how quickly AI became practical. When I was a grad student in the 1980s, we joked that AI was the perfect research problem because it would always look like it was almost ready. Now here we are, a few decades later, and AI is used everywhere.

The final question: in 20 or 30 years, do you see any industries or sectors being AI holdouts? Those in which AI will be difficult to implement for technological or cultural reasons?

For technological reasons? I can’t think of anything. But if there’s some big AI backlash, maybe we’ll see some sort of neo-Luddites who are specifically anti-AI; that is certainly not impossible.

Written by The Engine

Built by MIT, we help founders create the next generation of world-changing companies. Visit us at engine.xyz.
