Staying optimistic about AI amid the weaponization of trust.

Part II of our series, “Real Perspectives on Artificial Intelligence,” features Colonel Randy “Laz” Gordon — an Air Force test pilot, HBS alum, AI leader, and overall badass.

The Engine
4 min read · Jun 1, 2020

Should we remain optimistic about a future increasingly reliant on AI?

Colonel Gordon is the Director of the USAF/MIT Artificial Intelligence Technology Accelerator. He is a Presidential Fellow, Harvard Business School alumnus, DARPA Fellow, and MIT Fellow, and he previously served as commander of the F-22 Combined Test Force.

When I came into this, I was very much focused on the notion of “AI as a savior” — whatever problem you have, whatever it is you’re trying to do, there’s an AI for that. But ultimately AI is a reflection of us as humans and the society we come from. So rather than getting hyper-focused on the technology itself, what I’ve learned is that AI is actually a behavioral science.

So should we be optimistic? Absolutely. In the United States, we have a society built on some very fundamental values. During my time in the military, I traveled to a lot of different places in the world where those fundamental values just aren’t there. As AI technology develops in those nations, it will be a reflection of those times and those places. The U.S., in my view, is by far the place to be optimistic about such development.

When you meet with senior Air Force leaders, how do you explain the reality of AI? To what degree do you have to temper expectations?

I often have to temper expectations in my conversations with senior leadership — the state of the art is nowhere near what you’re thinking and nowhere near what you picture in science fiction. My team has been trying to teach people about the reality of AI versus the rhetoric of AI. And that gulf is pretty huge.

I think a lot of that is driven by the name itself. Many people interpret “artificial intelligence” to mean that a machine has some level of consciousness — that it shares our level of understanding and appreciation. In reality, the technology is essentially just very good statistical modeling. There is no “understanding.”

AI is subject to all the biases that we as humans ultimately feed it. It is subject to the quality of the data and then the construction of the algorithm itself. So it’s extraordinarily brittle.
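To make that point concrete, here is a minimal sketch, with entirely hypothetical labels and data, of what “a reflection of the data” means. The “model” below is nothing more than the label frequencies of its training set, so a skewed sample produces a skewed prediction:

```python
# A deliberately naive "classifier": it learns only the empirical label
# frequencies of its training data. All labels here are hypothetical.
from collections import Counter

def train(labels):
    counts = Counter(labels)
    total = sum(counts.values())
    # The entire "model" is the label distribution it was fed.
    return {label: n / total for label, n in counts.items()}

# Skewed training sample: 90% "approve", 10% "deny".
model = train(["approve"] * 90 + ["deny"] * 10)

# The prediction mirrors the bias in the data, not any understanding.
print(model)                      # {'approve': 0.9, 'deny': 0.1}
print(max(model, key=model.get))  # approve, regardless of the case at hand
```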

In the Air Force we tend to think that if there’s a problem, we can throw an AI against it and it will fix everything. Then you have a deeper conversation about the quantity and quality of the data. Is that data shareable and releasable? And even if you have all of those things, what you get is only really good at a narrowly defined statistical model.

It’s not like it’s going to show up and suddenly fix the entire Air Force personnel management system. It’s not going to suddenly create artificial flying machines. There’s a great difference between that rhetoric and the reality of where things are at.

Even so, AI is ubiquitous and incredibly powerful. What do you see as the most significant breakthroughs in the evolution of AI in the past 20 years? And how are those breakthroughs impacting our lives today?

The growth of compute power and the sheer amount of data available now have created a weaponization of trust. What do you believe anymore?

When we see something online, our natural human tendency is to trust it, because the algorithms behind those feeds are designed to understand your preferences and continuously feed you more of what you’ve already expressed a liking for. It’s almost like a silent curator operating behind the scenes. A secondary effect is that it splits people apart, to the point where we don’t even agree on the same set of facts.
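That “silent curator” loop is simple enough to sketch. In this hypothetical feed ranker (item titles, topics, and click counts are all made up), every click raises the clicked topic’s score, so the loop narrows the feed on its own:

```python
# Hypothetical feed ranker: score each item by how often the user has
# clicked its topic before, then serve the highest-scoring item first.
def rank_feed(items, topic_clicks):
    return sorted(items, key=lambda it: topic_clicks.get(it["topic"], 0),
                  reverse=True)

items = [
    {"title": "Story A", "topic": "politics"},
    {"title": "Story B", "topic": "science"},
    {"title": "Story C", "topic": "politics"},
]
clicks = {"politics": 2, "science": 1}

# Simulate a few rounds: the user clicks whatever is ranked first,
# which raises that topic's score and entrenches it at the top.
for _ in range(3):
    top = rank_feed(items, clicks)[0]
    clicks[top["topic"]] += 1
    print(top["title"], clicks)
```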

Compounding that, when people see something written, they assume it comes from another human being, when in reality it could have been created by an AI. Add in the power of generative adversarial networks, and you can create realistic images of people who don’t actually exist. How do you believe what you’re seeing, and how does that doubt not become a contagion? This is a psychological dimension of technology that we haven’t really had to deal with as a society yet.
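For readers unfamiliar with the technique: a generative adversarial network trains two models against each other, a generator that fabricates samples and a discriminator that tries to tell fakes from real data, until the fakes pass. A face generator is a pair of deep networks; the deliberately tiny sketch below shrinks the same adversarial loop to three scalar parameters and 1-D “data,” purely to show the mechanism:

```python
# Toy 1-D GAN: "real data" are numbers drawn from N(5, 1); the generator
# learns an offset b so its fakes, z + b with z ~ N(0, 1), fool the
# discriminator D(x) = sigmoid(w*x + c). All values are illustrative.
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

b = 0.0          # generator parameter
w, c = 0.1, 0.0  # discriminator parameters
lr = 0.02

for _ in range(5000):
    real = random.gauss(5.0, 1.0)
    fake = random.gauss(0.0, 1.0) + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator step (non-saturating loss): push D(fake) toward 1.
    fake = random.gauss(0.0, 1.0) + b
    b += lr * (1 - sigmoid(w * fake + c)) * w

print(round(b, 2))  # tends to drift toward ~5.0: the fakes look "real"
```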

So how do we combat this “weaponization of trust”? What should the U.S. and its academic institutions be doing to ensure not only an enduring competitive advantage in AI, but also responsible development of the technology?

When it comes to any emerging technology that changes rapidly — like what we’re seeing with AI — there really is no roadmap, because you’re at the tip of the spear. The only way to ensure lasting success is to have open, direct dialogue between disciplines and sectors. That’s something unique to the United States, and it will keep us at the forefront.

Take the current COVID-19 pandemic, for example. If there’s a way to use the statistical power of AI and machine learning to better diagnose the disease, to better understand where outbreaks are occurring, to better deliver telemedicine — that’s something you’ll see developed here. And when we do that — when we lead with our values — that becomes an example to the rest of the world.

Written by The Engine

Built by MIT, we help founders create the next generation of world-changing companies. Visit us at engine.xyz.
