Cover art for Direct Current podcast episode “AI: Safer, Smarter, More Secure” featuring a stylized blue background evocative of cyberspace.
U.S. Department of Energy

MATT DOZIER: Welcome back. This is Direct Current – An Energy.gov Podcast. I’m your host, Matt Dozier. Still broadcasting from inside my coat closet. I hope you’re well. If you’re like me, you’ve probably found it hard to transition to a fully digital social life. We’re relying on virtual tools to stay in touch with our loved ones more than ever, with all the glitches and hiccups and awkwardness that come with it. Does anyone else find seeing their own face during video calls incredibly distracting? It can’t just be me. So. One side effect of us living our lives through the internet is we’re interacting more and more with artificial intelligence. Food delivery apps, streaming services, telemedicine, they’re all using some form of AI to bring dinner to your door, recommend a new show to binge, or connect you with a doctor. The same is true for almost any digital service that isn’t totally reliant on individual humans. And while these AI-powered tools can be incredibly useful, they aren’t perfect. They can make mistakes, or have security flaws that leave them vulnerable to hackers. My guests in this episode spend a lot of time thinking about the risks of handing over so much responsibility to AI systems — and how we can improve them. Make them safer, smarter, and more secure. This is the second of our two live episodes recorded pre-quarantine at the American Association for the Advancement of Science, or AAAS, meeting earlier this year. Thanks for listening, and stay safe out there.

(DIRECT CURRENT INTRO THEME)

DOZIER: Hello everyone, this is Direct Current, an Energy.gov Podcast. I'm your host, Matt Dozier, with the U.S. Department of Energy. We are here live at the 2020 AAAS Meeting in Seattle on the Sci-Mic Podcasting Stage, presented by This Study Shows. I'm delighted to welcome my guests today, Kyle Bingman and Court Corley, thanks so much for joining me today.

COURT CORLEY: Thank you so much.

KYLE BINGMAN: Yeah, thank you.

DOZIER: I'm going to start by having you introduce yourselves. Tell us where you work and what you do. Court, we'll start with you.

CORLEY: Sure, so my name is Court Corley and I am a data scientist at the laboratory. I lead a bunch of our AI research, as well as a group of data scientists that apply AI and machine learning across energy, science, national security-type domains, and it's a really fantastic way to see just how far we've come with AI, and what can be done, which we'll totally bash over the next 30 minutes of this podcast. So. (LAUGHTER)

DOZIER: And this is Pacific Northwest National Laboratory, so we're in the neighborhood, sort of.

CORLEY: Absolutely, so, we are at our main campus in Richland, Washington, where half of my group is based, and we also have a larger presence in South Lake Union in Seattle.

DOZIER: And so you're also at the lab, right?

BINGMAN: I am. So I actually work at our Seattle office, so just a mile away or so. My name's Kyle Bingman, I'm an advisor on assured artificial intelligence here at the lab. What that means is I'm essentially figuring out our research direction or research goals, how to essentially develop and deploy AI that is trusted, safe, and secure.

DOZIER: So we're talking about AI today, artificial intelligence. It's a big area of research for the Department of Energy and the National Labs. I've heard people say we're living in a "Golden Age of AI." Just how widespread is AI in our lives today?

CORLEY: So it's really everywhere. If you imagine your phone, if you've ever used the photo app on either Google or iOS — the other day, I wanted to see what sushi I had eaten, so I opened up my photos app and I typed in "sushi," and lo and behold, back came all these photos of sushi. And so what that is, is an AI, a machine learning algorithm that goes in and detects objects in images and then categorizes them and makes them searchable, so I can go back later and find pictures of sushi that I had, or dogs, or anything else I want. So whenever you say, "AI is everywhere," that's one example of it being really everywhere... that we touch.
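To make Corley's sushi-search example concrete, here is a minimal Python sketch of the pattern he describes: a model labels every photo once, and search then becomes a simple lookup over those labels. The classify_image() helper is a hypothetical stand-in for the on-device model; the filename-based labeling is only a placeholder so the sketch runs end to end.

```python
from collections import defaultdict
from pathlib import Path


def classify_image(photo: Path) -> list[str]:
    """Placeholder for the real on-device classifier.

    Here we pretend the filename tells us what is in the photo
    (e.g. "sushi_01.jpg" -> ["sushi"]); a real photo app would run a
    trained image-recognition model instead.
    """
    return [word for word in ("sushi", "dog", "cat") if word in photo.stem.lower()]


def build_index(photo_dir: str) -> dict[str, list[Path]]:
    """Label every photo once and index the results by tag."""
    index: dict[str, list[Path]] = defaultdict(list)
    for photo in Path(photo_dir).glob("*.jpg"):
        for label in classify_image(photo):
            index[label].append(photo)
    return index


def search(index: dict[str, list[Path]], query: str) -> list[Path]:
    """Searching is now a dictionary lookup, not a rescan of every image."""
    return index.get(query.lower(), [])


# Example: index = build_index("Pictures"); search(index, "sushi")
```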

DOZIER: What are some of the places that people would be surprised, you think, to learn that AI is at work?

BINGMAN: So that's one thing I was actually doing this morning, is seeing if I could brainstorm a list of all of the places that I see AI day to day. Janelle Shane, she's this researcher that looks at AI in the world, she's amazing, and one of the things she says is that if you've been on the internet, you've probably interacted with an AI and not realized it. So it's everything from you getting your driving directions, it's getting matched with a doctor in a live health service, it is things like figuring out how to get custom playlists. And then even outside of that, it's stuff like getting your pictures to look better on your phone. Some of the best AI in phones is actually in the camera. There's all of these kind of weird unexpected places that we're actually using it all the time.

DOZIER: Wait, so what is the AI doing to my pictures in my phone?

BINGMAN: It's making them look better. So you can essentially have a lower-quality camera that is able to make photos look like they are from a really expensive camera.

DOZIER: So there are lots of different forms of AI, right, and we're talking about all of these different applications that are already in use. How has our definition of what constitutes AI changed over time?

CORLEY: So I think it's grown and it's morphed, but it's also been the same. I think if you look at the Wikipedia page, it says AI goes back to antiquity, with automatons and Greek mythology. But I think in modern-day vernacular it came around in the '50s, talking about things that humans can do, and making a computer think, see, touch. So today, what we think about as AI includes all of those things. There's a great quote by Andrew Ng, who is a Stanford professor, and he says, "If a human can do a task in a couple of seconds, then likely an AI can do it today." Where that will be in five years is probably a minute or two, so that means picking things up, recognizing objects in images, detecting, sensing, all of those things. That's really how it has changed. If I think back to whenever I was in grad school, there was no speech translation; it was a good old thesaurus and my Spanish dictionary to try and learn Spanish. Today, all that is done for me, with maybe the caveat that it's still not perfect.

DOZIER: Yeah. We talked a little about the evolution and some of the steps that have come along the way, and what people thought AI was, and redefined it subsequently. So tell me a little bit about that.

BINGMAN: Yes, that's one of the interesting things that's happened over the years is essentially, every time we say something is AI, we decide that it's not that — that it's actually going to be something else. Really, where this all started, like Court was saying, back in the '50s, people were doing something called rules-based AI, as in, we have to explain everything there is about the world to a computer, and then we will have an artificially intelligent system. In fact, there's these professors at Stanford who thought that, you know what, we're going to spend a summer figuring out how to do this, and by the end of the semester we'll have, essentially, artificial intelligence figured out. But it turns out, there's a phrase called, "You know more than you can say." It's incredibly hard to describe the world in any way that is comprehensive outside of very specific, small tasks. So over time, what happened is that we've been trying to figure out ways to offload the determination of what the world is and how the world works from humans onto the AI. And that's one of the things that happened in the late '70s, early '80s, is this push toward machine learning, of, "Well, this system, we'll give it a rough outline of the world. We'll tell it what's important, and then it will figure out how things work, and figure out what those patterns are." That's still hard, it still didn't work really well, so eventually what happened is they realized we could make a system, an artificial neural network — the technique actually goes back to the '40s, but it got reinvigorated — and the whole idea with that was that you don't have to tell it really anything. All you have to do is give it data and some information about what that data is, and over time the AI will kind of use trial and error to make itself better. The downside with all that is you have less understanding of what's going on. With those rules-based AI, you know what it's going to do...

DOZIER: Because you made the rules.

BINGMAN: Yeah, exactly. But now it's — we didn't. We just kind of told it what direction to go in. 

CORLEY: Here's a pile of data, please take a look at it and tell me what you're going to do with it.

BINGMAN: Yeah, exactly.
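As a rough illustration of the shift Bingman and Corley just described, here is a toy contrast in Python between a rule a human writes down and a tiny model that learns the same behavior purely from example data. The spam-style features, thresholds, and learning rate are all invented for the sketch; the point is only that the learned version ends up as a pile of numbers nobody explicitly wrote, which is why it is harder to say in advance what it will do.

```python
import numpy as np

# Rules-based "AI": a human writes down how the world works.
def rule_based_flag(num_links: int, num_exclaims: int) -> bool:
    return num_links > 3 and num_exclaims > 5

# Learned "AI": we only provide examples; trial and error does the rest.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))              # features: links, exclamation marks
y = ((X[:, 0] > 3) & (X[:, 1] > 5)).astype(float)  # labels the model should learn

w, b = np.zeros(2), 0.0
for _ in range(5000):                               # gradient descent as "trial and error"
    p = 1 / (1 + np.exp(-(X @ w + b)))              # current predictions
    grad = p - y                                    # how wrong each prediction is
    w -= 0.05 * (X.T @ grad) / len(y)
    b -= 0.05 * grad.mean()

# The learned rule is just these numbers; nobody wrote it down explicitly.
print("learned weights:", w, "bias:", b)
```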

DOZIER: So as AI becomes more complex and more commonplace, what are some of the risks? So we're talking about making AI safer, smarter, more secure — what are some of the risks of handing over so much power to algorithms?

CORLEY: So over the past couple of years, I've spent a lot of time applying a particular part of AI to science and energy missions at the Pacific Northwest National Laboratory. And what's interesting, over that time, is you begin to see all the great things it can do, and then we begin to say, "OK, well, now we can use AI for climate science, climate modeling, for high energy physics, but now what does it mean when we start to use AI for security?" And I think we've talked about some really interesting examples of, what is the risk in an autonomous vehicle? If we have a car that's self-driving, that seems to open up a lot more risks, and discussion about risks and safety, than there would be if you were talking about a high-energy physics experiment. So a lot of this developed over the past few years, at least internally to what we're working on, and then looking outward to see what other people are working on as well. And I know the risks involve security: how secure is my model, can it be messed with or hacked? Is it safe, does it work the way I think it's going to work? And there are many, many other ways to think about the categories of risk.

DOZIER: I wanted to talk about self-driving cars, especially, because I think they are one of the most high-profile examples of people seeing AI being applied in a way that is very visible, very present. They're already rolling out in cities across the U.S. So tell me a little about what sort of things you're concerned about and thinking about in an application like that, which could potentially put people's lives at risk.

BINGMAN: Sure. So when you think about an autonomous vehicle, there's AI all in it — that's the name of the thing. But when you break it down, it's made of a bunch of different AI-based systems that are all doing a specific task. They're figuring out what the drivable space is. They're figuring out what vehicles are around it. They're looking for pedestrians. Everything you do when you're driving. But one of the things we keep seeing in academic research is that those specific tasks often can be fooled. There are papers out there about how you can put stickers on a stop sign, for instance, that for us would look just like some random graffiti on a stop sign, but that would cause the autonomous car to believe that it's now seeing a speed limit sign, and potentially would ignore that direction to stop. This is happening more and more and more, there's increasing numbers of techniques and methods out there that are potentially able to fool vehicles in that way.
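The sticker attack Bingman mentions belongs to a broader family of "adversarial example" techniques studied in the research literature. Below is a bare-bones Python sketch of one of the simplest ideas from that literature, the fast gradient sign method: nudge every pixel slightly in the direction that most increases the model's error, so the image looks nearly unchanged to a person but can flip the classifier's answer. The grad_wrt_image argument stands in for a gradient you would compute from a real model; this is not the specific stop-sign attack described in the episode.

```python
import numpy as np


def fgsm_perturb(image: np.ndarray, grad_wrt_image: np.ndarray,
                 epsilon: float = 0.03) -> np.ndarray:
    """Fast-gradient-sign-style perturbation of an input image.

    image          -- pixel values scaled to [0, 1]
    grad_wrt_image -- gradient of the model's loss with respect to the image,
                      taken from whatever model is being attacked
    epsilon        -- how large a nudge each pixel is allowed
    """
    adversarial = image + epsilon * np.sign(grad_wrt_image)
    return np.clip(adversarial, 0.0, 1.0)  # keep pixels in the valid range
```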

DOZIER: Right. There are other concerns, as well, in terms of understanding the way — we talked earlier about the way that these algorithms are arriving at certain decisions. So tell me a little bit about what you're thinking about in terms of understanding the mechanisms by which they reach those decisions.

CORLEY: So the way they reach their decisions, often, is by training them on data. We've seen a lot of news stories, or I've seen a lot of news stories, recently about bias in the data itself, how it's trained, what it's used for, how the data was collected. And all those things translate to the autonomous vehicle setting. What was the data that was collected? Was it LIDAR data, was it video data, was it stereo data? How was it collected? How was it trained? And the risks introduced by that, and the models that are built from it. And Kyle was just talking about this area of adversarial machine learning, where you could insert something to make the AI do something it wasn't supposed to do. Well, you can mess with the AI itself, but you can also mess with the data. So what happens if you have an autonomous vehicle, and now there are all these risks associated with it: how was the data used, how is it protected, how was the model trained? Because you're right, it's a safety-critical application of AI. So how do you have assurance that it's going to work the way you want it to? I think a lot of what Kyle and I think about at the lab is very much that assurance angle of, yes, we know that in the literature there are risks to data, we know there are risks to models, we know there are risks to how these things work. But it's also beginning to think more broadly about what are the large systems that could be affected, and what can we do to help?
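One simple way to picture the data-side risk Corley describes is label flipping: if an attacker can quietly alter a small share of the training labels, the model learns the wrong lesson without the model itself ever being touched. The sketch below is a toy version of that idea; the flip fraction and the binary labels are arbitrary choices for illustration, not a real attack on any deployed system.

```python
import numpy as np


def poison_labels(y: np.ndarray, fraction: float = 0.05, seed: int = 0) -> np.ndarray:
    """Flip a small fraction of 0/1 labels: a crude training-data poisoning attack."""
    rng = np.random.default_rng(seed)
    poisoned = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]   # the change is silent: same inputs, wrong answers
    return poisoned
```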

DOZIER: Speaking of large systems, what are some of the other big applications that the lab and others are looking at going forward, in terms of AI rolling out as a new way of controlling things?

CORLEY: The one that I think the most about is the grid. The electric grid is made up of independent, interconnected components, with electricity flowing across transmission lines. It's very much a critical piece of infrastructure to get electricity to our hospitals, to our schools, to our street lights, to everything else. And it is driven by human operation today. So there are human operators who, depending on the strain on the system, follow guidance, based on standard electrical engineering and the science associated with the grid, about what actions they should take. So it's a very human-driven process now, which means that it is more robust in some senses, but also at risk in others because it's slower; maybe it can't react as quickly as one might like. So people are trying to use AI to help augment that process, to be able to say, OK, under strain on the grid — they call it emergency grid contingency — if there's a situation, the AI itself will say, these are the best ways to go about protecting the grid, turning a station on and off. And so that's a really exciting way that AI could be used on the grid.
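As a highly simplified picture of the decision support Corley describes, the sketch below scores a handful of candidate actions under a strained grid state and suggests the one predicted to relieve the most overload. The action names, the numbers, and predict_overload() are all invented placeholders; in a real system that function would be a trained model or a power-flow simulation, and a human operator would stay in the loop.

```python
# Hypothetical candidate actions an operator might consider under strain.
CANDIDATE_ACTIONS = ["shed_load_feeder_a", "reroute_line_7", "dispatch_reserve_unit"]


def predict_overload(grid_state: dict, action: str) -> float:
    """Placeholder for a trained model or power-flow simulation.

    Returns the estimated remaining overload (MW) after taking the action;
    the relief values below are made-up numbers for illustration only.
    """
    relief_mw = {"shed_load_feeder_a": 40.0, "reroute_line_7": 25.0,
                 "dispatch_reserve_unit": 60.0}
    overload = grid_state["demand_mw"] - grid_state["capacity_mw"]
    return max(overload - relief_mw[action], 0.0)


def recommend_action(grid_state: dict) -> str:
    """Suggest (not execute) the action with the lowest predicted overload."""
    return min(CANDIDATE_ACTIONS, key=lambda a: predict_overload(grid_state, a))


# Example: demand currently exceeds capacity by 50 MW.
print(recommend_action({"demand_mw": 1050.0, "capacity_mw": 1000.0}))
```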

DOZIER: Now these are really big systems which raise some really big questions, then, about how we're going to secure them, how we're going to protect them from outside interference. Where do you even start?

BINGMAN: For me, one of the things I think is most important is that we start accepting that this is potentially a very big risk. We've seen things happen over and over and over with technology that we make something, and then we rush to implement it, and then we realize it's vulnerable. We've seen this with the internet. We've seen this with cars, when we realized that cars didn't have any safety systems to protect the drivers. Just time and time again. So to me one of the things I think is most important is that we're like, OK. We do implement this in the systems, we want to help make our grid better. We want to do better science. We want to do all these things, but at the same time we should be making it a priority to do this safely and securely.

CORLEY: So, Kyle, as you've said to me, if you invent the ship, you invent the shipwreck. (LAUGHTER)

DOZIER: So, in terms of identifying what "the shipwreck" could be, what are some of the tools and tricks that you have, and scientists have, to start trying to address those risks going forward?

BINGMAN: Yeah. So my background is actually in cyber red-teaming. And one of the organizations I worked for in the Air Force was what's called an "aggressor unit." So you take the mindset of a creative, capable adversary, and you look at the full spectrum of — in our case at that time — a network, and figure out what are the various things that could potentially happen. What could we do? And we're doing it to make things more secure. It's one of the things I believe we should do with this, is take a look across the range of how an AI system is developed, all the way from when the data is collected like Court was talking about through its training process and then through its deployment, to take a hard look at what that is — and not to stop it, but to help it be better.

DOZIER: Talk a little about the work that's happening at the lab in terms of trying to address this and understand these concerns.

CORLEY: So I think the area we invest a lot in is this "assured AI" concept, and it really begins with dividing the area into categories of focus. I guess the first direction is acknowledging it. There are some great reports out there; Microsoft has published a series of them describing the risks to their enterprise, and I think for what we're doing it's very much the same, what are the risks to our enterprise? So, kind of acknowledging it. Then security: what is the security of the data, the models, the things that we are developing that are in critical applications? The other is, how safe are they? Can we ensure their robustness? What are ways that we can measure how a system will operate or how it will work in the real world? And those are the things that we see in the literature. There are a ton of papers that are very academic, in the sense that they're experimentation, they're trying it out. They're saying, hey, is this going to work? Is this not going to work? But what we are doing at the lab is asking, "Is this a problem in the real world?" Is it a problem in the physical sense, like in fog, whenever it's raining? Is a patch or a sticker on a stop sign really going to be a problem in all conditions? And trying to understand that: what is the boundary of what we need to think about?

BINGMAN: With that too, understanding how AI is actually integrated into systems, and what potential safety or security concerns arise from that. With autonomous vehicles, you have systems that are essentially special-made, that they were able to engineer specifically to work with AI. But when you talk about systems like the grid, it's implementing AI into older systems, and we need to understand in advance what are the implications of that, and how do we do this smartly?
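One way to picture the "does it matter in the real world?" testing Corley and Bingman describe is to measure a model's accuracy under a sweep of degraded conditions rather than on a single clean benchmark. In the sketch below, the fog and rain perturbations are crude stand-ins (a brightness washout and added noise), and classify is whatever model wrapper is being assessed; this is the general shape of an evaluation harness, not PNNL's actual methodology.

```python
import numpy as np


def add_fog(img: np.ndarray, strength: float) -> np.ndarray:
    """Crude fog stand-in: wash the image out toward white."""
    return np.clip(img * (1 - strength) + strength, 0.0, 1.0)


def add_rain(img: np.ndarray, strength: float, seed: int = 0) -> np.ndarray:
    """Crude rain stand-in: add random noise."""
    noise = np.random.default_rng(seed).normal(0.0, strength, img.shape)
    return np.clip(img + noise, 0.0, 1.0)


def robustness_report(images, labels, classify) -> dict:
    """Accuracy of `classify` under each condition, not just on clean inputs."""
    conditions = {
        "clean": lambda x: x,
        "light_fog": lambda x: add_fog(x, 0.3),
        "heavy_fog": lambda x: add_fog(x, 0.7),
        "rain": lambda x: add_rain(x, 0.1),
    }
    report = {}
    for name, perturb in conditions.items():
        correct = sum(classify(perturb(img)) == label for img, label in zip(images, labels))
        report[name] = correct / len(labels)
    return report
```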

DOZIER: So you folks are asking these questions. Is anyone else asking these questions?

BINGMAN: There's actually an increasing number of people. Microsoft's report is one that I was personally so excited to see. The statement they made about how important security was to their enterprise was great. Another really good one was OpenAI. They've been kind of at the forefront of leading discussions about what it means to release AI into some type of use in society, and making sure that we're thinking about what we're doing, and not rushing into it.

DOZIER: So we're at a big scientific conference, one of the biggest, here. Obviously there's a lot of excitement around using AI in science. Are there specific concerns you have when it comes to adding more AI, and potentially more uncertainty, into research findings because of AI's complexity and it being kind of a "black box"?

CORLEY: So I think the answer is not more concern, but just that more awareness and education needs to happen. We're going to be using it; it's coming whether we like it or not. And so whatever form it's in, we need to have a dialogue about the safety and security of it. We need to be able to describe it and characterize it and go forward. Meaning, if you're going to have an imaging system with an AI that's going to detect a cancer, is there going to be a human in the loop, as well, to be able to augment that diagnosis? Or is it just going to be fully automated, so no more radiologists anymore? I don't think we're there, and I don't think that's what we're saying. I think, yes, let's use it, but let's help AI make us better, and smarter, and more effective human-machine teams as we go along.

DOZIER: Right. Do you ever feel like a buzzkill, going around, everybody's so excited about AI, you're the ones saying, "Wait, hold on, let's think about this for a minute"?

BINGMAN: I think it's easy to, sometimes, but then you stop and think about what you're doing. I'm not trying to stop this. We're trying to make it better. And once you can help people see that, and kind of get the vision for, "Yeah, we're going to keep using this, and we're going to do it even better," it's easy to step away from that.

CORLEY: And I definitely give the analogy of penicillin. Penicillin was this thing that was accidentally discovered, and people used it, but they had no idea of the science behind it, the theory behind it, microorganisms, anything of that sort. And AI's kind of in the Dark Ages right now in that same way of, in the future we'll have a theory about how it works, but right now we know some things work and we're going to try and use them to be functional and effective — and it's working really, really well, and making things a lot better in many cases.

DOZIER: And it really is important to understand how it's actually working, especially someplace like the Department of Energy and the National Labs, when there are high stakes with a lot of these applications, right.

CORLEY: Absolutely. I think the Department of Energy has invested a lot in high-performance computing and scientific applications over the years, from atmospheric science to nuclear energy and everything in between that really involves complex high-performance computing simulation. That has scientific value as well as energy resilience and security value. So the next step is AI that follows from that, and that's one of the reasons why the DOE does care about this, because AI is going to be supporting all of those scientific missions and the energy missions that go along with them. So the next question is, what are we doing about that as the DOE? And yes, it's very much that we will use AI in all we do, but we're also going to come at it with a, "How do we make sure that we're using it in a safe and robust and resilient way, to ensure that we have the best use of the technology?"

DOZIER: What does the future look like, to you? Are you optimistic about us being able to take AI, use it to its maximum benefit, and also keep that risk at an acceptable level?

BINGMAN: I am, actually. Which, coming from a cyber background is maybe surprising, that I'm optimistic (LAUGHS). But with this, especially in the adversarial machine learning community, there is this whole growing number of researchers who are very excited about figuring out potentially how do these systems work, where could they go wrong, and how could we make them better? There's just so much excitement in that community, and so much energy towards making progress in it that I really think we can make good steps.

DOZIER: Court, what about you?

CORLEY: So for those that haven't seen "Black Mirror," the series, hopefully I'm allowed to say on here that it's a great way to see what the opposite end of what could happen looks like. Dystopian future. And I definitely don't think that will ever happen. It's fictional, and it's up to us as scientists and engineers and advisors and leaders in the field to make sure it doesn't happen. And I think there are enough people working on it, like Kyle said, who really care, and who are creative enough to see an end state that maybe wouldn't be as positive for us.

BINGMAN: That's one of the key things, is that by having this discussion and starting to do this work, potentially we forestall a reality where we do have insecure AI, where we do have unsafe AI. And if that turns out to be something that never would have happened to begin with, that's fine because we've still done all the work, we've still had all the conversations that have made it a priority. 

DOZIER: Yeah. A step in the right direction.

BINGMAN: Exactly.

DOZIER: Cool, well thank you both very much for joining me today. I really appreciate it. 

BINGMAN: Yeah, thank you.

CORLEY: Thanks so much, it was great being here.

DOZIER: Thank you to my guests, Court Corley and Kyle Bingman. That's it for this episode of Direct Current. Thank you to AAAS for having us here on the Sci-Mic Stage, presented by This Study Shows. You can find Direct Current at energy.gov/podcast or wherever you get your podcasts. Follow us on Twitter @energy. I've been your host, Matt Dozier. Thank you so much for listening.

Artificial intelligence is all around us — even if we don’t realize it.

Whether it’s in a self-driving car, a food delivery app, or our nation’s electrical grid, the rapid spread of AI and machine learning raises some big questions about security. How do we make sure AI-controlled systems are working as intended? And how do we protect vulnerable technologies from outside interference?

Our guests from Pacific Northwest National Laboratory are working hard to answer those questions — and to get more scientists, engineers and tech leaders to start asking them. This episode was recorded live at the 2020 American Association for the Advancement of Science (AAAS) annual meeting in Seattle, Washington.

Diving into Data at PNNL

Pacific Northwest National Laboratory is advancing the understanding and improvement of contemporary data analytics and artificial intelligence with application to scientific and national security problems.
Video courtesy of the Department of Energy

Pacific Northwest National Laboratory (PNNL) is a hotbed of data science and machine learning research. Our guests in this episode, Courtney Corley and Kyle Bingman, are part of a PNNL team that is advancing the frontiers of scientific research and national security by understanding and improving contemporary data analytics and artificial intelligence with application to scientific problems. Learn more about the lab's groundbreaking work.

All Things AI

The Artificial Intelligence and Technology Office (AITO) is the Department of Energy's center for all things AI. The office is working to accelerate the delivery of AI-enabled capabilities, scale the department-wide development and impact of AI, expand partnerships, and support American AI leadership. Learn more about the office's work, and subscribe to DOE's AI newsletter for regular updates.