"Traffic Congestion Brasilia" by Mario Roberto Duran Ortiz Mariordo - Own work. Licensed under Creative Commons Attribution 3.0 via Wikimedia Commons - http://commons.wikimedia.org/wiki/File:Traffic_Congestion_Brasilia.jpg#mediaviewer/File:Traffic_Congestion_Brasilia.jpg

SPECIAL GUEST: Josh Corman (of IAmTheCavalry.org) - Our cars have become computer networks on wheels... but we rarely think about what that means. Just as our home and business networks are vulnerable to attack, now our cars are too. What are we truly risking when we get behind the wheel? And what does this mean for the future, and for the idea of the self-driving car? Join us and find out. Recorded 8/17/2014.


You can download the episode here.


Mike & Matt's Recommended Reading:

About Josh Corman, from his personal site Cognitive Dissidents

TEDx Naperville speaker page for Josh (2013 event)

Josh's 2013 TEDx Naperville talk, Swimming With Sharks

Josh on Twitter

IAmTheCavalry.org, a grassroots organization focused on issues where computer security intersects with public safety and human life

IAmTheCavalry.org's Five Star Automotive Cyber Safety Program

Hackers Reveal Nasty New Car Attacks--With Me Behind The Wheel (Video), by Andy Greenberg (Forbes, 7/24/2013)

Open Letter to the Automotive Industry by IAmTheCavalry.org

OpenSSL home page

Heartbleed Bug home page

Wikipedia on Heartbleed Bug

HD Moore's home page

Wikipedia on HD Moore

Metasploit Project home page

Wikipedia on the Metasploit Project

Change.org petition to the Auto Industry (aka http://bit.ly/5starauto/)



Mike Johnston: Welcome to another episode of Robot Overlordz, episode number 99. On the show we take a look at how society is changing, everything from pop culture reviews to political commentary, technology trends to social norms, all in under 30 minutes, every Tuesday and Thursday. I'm Mike Johnston.

Matt Bolton: And I'm Matt Bolton.

MJ: And joining us on tonight's episode to talk about security and the exposure of our increasingly connected lifestyle is Josh Corman, a security strategist and philosopher, and co-founder of Rugged Software and IAmTheCavalry.org. Josh, thanks for being here.

Josh Corman: Thank you.

MJ: I was actually first introduced to your work at TEDx Naperville 2013 and your talk 'Swimming with Sharks'. So I guess to start, for people that aren't familiar with you and your work, can you tell us a little bit about your background and about I Am The Cavalry?

JC: Sure, I'm one of the founders of this I Am The Cavalry movement. It was started one year ago at DEF CON 21, but I've been in the cyber security arena for about 13 years professionally, and a hobbyist before that. I came to it from a philosopher background, not from a hacker background, and what we noticed over time is that our dependence on technology is growing much faster than our ability to secure it. While we were trying to, and failing to, protect credit cards and websites, software is now in your cars, your medical devices, your homes, and in your public infrastructure. And when we looked for the adults in Washington, or in industry, that were going to fix it for us, we realized the cavalry isn't coming. As scary as this is, that means the cavalry is us. So this is the notion that we should ready ourselves to be a voice of reason and technical literacy, and do education and outreach to the public, to public policy makers, the media, and to our neighbors, so that we make intelligent choices about where and how we depend on technology.

MJ: That's certainly an issue we're concerned with; obviously the government is pretty big, and it seems like they have a harder and harder time keeping up with technology and how fast it's changing. I guess, what do you think people in the general public miss about the environment that we all live in nowadays?

JC: Well, I don't know if this has been evergreen since the dawn of fire, but in general my observation is that it's human nature to adopt technology for its obvious, immediate benefits. But like everything, there's always a cost-benefit equation, and we seldom consider, or estimate, or factor in the downstream consequences or the costs of those benefits. And it's not new; technically there's nothing new under the sun. But we have such a tarball of dependencies, and the frequency and impact of those dependencies is so pronounced and accelerated, that I think it's really difficult to connect cause and effect anymore. We used to bend technologies to our will and intent; now I think we're bending the way humans interact with technology in reaction to it. I'm not anti-tech, I just think that sometimes we're fools, damn fools, and I'm trying to bring a little bit of sanity: maybe you shouldn't put Bluetooth on your insulin pump so that someone can easily and remotely kill you, when even accidental packets could kill you.

MB: You know I just watched your TED video in preparation for this and read your letter and stuff, and when Mike brought up the government, they're obviously not going to help because if you've ever watched our Senators and Congressmen, they know less about the internet than probably my parents do. So your help's never going to come from that direction.

JC: You know, I'm finding that's not quite as bad as I anticipated. Many of the folks that were drawn to this Cavalry initiative have made significant career changes in the last 12 months to better make our day jobs support this passion, and one of the ones I did is I became CTO of a software technology company, and I've been working on Capitol Hill since January. I've had over 70 specific meetings, some with a couple of people, some with several, and I'm finding very young, very savvy staffers. And while some of their bosses may think the internet is, you know, what is it, a series of tubes? I actually brought a 20-something staffer from Senator Rockefeller's office, in a very senior position (he just left the job to go back to grad school), to DEF CON last week, and he fit right in. After the big Target breach he wrote a kill chain analysis of it that, if you didn't know better, could have been written by anybody in the security industry. It was pretty good. So while it's true that they're never going to be experts on this, I think it's also true that they'll never be experts on fracking or stem cell research or any other highly technical thing, and instead of being upset at their lack of knowledge, I figured it falls to us to raise their knowledge in an independent, noncommercial way.

MJ: You touched on it already a little bit, but what do you find, I guess, in working with the government? You've mentioned that a lot of the staffers are young. Do you think that's something that the rest of us, those of us who see the government on C-SPAN or something like that, are just not seeing?

JC: Well, I'm not Pollyanna-ish either; this is going to be hard, and it's going to take a lot of time. But I'm noticing that when you bring data and a strong argument and independent verification of that argument, they know how to take in expertise. So if we talk about scare tactics all the time, they're not going to listen. If we talk about things that only promote our company's products, they're not going to listen. And if we use a lot of jargon, or if we get mad at them every time they use the word cyber, they're not going to listen. That's why I really think we've had to become ambassadors, right? If you're going to go work with another country or another species or another culture, you have to normalize and meet them at their level, use the words they use, and bridge that knowledge gap.

MB: Maybe we are doing this too, but it seems more prevalent in places like China and especially North Korea, where they're actually teaching hacking in schools. I think that's a good idea, just because you're getting people in who can find these vulnerabilities and actually make the security better, because they understand how to get around it. Does that make sense? You know, from my perspective I think we ought to be teaching hacking in college, and it doesn't seem like we are.

JC: Well, I think we need to, yeah, in general I agree. The word hacking is a pejorative, right? Most people think that hacking equals criminal. But it's really just like any technology: a hammer can build a home or crack a skull. It really depends on who's using it and in which context. I think one of the efforts and challenges here is not to rehabilitate the word hacker, because that's very much about us; what I'm doing instead is giving counter-evidence. I guess I should back up a half step. When we first launched the Cavalry we had three planks: body, mind, and soul. Body was public safety and human life, and that's what we've ultimately narrowed to. Mind was about the increased criminalization of research, things like the Computer Fraud and Abuse Act being used against researchers. It was written in 1984 and only slightly tweaked in '86, and it's still being used even though the internet has changed dramatically, so it's a really dangerous thing that whenever something bad happens, they go after the people with this capability and talent. And the last one, soul, was about the mashup of civil liberties and technology, you know, post-Snowden and whatnot. We really narrowed to the first one, in part because when some of my colleagues went to Capitol Hill trying to say that they should reform the Computer Fraud and Abuse Act and that security research is our free speech, they were basically told, you sound like a bunch of whiny brats, and it's just never going to be a priority for them. Whereas if I come in there, or if the Cavalry is promoting, here are some critically necessary services to the public good, where you have capable white hat hackers donating their time to help the automotive industry and the medical device industry focus on patient care and public safety for your constituents, they're leaning on the edge of their seats saying, how can we help? Or, what can we do?
Part of the answer is making connections and causing hearings and whatnot, but another part of the answer is, well, the increased fear over the Computer Fraud and Abuse Act has a chilling effect and may deprive us of this very necessary, critical research at a time when we most need it. So it's a very different style, but it's getting the same thing accomplished, which is: let's protect positive uses of hacking. So in general, to your original question, I think the more people who understand how software is almost infinitely vulnerable and how trivial it is to inflict harm, the less likely we are to put ourselves into those risks in the first place, by realizing, wow, if I add Bluetooth and wireless to this car, I've basically made it as vulnerable as my home PC. The difference is, at home I might get spam and it's a nuisance, but in my car it could shut off my brakes, turn the steering wheel, or hurt me and my family.

MJ: In your TEDx talk you mentioned a couple of the vulnerabilities in cars, and you actually just did in that last point. I guess for people that don't know, what kinds of vulnerabilities currently exist, or have been discovered, in cars with some of these technologies that are already out in the wild?

JC: Well, there are dozens of documented ones on IAmTheCavalry.org. We have a /auto page with a timeline of many, but not all, of the public car hacks. There are far more than that that are not public, that we've discovered and analyzed. But essentially, now that software permeates your car, most of the things in your car are controlled by software, and since they all share the same circuitry, there's very little security between parts of the computers. Actually, I lie a little bit when I say it's a computer on wheels; the truth is it's a network on wheels. There are several computer systems talking to other ones. So the demonstrated things on the Forbes.com video from last year: they could disable the brakes. They could tug on the seat belt. They could jerk the steering wheel. Many of these cars now have parking assist or collision avoidance, and as such they can turn the steering column with a motor stronger than you are, so they actually ripped the steering column out of the hands of the journalist that was driving. It's pretty terrifying stuff. You think that when you hit your pedal you're actually applying pressure to the brakes, but it's all electronic, so they were able to shut that off. They can accelerate the car. Pretty much anything you can do with a button in your car, you can do once you've compromised it. And it's one thing to have that amount of control in the software assuming there are no bugs and assuming no one would want to hurt you. It's another now that we're adding app stores to the on-board entertainment system. So, would you like to download a compromised version of Facebook which can then disable your brakes?
So I'm not saying there are a lot of people out there; the kind of people who would want to steal your credit card are not the kind of people who would likely want to hurt you. But there are different adversaries with different motivations, and we're giving them very easy access to assert their will on you, whether in a targeted way, which is unlikely, or in generic ways, like when people make a bomb out of pressure cookers at the Boston Marathon, or wear an underwear bomb on a plane. There are lots of people who want to hurt you, and we're increasingly giving them ways to do so. But the adversary I'm most concerned about in these cases is the adversary known as accidents. Just normal bugs and glitches. You know, there's a certain amount of testing that goes into these, but right now you don't even have to hack most medical devices. You just need to port scan them and they'll usually fall down.

MJ: It seems like this is only going to get worse. One of our favorite topics to talk about, and I think this is what led to us connecting with you actually, is the possibility of self-driving cars. As a technology that we as general consumers would like to see, the idea of not having to pay attention to the road sounds great, but when you really start thinking about some of the vulnerabilities, it starts to also look like it would be pretty nightmarish. I guess, are you guys looking to apply some of the recommendations with that kind of thing in mind and get out ahead of it?

JC: Yeah. So I should acknowledge that the newest car I just bought has unbelievable safety features that technology has supported and made possible. So we are not anti-technology; that would be ignorant. What we are trying to do is make sure that the cost-benefit equation has full information. The open letter we wrote to the car industry a few weeks ago was mostly aimed at the discussions and the features currently under development or in the field. But as we get to vehicle-to-vehicle protocols, where the cars are talking to each other and doing collision avoidance, and as you get to self-driving cars, the system of cars is also vulnerable and also prone. So for example, through the work I do in my day job, I am watching the kinds of organizations that are deploying open source like OpenSSL and other open network stacks, and I know that many of the car makers are using them in cars, and they're using old and vulnerable and exploitable versions of these remote protocols. And it's not that they're wildly irresponsible; they just haven't really designed in anticipation of a talented adversary. You know, if you understood that someone might try to hurt you, some of their choices would not have happened, or would have happened in much more secure ways. There's a remote kill bit, for example, in a lot of vehicles, and it's coming to most vehicles, being pressured out of South America and Europe, and it will come to the US next. This is the ability to disable a car over the air if it's been stolen, or if you suspect it's been stolen. You could easily imagine that you could draw geo-fencing around trucking lanes, and if a truck leaves the geo-fenced area, you could kill it. The problem is, if this is being done remotely, imagine shutting down the emergency response vehicles for an entire city. Imagine shutting down food shipments. It takes about 24-48 hours before there's rioting and looting in any densely populated area. So you're taking something that wasn't hackable.
You're making it hackable and if that was a conscious risk-reward choice, that's fine. If it was elective risk and avoidable risk and our silence enabled that then that's not fine and that's really why we're trying to offer a helping hand. Our belief is simply that they are masters of their domain in auto, we are masters of our domain and now that our two domains have collided it's going to take some sort of positive collaborative long term effort to make sure that our best and their best are factored for future choices.

MB: The chances of Anonymous or somebody hacking my personal car are obviously about slim to none. My biggest fear, just from listening to you, is the fact that most hackers will share and make it easier for other people to do stuff. So then you've got people who can get on the internet and download some sort of widget or whatever, and now you've got people with just a limited amount of knowledge who can do all of these things.

JC: Yeah, you know, without using too much jargon, most people that listen to your show probably know the term Moore's Law, that computing power doubles every 18-24 months, whether it's disk space or CPU or memory; in fact, I was just listening to one of your previous episodes that brought it up. One of the things that I coined a couple of years back was HD Moore's Law. HD Moore, if you're in my industry you know who he is, was the inventor of the Metasploit Project, M-e-t-a-s-p-l-o-i-t. It's the second largest Ruby project on Earth, but it's also an attack tool, and it's used by white hats to assess their vulnerability against common, known exploits. Well, the thing is, what I've asserted is that while computing power doubles every 18-24 months, the strength of an unskilled adversary, or a script kiddie, or a casual hacker with no computer skill, grows at the rate of the Metasploit Project, because every day new payloads, new evasions, new exploits are added to the tool, and that makes an unskilled adversary much, much stronger. It's actually a wonderful tool for defenders, but if the defenders aren't availing themselves of it, it's only an advantage to adversaries. And the way to dampen that advantage is to make sure that we are always ahead of it. What I'm basically saying is, if you're not tall enough for HD Moore's Law, you're not tall enough to ride the internet. I think a simple sniff test is: if these medical devices or cars can't handle Metasploit, then they have no business putting remote connectivity on these things. So to your point, I think one of the reasons people aren't scared yet is they assume you have to be a super-elite, world class ninja hacker to do these things, but you don't. Once someone makes the proof of concept, it's incredibly easy to make a pointy, clicky weaponized version of it.

MB: That's what I was afraid of: getting it into the hands of common people. And there are enough lunatics out there, not even just in our country; the fact that somebody in, you know, China or North Korea or wherever could basically stall my car out in the middle of the expressway, or make the wheel jerk to the left, is obviously very frightening.

JC: You know, to give you some good news: we have this term FUD, meaning fear, uncertainty, and doubt, and typically snake oil salesmen use FUD to separate fools from their money. And while what I'm describing is scary, I have no intention or interest in scaring people. Fearful decisions are seldom good ones. What I tried to do in that TEDx talk was hijack FUD. I said, let's just have facts. I think the facts can be evaluated soberly; as long as we're sharing the facts about what someone can do to modern cars, then maybe we'll make better response choices. The U is urgency. And I've actually improved the D: I'm not calling it doubt, I'm calling it diligence, right? So as long as we're seeking facts, we have an urgency in the pursuit of those, and we're diligent in making the right risk-reward trade-offs, I don't need fear, and I don't want fear. But the challenge is, in most of these industries we wait until something really, really bad happens. A quick example, very quick because it's not polarizing: in Ohio there's a river called the Cuyahoga River, and around the turn of the century, 100-plus years ago, it caught on fire, and stayed on fire, twice. That was the wake-up call that maybe we were polluting too much, but it really took a burning river before that could happen, and the response to it was a lot of what you now see with environmental laws and clean-up costs and the EPA. And you know, some people were telling me, you just gotta wait for a lot of dead bodies, Josh. I'm not going to wait, and the reason I'm not going to wait is that pretty much every time we've waited for a catastrophe, the socio-political and economic response has been very unpleasant, with long-lasting, unintended consequences. Think of the last few tragedies, and think of the long-lasting ripple effects that have come from the over-reaction to them.
So partly, when that cyber Cuyahoga moment comes, I want us to be more educated and have a thoughtful and planful response, so it's less hyperbolic. And the other reason is, when you think about it, it takes three to five years to do research and development on a car technology, and another three and a half to five years to get it to market, even longer in industrial control systems. If we aren't having the discussions and improving the decisions now, when something bad does happen, you're going to have a several-year reaction time before you can start making enhancements. And furthermore, at least in cars and medical devices, there's no evidence capture on board in these systems, so if you're waiting for evidence of hacking before we do something, you may never actually have evidence, because there's no evidence capture. That's why one of the five stars is to have tamper-evident evidence capture. So you know, I'm not trying to scare people. I'm trying to say, if you put technology like a computer in these systems, they inherit all the system's vulnerabilities and potentially some new adversaries, and it's time you start putting in the scaffolding and the capabilities to be able to respond to those failures.

MJ: So Josh for your average person, say our parents, or you know, some of the people that we know, friends that we went to high school with, or something like that, if they were to hear what you, you know, what we've described so far, what kinds of things can they do to kind of get involved and kind of raise the bar somewhat? I mean that seems to be somewhat the mission of I am the Cavalry, to raise that bar. You know, what can your average person do to kind of get involved in this?

JC: Well, I don't think the average person should become an expert on all the ways cars can get hacked. I doubt the three of us could actually enumerate what the five stars are in a five-star crash rating system, but we know that we want a five-star car more than we want a three-star car. So I wanted to give some sort of economic indicator, and that's what the five star auto cyber safety rating system we've offered forth is. It's not saying that your cars can't be hacked; it's saying, look, there are five capabilities every car should have, and we want to show which ones have them and which ones don't. And really, they're all about failures. The first one is: tell us how you're taking steps to prevent security failures. The second one is: because you're going to miss things, show us that you have a coordinated disclosure policy, inviting the help of third party researchers, without suing them, to help you find failures. The third one is: since failures are going to happen, can we capture and learn from our failures with some sort of black box, like we have in airplanes? The fourth one is: do we have a secure update mechanism to respond quickly to a failure, so that we can patch systems like you patch your home computer? And the last one is: can we isolate and contain failures? That's the idea of separating critical and non-critical systems. Let me put it really simply for your mother-in-law: a hack of the radio should never have the capability to shut off your brakes.
And we're asking these car makers to attest to those five things: we're looking for flaws, we're asking for help looking for flaws, we're learning from flaws, we're responsive to flaws, and we're isolating flaws. If people start asking for that, or demanding that, or sign our petition, we use that to go back to the auto industry and say, we're not going to tell you how to do these, but you need the ability to do these things, and if you want help designing and architecting them to be better than they otherwise would have been, we're going to give you free help doing so. So part of this is just catalyzing the conversation, realizing that if you can get spam and viruses on your home PC, you can also get them on your car now, and if they keep adding risky features, the danger level keeps increasing.

MJ: Okay.

MB: Do you think, like the auto industry and some of these drug makers, the things you were talking about with the insulin pumps and stuff, are they treating this more reactively, or are they actually going after these things ahead of time? You know what I'm saying?

JC: Yeah, I think mileage varies. We wrote this five star framework initially as an open letter to the car industry, but if you squint, I put great effort into making it fit medical devices as well, and critical infrastructure, Bluetooth door locks, or home alarm systems at your house, so these five capabilities are evergreen on purpose. I think what we found is there are some really excellent initiatives on some of those fronts at many manufacturers, but no one was really good at all of them. Tesla, for example, already has a coordinated disclosure policy. They basically say, not only will we not sue you if you find flaws in our software, we'll even give you recognition and reward. So that's an example of really good behavior that we want to see copied. Other people, though, are keeping their cards close to the vest; they're doing really, really primitive and ineffective mechanisms that they think are going to work, but we already know from experience that they never work. So I would say it's lumpy. Some really bright spots in a lot of places; other places are pretty far behind. And that's why we felt that if we left it alone and just talked about it quietly, we wouldn't catalyze the right corrective efforts soon enough, if that answers your question.

MB: Yeah, definitely.

JC: But there are some really smart people in these industries. Often they knew what they needed to do. In fact, one of the things we were accused of is, why didn't you talk to some car makers first? We did. For nine months. Many of them are doing great things, or at least they want to do great things, but they never got executive buy-in or sign-off. And in a lot of these cases they were begging for something like this, because they know that once it's a public discussion, it may be just the spark they needed to say, okay, let's finally fund that project I've been asking about for four years. This is approached with humility. We've been measuring this quietly and collaborating quietly, and we're going to continue to be discreet, but at some point issues of public safety require public dialog, and we have now at least put that forward on the automotive front, and we intend to do the same on the other three projects.

MJ: Fantastic.

MB: Yeah, outstanding. 

JC: I would love it if your listeners would go to the Change.org petition. You can get an overview of the five things. We even made a bit.ly link if you want to put it in your show notes: it's the number 5 and then the words "star auto", all lower case, so, bit.ly/5starauto. It basically says that now that cars are computers on wheels, we want them to maintain the safety standards we've become accustomed to. But thank you for having me on. I love the fact that you guys are concerned about technology and its impact on society, because it's increasingly a necessary conversation.

MJ: Thanks a lot for joining us.

MB: Yeah, thank you very much. That was, I actually like I said, I learned a lot. So.

JC: Well thank you.

MB: Yeah, I think our listeners will too.

JC: And remember, when we say I am the Cavalry, it's meant to be something that you say. So, signing off: Josh Corman, I am the Cavalry, and so are you.

MJ: That's all for this episode of Robot Overlordz. You can find our show notes, links about tonight's topic, and old episodes online at RobotOverlordz.com. If you've got any feedback for us, you can email us.

MJ: Thanks everyone for listening.

MB: Thanks.


Image Credit: "Traffic Congestion Brasilia" by Mario Roberto Duran Ortiz Mariordo - Own work. Licensed under CC BY 3.0 via Wikimedia Commons.