The guy with rudimentary tools who hyped things

 

I've just released new research describing in detail the reverse engineering methodology and vulnerabilities found in a DAL-A, safety-critical, certified avionics component: Collins' Pro Line Fusion - AFD-3700, a LynxOS-178 based system deployed in both commercial and military aircraft. At the time of writing I don't know exactly what will happen after the disclosure. However, this time, I certainly know what will not happen.

I understand this statement sounds a little cryptic, so keep reading to understand the context: where this situation comes from and why this point has been reached.

Right, the title is probably better suited to a cheap sequel to Stieg Larsson's "Millennium" trilogy than to the usual technical content I publish over here, so I would kindly ask the fans of that saga to forgive the liberty of giving myself that license. You'll understand the title afterwards.

This post contains traces of a 'plot' spanning several years now. As a compulsive fiction reader I didn't want to miss the opportunity to follow a dramatic structure, thus having a little bit of fun out of a situation that, for me, has been anything but fun. That said, I've learnt a lot along the way, which is probably the only thing that paid off.

In this story there are no evil or good characters; I guess it's just people doing their jobs the best they can. Obviously there has to be some kind of conflict, which emerges from the fact that the nature of their jobs, although theoretically pursuing the same objectives, usually makes them clash. There is also an escalation of the action over the years, some plot twists included, until a high-tension moment is reached that determines how the conflict will be resolved. The resolution is yet to be written...

As one would expect, I'll tell this story from my perspective; others may have a different one. Let's start.

Index

1. 2018
2. 2019
3. 2020
4. 2021
5. 2022

2018.

During a flight to Copenhagen, aboard a Norwegian Boeing 737, I noticed something weird in the in-flight WiFi, which was provided by a satellite network. Once at the hotel I found out it was possible to reach, over the internet through a misconfigured SATCOM infrastructure, tens of in-flight aircraft from different airlines. We coordinated the disclosure with the affected companies; they acknowledged the issues and addressed the situation.

Later on, that finding, along with others, ended up being part of the 'Last Call For SATCOM Security' research, which was presented at BlackHat USA 2018. Before elaborating on what happened around that presentation, it is worth mentioning that this research was a follow-up to 'A Wake-Up Call for SATCOM Security', presented at BlackHat USA 2014. Coincidentally, for that occasion, I elaborated some threat scenarios impacting military SATCOM terminals, derived from the vulnerabilities identified during that research.

Nowadays, the highlighted points in the previous image will surely ring a bell for you, due to their similarities with the Viasat incident. However, at the time, some of the private and public feedback I received, which was covered by this story at Reuters, pointed out that I was hyping the whole issue and that the risks were minor, if any at all. Despite this, I moved forward and defended my research, always with the support of IOActive, which has been a constant during all these years.

Eight years later, in an 'unexpected' plot twist, others actually succeeded in 'replicating' similar attacks in the real world...

But let's get back to 2018.

After reversing the firmware I figured out it was possible to compromise the ARINC 791-compliant SATCOM deployment onboard, including the Antenna Control Unit. So my initial assumption was that maybe there were some potential safety risks derived from turning this kind of SATCOM deployment into an intentional radiator. I read tons of documents and regulations from the aviation industry about HIRF, I did the maths, etc... all to reach the conclusion that this attack vector was clearly a dead end for the aviation industry, as they obviously did a phenomenal job addressing HIRF issues.

So I explicitly praised the aviation industry ('the industry' from now on) for this: in the paper, in the slides, to journalists...

        

I explicitly clarified there was no safety risk at all for the industry: in the paper, in the slides, to journalists...

The research materials had obviously been reviewed by A-ISAC members, who gave the thumbs-up. A-ISAC representatives were even invited, at our request, to participate in a press conference BlackHat organized before the talk. Everything seemed to be fine.

Well, roughly one hour before giving my talk I was summoned to an urgent meeting with A-ISAC members, where they required me to modify the slides here and there. At a certain point the situation was no longer sustainable, so I closed my laptop and just left. It's not the kind of experience you want to go through right before presenting, especially if you're not good at it, as in my case. However, that incident didn't change my approach even a bit: I gave the talk as previously agreed. I literally started the talk by going straight to the conclusions to avoid misunderstandings, just to point out, once again, that there was no safety risk for the aviation sector.


Right after the talk, A-ISAC, speaking on behalf of the industry, published the following press release. You could easily picture my face as I was reading through it.

A month later, I was also held up as a 'success case' for their strategy ("[...] for example in a recent engagement with a threat researcher who sensationalized the claim of being able to hack a plane") in a series of US congressional hearings on the cybersecurity threats to the industry.


Fool me once, shame on you; fool me twice, shame on me.

2019.

In 2008, the Boeing 787 was subject to public scrutiny from different sectors because of its novel design, as the FAA stated:

Eleven years later, I managed to find part of the firmware for the Boeing 787 exposed to the internet on a publicly available server.

The same disclosure situation started all over again; however, this time we were required to sign an NDA, with very specific conditions, to engage in technical discussions with Boeing. I was genuinely interested in moving forward with those conversations, so the NDA was signed: wrong decision. After signing, there were no technical discussions at all. That's all I can say.

Before presenting the research 'Arm IDA and Cross Check: Reversing the 787's Core Network', I knew I would be facing a scenario similar to what happened in 2018. So, a few days before going to Vegas I published 'Shaping the message and killing the messenger', which also addressed A-ISAC's press release from 2018.

Unfortunately, at that point I already knew that I had been fooled again, so my freedom of action was severely diminished due to that very specific NDA.

Having found vulnerabilities in components that enabled the novel 787 design, in addition to evidence showing that the different aircraft data domains were not physically isolated, I certainly considered it worthwhile to move forward with the research, acknowledging its limitations and asking for third-party verification, since we couldn't perform it ourselves, as mentioned in the paper.


Boeing's response in the media was literally: 

"IOActive reviewed only one part of the 787 network using rudimentary tools, and had no access to the larger system or working environments. IOActive chose to ignore our verified results and limitations in its research, and instead made provocative statements as if they had access to and analyzed the working system. While we appreciate responsible engagement from independent cybersecurity researchers, we’re disappointed in IOActive’s irresponsible presentation."

I have to admit that the part about using rudimentary tools just got me. Don't get me wrong, I understand Boeing's position: it's not very convenient for a company to openly talk about its IP. Also, if you don't reveal what those mitigations are, how could anyone even dare to question them?

However, I also expect others to understand my position. As a security researcher, I must confess that I usually ignore claims from companies that are not backed by technical details. I know this may initially sound arrogant, but truly it's not: it's just my damn job. That's what I do for a living; otherwise my reports would basically read like this:

"The website of the product in scope claims its security is unbeatable, also implementing military-grade encryption. 

As a result, I think everything is fine. Nothing to see here, let's move on."

In general terms, a positive outcome from all this mess was that Boeing leveraged this scenario to change its approach toward security researchers (in case you were wondering, I never heard back from them), by the way using a pretty cool name, to be honest: "Operation Reverse Thrust". They got involved in Defcon's Aerospace Village and probably other initiatives I'm not aware of.


There was no positive outcome for me, though, rather the opposite: in certain circles the research was accused of being hyped, etc. Taking into account the limitations, I tried my best and provided as many technical details as possible, but I probably didn't communicate things properly. I'm not whining about that though; as I mentioned, security people challenge claims for a living, whether from companies or from other researchers. It's a healthy approach, and that was exactly the conclusion of my 'Shaping the message and killing the messenger' post.

Anyway, if you accuse someone of hyping something, you should be able to clearly explain why. However, nobody stepped forward and said, "hey Ruben, look, this is wrong because in the 787 you have this, also this, and don't forget that". I didn't want to be right; I just wanted to discuss why I was wrong, if that was the case. My priority is always to provide technically accurate information.

Also, it should be noted that I'm only responsible for the documents released under my authorship; I obviously can't control either headlines or what others interpret from my published materials.

In 2019 I couldn't fully exercise that defense due to the NDA in place, but at least that experience helped me make sure I would never fall for the same trick next time; because if you work in security, although some people forget it, there is always a next time.

2020.

I'd say this entire study is relevant, especially from the current perspective. 




This explains many things.

"FAA officials told us that inspectors do not review system schematics to look for potential cybersecurity issues but, instead, rely on the applicant to explain the systems, identify any cybersecurity issues, explain how the issues are addressed or mitigated to meet requirements, and explain the test results that confirm the mitigating controls have been implemented correctly."


2021. 

In 2019 a simple Google search ("index of" "boeing") allowed me to find the non-certified part of the 787 firmware. In 2021 another rudimentary (sorry, not sorry) query, involving common ARINC 665-3 file names, led me to discover the certified AFD-3700 Runtime software on the publicly available Rockwell Collins support portal. Yes, as anyone could guess, I was periodically scanning the internet using these silly tricks to see if there was something interesting to look at: a poor man's approach to aviation research with a surprisingly high rate of success.
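
Just to give an idea of how 'rudimentary' this tooling really is, here is a minimal sketch of the approach: nothing more than assembling dork-style queries and checking the results from time to time. The ARINC 665-style list file names below are illustrative placeholders on my part, not the exact terms behind the Collins discovery; the only query quoted verbatim is the 2019 Boeing one.

# Minimal sketch of the "poor man's" scanning approach: build dork-style
# queries and print them so they can be run by hand (or fed to a search API).
# The ARINC 665-style list file names are illustrative, not the actual terms used.

TARGET_TERMS = [
    '"index of" "boeing"',       # the 2019 query quoted above
    '"index of" "FILES.LUM"',    # ARINC 665 media set list file (illustrative)
    '"index of" "LOADS.LUM"',    # ARINC 665 load list file (illustrative)
]

NOISE_FILTERS = ("-site:github.com",)  # trim obvious false positives

def build_queries(terms=TARGET_TERMS, filters=NOISE_FILTERS):
    """Combine each target term with the negative filters into a single query string."""
    return [" ".join((term, *filters)) for term in terms]

if __name__ == "__main__":
    for query in build_queries():
        print(query)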

When I realized what I had found, it was a mix of feelings:

  • I was excited because I would eventually be able to analyze DAL-A certified avionics. Finally, I would see with my own eyes everything I was told about certified avionics during my previous research. 
  • I was curious because that system was based on LynxOS-178, a DAL-A avionics RTOS for which there wasn't any publicly available security research.
  • I wanted to kill myself at the prospect of yet another aviation disclosure on the horizon.

Then I raised my hands to the sky and, with a primal scream, asked the avionics gods for clemency before launching IDA Pro, a peasant's rudimentary tool from the days of yore, made of stone and wood, which can be used to break things... Well, it didn't exactly happen that way, but I think it could serve as inspiration for some PR teams.

Jokes aside, what really happened is that I spent many weeks reversing and reading everything I could find in order to figure out how things worked.

I didn't want to focus this new research on the data loading part, so my priority was to find vulnerabilities that could be exploited during any phase of flight. The reason is that data loading operations require the aircraft to be on the ground, and certain devices may be disabled at that time. According to what I read in some articles, the industry saw this scenario as one of those adduced, but never elaborated, mitigations for the Boeing 787 research.

Speaking of those ground operations and their authentication mechanisms, I think it is fair to just leave here the following announcements, derived from something I found which was later responsibly disclosed to Honeywell. I'm sure readers in the industry will be familiar with the applications/service. No further details will be provided, because it's way better to live in ignorance.

                                                                            ...



Back to the AFD-3700 research: we started to coordinate with Collins in March. For obvious reasons, I had no expectations at all about engaging in technical discussions.

Obviously I have nothing personal against the Collins representatives involved in the disclosure, who have always been nice during the conversations. However, I soon realized there wouldn't be anything different this time either. As expected, there were no technical discussions of any kind. They also actively maneuvered to prevent us from testing the identified issues in a controlled environment.

2022. 


The scenario we are in today has changed significantly since 2018 due to many factors, including, obviously, a war in Europe. The perception of the risks nowadays is clearly different.



As this new scenario for the industry unfolds, I find it interesting to observe the similarities with my experience (which is likely pretty much the same for all researchers) in the post-Stuxnet ICS world.

In 2011 Dale Peterson asked me to participate in S4's Project Basecamp, along with other security researchers. It was an ICS security initiative intended to assess the security posture of popular PLCs in the industrial sector. So he shipped me an 'AB ControlLogix' with an EtherNet/IP module, and to be honest, that was the first time I had physical access to an actual PLC. Back then I was an independent security researcher with limited resources, so my research into the ICS world had been limited to the analysis of firmware images I could find on the internet. However, the approach to breaking that PLC was essentially the same, although easier thanks to the ability to perform live testing (if you're curious, this is the report, and the results).

Eleven years later, today actually, at that same S4 venue, researchers from Dragos and Mandiant will be disclosing further details on real-world malware able to attack widely deployed PLCs and safety controllers.

In 2012 I joined IOActive, which allowed me to physically work on all kinds of ICS environments and industrial facilities: from some of the largest vessels in the world to substations and warehouses. Almost every single time, regardless of whether you had physical access to a device or not, a significant part of the work required spending many hours with IDA, looking at the firmware running in all those industrial devices, as well as related software. Eventually, as expected, the issues identified during those reversing sessions were also commonly confirmed in real-world, controlled scenarios.

Obviously, we all know that an aircraft is a very specific environment, but it is not made of magic.  

Nation-state actors do not usually depend on other security researchers publishing research to accomplish their objectives; they have their own teams and enough resources. The whole point of independent security research, even when it is really limited, is that it offers a valuable way to anticipate threats and identify risks for everyone.

As I mention in this new research's paper:

"In general terms, the threats against safety-critical assets should be evaluated from the perspective that an adversary’s capabilities remain consistent, but their intentions may change overnight."

Anyway, from a personal standpoint, I'm done with this situation. After this research I have no plans to continue researching aviation security. If that's a relief for some people, well... I'm happy for you.
  

Paper

For obvious reasons, I engineered the paper assuming a complex, adversarial disclosure scenario.


This basically means that:

1. It can also be used as a reference in case the scenario described in the 'Personal statement' below materializes.

2. It was carefully structured, using techniques borrowed from other research disciplines, such as 'iterative questioning', and endowing the narrative itself with formal logic structures. By doing so, it will later be possible to demonstrate:
  • Inconsistent rebuttals
  • Vague/falsely disputed statements

Personal statement.

First of all, I would like to publicly praise, and thank, IOActive for its continuous support in every aspect of this research, both professionally and legally. That said, I want to clarify that this is a personal statement, made as an independent security researcher.

After my past experiences with the disclosure of aviation security research, I've already assessed and assumed the potential consequences derived from the current one, from both professional and legal perspectives. 

Backed by a digital rights lawyer, a former deputy in one of Europe's national parliaments, and in view of the efforts that have been made to achieve a coordinated, technically accurate, verifiable, and productive disclosure process, I would like to state the following:

  • Once this research has been published, there might be a scenario where some of the involved entities decide to publicly make unsubstantiated, misleading, or non-technical allegations in order to discredit either this research or the researcher's approach.
  • If that situation eventually occurs, the research materials required to let technically competent individuals or organizations reproduce, verify (if that is the case), or even extend this research will be made available by following a procedure compliant with European intellectual property laws. This procedure will be implemented so as to balance the right of access to information, internationally protected by freedom of the press.

In the absence of technically grounded conversations, which should be the proper way to resolve a technical dispute, neither professional reprisals nor legal actions will make me desist from defending both this research's integrity and my own as a security researcher.