
Understanding a chip-to-cloud 'eID' solution to find logic vulnerabilities


A relatively common approach to designing cost-effective, user-friendly chip-to-cloud solutions is to leverage the communication capabilities of the user's mobile phone. Instead of endowing the device with all the electronics and software required to autonomously transmit and receive data from the internet, the product uses a short-range communication stack such as Bluetooth or NFC (something any modern mobile phone supports by default), and an App on the phone creates a communication channel with the backend, thus acting as a bridge between both worlds.




For instance, we can find this architecture in solutions for handling rental cars (virtual keys), electronic identity, authentication, and all kinds of IoT devices such as Electronic BagTags.

In this post I'm covering the analysis of an eID solution, let's call it 'Honest eID', that implements this paradigm. I'm deliberately anonymizing/omitting certain technical details, as the analysis was part of a disastrous, unresponsive and poorly managed private bug bounty program.

The main idea is to focus on the reusable vulnerable patterns that may be useful for developers and researchers who have to deal with this kind of solution.

Introduction to 'Honest eID'


The approach to analyzing this solution was purely black box. I targeted the Android flavor, using Frida, Burp and JADX as the main tools, since the targets in scope were the App and the endpoints. Basically, I reverse engineered the App statically as well as its communication flows to understand how the solution was implemented, so the vulnerabilities I found are mainly logic flaws.

I came up with the following diagram to illustrate the key elements of the solution.


The main goal behind this architecture is to be able to use the eID card without requiring a physical ISO 7816-4 compliant terminal, which is instead logically implemented on the Backend. As a result, the APDUs are encapsulated and transmitted over an E2EE Secure Channel, which is established between the App and the Backend. In addition to this Secure Channel, the App maintains a regular HTTP communication channel over TLS.

There are different security boundaries implemented in the solution so I focused on finding ways to bypass them. The following attack scenarios are the most realistic ones I considered.

- MITM

The solution tries to prevent this common attack scenario by implementing an E2EE Secure Channel.

- 3rd Party App

Usually, in this kind of solution you do not want a 3rd party application, other than the one provided by the original identity provider, interacting with your Backend. To achieve this, the App relies on the Key Attestation functionality provided by Android >= 8.0 (also available on iOS), which is backed by the TEE.



The following diagram shows how the Secure Channel is established. We can distinguish the two main stages where the most interesting vulnerabilities were found: the handshake, and the flows between the App and the Backend endpoints once the E2EE Secure Channel has been established.



Vulnerabilities (from low to high impact)

1. Inconsistent signature verification logic between Backend and App during the handshake

During the "Secure Channel" handshake, the Backend and the 'Honest eID' App perform an exchange of cryptographic materials required to complete the ECDH key agreement protocol.

However, there is an inconsistency in the logic used to verify the signatures over those values: neither the Backend nor the App properly validates their length. The length of these cryptographic materials is well defined, so there is no reason to allow that flexibility.

During the handshake, the Backend and the App exchange their respectively generated signatures over a buffer that originally contains the following concatenated byte arrays:

- Cryptographic materials from Backend to App

[ 00000000 ] NULL (4 bytes)
[  ] keyField from Backend (91 bytes)
[  ] appChallenge (32 bytes)
[  ] BackendChallenge (32 bytes)

- Cryptographic materials from App to Backend

[  ] keyField from App (91 bytes)
[  ] keyField from Backend (91 bytes)
[  ] BackendChallenge (32 bytes)

Both the App and the Backend just concatenate these fields together and sign the resulting buffer. This also represents an issue that potentially weakens the authentication, as there is no domain separation.

This may be leveraged by a malicious actor to perform certain cryptographic attacks that highly depend on the underlying logic. Let's see an almost benign example, where the malicious actor is assumed to be able to perform a MITM during the "Secure Channel" handshake.

1. The malicious actor systematically removes the first byte of ‘appChallenge’ from the App's request.

2. When the response from the Backend is received, the MITM actor checks whether the last byte in the Backend’s ECDH public key is equal to the first byte that was removed in the 'appChallenge’. If so, it also modifies the Backend's ‘keyField’ response to reduce its size by removing the last byte.

3. The app logic will concatenate the buffers, keeping its original 'appChallenge' (32 bytes, locally generated) value. As the last byte of the Backend's 'keyField' and the first byte of the original 'appChallenge' are equal, the signature received from the server is still valid, but the app will be using the modified 'keyField' value (truncated by the MITM actor to 90 bytes), which is different from the original 'keyField' value the Backend signed.

The signature is still valid from the app's standpoint, but the 'keyField' value is different (one byte shorter). In this specific case the issue has no real impact, as the Java EC implementation will internally discard the malformed key.
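The collision in step 3 can be reproduced with a short sketch, where the field contents are random stand-ins for the real cryptographic materials:

```python
import os

# Backend -> App buffer: NULL(4) || keyField(91) || appChallenge(32) || BackendChallenge(32)
app_chal = os.urandom(32)
backend_chal = os.urandom(32)

# The MITM waits for a handshake where the last byte of the Backend's keyField
# happens to equal the first byte of the App's appChallenge (~1/256 chance per
# handshake); here we construct that collision directly.
key_field = os.urandom(90) + app_chal[:1]

# What the Backend signs after the MITM stripped the first appChallenge byte:
signed_by_backend = b"\x00" * 4 + key_field + app_chal[1:] + backend_chal

# What the App reconstructs for verification: the truncated 90-byte keyField
# forwarded by the MITM, plus its own full 32-byte appChallenge.
reconstructed_by_app = b"\x00" * 4 + key_field[:90] + app_chal + backend_chal

# Same bytes, so the Backend's signature verifies over a different keyField.
assert signed_by_backend == reconstructed_by_app
```

Because neither party enforces field lengths, the same signed byte stream admits two different field boundaries.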

However, this vulnerable pattern may enable serious cryptographic attacks in other circumstances.
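The fix is to validate the fixed lengths and introduce domain separation before signing. A minimal sketch follows; the tag string and length-prefix layout are my own illustrative choices, not the vendor's format:

```python
KEYFIELD_LEN = 91
CHALLENGE_LEN = 32

def build_signed_buffer(key_field: bytes, app_challenge: bytes,
                        backend_challenge: bytes) -> bytes:
    fields = (("keyField", key_field, KEYFIELD_LEN),
              ("appChallenge", app_challenge, CHALLENGE_LEN),
              ("backendChallenge", backend_challenge, CHALLENGE_LEN))
    # Reject anything deviating from the well-defined lengths.
    for name, buf, expected in fields:
        if len(buf) != expected:
            raise ValueError(f"{name}: expected {expected} bytes, got {len(buf)}")
    # A domain tag plus per-field length prefixes makes the byte stream
    # unambiguous: shifted field boundaries no longer produce the same buffer.
    out = b"honest-eid/handshake/v1"
    for _, buf, _ in fields:
        out += len(buf).to_bytes(2, "big") + buf
    return out
```

With length prefixes in place, the truncation trick above can no longer produce two valid interpretations of one signed buffer.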

2. Backend does not verify 'ApplicationID' fields during the SecureChannel Handshake

Any sensitive data transmitted between the 'Honest eID' app and its Backend is encrypted using an AES-GCM scheme, whose key and initial IV are derived from different materials obtained during an ECDH-based key agreement protocol.

Before deriving the key, this "Secure Channel" requires a handshake, where the cryptographic materials required to securely implement the ECDH-based key derivation phase are exchanged between the App and the Backend.

During this handshake, the Backend expects to receive an ECDSA certificate chain generated via the hardware-backed (TEE) Attestation API, bound to the Attestation challenge previously sent by the Backend. However, the Backend does not validate the certificate fields related to the 'ApplicationId', so it cannot guarantee that it is actually the legitimate 'Honest eID' App that wishes to establish the "Secure Channel".

The remaining cryptographic steps required to derive the AES-GCM Key/IV and establish the "Secure Channel" do not require any further validation from the Backend, so an arbitrary 3rd party App can consume the 'Honest eID' Backend API through the Secure Channel in the same way the original 'Honest eID' App does.
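A backend-side check would need to inspect the Android key attestation extension in the leaf certificate and bind it to the expected App. The sketch below is deliberately simplified: the package name and signer digest are hypothetical, and a real implementation must DER-parse the AttestationApplicationId structure (tag 709 inside softwareEnforced) rather than substring-search the raw bytes.

```python
# Android key attestation certificate extension (platform-defined OID).
ATTESTATION_EXT_OID = "1.3.6.1.4.1.11129.2.1.17"

# Hypothetical values for the legitimate App; not the vendor's real ones.
EXPECTED_PACKAGE = b"com.honesteid.app"
EXPECTED_SIGNER_SHA256 = bytes(range(32))  # placeholder APK signing-cert digest

def attestation_binds_app(ext_der: bytes) -> bool:
    # Simplified stand-in for a full ASN.1 parse of AttestationApplicationId:
    # require both the package name and the APK signing-certificate digest to
    # appear in the extension bytes before accepting the handshake.
    return (EXPECTED_PACKAGE in ext_der and
            EXPECTED_SIGNER_SHA256 in ext_der)
```

Checking only the attestation challenge, as 'Honest eID' does, proves the keys live in a TEE somewhere, but says nothing about which App controls them.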

3. 'Secure Channel' implementation is vulnerable to an AES-GCM IV Reuse attack.

The implementation of the 'Secure Channel' is prone to an AES-GCM IV reuse attack due to flawed logic in the handling of counter values.

The 'Secure Channel' uses an AES-GCM scheme, whose Key and IV are derived from the Shared Secret (in addition to other materials) generated after completing the ECDH key agreement protocol between the App and the Backend.

The format of a "Secure Channel" message is as follows:

Counter + '.' + Base64-encoded Ciphertext

The 'Counter' is used to generate the IV by computing an 'HmacSHA256' of the 32-bit counter value keyed with the initial IV key, and then XORing the first 16 bytes of the result with the remaining 16 bytes.
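The derivation and framing can be sketched as follows; the big-endian counter encoding is an assumption on my side:

```python
import base64
import hashlib
import hmac
import struct

def derive_iv(iv_key: bytes, counter: int) -> bytes:
    """HMAC-SHA256 the 32-bit counter with the initial IV key, then XOR the
    first 16 bytes of the digest with the remaining 16 bytes."""
    digest = hmac.new(iv_key, struct.pack(">I", counter), hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(digest[:16], digest[16:]))

def frame_message(counter: int, ciphertext: bytes) -> str:
    # Wire format: Counter + '.' + Base64-encoded ciphertext
    return f"{counter}." + base64.b64encode(ciphertext).decode()
```

Note that the derivation is fully deterministic: the same counter always yields the same IV, which is what makes counter reuse fatal.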

This counter value is incremented by the App and the Backend each time one of the parties receives and/or processes a message. However, this logic is fundamentally flawed unless the flows and contexts associated with the counter value are strictly controlled: a malicious actor performing a MITM controls when each party receives a message, and thus has the ability to anticipate or force certain requests.

As a result, it was possible to force a state where two "Secure Channel" messages, one from the App and one from the Backend (as seen in the diagram above), were encrypted using the same IV (the same counter). This essentially breaks the AES-GCM security model, allowing the attacker to decrypt the ciphertext of any message encrypted under the reused IV, in addition to enabling other attacks against the authentication of the messages.

In this specific case, a chosen-plaintext scenario was also possible in one of the endpoints, so the decryption of arbitrary-length messages was immediate by XORing the chosen plaintext, its ciphertext and the target "Secure Channel" message.
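Under a reused IV, decryption reduces to XOR arithmetic. A sketch, using a random buffer as a stand-in for the AES-GCM (CTR-mode) keystream produced under the reused IV:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Stand-in for the AES-GCM keystream generated under the reused IV.
keystream = os.urandom(64)

chosen_pt = b"A" * 64                       # plaintext the attacker submits
target_pt = b"confidential APDU exchanged over the Secure Channel"

ct_chosen = xor(chosen_pt, keystream)       # observed ciphertext of chosen plaintext
ct_target = xor(target_pt, keystream)       # intercepted target message

# keystream = ct_chosen XOR chosen_pt, so XORing it into ct_target
# recovers the target plaintext without knowing the AES key.
recovered = xor(xor(ct_chosen, chosen_pt), ct_target)
assert recovered == target_pt
```

No key material is touched at any point; IV reuse alone leaks the keystream.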

AES-GCM-SIV, which is resistant to nonce misuse, is a safer alternative for these situations.
