<h1 id="security-analysis-of-the-nhs-covid-19-app">Security analysis of the NHS COVID-19 App</h1>
<p>Dr Chris Culnane, <a href="mailto:chris@culnane.org">chris@culnane.org</a>,<br />
Twitter: <a href="https://twitter.com/chrisculnane">@chrisculnane</a>,<br />
Independent Security and Privacy Consultant,<br />
Honorary Fellow University of Melbourne, <br />
Visiting Lecturer University of Surrey</p>
<p>Vanessa Teague, vanessa [at] thinkingcybersecurity.com <br />
CEO, Thinking Cybersecurity Pty. Ltd., <br />
A/Prof (Adj.), Australian National University.</p>
<p>19 May 2020</p>
<p>The following security analysis was conducted via a static analysis of the released source code for the UK’s COVID-19 contact-tracing Android app and an evaluation of high-level design documents.</p>
<p>An earlier version of this report was shared with the NCSC on 12 May 2020. We would like to thank NCSC for their rapid response to our report and the constructive dialogue that has taken place since. We have refined the document to clarify a number of points and, where applicable, include a broad summary of their responses. A full response is available at:</p>
<p>This post is cross-posted at <a href="https://stateofit.com/UKContactTracing">StateOfIt.com</a> and <a href="https://github.com/vteague/contactTracing/blob/master/blog/2020-05-19UKContactTracing.md">https://github.com/vteague/contactTracing</a>.</p>
<h2 id="contents">Contents</h2>
<ul>
<li><a href="#Introduction">Introduction</a></li>
<li><a href="#Protocol-summary">Protocol summary</a></li>
<li><a href="#Attacker-model">Attacker model</a>
<ul>
<li><a href="#Server-vs-authority">The importance of distinguishing between an untrusted server and an untrusted authority</a></li>
<li><a href="#Malicious-proxy">Core security properties not achieved against a malicious TLS proxy</a></li>
</ul>
</li>
<li><a href="#Untrustworthy-server">Weaknesses and mitigations against a compromised or proxied server</a>
<ul>
<li><a href="#Public-key">Distribution of Public Key during Registration</a></li>
<li><a href="#InstallationID-HMAC">Distribution of InstallationID and symmetric HMAC key during Registration</a></li>
</ul>
</li>
<li><a href="#Problematic-design">Problematic Design Decisions</a>
<ul>
<li><a href="#Longlived-broadcast">Long-lived BroadcastValues (Encrypted IDs)</a></li>
<li><a href="#Eight-seconds">Monitoring of the interaction every 8 seconds creates a unique interaction signature</a></li>
<li><a href="#KeepAliveCharacteristic">keepAliveCharacteristic value is deterministic</a></li>
<li><a href="#Unencrypted-log-uploads">Unencrypted log uploads</a></li>
</ul>
</li>
<li><a href="#local-log-protection">Inadequate protection of local log files</a>
<ul>
<li><a href="#Data-stored-unencrypted">Data stored unencrypted on the device</a></li>
</ul>
</li>
<li><a href="#Disclosure-policy">Source Code Access and Responsible Disclosure</a></li>
<li><a href="#Legal-commercial-political">Legal, Commercial and Political Issues</a></li>
<li><a href="#Conclusion">Conclusions</a></li>
</ul>
<p><a name="Introduction"></a></p>
<h2 id="introduction">Introduction</h2>
<p>The UK’s COVID-19 tracing App contains well-designed protections against many of the attacks that threaten a contact tracing scheme. Although we are not convinced that the perceived benefits of centralised tracing outweigh its risks, we acknowledge that the cryptographic protocol of the UK’s app includes a much better effort at mitigation of most external attacks than, for example, its Singaporean/Australian counterpart. The generation of ephemeral encrypted IDs by the app, rather than on the server, significantly improves both privacy and integrity against a malicious or compromised server. At least, it should, after the main patches described in this document are complete.</p>
<p>We also appreciate that both the app code and a detailed whitepaper have been made available before large-scale deployment. This is a great benefit for exactly the kind of detailed analysis and improvement we now suggest. We refer to Whitepaper Version 0.1, 3rd May 2020 and (Android) app beta code as accessed from GitHub on 7th May 2020.</p>
<p>No cryptographic protocol can meet its security goals if its assumptions are not met. Secure protocols can also be undermined when composed of different components with inconsistent assumptions, particularly in cases where encrypted and unencrypted data are transmitted side-by-side. In this report we show the following.</p>
<ul>
<li>In the presence of an untrusted TLS server, the registration process does not properly guarantee either the integrity of the authority public key or the privacy of the shared secrets established at registration. The result completely undermines core security goals of the protocol, including its privacy and its resistance to spoofing and manipulation.</li>
<li>In the presence of an untrusted TLS server, the storing and transmitting of unencrypted interaction logs facilitates the recovery of InstallationIDs without requiring access to the Authority Private Key.</li>
<li>Long-lived BroadcastValues undermine the privacy protections specified for BLE and could reveal additional lifestyle attributes about a user who submits their data.</li>
<li>The monitoring of interactions at 8 second intervals could create unique interaction signatures that could be used to pairwise match device interactions, and when combined with unencrypted submission, allow the recovery of InstallationID from BroadcastValue without access to the Authority Private Key.</li>
<li>The use of a deterministic counter to trigger KeepAlive updates risks creating an identifier that could be used to link BroadcastValues over multiple days.</li>
</ul>
<p>The white paper refers only briefly to the establishment of shared secrets at registration time, mentioning in a footnote that “It would be better if both client and server contributed to the entropy in the [InstallationID]. This may be changed in the future.” We show here that this is not a nice-to-have for the future, but a critically important foundation for core security goals of the protocol. Fortunately, we believe it is fairly easily achieved by reusing the mechanism already in use for sending symmetrically encrypted IDs.</p>
<p><a name="Protocol-summary"></a></p>
<h2 id="protocol-summary">Protocol summary</h2>
<p>The implemented protocol is very similar to what has been described in the <a href="https://www.ncsc.gov.uk/files/NHS-app-security-paper%20V0.1.pdf">NHS App Security Paper</a>, so we will not repeat the details and only provide a high-level overview.</p>
<p>The system adopts a centralised approach in which devices register with a small amount of information (partial postcode) and are in turn issued with a random InstallationID. This ID remains static throughout the operation of the App, and is referred to in the code as the sonarId.</p>
<p>The authority’s public key is distributed, along with the InstallationID and a symmetric key for performing the HMAC, during registration. We will refer to the latter as the ‘HMAC key.’ Although the authority’s public key is referred to as the ‘server public key’ in the code and white paper, it is important to note that the corresponding private key does not need to be stored on the registration or data collection servers - it is the private key of the NHS, and is needed only for the decryption of contact events for exposure notification. We therefore refer to the public key as the ‘Authority Public Key,’ so that it is not confused with the server’s TLS public key. TLS is used to perform the HTTP requests; however, there is no indication that certificate pinning is being used, possibly because the service is run through CloudFlare as a Content Delivery Network.</p>
<p>Once every 24 hours the App generates a new ephemeral Elliptic Curve Key Pair. This key pair is used with the Authority Public Key to perform an offline ECDH key exchange. From that exchange an AES symmetric key is generated that is used to encrypt the dates, InstallationID, and CountryCode. We will refer to this as the ‘AES key’ to distinguish it from the symmetric key used for HMACs. The encryption algorithm also produces an authentication tag. This collection of data is called the BroadcastValue.</p>
<p>Somewhat confusingly, the BroadcastValue is only part of what is broadcast. The full broadcast payload is:</p>
<p>P = (countryCode + BroadcastValue + txPower + transmissionTimeBytes + hmacSignature)</p>
<p>The HMAC Signature is constructed using the HMAC key issued to the device during registration, and covers the following values: countryCode, BroadcastValue, txPowerLevel and transmissionTimeBytes.</p>
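<p>As a concrete illustration, the signature construction can be sketched as follows. This is our own sketch, not the app’s code: the choice of HMAC-SHA256, the truncation to the last 16 bytes, and all field sizes and values are illustrative assumptions.</p>

```python
import hashlib
import hmac

def hmac_signature(hmac_key, country_code, broadcast_value, tx_power, transmission_time):
    # HMAC over the concatenation of the unencrypted fields and the encrypted
    # BroadcastValue; we assume HMAC-SHA256 truncated to its last 16 bytes.
    msg = country_code + broadcast_value + tx_power + transmission_time
    return hmac.new(hmac_key, msg, hashlib.sha256).digest()[-16:]

# Hypothetical values for illustration only
key = b"\x01" * 16               # HMAC key issued at registration
broadcast_value = b"\x00" * 64   # the encrypted BroadcastValue (size illustrative)
sig = hmac_signature(key, b"GB", broadcast_value, b"\x05", b"\x5e\xc3\x00\x00")
payload = b"GB" + broadcast_value + b"\x05" + b"\x5e\xc3\x00\x00" + sig
```

<p>Note that everything the signature covers, except the BroadcastValue itself, travels in the clear.</p>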
<p>It is important to note that the transmission time, Country Code and power level are all broadcast unencrypted. We shall revisit the use of the HMAC in <a href="#Untrustworthy-server">the Section on weaknesses and mitigations against a corrupt or proxied server</a>.</p>
<p>As noted in the NCSC description, once connected the devices exchange range data every 8 seconds. Log data is stored in clear text on the device, protected only by the inbuilt App separation provided by Android/iOS. As such, someone who has root access to the device will be able to read the full log.</p>
<p>If a user has symptoms and decides to upload their log data, the record of observed BroadcastValues, the timings of the contact, the list of RSSI values from the 8 second measurements of distance during the contact, and the InstallationID are packaged for upload. The NCSC white paper claims the following: “The log is integrity protected by an HMAC, encrypted with the shared symmetric key established at registration and sent to the service infrastructure over TLS-protected channels.”</p>
<p>However, we can find no evidence that the data being uploaded is encrypted with the shared symmetric key. Perhaps that sentence was intended to say “The log is integrity protected by an HMAC, <em>computed</em> with the shared symmetric key established at registration…” It is clearly HMAC’d but appears to be submitted unencrypted, beyond TLS. The implications are described in the section on <a href="#Unencrypted-log-uploads">Unencrypted log uploads</a>.</p>
<p><a name="Attacker-model"></a></p>
<h2 id="attacker-model">Attacker model</h2>
<p><a name="Server-vs-authority"></a></p>
<h3 id="the-importance-of-distinguishing-between-an-untrusted-server-and-an-untrusted-authority">The importance of distinguishing between an untrusted server and an untrusted authority</h3>
<p>The white paper’s technical analysis assumes, “In our model, the infrastructure provider and the healthcare service can be assumed to be the same entity.” However, for many people deciding whether to use the app, trusting the NHS might be quite different from trusting its infrastructure providers, which include US companies such as Google and Cloudflare.</p>
<p>It could be argued that since the server is controlled by the Authority then it is as secure as the Authority Private Key. However, that is a false equivalence. Secure systems are designed to minimise the quantity of material that must be kept absolutely secret. In the current system, the entire user database (including InstallationIDs and HMAC keys) must be kept as secret as the Authority Private Key, which is going to be a difficult task if there are millions of users and records. The number of people with access to the server, either through maintenance or support roles, is vastly larger than the number who should have access to the Authority Private Key. Furthermore, the server is publicly facing, potentially receiving requests from millions of users. Its risk profile is considerably worse than that of the private key.</p>
<p>The security of the communications from the public-facing server is also not solely a product of the authority’s actions. TLS proxying, in which TLS connections are terminated prior to the eventual end-point and inspected for security and monitoring reasons, is commonplace. This approach is deployed within the system, with CloudFlare acting as a TLS Proxy and Content Delivery Network in front of the actual server. <!--Likewise, many organisations deploy TLS Proxies to intercept and inspect communications on their corporate networks and devices.--></p>
<p>The whitepaper is not entirely clear on which service performs the decryption of logs - we assume it is probably the risk modelling service, and that more detail will be available when the server code is made open. The important point here is that, if the registration process is upgraded as we describe below, then only the service responsible for decryption will need to know the Authority Private Key <em>and no other services will need to hold cryptographic secrets</em> including the HMAC key or InstallationID.</p>
<p>As such, we distinguish between an untrusted server and an untrusted authority. We assume the authority will adequately protect its private key, holding it securely offline with strict access control. Conversely, we do not assume the server is immune from breach or unauthorised access either internally or externally. As such, we assume the following about the current configuration:</p>
<ul>
<li>The authority will adequately protect the private key and it will therefore <em>not</em> be accessible to an attacker</li>
<li>The authority will make best efforts to protect the contents of the server, but cannot guarantee prevention of unauthorised access</li>
<li>The authority will deploy TLS to protect communication, but is not able to guarantee the security of the TLS Proxy</li>
</ul>
<p>With the above in mind we proceed to examine what impact an attacker could have were they to compromise the server or the communication between the server and the end-user.</p>
<p><a name="Malicious-proxy"></a></p>
<h3 id="core-security-properties-not-achieved-against-a-malicious-tls-proxy">Core security properties not achieved against a malicious TLS proxy</h3>
<p>The whitepaper describes several core security goals. The most relevant to this report are:</p>
<blockquote>
<p>4) It should not be possible for an external observer to associate any Bluetooth transmission with any device-specific information.</p>
<p>5) It should not be possible to submit spoofed data on behalf of another user.</p>
</blockquote>
<p>We take (4) to include identification of the logged Bluetooth transmissions that are uploaded when a person self-diagnoses with COVID-19.</p>
<p>We also suggest two further goals, not explicitly mentioned in the whitepaper, that seem important.</p>
<blockquote>
<p>9?) It should not be possible to alter the event log that a user uploads when they self-diagnose with COVID-19.</p>
</blockquote>
<blockquote>
<p>10?) It should not be possible to silently prevent a user from successfully operating the app and submitting their contacts should they wish to.</p>
</blockquote>
<p>The HMAC computed over the event log is clearly intended to achieve Goal 9. However, like Goals 4 and 5, it fails against a compromised or malicious server that knows the HMAC key. We show below that these three goals can be achieved, and attacks against Goal 10 somewhat mitigated, by improving the registration process. We are not able to guarantee Goal 10 against an actively malicious TLS proxy.</p>
<p><a name="Untrustworthy-server"></a></p>
<h2 id="weaknesses-and-mitigations-against-a-compromised-or-proxied-server">Weaknesses and mitigations against a compromised or proxied server.</h2>
<p><a name="Public-key"></a></p>
<h3 id="distribution-of-public-key-during-registration">Distribution of Public Key during Registration</h3>
<h4 id="the-problem">The problem</h4>
<p>The Authority Public Key is downloaded from the server at registration time, without any certificate checking.</p>
<h4 id="the-implications">The implications</h4>
<p>This introduces the risk that the initial communication could be intercepted by the TLS Proxy (CloudFlare).</p>
<p>If the transmissions were subject to alteration, and the interception capability was persistent, it would be possible to replace the Authority Public Key, intercept any uploads, decrypt them, and then forward them on or drop them entirely.</p>
<p>This would also mean that a user’s BLE messages, when logged by other users, could not be properly decrypted by the NHS, thus preventing the person from being notified if one of their contacts tested positive for COVID-19. This breaks Goal 10.</p>
<h4 id="correcting-the-problem">Correcting the problem</h4>
<p>The Authority Public Key is not secret and does not change frequently.
It should be distributed with the app rather than distributed during registration. If it seems important to update the Authority Public Key without an app update, then distribute a Certificate Authority Public Key with the app and distribute the Authority Public Key at registration with a certificate that is carefully verified.</p>
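<p>One simple way to realise this check can be sketched as follows. The names and the placeholder key bytes are hypothetical; this is a sketch of key pinning, not the app’s implementation.</p>

```python
import hashlib

# Hypothetical pin that would be compiled into the app binary
AUTHORITY_KEY = b"-----authority public key bytes-----"   # placeholder
PINNED_KEY_HASH = hashlib.sha256(AUTHORITY_KEY).hexdigest()

def verify_authority_key(received_key):
    # Accept a key fetched at registration only if it matches the shipped pin
    return hashlib.sha256(received_key).hexdigest() == PINNED_KEY_HASH
```

<p>A malicious TLS proxy that substitutes its own key would then fail the pin check, rather than silently taking over the registration.</p>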
<h4 id="ncsc-response">NCSC Response</h4>
<p>NCSC clarified that intermediate certificate pinning is in use, which limits the attack to the contracted TLS Proxy (CloudFlare). The suggestion to independently verify the Authority Public Key, either through inclusion in the App itself or via a certificate chain, remains advantageous, as it mitigates the risk of a compromised TLS Proxy, even if such a compromise is considered a low-risk event.</p>
<p><a name="InstallationID-HMAC"></a></p>
<h3 id="distribution-of-installationid-and-symmetric-hmac-key-during-registration">Distribution of InstallationID and symmetric HMAC key during Registration</h3>
<h4 id="the-problem-1">The problem</h4>
<p>The InstallationID and HMAC key are downloaded from the server at registration time.</p>
<h4 id="the-implications-1">The implications</h4>
<p>A compromised or malicious TLS proxy learns the secrets that are supposed to be shared between the user and the NHS.
There are several possible attacks.</p>
<ol>
<li>
<p><em>The attacker can use the HMAC key to identify the user from their Bluetooth transmissions, either when they are first broadcast or when they are uploaded because a contact tested positive.</em> This would work as follows: whenever the attacker sees a Bluetooth transmission, it runs through its dictionary of known HMAC keys, testing each one to see whether the last 16 bytes of the HMAC it generates on the payload match the HMAC Signature of the transmission. This allows it to identify the InstallationID by retrieval from its dictionary, though it cannot (and doesn’t need to) decrypt the payload. This breaks Goal 4.</p>
</li>
<li>
<p><em>An attacker with knowledge of InstallationID and the HMAC key can obviously generate spoofed broadcasts that are indistinguishable from the user’s true broadcasts.</em> This in turn would facilitate the creation of fake contact events. This breaks Goal 5.</p>
</li>
<li>
<p><em>If the attacker with knowledge of the HMAC key can intercept the infected user’s upload of their event logs, then it can drop events and recompute a correct HMAC.</em> If it also knows the InstallationID and HMAC key of other users, it can insert forged contact events from those users into the upload. This breaks Goal 9.</p>
</li>
</ol>
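<p>The dictionary attack in (1) can be sketched as follows, under the same illustrative assumptions as before (HMAC-SHA256 truncated to its last 16 bytes); the identifiers, key values, and field sizes are hypothetical.</p>

```python
import hashlib
import hmac

def identify_installation(observed_payload, observed_sig, known_keys):
    # known_keys maps InstallationID -> HMAC key, as a malicious TLS proxy
    # could have harvested during registrations. Each key is tried against
    # the observed broadcast; a match reveals the sender's InstallationID
    # without decrypting the BroadcastValue.
    for installation_id, key in known_keys.items():
        candidate = hmac.new(key, observed_payload, hashlib.sha256).digest()[-16:]
        if hmac.compare_digest(candidate, observed_sig):
            return installation_id
    return None

# Hypothetical harvested dictionary and observed broadcast
keys = {"id-alice": b"A" * 16, "id-bob": b"B" * 16}
payload = b"GB" + b"\x00" * 64 + b"\x05" + b"\x5e\xc3\x00\x00"
sig = hmac.new(keys["id-bob"], payload, hashlib.sha256).digest()[-16:]
assert identify_installation(payload, sig, keys) == "id-bob"
```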
<h4 id="correcting-the-problem-1">Correcting the problem</h4>
<p>Unlike the public key, the HMAC and InstallationID cannot be shipped with the app, because they are supposed to be different for each user. To address passive attacks from a compromised TLS Proxy, and assuming that the public key is securely shipped as described above, it is easy to reuse the techniques for generating AES symmetric keys to generate Installation IDs and HMAC keys as well. At registration time, the app could generate a new Elliptic Curve Key Pair. This key pair could be used with the Authority Public Key to perform an offline ECDH key exchange. From that exchange the InstallationID and HMAC key could be derived.</p>
<p>This exactly mimics the code already used to generate ephemeral AES keys, but is used to generate persistent shared secrets instead. This has the great virtue that the HMAC key is never stored in plaintext on the server, and indeed does not need to be derived explicitly until the back-end is trying to decrypt a contact’s BLE broadcasts.</p>
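<p>The suggested registration flow can be sketched as follows. For brevity the sketch uses classic modular Diffie-Hellman over a toy prime in place of ECDH, and a single SHA-512 call in place of a proper KDF; a real implementation would reuse the app’s existing elliptic-curve primitives.</p>

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters standing in for the app's elliptic curve.
# Do not use these in production.
P = 2**127 - 1   # a Mersenne prime
G = 3

def register(authority_public):
    # App-side registration: generate a fresh key pair and derive the
    # InstallationID and HMAC key offline, never sending them over the wire.
    app_private = secrets.randbelow(P - 2) + 2
    app_public = pow(G, app_private, P)
    shared = pow(authority_public, app_private, P)   # offline key agreement
    material = hashlib.sha512(shared.to_bytes(16, "big")).digest()
    installation_id, hmac_key = material[:16], material[16:48]
    return app_public, installation_id, hmac_key

# The authority can later re-derive the same secrets from app_public alone,
# so the server never needs to hold them in plaintext.
authority_private = secrets.randbelow(P - 2) + 2
authority_public = pow(G, authority_private, P)
app_public, iid, hk = register(authority_public)
shared = pow(app_public, authority_private, P)
material = hashlib.sha512(shared.to_bytes(16, "big")).digest()
assert (iid, hk) == (material[:16], material[16:48])
```

<p>Only the app’s public value needs to be transmitted, so a passive observer of the registration traffic learns neither the InstallationID nor the HMAC key.</p>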
<p>This does not entirely eliminate trust in the server-side TLS Proxy (CloudFlare), since it could still intercept and generate an entirely fake initial registration, causing valid submissions to appear to be invalid, and facilitating the generation of fake BLE events; however, it does resolve passive attacks. Legislative protection should have been enacted to explicitly protect the data in question, and it should be made clear during the consent process that the data will be sent over CloudFlare.</p>
<h4 id="ncsc-response-1">NCSC Response</h4>
<p>NCSC have confirmed that there were existing plans to refine the registration process.</p>
<p><a name="Problematic-design"></a></p>
<h2 id="problematic-design-decisions">Problematic Design Decisions</h2>
<p>The issues described below would also be exploitable by a compromised proxy server; however, we distinguish them on the basis that rectifying the proxy-server issue does not mitigate the fundamental problems with the design decisions. In particular, the design creates a situation in which the uploaded data needs to be secured to the same level as the Authority Private Key, which, as we have already described, is an unrealistic expectation. This creates two further attack models: one in which the server is compromised, and another in which unauthorised access is granted, or data is shared with a broader group of people.</p>
<p>In the case of an external attacker compromising the server, the attacker model is similar to a compromised proxy server, except they would gain access to the full data set, not just what they had observed. This will make some of the problems described below more effective in terms of scale of attack.</p>
<p>In the case of unauthorised or broader access, the concern is that there is an incorrect perception of the security of the uploaded data. This could result in a broader group of people having access to the server or the data than would be permitted for the decrypted data or the Authority Private Key. As we will show, the problematic design decisions result in multiple opportunities to recover the InstallationID from the encrypted value, without the need for access to the Authority Private Key. In essence, if someone would not be granted access to the decrypted data or the Authority Private Key, they should also not be granted access to the uploaded data. As such, the attack model includes both the malicious adversary described above, and internal misclassification of the security of the data.</p>
<p><a name="Longlived-broadcast"></a></p>
<h3 id="long-lived-broadcastvalues-encrypted-ids">Long-lived BroadcastValues (Encrypted IDs)</h3>
<h4 id="the-problem-2">The problem</h4>
<p>BroadcastValues have a lifetime of 24 hours which facilitates device tracking over a period of 24 hours, undermining the privacy protections native to BLE, as well as revealing some lifestyle attributes.</p>
<h4 id="the-implications-2">The implications</h4>
<p>BLE is designed to have MAC address randomisation, with a recommended cycle time of 15 minutes in the <a href="https://www.bluetooth.com/specifications/bluetooth-core-specification/">Bluetooth Core Specification</a>. This is to prevent scanners, of which there are many, from tracking a device over a period exceeding 15 minutes. However, as has already been discussed with regard to the <a href="https://github.com/vteague/contactTracing/blob/master/blog/2020-04-27TracingTheChallenges.md">Australian Contact Tracing App</a>, if constant values are broadcast via BLE for longer than that 15 minute window, then the in-built protection becomes redundant. In the case of the UK, the period is extended to 24 hours, which is considerably longer than even the 2 hours of the Australian app. (Though we have no reason to believe that the UK app suffers from the same failures to update at the advertised frequency that the <a href="https://docs.google.com/document/d/1u5a5ersKBH6eG362atALrzuXo3zuZ70qrGomWVEC27U/edit">Australian App does.</a>) Note also that the privacy advantages of inbuilt BLE MAC rotation could be undermined even by a 15-minute BroadcastValue rotation, if the two rotations were not synchronised. This is one of the advantages of the Apple-Google API, quite independent of the preference for centralisation.</p>
<p>The justification for such a long period is based on evaluating social distancing, not contact tracing. That is not the primary purpose of the app, nor consistent with what the public believe the app to be doing, nor sufficient justification for compromising the privacy of users and facilitating widespread device location tracking. The privacy risks associated with such a long period are also not adequately expressed to the end-user.</p>
<p>Furthermore, when someone self-diagnoses and uploads their logs, access to just the encrypted BroadcastValues that they have received risks revealing a number of lifestyle attributes about the uploader. For example, by comparing the BroadcastValues recorded on the device between 3am and 5am, and subsequently between 11pm and midnight, the viewer will be able to determine whether the uploader woke up and went to bed with the same person, or more revealingly, if they did not. Further examination of the occurrence of BroadcastValues during the day will allow inference of whether the person is in a relationship with someone from work, or whether they potentially met someone after work. The NCSC wrongly dismisses the concerns with social graphs and re-identification, incorrectly assuming that establishing someone’s name is a prerequisite for re-identification, whereas re-identification actually occurs whenever additional information, beyond what was anticipated, is learnt about an individual. Re-identification of social network graphs without demographic information has been demonstrated, for example by <a href="https://arxiv.org/pdf/1102.4374.pdf">Narayanan et al 2011</a>.</p>
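<p>A minimal sketch of this inference, using hypothetical log data and relying only on the fact that a BroadcastValue persists for a whole day, might look like the following.</p>

```python
def cohabitant_signal(log, night=(3, 5), late=(23, 24)):
    # log is a list of (hour_seen, broadcast_value) pairs - a simplified
    # stand-in for one day of uploaded contact records. A BroadcastValue
    # present in both windows suggests the uploader slept near that device.
    seen_night = {bv for h, bv in log if night[0] <= h < night[1]}
    seen_late = {bv for h, bv in log if late[0] <= h < late[1]}
    return seen_night & seen_late

# Hypothetical single-day log
log = [(3, "bv-partner"), (4, "bv-partner"), (13, "bv-colleague"), (23, "bv-partner")]
assert cohabitant_signal(log) == {"bv-partner"}
```

<p>No decryption is needed: the encrypted values themselves act as day-long pseudonyms.</p>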
<h4 id="correcting-the-problem-2">Correcting the problem</h4>
<p>The lifetime of BroadcastValues should be reduced to less than 15 minutes to ensure they do not undermine the intended privacy protections of BLE.</p>
<h4 id="ncsc-response-2">NCSC Response</h4>
<p>As already conveyed in public documents, the lifetime of BroadcastValues is under review.</p>
<p><a name="Eight-seconds"></a></p>
<h3 id="monitoring-of-the-interaction-every-8-seconds-creates-a-unique-interaction-signature">Monitoring of the interaction every 8 seconds creates a unique interaction signature</h3>
<h4 id="the-problem-3">The problem</h4>
<p>Monitoring active connections with 8 second pings creates an interaction signature that may facilitate pairwise identification in upload data, permitting the recovery of InstallationIDs without needing access to the Authority Private Key.</p>
<h4 id="the-implications-3">The implications</h4>
<p>If two users upload their data it may be possible from just the record of the 8 second pings to pairwise match their interactions. In doing so the InstallationID would be recovered without needing access to the Authority Private Key.</p>
<p>If two users Alice and Bob, who have met each other at some point during the last few days, both upload their contact records to the server, it should not be possible to discern from the upload that they had met, without first decrypting the BroadcastValue and recovering the InstallationID (Goal 4). It should also not be possible to link either Alice’s or Bob’s InstallationID (sent during the upload unencrypted except with TLS) with their respective encrypted BroadcastValues. However, due to the detail created by recording the RSSI every 8 seconds, which is uploaded unencrypted, an attacker who is able to observe the uploads or gain access to the server, may be able to learn which users have interacted and thus the mapping of InstallationID to encrypted BroadcastValue without needing access to the Authority Private Key.</p>
<p>An interaction between Alice and Bob will be defined by the start time, the end time, and the proportional changes in RSSI between those times. The start and end time will be the same for both devices, and that alone may be enough to uniquely identify them. As we showed in previous work, timing alone may act as an identifier <a href="https://arxiv.org/abs/1908.05004">Myki Re-Identification</a>. Even if it is not sufficient, when combined with the record of RSSI values a unique signature will be created. The record of RSSI values will form a discrete time-series of an interaction. If we cross-correlate those time-series between devices, the devices that interacted should have a higher correlation than those that did not. In other words, both devices should exhibit an increase in RSSI as the devices get closer, and a decrease as they get further apart, at the same points in time. In effect both devices are recording the same changes in distance, and as such, they are creating a shared signature of the interaction.</p>
<p>It may seem that such interaction signatures will not be unique, but there is significant entropy in the timing and RSSI values associated with an interaction, certainly sufficient to uniquely identify pairs.</p>
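<p>The cross-correlation idea can be sketched with a simple Pearson correlation over hypothetical RSSI series; the data and threshold here are purely illustrative.</p>

```python
from statistics import mean

def correlation(xs, ys):
    # Pearson correlation of two equal-length RSSI time series
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Hypothetical RSSI readings sampled every 8 seconds during a contact
alice_sees_bob = [-60, -58, -55, -57, -62, -70, -75]   # Bob approaches, then leaves
bob_sees_alice = [-62, -59, -57, -58, -64, -71, -77]   # mirror of the same movement
carol_sees_dave = [-80, -80, -79, -81, -80, -79, -80]  # unrelated, static contact

assert correlation(alice_sees_bob, bob_sees_alice) > correlation(alice_sees_bob, carol_sees_dave)
```

<p>Both sides of a real interaction record the same changes in distance at the same times, so their series correlate strongly, while unrelated interactions do not.</p>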
<p>Due to the InstallationID being submitted with the log data as well, once a pair is established it allows the mutual linking of plaintext InstallationID to encrypted BroadcastValue by examining the corresponding received BroadcastValues on each device in the pair. This can subsequently be leveraged to link the device to further interactions on the same day, including those which are more fleeting, due to now having the mapping of InstallationID to BroadcastValue.</p>
<h4 id="correcting-the-problem-3">Correcting the problem</h4>
<p>The detailed interaction data should be treated as equally identifying as the InstallationID and encrypted accordingly. Further justification for the necessity and frequency of the pings is required.</p>
<h4 id="ncsc-response-3">NCSC Response</h4>
<p>NCSC assert that additional pinging is required to accurately model interactions. They have confirmed that the contact modelling work will be published when completed. Encryption of the logs and uploads is planned, which will resolve the issue in terms of potential for linking prior to decryption.</p>
<p><a name="KeepAliveCharacteristic"></a></p>
<h3 id="keepalivecharacteristic-value-is-deterministic">keepAliveCharacteristic value is deterministic</h3>
<h4 id="the-problem-4">The problem</h4>
<p>keepAliveCharacteristic (8 second pings) uses a predictable incrementing counter to trigger notifications.</p>
<h4 id="the-implications-4">The implications</h4>
<p>In certain circumstances it might be possible to link successive BroadcastValues across two days by interrogating the KeepAlive counter. This would allow device tracking beyond the already excessive 24 hour period.</p>
<p>In order to record the distance between two devices every 8 seconds during a contact, the app updates the value of the keepAliveCharacteristic. Having done so it issues a notification to the other device, which has registered for notifications of changes in the value of the keepAliveCharacteristic. It is therefore necessary for the underlying value of the keepAliveCharacteristic to regularly change, although the actual value is unimportant - it is the notification message triggered by its change that is of interest in measuring the RSSI.</p>
<p>To implement the required regular change in value there is a class-level byte counter, keepAliveValue, that is incremented once every 8 seconds. This counter takes values in the range -128 to +127, and will therefore overflow after roughly 34 minutes of connected time, where connected time is time during which at least one other device is connected. Despite the overflow, the value of the counter can be predicted, provided at least one other device has remained connected throughout the intervening period. This results in the potential to link an observable device across two days if the observer is either in its presence during the changeover (midnight) or within a period in which the device would always have had at least one other device connected to it.</p>
<p>The counter is not externally synchronised, so different devices will have different counters depending on when they first received connections and how many minutes they have been connected since the app was started. This results in the keepAlive value being able to act as a subset identifier for a range of devices that are observable. For example, if two devices, A and B, are connected to the attacker, C, at midnight the BroadcastValue will be recreated and it should not be possible for C to link the new BroadcastValues with the old. In reality this might occur because the changeover is not synced with the MAC address cycling, but even assuming that it is, the devices can still potentially be linked via the keepAlive counter. Suppose device A has a count of 24 and device B a count of 50 at 23:59:00. Immediately after midnight, when the BroadcastValue has been updated, device C can distinguish them from each other, knowing that device A should now have a count of 32 and device B a count of 58, given the next keepAlive message after midnight will be 8 increments on from 23:59:00. Even if device C is not observing A and B at midnight, provided both have remained connected to at least one other device their values can be predicted hours into the future. The only limit is the likelihood of a device remaining connected.</p>
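<p>To make the prediction concrete, the arithmetic can be sketched in a few lines of Python. This is a hypothetical model for illustration only - the app itself is an Android (Java/Kotlin) implementation, and <code>predict_counter</code> is not a function in its code base:</p>

```python
def predict_counter(count_now: int, seconds_elapsed: int) -> int:
    """Predict a future keepAlive counter value, assuming the device has
    stayed connected throughout (one increment every 8 seconds). Java's
    byte wraps from 127 back to -128, modelled here with mod-256
    arithmetic."""
    increments = seconds_elapsed // 8
    shifted = (count_now + 128 + increments) % 256  # shift into 0..255, wrap
    return shifted - 128  # shift back into the signed byte range

# Device A observed at count 24 at 23:59:00; 64 seconds later (8
# increments, just after midnight) the observer expects count 32.
print(predict_counter(24, 64))   # prints 32
print(predict_counter(120, 80))  # 10 increments, wraps: prints -126
```

<p>Because the mapping from elapsed time to counter value is fully deterministic, a single observation of the counter suffices to re-identify the device later, subject only to the continuous-connection assumption.</p>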
<h4 id="correcting-the-problem-4">Correcting the problem</h4>
<p>The requirement is only that the value of the characteristic is changed in order to trigger the notification. As such, the values do not need to be incrementing. If they were selected at random - ensuring a different value to the current is selected - then there would be no way to predict the future values and it would no longer act as an identifier.</p>
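<p>A minimal sketch of that fix, again in Python for illustration (the function name and structure are hypothetical, not taken from the app):</p>

```python
import secrets

def next_keepalive_value(current: int) -> int:
    """Pick a fresh random signed byte for the characteristic, re-drawing
    if the choice happens to equal the current value (a repeated value
    would not trigger a change notification)."""
    while True:
        candidate = secrets.randbelow(256) - 128  # uniform over -128..127
        if candidate != current:
            return candidate
```

<p>Each value is drawn independently of all previous ones, so observing the characteristic at one point in time reveals nothing about its value at any other, and it can no longer serve as a cross-day identifier.</p>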
<h4 id="ncsc-response-4">NCSC Response</h4>
<p>NCSC confirmed the vulnerability described and have committed to resolving it with a high priority.</p>
<p><a name="Unencrypted-log-uploads"></a></p>
<h3 id="unencrypted-log-uploads">Unencrypted log uploads</h3>
<h4 id="the-problem-5">The problem</h4>
<p>As described in the <a href="#Protocol-summary">Protocol Summary Section</a>, when an infected person’s logs are uploaded to the server, their integrity is protected by an HMAC; however, they do not appear to be encrypted.</p>
<h4 id="the-implications-5">The implications</h4>
<p>Submitting the data unencrypted allows a number of the attacks described above to be performed by anyone able to observe submissions, i.e. a TLS proxy or an attacker who is able to access or compromise the upload server. If the InstallationID-to-HMAC-key list of users is stored on the same server, then a wholesale InstallationID recovery will be possible by undertaking the same HMAC dictionary attack. Even if that is not the case, the pairwise matching of timing data, or RSSI values, will facilitate recovery of InstallationIDs between pairs in the uploaded dataset. Furthermore, the leaking of lifestyle attributes described above will also be possible, all without access to the Authority Private Key.</p>
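<p>The wholesale InstallationID recovery is straightforward to demonstrate. The following Python sketch uses only the standard library; the registry contents, payload format and HMAC construction are illustrative assumptions, not the app’s actual wire format:</p>

```python
import hashlib
import hmac
import os
import secrets

# The server-side registry the attacker is assumed to hold:
# InstallationID -> shared symmetric (HMAC) key. Both are illustrative.
registry = {os.urandom(16): os.urandom(32) for _ in range(1000)}

def attribute_upload(message, observed_tag):
    """Walk the registry and return the InstallationID whose key verifies
    the observed HMAC tag - a straightforward dictionary attack."""
    for installation_id, key in registry.items():
        tag = hmac.new(key, message, hashlib.sha256).digest()
        if hmac.compare_digest(tag, observed_tag):
            return installation_id
    return None

# A victim uploads a log authenticated (but not encrypted) with their key...
victim_id = secrets.choice(list(registry))
payload = b"unencrypted contact log upload"
victim_tag = hmac.new(registry[victim_id], payload, hashlib.sha256).digest()

# ...and anyone observing the upload, who also holds the registry,
# recovers who sent it in a single pass.
assert attribute_upload(payload, victim_tag) == victim_id
```

<p>Since each HMAC verification is cheap, even a registry of millions of users can be swept quickly, which is why encrypting the upload - or never holding the key list alongside the uploads - matters.</p>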
<h4 id="correcting-the-problem-5">Correcting the problem</h4>
<p>The uploaded data should be encrypted using the same technique as is used for the BroadcastValue. It is important to note that the claimed encryption, which does not appear to be present in the app, was with the shared symmetric key (the HMAC key). Even if this were present it would not be sufficient, because the shared secret may have been compromised. As such, it should use the more sophisticated ECDH key generation approach instead. Likewise, this would not have been a problem had the data been encrypted with the Authority Public Key when stored on the device in the first place. As such, correctly addressing the lack of protection of local storage, detailed <a href="#local-log-protection">below</a>, will resolve this problem as well. Encrypting the data does not preclude sending the HMAC, because being able to identify that an upload came from a particular user is not the same as being able to identify individual BroadcastValues for a user. As such the HMAC on the upload can still be retained to prevent injection of malicious content.</p>
<h4 id="ncsc-response-5">NCSC Response</h4>
<p>NCSC confirmed there was a plan to implement further protection on uploads, which will follow in a future app update.</p>
<p><a name="local-log-protection"></a></p>
<h2 id="inadequate-protection-of-local-log-files">Inadequate protection of local log files</h2>
<p><a name="Data-stored-unencrypted"></a></p>
<h3 id="data-stored-unencrypted-on-the-device">Data stored unencrypted on the device</h3>
<h4 id="the-problem-6">The problem</h4>
<p>Data stored on device is not encrypted, beyond the inherent BroadcastValue encryption. This allows anyone with access to a device to utilise the data for surveillance.</p>
<h4 id="the-implications-6">The implications</h4>
<p>Whilst the data is protected against some unauthorised access via the built-in app separation security, it is not protected against root level access, from either a party with control over the device, or law enforcement. This presents a number of problems:
1. Surveillance of individuals who are subject to domestic control or abuse;
2. Law enforcement access to detailed surveillance records of interactions.</p>
<p>Regarding point 1., the local log will provide evidence of interactions, including their length and closeness. Whilst they will not reveal identities, by analysing the timing information, it will be possible to extend monitoring beyond a control period. For example, an adversary could establish the set of BroadcastValues for known associates, by observing their initial interaction and matching the timestamps. Any interactions beyond that set will be discernible and could place the victim at risk. In effect it provides an easy tool for adversaries to assert their control beyond periods of direct observation.</p>
<p>One example would be an abusive partner who wants to monitor their spouse’s interactions. At the very least they will be able to interrogate their spouse about the details of every interaction. Worse, if they suspect the spouse of meeting with someone they do not approve of, they need only get within range of that person briefly on the same day to record the BroadcastValue themselves and subsequently cross-reference it with those recorded on their spouse’s device.</p>
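<p>The cross-referencing step in that scenario needs no cryptographic capability at all. A Python sketch, with hypothetical data shapes (the real log format is richer), shows how little is involved:</p>

```python
def linked_contacts(attacker_values, victim_log):
    """Return log entries whose BroadcastValue the attacker has also
    recorded. Because a BroadcastValue lasts a whole day, briefly
    observing a person once is enough to match every interaction the
    victim's device logged with them that day."""
    return [(t, v) for (t, v) in victim_log if v in attacker_values]

# The attacker briefly got within range of a person and recorded their
# BroadcastValue "X", then reads the victim's (spouse's) local log.
attacker_values = {"X"}
victim_log = [(1000, "X"), (2000, "Y"), (3000, "X")]
print(linked_contacts(attacker_values, victim_log))  # [(1000, 'X'), (3000, 'X')]
```

<p>Encrypting the log under the Authority Public Key, as recommended below in this section, would leave nothing on the device for this kind of matching to work against.</p>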
<p>Regarding point 2., if law enforcement gain access to multiple device logs, they will be able to use the methods described above, either timing and RSSI analysis, or HMAC dictionary attacks, to determine interactions between devices. Putting aside the legitimacy of such access, the contact tracing app itself should not result in increases in potential surveillance.</p>
<h4 id="correcting-the-problem-6">Correcting the problem</h4>
<p>The log data should be encrypted with the Authority Public Key when on the device itself, and as already recommended, the detailed 8 second interaction data should be treated as sensitive and be equally protected.</p>
<h4 id="ncsc-response-6">NCSC Response</h4>
<p>NCSC confirmed there was a plan to encrypt log file data on the device, and that this will be part of a future app update.</p>
<p><a name="Disclosure-policy"></a></p>
<h2 id="source-code-access-and-responsible-disclosure">Source Code Access and Responsible Disclosure</h2>
<h4 id="the-problem-7">The problem</h4>
<p>Whilst it is admirable to make the app code available before wide deployment, the GitHub repository includes a disclosure policy that we consider incompatible with the principles of responsible disclosure.</p>
<h4 id="the-implications-7">The implications</h4>
<p>In particular, the clause that provides unilateral control to NHSX over publication of vulnerabilities, perhaps indefinitely, is wholly unacceptable within a responsible disclosure policy, and is not a clause we would be willing to agree to. Security researchers - and the NHS - have an ethical responsibility to make problems known to the people affected by them. The only genuine controversy is how much time should be allowed for correcting them before they are made fully and honestly known to the public.</p>
<h4 id="the-solution">The solution</h4>
<p>Adopt a more traditional responsible disclosure policy with a 30-day period, or, for a more nuanced approach, refer to the HackerOne guidelines: <a href="https://www.hackerone.com/disclosure-guidelines">https://www.hackerone.com/disclosure-guidelines</a>.</p>
<h4 id="ncsc-response-7">NCSC Response</h4>
<p>NCSC confirmed this had been rectified within a few days of notification: the policy document on GitHub has been adjusted and it now refers to the HackerOne guidelines.</p>
<p><a name="Legal-commercial-political"></a></p>
<h2 id="legal-commercial-and-political-issues">Legal, Commercial and Political Issues</h2>
<p>Distinct from the technical analysis above, we also believe there are some broader issues to be considered. We acknowledge that these may fall outside of the technical remit of NHSX or NCSC. They are included here as contributions to the broader discussion that should be taking place in advance of the roll-out of the app. As such, these are opinions, not technical analyses.</p>
<p>There has been significant public discussion in Australia over whether the US CLOUD Act would compel Amazon, which hosts Australia’s central server, to share the information with the US government. In the Australian protocol, the AWS server’s complete visibility of all relevant information is not easily avoided.</p>
<p>We are not aware of any similar discussion in the UK, though the role of Cloudflare (and perhaps Google) seems just as fundamental. However, there is a crucial difference between the two countries’ situations: the modifications we describe in this report would allow the separation of the authority public key (to be held by the UK NHS presumably) from the TLS server’s information. This would mean that Cloudflare (or whoever compromised Cloudflare), or other TLS proxies (or whoever compromised them) would receive registration data and the metadata associated with each confirmed infection, but lose the ability to identify the user based on their BLE messages, or forge BLE messages that passed HMAC verification.</p>
<p>More broadly it is our opinion that the data associated with contact tracing should be protected by legislation from use by law enforcement, or any usage not directly related to COVID-19 prevention. There should be a legal requirement that at the end of the crisis all data collected by the app is securely deleted, and not just “anonymised” or repurposed. Australia has begun the process of providing some of these protections, and whilst we would argue there are still a number of critical gaps, it is considerably more than the non-existent protections in the UK. The app should not be a backdoor for data collection for any purpose other than helping address the current crisis.</p>
<p>Furthermore, requiring the public to enable Bluetooth on their devices will have an impact on their privacy overall, enabling commercial profiling and tracking as a side-effect. It is understandable that compromises must be made at this time, but suitable legislative protection should have been provided to ensure the public do not suffer a loss of privacy as a side-effect of installing the Contact Tracing app. In particular, there should be an absolute ban on use of any application data for purposes other than contact tracing. Australia has drafted some legislation towards some of these goals, although a number of gaps remain <a href="https://www.ag.gov.au/RightsAndProtections/Privacy/Documents/exposure-draft-privacy-amendment-public-health-contact-information.pdf">Australia COVIDSafe Exposure Draft</a>. So far the UK has not asserted that similar protections will be forthcoming. Any such legislation should prevent the usage of Bluetooth for profiling or tracking throughout the crisis period to best protect the privacy of users and encourage sign up and utilisation of any Bluetooth Contact Tracing app.</p>
<p><a name="Conclusion"></a></p>
<h2 id="conclusions">Conclusions</h2>
<p>The primary advantage of decentralised exposure notification, such as the Google/Apple API, is privacy of the contact graph against the central authority - that is, the government, NHS, or its contractors and service providers (or whoever gains unauthorised access to their databases). The decentralised model does reveal more about users to each other (though the Google/Apple API contains some mitigations for this). A full examination of the trade-offs is beyond the scope of this report - there are attacks on both integrity and privacy in both models: <a href="https://eprint.iacr.org/2020/399">https://eprint.iacr.org/2020/399</a> - but for us the advantage of contact-data privacy against a centralised authority is strong enough to favour the decentralised approach. Others may weigh the trade-off differently; either way, we feel it should be made more explicit in the UK’s public discussion of the centralised model, so that the relative merits of the different models can be better understood. It is also important to recognise that different implementations within each model have different security and privacy properties. It is for this reason that it is so important to discuss the specifics of a particular implementation, and not just abstract models. The UK has assisted in that regard by making its code open source, and publishing technical details, before the national roll-out. It is vital that this process continues, in particular where changes to the fundamental approaches in the app are considered. Updates that alter those privacy or security properties should not be rolled out without sufficient time for consideration and consultation.</p>
<p>There are admirable parts of the implementation and once the already mentioned changes and updates are made, many of the concerns raised in this report will have been addressed. However, there remains some concern as to how privacy and utility are being balanced. The long lived BroadcastValues, and detailed interaction records, remain a concern. Whilst we understand that more detailed records may be desirable for the epidemiological models, it must be balanced with privacy and trust if sufficient adoption of the app is to take place. As such, it may have been beneficial to evaluate what data could be accurately collected by the technology, at sufficient population scale - considering the privacy and trust issues - and from that build the best epidemiological model. Otherwise there is a risk of placing the cart before the horse, and building a great model that will receive insufficient data, which is not going to help.</p>
<p>Obtaining sufficient scale will require building trust and protecting privacy. The open availability of the source code, and the NCSC’s positive interactions with security researchers, will go a long way towards this. However, the messaging around the app, and in particular suggestions of broadening the data collected, combined with insufficient legislative protections, a lack of siloing of the data, and no sunsetting of the data retention or usage, risk undermining the trust that has been earned.</p>
<p><a href="https://stateofit.com/UKContactTracing/">Security analysis of the NHS COVID-19 App</a> was originally published by Chris Culnane at <a href="https://stateofit.com">State of IT</a> on May 19, 2020.</p>
<p>Internet Voting continues to be pushed as the future of voting despite it continuing to be a bad idea. This talk will highlight some of the conceptual challenges and additional risks that Internet Voting brings. By looking at examples of Internet Voting that have been deployed we can see a pattern of poor decision making and skewed priorities. The talk will discuss deployments of iVote in Western Australia and New South Wales - in particular on their usage of TLS Proxies to provide DDoS protection and the impact that has on the security and trust of the system. More broadly the talk will look at the lack of transparency, and how what little transparency there is raises even more concerns about the integrity of the voting systems.</p>
<h2 id="presentation">Presentation</h2>
<video controls="" style="width:100%">
<source src="https://stateofit.com/video/Culnane_Online_Voting.mp4" type="video/mp4" />
</video>
<h2 id="slides">Slides</h2>
<p>Slides can be downloaded <a href="https://stateofit.com/assets/KiwiCon_Release.pdf">here</a>.</p>
<p><a href="https://stateofit.com/kawaiicon/">Internet Voting - From bad idea to poor execution</a> was originally published by Chris Culnane at <a href="https://stateofit.com">State of IT</a> on October 24, 2019.</p>
<p>The Assistance and Access Bill, or TOLA (Telecommunications and Other Legislation Amendment), is back on the agenda. After the debacle that occurred on the last day of parliament in 2018, it is up to parliament to fix the bad legislation they passed in such a hurry. The first question should be: was the rush justified? We’ve seen reports of the legislation being used for what appear to be criminal cases, but there has been no news of terror cells being busted – yet if you were to believe the rhetoric from some politicians last December there was an imminent danger that needed to be addressed.</p>
<h2 id="a-step-in-the-right-direction">A step in the right direction</h2>
<p>The legislation did not pass without some amendments. One such amendment covered one of the issues raised in a previous blog post, about Technical Assistance Requests (TARs) not being included in Division 7. Division 7 is the part of the legislation that imposes limitations on what can be asked for in Technical Assistance Notices (TANs) and Technical Capability Notices (TCNs). On a positive note the amendment causes TARs to now be covered in the same way as TANs. However, there remains a gap in the restrictions.</p>
<p>Broadly speaking, TANs cannot ask for the development of new capabilities, whilst TCNs can. More specifically, TANs explicitly prohibit the building of new capability in part 317L(2A):</p>
<blockquote class="legquote">
<p>“(2A) The specified acts or things must not be directed towards ensuring that a designated communications provider is capable of giving help to ASIO or an interception agency”</p>
</blockquote>
<p>No such restriction exists in the definition of TARs. As for TCNs, they cannot ask for the development of a new capability to remove electronic protection; specifically, they cannot require the listed act or thing covered by 317E(1)(a). The problem is that TARs fall between the two: they can request the development of new capabilities, and are therefore more closely related to TCNs than TANs, but do not include a restriction on building new capabilities to remove electronic protection (317E(1)(a)).</p>
<p>The amendment should have limited TARs in the same way as TCNs, not TANs. Without such restrictions, TARs can still ask for the development of new capabilities to remove encryption, and remain the most powerful tool in the legislation, with the fewest restrictions.</p>
<h2 id="when-is-a-weakness-not-a-vulnerability">When is a weakness not a vulnerability?</h2>
<p>Even with better restrictions the protection still comes down to the definition of systemic weakness or systemic vulnerability. The legislation, and the debate around it, has been dogged by ambiguity around the definition of systemic weakness. This definition is critical to the checks and balances within the legislation, and needs to be watertight to have any useful value. Unfortunately, what we have now is bordering on insane. It makes no technical sense, and quite frankly makes very little sense in terms of the English language.</p>
<p>One of the amendments introduced two new definitions, one for systemic weakness and one for systemic vulnerability. The two definitions are shown below:</p>
<blockquote class="legquote">
<p>“<strong>systemic vulnerability</strong> means a vulnerability that affects a whole class of technology, but does not include a vulnerability that is selectively introduced to one or more target technologies that are connected with a particular person. For this purpose, it is immaterial whether the person can be identified.”</p>
</blockquote>
<p><br /></p>
<blockquote class="legquote">
<p>“<strong>systemic weakness</strong> means a weakness that affects a whole class of technology, but does not include a weakness that is selectively introduced to one or more target technologies that are connected with a particular person. For this purpose, it is immaterial whether the person can be identified.”</p>
</blockquote>
<p>The first thing you’ll notice is that the two definitions are identical, except that the word vulnerability is replaced by weakness in the latter, and there is good reason for that: weakness and vulnerability are synonyms. If the two were not identical it would create a conflict between the definitions themselves. This really highlights both the poor-quality drafting and the omnishambles that the passing of this legislation has been. We have a duplicate definition which is masquerading as being something different. It is completely pointless, yet it is duplicated throughout the legislation.</p>
<h2 id="lacking-in-class">Lacking in class</h2>
<p>The bigger issue is that on first reading the definition almost seems to provide some constraints or limitations, but when you look at it in more detail you realise it doesn’t actually do much at all, and could even be internally inconsistent. The definition blocks a vulnerability that affects a whole class of technology, but immediately excludes target technologies that are connected with a particular person. The first problem is that there is no definition of what a “class of technology” even is. It is not a term that exists in technology literature. If you search Google for that exact phrase you get only 38 results (and that is by asking to see all results, including the ones Google thinks are duplicates). Of those 38, 4 are about the Assistance and Access Bill, asking what on earth “class of technology” means. Several others are about someone who was in a class, about technology. (It is a shame that more politicians haven’t attended a class about technology; then maybe we wouldn’t be in this mess.)</p>
<p>The only reasonable reading would be to assume that class of technology would be so broad as to be all Mobile Phones, or all ADSL connections, or all social media. The problem with such a definition is that no TAR/TAN/TCN could ever be issued to a single organisation that controlled an entire class of technology, since no such organisations exist. Therefore a TAR/TAN/TCN will never cover an entire class of technology. Crucially it does not state a class of technology offered by a provider. If it did, it would prevent a single provider having to weaken its entire service or network. Without such a clause it would appear that it will be perfectly legitimate to ask a telco to introduce a vulnerability to the whole of its network, since that will not cover an entire class due to other telcos not being included in the same TAR/TAN/TCN.</p>
<p>Obviously trying to guess what was intended by the definition is always going to be a challenge. But realistically we should assume it will be taken in its broadest sense, let’s face it, that is how it is going to be taken by an intelligence agency justifying its voracious appetite for data.</p>
<h2 id="a-target-so-big-no-one-could-miss">A target so big no one could miss</h2>
<p>The problem gets worse when we look at the longer definition of target technologies: it effectively covers pretty much anything, provided it is being targeted at a particular person. For example, part (a) of the definition states:</p>
<blockquote class="legquote">
<p>“for the purposes of this Part, a particular carriage service, so far as the service is used, or is likely to be used, (whether directly or indirectly) by a particular person, is a target technology that is connected with that person;”</p>
</blockquote>
<p>If a carriage service can be a target technology that would indicate our very broad reading of class of technology is correct. Since a carriage service would encompass all traffic going through a service provider’s network. As such, it would appear that the definition of systemic weakness and systemic vulnerability do not even preclude the building of bulk interception capabilities for a carriage service provider, providing at least one person of interest is using that service provider. Furthermore, the latter part of the definition of target technologies states:</p>
<blockquote class="legquote">
<p>“For the purposes of paragraphs (a), (b), (c), (d), (e) and (f), it is immaterial whether the person can be identified.”</p>
</blockquote>
<p>Think about that for a moment. If it is immaterial whether the target person can be identified, that implies that the legislation would permit bulk interception. If the individual cannot be identified, it seems very much like the interception is not targeted. Additionally, since the definition of target technology only requires that it is likely to be used by a particular person, it could be entirely speculative.</p>
<p>There are many other problems with the legislation. For example, does the amendment to part 3LA of the Crimes Act – which increases the maximum sentence for not providing assistance in gaining access to a computer to 10 years, and which is already being used or threatened – infringe on the right to remain silent?</p>
<p>The only thing we can be certain of is that an omnishambles does not produce good legislation.</p>
<p><a href="https://stateofit.com/interceptionpart2/">An Update on the Assistance and Access Bill in 2019</a> was originally published by Chris Culnane at <a href="https://stateofit.com">State of IT</a> on February 12, 2019.</p>
https://stateofit.com/interception2018-08-30T00:00:00-00:002018-08-30T00:00:00+10:00Chris Culnanehttps://stateofit.comblog@stateofit.com
<p>UPDATE: An updated blog post on the passed bill is <a href="/interceptionpart2/">available here</a></p>
<p>The recently released exposure draft of the Assistance and Access Bill 2018 <a href="#InterceptBill:online"><span style="vertical-align: super">[1]</span></a> redefines the future of government interception of electronic communication. Left unchanged, it will have far-reaching consequences for the security and privacy of Australians. The legislation is both long and complicated; it raises a number of questions and concerns, which so far have not been adequately addressed. The following is a look at the legislation from the perspective of a techie; I am not a lawyer. My analysis is based on viewing the legislation as a technical document, looking for gaps and inconsistencies, since that is so often where the greatest threat lies. My opinion is that the greatest threat stems not from the compulsory notices, but from the voluntary requests, which have greater scope and less oversight.</p>
<h2 id="overview">Overview</h2>
<p>I’ll warn you in advance that this is a long blog post, primarily because it looks at a long piece of legislation: Schedule 1, which is the subject of this post, is 68 pages, with the corresponding explanatory note a further 53 pages.</p>
<p>At a very high level, the legislation introduces two compulsory notices and one voluntary request. Whilst the compulsory notices have gained the most attention, it is my view that the voluntary assistance requests are where the greatest danger exists. The assistance requests are not constrained by the same limitations as the notices in what they can ask for, nor are they part of the annual reporting. They appear to offer the greatest capability with the least oversight. Continue reading for a more detailed look at Schedule 1.</p>
<h2 id="legislative-objective">Legislative Objective</h2>
<p>The explanatory note <a href="#InterceptBillNote:online"><span style="vertical-align: super">[2]</span></a> released in conjunction with the exposure draft lays out the case for needing the bill, which is largely based on the claim that encryption is preventing effective intelligence gathering. This is demonstrated in statements like:</p>
<blockquote class="legquote">
<p>“…encrypted devices and applications are eroding the ability of our law enforcement and security agencies to access the intelligible data necessary to conduct investigations and gather evidence.”</p>
</blockquote>
<p>The explanatory note claims that the bill aims to provide a way to gain assistance from organisations to access data despite the existence of encryption, whilst claiming not to undermine cyber security. Whilst the first few pages repeatedly mention national security agencies or terrorism, the powers defined in the bill can be applied in far broader contexts; of particular concern is “<em>protecting the public revenue</em>”. That is not to say that public revenue should not be protected, but intercepting private communication is a very serious invasion of privacy and should only be considered in the most serious of criminal and national security cases.</p>
<p>Not only is its application to the protection of public revenue concerning, but the definition of public revenue in the explanatory note is extremely broad:</p>
<blockquote class="legquote">
<p>“The concept of ‘public revenue’ includes State and Territory revenue in addition to Commonwealth revenue. Lawful obligations charged on a regular basis such as taxes, levies, rates and royalties are also included but occasional charges, such as fines, are not. ‘Protecting the public revenue’ also includes the activities of agencies and bodies undertaken to ensure that those lawful obligations are met; for example routine collection, audits, investigatory and debt recovery actions.<br /><br />The term ‘revenue’ is not limited to incoming monies from taxation but could also extend to ‘monies which belong to the Crown, or monies to which the Crown has a right, or monies which are due to the Crown’.”</p>
</blockquote>
<p>If this legislation had existed before the Centrelink debt recovery program would they have deployed interception as a tool in that process? Should that be considered a reasonable and measured response, or would it be an excessive use of power? What protections are there to stop such an abuse of power taking place in the future?</p>
<p>There are undoubtedly times when interception is necessary for national security, and there is always going to need to be a balance between privacy, transparency, and security. However, all three of those concepts are important; it is not that security trumps all other values, and treating it as though it did would lead to the formation of a surveillance state, and ultimately a police state. We do not live in a risk-free world; with freedom comes the risk that some will abuse those very freedoms to cause harm and attack the open and free society we aspire to be. Whilst we should endeavour to prevent such attacks, we cannot sacrifice the very freedoms that are under threat; to do so would hand a victory to the attackers. It would be a perfect example of winning the battle but losing the war. Such approaches are becoming more common across the western world, as a vacuum of good leadership leads to the pursuit of objectives instead of the protection of values.</p>
<h2 id="is-anyone-not-a-communication-provider">Is anyone not a communication provider?</h2>
<p>The legislation is extremely broad in who it applies to. It includes the obvious entities like carriers or carriage service providers (think telcos). But what is more concerning is the broader categories that are included; for example, it covers a person that <em>“… provides an electronic service that has one or more end users in Australia”</em>, which appears to cover every website that is accessible from Australia. Furthermore, the legislation also covers an individual if <em>“…the person develops, supplies or updates software used, for use, or likely to be used, in connection with: (a) a listed carriage service; or (b) an electronic service that has one or more end users in Australia”</em>, which appears to cover every piece of software, or mobile app, that connects to the internet or produces content that is going to be used on the internet. That is an incredibly broad category, the justification for which is not clear. It goes further to cover any corporation that creates software that may run on a device that will be connected to a telecommunication network, irrespective of whether the software itself is intended for use over that communication network.</p>
<p>Since the legislation will be applicable to both individuals and companies, there is a need for clarification as to whether an employee could be the subject of a notice or request as a result of their job. For example, could a notice be issued against a software developer in a company, but not the company itself? Where would an employee stand in terms of the secrecy requirements and the immunity clauses? Could a notice be issued against an individual to act against their employer? What safeguards are there for both the company and the individual?</p>
<h2 id="requesting-to-be-noticed">Requesting to be Noticed</h2>
<p>At the heart of the bill are three different types of request/notice, each one subtly different in scope, power, and accountability.</p>
<table class="gtable">
<thead>
<tr>
<th>Type</th>
<th style="text-align: center">Compliance</th>
<th style="text-align: center">Issued By<sup>1</sup></th>
<th style="text-align: center">Can Include New Development</th>
<th style="text-align: center">Included in Annual Audit Disclosure</th>
</tr>
</thead>
<tbody>
<tr>
<td>Technical Assistance Request</td>
<td style="text-align: center">Voluntary</td>
<td style="text-align: center">DGS, DGSIS, DGASD, COIA</td>
<td style="text-align: center">Yes</td>
<td style="text-align: center">No</td>
</tr>
<tr>
<td>Technical Assistance Notice</td>
<td style="text-align: center">Compulsory</td>
<td style="text-align: center">DGS, COIA</td>
<td style="text-align: center">No</td>
<td style="text-align: center">Yes</td>
</tr>
<tr>
<td>Technical Capability Notice</td>
<td style="text-align: center">Compulsory</td>
<td style="text-align: center">AG</td>
<td style="text-align: center">Yes, but not removing Encryption</td>
<td style="text-align: center">Yes</td>
</tr>
</tbody>
</table>
<p class="tablefooter"><sup>1</sup>The Director General of Security (DGS),
The Director General of the Australian Secret Intelligence Service (DGSIS),
The Director General of the Australian Signals Directorate (DGASD),
The chief officer of an interception agency (COIA),
Attorney General (AG)</p>
<h3 id="technical-assistance-notice">Technical Assistance Notice</h3>
<p>These appear to be intended to gain access to existing capabilities that a communication provider may have. If new developments are required, they should be requested using a Technical Capability Notice. A Technical Assistance Notice is a compulsory notice that must be complied with. There are limits on what is considered reasonable, but it is not clear how feasible it will be for a recipient to challenge the determination of reasonableness.</p>
<h3 id="technical-capability-notice">Technical Capability Notice</h3>
<p>A Technical Capability Notice is primarily intended to require the building of a new capability in order to subsequently meet a Technical Assistance Notice. They can request the building of any capability listed in the legislation, with the exception of <em>317E(1)(a)</em>:</p>
<blockquote class="legquote">
<p>“(a) removing one or more forms of electronic protection that are or were applied by, or on behalf of, the provider;”</p>
</blockquote>
<p>The explanatory note, and even the legislation, draw particular attention to this apparent limitation.</p>
<h3 id="technical-assistance-request">Technical Assistance Request</h3>
<p>Technical Assistance Requests are described as voluntary requests, as such, there is no criminal or civil penalty for not complying with them, although they are covered by the same secrecy provisions. It is my view that these are the real objective of the legislation, not the compulsory notices. The requests are defined differently to both of the notices, and have few, if any, limitations on what they can request. Furthermore, they are excluded from essential oversight, by virtue of not being included in the annual report issued by the minister (see <em>317ZS</em>).</p>
<p>Of greatest concern is that the constraints in <em>Division 7 Limitations</em> do not apply to Technical Assistance Requests. As such, there is no restriction on a Technical Assistance Request asking for the implementation of a Systemic Weakness. Likewise, unlike Technical Capability Notices, there is no restriction on requesting the development of new capabilities to remove electronic protection (317E(1)(a)). The explanatory note states:</p>
<blockquote class="legquote">
<p>“A technical assistance request can ask a provider do a thing currently within their capacity or request that they build a new capability to assist agencies.”</p>
</blockquote>
<p>As such, Technical Assistance Requests are permitted to request more than both the notices combined, they are largely unbounded, being able to request assistance beyond that which is listed in <em>317E(1)</em> <em>“…provided that the assistance is of the same kind, class or nature as those listed.”</em></p>
<h2 id="hiding-in-the-shadows">Hiding in the Shadows</h2>
<p>In general, the contents of requests and notices, and even the receipt of such requests and notices, cannot be publicly disclosed. In regard to Technical Capability Notices this is of particular concern. Capability notices are not operational notices; they are not applied in the context of an active threat, they are about building capability. Any time the state builds secret capabilities is a cause for concern. In the past such capabilities were generally outwardly focussed, i.e. defence capabilities that were never intended to be targeted at the population itself. This case is different: the state is building secret capabilities that are specifically targeted at Australians. That sets a dangerous precedent, potentially shifting power and sovereignty away from the population. Whilst there is justification for not revealing active operations, keeping capabilities secret prevents public oversight and is likely to lead to abuse. Governments need to remember they are the representatives of the people, not their rulers.</p>
<p>Whilst the details of notices and requests are protected from disclosure, there is permission to disclose high-level statistics. <em>317ZF(13)</em> permits the communication provider to disclose:</p>
<blockquote class="legquote">
<p>(e) the total number of technical assistance notices given to the provider during a period of at least 6 months; or<br />
(f) the total number of technical capability notices given to the provider during a period of at least 6 months; or<br />
(g) the total number of technical assistance requests given to the provider during a period of at least 6 months.<br />
Note: This subsection authorises the disclosure of aggregate statistical information. That information cannot be broken down:<br />
<span>(a) by agency; or</span>
<span>(b) in any other way.</span></p>
</blockquote>
<p>Whilst some disclosure is good, it would be better if a communication provider was mandated to report such statistics, particularly in regard to Technical Assistance Requests. Declaring such information is in the best interests of transparency and the public, but is probably not in the best interests of the communication provider. Revealing voluntary assistance could have a negative impact on the public image of the provider. As such, without a mandatory disclosure requirement, the scale of the application of Technical Assistance Requests could disappear completely from public oversight. It is also worth noting that there is no ability to distinguish between technical assistance requests that were complied with and those that were refused. As such, excessive assistance requests could be issued to a provider in order to harm their public image should they wish to declare the totals.</p>
<p>I’m loath to point out that there also appears to be a flaw in the attempt to hide when notices were issued by requiring aggregate results to be declared for <em>“…a period of at least 6 months;”</em>. An organisation could seemingly comply with that requirement by publishing a rolling 6-month total on its website. By differencing the totals between one day and the next, an observer could likely recover the number of notices issued either on the most recent day or on the day that dropped out of the window 6 months earlier.</p>
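This differencing leak can be sketched in a few lines of Python. All the notice counts below are invented for illustration, as is the 180-day window; the point is only that consecutive rolling totals differ by exactly the count entering the window minus the count leaving it.

```python
# Toy illustration: publishing a rolling 6-month total every day leaks
# per-day information through simple differencing.
WINDOW = 180  # days in the rolling window (illustrative)

daily = [0] * 400   # hypothetical per-day notice counts
daily[10] = 2       # two notices issued on day 10
daily[200] = 1      # one notice issued on day 200

def rolling_total(day):
    """Total notices over the most recent WINDOW days ending at `day`."""
    return sum(daily[max(0, day - WINDOW + 1): day + 1])

# An observer records the published total each day and differences them.
published = [rolling_total(d) for d in range(WINDOW, 400)]
deltas = [b - a for a, b in zip(published, published[1:])]

# Each delta = (notices issued on the day entering the window)
#            - (notices issued on the day leaving the window).
# Here a +1 spike marks day 200 entering the window, a -2 dip marks
# day 10 ageing out, and a later -1 dip marks day 200 ageing out.
spikes = {i: d for i, d in enumerate(deltas) if d != 0}
```

Only simultaneous entry and exit of equal counts would mask a day, so the rolling-total format hides very little in practice.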
<h2 id="careful-what-you-say">Careful what you say</h2>
<p><em>317ZA</em> deals with compliance of notices but raises a number of questions. <em>317ZA</em> is specific to <em>“…carriers and carriage service providers”</em>, and their compliance with requirements contained within capability and assistance notices. However, <em>317ZA(2)</em> seems to be applicable to all persons, not just carriers and carriage service providers. It states</p>
<blockquote class="legquote">
<p>A person must not:<br />
<span>(a) aid, abet, counsel or procure a contravention of subsection(1); or</span>
<span>(b) induce, whether by threats or promises or otherwise, a contravention of subsection(1); or</span>
<span>(c) be in any way, directly or indirectly, knowingly concerned in, or party to, a contravention of subsection(1); or</span>
<span>(d) conspire with others to effect a contravention of subsection(1).</span></p>
</blockquote>
<p>On the face of it the above might seem reasonable, however, recall that a person cannot possibly know that a capability or assistance notice has been issued to a carriage service provider. This presents a particular issue to part (a), since there is no requirement to <em>knowingly</em> aid, abet, or counsel. As such, could part (a) be applied to teaching? What happens if a university subject teaches students how to implement secure end-to-end encryption, which in doing so aids an individual to cause the contravention of subsection(1)?</p>
<p>This part of the legislation needs far more clarity and precision; currently it is far too broad and risks impinging on good security practice through fear of contravening an unknowable notice. The explanatory note fails to provide any clarity, being even briefer than the legislation itself, stating just:</p>
<blockquote class="legquote">
<p>“Persons are prohibited from aiding, abetting, inducing or conspiring to affect a contravention of subsection 317ZA(1).”</p>
</blockquote>
<h2 id="systemic-weaknesses">Systemic Weaknesses</h2>
<p>The legislation and explanatory note make a great deal of the issue of Systemic Weaknesses, seemingly in an attempt to comply with the claim of not mandating backdoors. However, the term is not defined anywhere. Furthermore, what is described remains a backdoor, albeit a keyed one. A backdoor does not need to be universally exploitable to qualify as a backdoor; it merely needs to provide an alternative entry point into the target system or protocol. The only concession appears to be the realisation that the laws of mathematics do in fact apply in Australia, and that the backdoor therefore needs to be relocated elsewhere. That is not really an improvement; it is just a technicality.</p>
<p>The description of a Systemic Weakness seems somewhat contradictory. At one point the explanatory note states:</p>
<blockquote class="legquote">
<p>“For the avoidance of doubt, this includes a prohibition on building a new decryption capability or actions that would render systemic methods of authentication or encryption less effective. The reference to systemic methods of authentication or encryption does not apply to actions that weaken methods of encryption or authentication on a particular device/s. “</p>
</blockquote>
<p>later it states:</p>
<blockquote class="legquote">
<p>“Notices may still require a provider to enable access to a particular service, particular device or particular item of software, which would not systemically weaken these products across the market. For example, if an agency were undertaking an investigation into an act of terrorism and a provider was capable of removing encryption from the device of a terrorism suspect without weakening other devices in the market then the provider could be compelled under a technical assistance notice to provide help to the agency by removing the electronic protection</p>
</blockquote>
<p>From a security perspective, the very fact that the provider has the capability to remove encryption from a device is a systemic weakness. It is not the application of the weakness that should be evaluated, but its existence and the capability to exploit it.</p>
<h3 id="keyed-backdoors">Keyed Backdoors</h3>
<p>The use of keyed backdoors is not new; that is exactly what the NSA attempted to create through the <a href="https://en.wikipedia.org/wiki/Dual_EC_DRBG">Dual Elliptic Curve Deterministic Random Bit Generator (Dual-EC DRBG)</a>. It was approved as part of a NIST suite of cryptographically secure pseudorandom number generators. However, it contained a keyed backdoor which would potentially have permitted the NSA to recover the randomness it produced. Why is that important? Because cryptography is particularly reliant on randomness for its protection: if the generation of randomness is predictable, or recoverable, it leads to key recovery and ultimately message recovery, rendering an otherwise secure cryptographic protocol broken. It is one of the most effective and powerful cryptographic backdoors, because it is difficult to detect, it is applicable to current and future protocols, and it can only be accessed by those with the secret key. Such approaches are not just the preserve of the NSA; an alternative keyed attack was found in <a href="https://eprint.iacr.org/2016/376.pdf">Juniper Networks switches</a>.</p>
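The shape of such a backdoor can be shown with a heavily simplified toy analogue, using modular exponentiation in place of the elliptic-curve points of the real Dual-EC construction. Every constant here is invented for illustration, and real Dual-EC differs in important details (curve arithmetic, output truncation); the only claim is the structural one: two public constants with a secret relationship let the key-holder turn one output into the next internal state.

```python
# Toy Dual-EC-style keyed backdoor (illustrative constants only).
p = 2**61 - 1          # a Mersenne prime, standing in for the group order
Q = 5                  # public constant
d = 123456789          # the designer's SECRET backdoor key
P = pow(Q, d, p)       # public constant; secretly P = Q^d mod p

def prng_step(state):
    """One generator step: returns (visible output, next internal state)."""
    output = pow(Q, state, p)      # what users of the generator see
    next_state = pow(P, state, p)  # kept internal
    return output, next_state

def backdoor_next_state(output):
    """Anyone holding d converts one output into the NEXT state:
    output^d = Q^(state*d) = (Q^d)^state = P^state = next_state (mod p)."""
    return pow(output, d, p)

state = 424242
out1, state2 = prng_step(state)
out2, _ = prng_step(state2)

# The key-holder recovers state2 from out1 alone, then predicts out2.
recovered = backdoor_next_state(out1)
predicted, _ = prng_step(recovered)
```

Without `d`, going from `out1` to `state2` requires solving a discrete logarithm, which is exactly why the backdoor is usable only by its designer and so hard to detect from the outside.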
<p>Would such an approach constitute a <em>Systemic Weakness</em>? These are not theoretical attacks; they are practical and have been used in the wild. The explanatory note should clarify where such techniques would fall within the legislation.</p>
<h3 id="backdooring-devices-or-systems">Backdooring devices or systems</h3>
<p>Outside of attacking the crypto, the obvious avenue for attack, and the one described in the explanatory note, is attacking the data before it is encrypted.</p>
<blockquote class="legquote">
<p>“…a notice may require a provider to facilitate access to information prior to or after an encryption method is employed, as this does not weaken the encryption itself. A requirement to disclose an existing vulnerability is also not prohibited.”</p>
</blockquote>
<p>This is where the future threat could ultimately lie. If I were asked what I would target to backdoor a device, I would say the keyboard interface. It is currently the weakest point on most devices, be they laptops, tablets, or mobile phones. The keyboard device, or its software equivalent, captures everything entered before it is encrypted. It sees every username, password, and message entered. It would be an easy and low-cost point to attack, and would be difficult for most users to detect.</p>
<h2 id="the-future">The Future</h2>
<p>Legislation does not get written by accident; everything is written for a reason, with a specific objective. That objective may not be made clear in the legislation, or possibly not even publicly, and it is for this reason that we must look not just at what is written in the legislation and explanatory note, but also at what is not written. This is even more pressing for legislation whose outcomes will be hidden from public view. Badly written legislation is never a good thing, but abuse of badly written legislation becomes self-evident through its application in the public judicial system. Legislation that will be enforced in secret is rare; it is even rarer for such legislation to be applied to the population itself. As such, the only real chance of oversight we have of this legislation is before it is enacted; after that it will be shrouded in so much secrecy that it will be almost impossible to challenge or oversee. It is essential that the public, advocacy groups, and businesses make their thoughts known during the <a href="https://www.homeaffairs.gov.au/about/consultations/assistance-and-access-bill-2018">feedback period</a>: this might be your only chance!</p>
<p>In my opinion, the Technical Assistance Requests are the real objective of the legislation. They are incredibly broad, and their exclusion from what little oversight exists in the legislation is particularly concerning. It is my opinion that the Technical Capability Notices are there as a distraction, which will draw the focus of challenges but will ultimately be sacrificed or compromised by the government, leaving the Technical Assistance Requests largely untouched. The argument that the Technical Assistance Requests are voluntary is highly debatable: the government wields enormous amounts of soft power, and the suggestion that companies, whose revenue could be impacted by the exercise of that soft power, are going to sacrifice profit for their users is fanciful. Not only are those companies some of the most prolific players in surveillance capitalism, but they had no qualms about sharing their user information with <a href="https://www.theguardian.com/world/2013/jun/06/us-tech-giants-nsa-data">intelligence agencies in the past</a>. They only claimed to be champions of user privacy when it became public how conclusively they had betrayed those very same users. I view that as little more than superficial repentance in a cynical attempt to save their brands. I have no doubt those same organisations would have no issue in voluntarily complying with interception requests if they could guarantee the public would never find out, which is exactly what this legislation provides.</p>
<p>How could the legislation be improved (apart from being shredded):</p>
<ul>
<li>Technical Assistance Requests should be covered by the same limitations as described in <em>Division 7</em>, namely they should not be able to request systemic weaknesses, nor develop new techniques for removing electronic protection</li>
<li>Technical Assistance Requests should be included in the annual report</li>
<li>Recipients of notices or requests should be mandated to provide transparency reports, including all requests and notices</li>
<li>Clarity must be provided on aiding and abetting contravention of notices issued against carriers</li>
<li>The scope of the legislation should be further restricted to only the most serious of crimes or threats to national security</li>
<li>The application of secrecy provisions to capabilities should be reviewed</li>
<li>A more precise definition and description of Systemic Weakness is also required</li>
</ul>
<h4 id="references">References</h4>
<ul class="bibliography"><li><span id="InterceptBill:online">[1] A. Government, “Telecommunications and Other Legislation Amendment (Assistance and Access) Bill 2018.” <a target="_blank" href="https://www.homeaffairs.gov.au/consultations/Documents/the-assistance-access-bill-2018.pdf">https://www.homeaffairs.gov.au/consultations/Documents/the-assistance-access-bill-2018.pdf</a>, Aug-2018.</span></li>
<li><span id="InterceptBillNote:online">[2] A. Government, “Assistance and Access Bill 2018 Explanatory Document.” <a target="_blank" href="https://www.homeaffairs.gov.au/consultations/Documents/explanatory-document.pdf">https://www.homeaffairs.gov.au/consultations/Documents/explanatory-document.pdf</a>, Aug-2018.</span></li></ul>
<p><a href="https://stateofit.com/interception/">Assistance and Access Bill 2018</a> was originally published by Chris Culnane at <a href="https://stateofit.com">State of IT</a> on August 30, 2018.</p>
<h1 id="we-need-to-talk-about-your-data">We need to talk about your data</h1>
<p>The recent news of Cambridge Analytica’s<a href="#ca_files:online"><span style="vertical-align: super">[1]</span></a> alleged usage of Facebook data should act as a wake-up call to us all. The amount of data in question may seem large, and the number of individuals certainly does, but taken in the context of wider data collection it is just the tip of the iceberg. We are leaving ever more detailed digital footprints, and it is not just the data we choose to share: every aspect of our digital lives is monitored, recorded, and profiled in excruciating detail, from the websites we visit, to what we buy, the music we listen to, through to the people we know. It is all used to build an in-depth profile of who we are and what we can be influenced by.</p>
<p>Sadly there is nothing new about this level of profiling; it has been going on for years. What has changed, however, is the context in which it is being deployed. Previously it was the preserve of advertising, trying to persuade us to spend our money on something we didn’t know we wanted or needed. What Cambridge Analytica have done is weaponise those same techniques to influence behaviour, and in particular voting.</p>
<p>The scale and depth of data collection continues to grow, as more and more devices pervade every aspect of our lives, from fitness devices through to our phones, and digital home assistants. They are all data collection devices, adding more, and finer grained, data to profiles held by commercial organisations. Such organisations are at pains to be perceived as caring about the privacy of their users and protecting their data. However, those same organisations have business models that are based on selling access to their users. The more accurate the profiles are, the more valuable the advertising is, and the more profit the organisation makes. We cannot be surprised that profit driven businesses look to exploit the only asset they have, which is our data.</p>
<p>Since this is not news to many people, why do we all still engage with the services or organisations that are profiling us? If we look at how these businesses start they are somewhat akin to a digital drug pusher. They provide their product for free, despite the high financial costs, with the hope of getting sufficient users on board to become fashionable, and ultimately build a dependence. If the service can become a de-facto provider, whether it be a social network, video service, or communication platform, once a critical mass of users start using the service not only do they become dependent on it, but those who try to remain outside of it face potential real world social exclusion.</p>
<p>In essence the users become addicted to the service, and will tolerate ever more invasive profiling and ever decreasing privacy. Such services frequently overhaul their privacy settings in the guise of making things better for the user, but in fact force the user to incur a time cost should they wish to maintain their current level of privacy. The longer a user is on the platform, the harder it is for them to leave, as their dependence on the service grows. An individual user is almost powerless against such a powerful organisation. Where such a power imbalance exists, it is incumbent on the government to legislate and regulate to protect consumers; sadly, that has not happened.</p>
<p>Even if we were able to wean ourselves off our dependence on free services, in which we are the real product, the problem of data collection would not be resolved. The meta-data collected just through our online existence and usage of modern technology would still present a significant risk. The risk is not purely commercial either, governments are increasingly collecting and linking vast datasets, often of data that has been collected via compulsory instruments. That is without even considering the growing quantities of surveillance data being collected by security services.</p>
<p>We need to completely re-evaluate how we view and treat our data. All too often we are persuaded to judge whether data should be collected based on the intent of the collecting organisation, when we should judge it based on the data’s potential usage, both good and bad. To do otherwise places complete trust in the collecting organisation to perpetually look after your data, to never sell it to anyone you do not want it sold to, and to never use it in a way you do not want, and may never even have imagined. There is no organisation or government that has earned, or deserves, that level of trust.</p>
<p>Data about us should not be a commodity, it is our digital shadow, an electronic manifestation of us, so intrinsically linked with us that trading in it should be as prohibited as the trading in our physical manifestations is. It should not be up to a commercial organisation, or a government, to decide how to use our data, or who should have access to it, that is a personal decision for each of us, and one that is dynamic, and must be revocable.</p>
<p>Much like a new digital service, the protection of privacy requires a critical mass of supporters, without it, there is insufficient pressure on governments to enact change, particularly given their own increasing dependence on data collection. If there is one good thing that can come out of this whole affair it is the raising of awareness of the problem, and that’s why, we need to talk about your data.</p>
<h4 id="references">References</h4>
<ul class="bibliography"><li><span id="ca_files:online">[1] The Guardian, “The Cambridge Analytic Files.” <a target="_blank" href="https://www.theguardian.com/news/series/cambridge-analytica-files">https://www.theguardian.com/news/series/cambridge-analytica-files</a>, Mar-2018.</span></li></ul>
<p><a href="https://stateofit.com/data-privacy/">We need to talk about your data</a> was originally published by Chris Culnane at <a href="https://stateofit.com">State of IT</a> on March 21, 2018.</p>
<p>The public are largely unaware of the losing battle security researchers have been fighting against western governments for the last 30 years. Ever since the creation of modern cryptography, western governments have sought to undermine and outlaw its use and distribution. The tactics have ranged from subverting standards in order to require short key sizes (GSM<a href="#SourcesW96:online"><span style="vertical-align: super">[1]</span></a>, DES<a href="#KeepingS52:online"><span style="vertical-align: super">[2]</span></a>), to banning the export and publication of cryptographic algorithms (PGP <a href="#httpswww84:online"><span style="vertical-align: super">[3]</span></a>), and more recently creating back-doored cryptographic components (DRBG<a href="#NewYorkT82:online"><span style="vertical-align: super">[4]</span></a>).</p>
<p>All of these actions have had a detrimental impact on the security of individuals. For example, the Logjam attack has its roots in downgrading the security of an SSL connection to export-grade cryptography; browsers had to support this weak cryptography because it was the only security that could be freely exported. The more recent WannaCry attack was based on a vulnerability discovered and hoarded by the NSA for its own use, which was leaked as a result of a breach of the NSA<a href="#Wannacry37:online"><span style="vertical-align: super">[5]</span></a>.</p>
<p>Governments have become accustomed to being able to invade the privacy of their populations. Cryptography provides an essential check on this power: it empowers the individual to protect the contents of their communications. That is not to say it provides absolute security. In 2001 Australia passed the Cybercrime Act, giving the government the power to compel individuals to disclose their decryption keys, with failure to do so punishable by a prison sentence. A similar law was passed in the UK in the form of the Regulation of Investigatory Powers Act 2000. Given that the government already has this power, why is it requesting further powers to intercept encrypted communication<a href="#Ozgovern14:online"><span style="vertical-align: super">[6]</span></a>?</p>
<p>One possible explanation is that the government is not satisfied with the open nature of key disclosure, and wishes to be able to covertly invade privacy. Such power is deeply troubling, and liable to abuse. We have already seen examples of abuse of the power to access meta-data<a href="#Australi52:online"><span style="vertical-align: super">[7]</span></a>, and the Snowden revelations revealed the extent to which western intelligence agencies had abused their powers. Those same revelations also gave an insight into the capability of western intelligence agencies to compromise everything from mobile phones through to smart TVs<a href="#Spiesdos12:online"><span style="vertical-align: super">[8]</span></a>. Their ability to undertake such actions fundamentally undermines the government’s argument that terrorist communications are inaccessible to them: those agencies are more than capable of compromising target devices, and in doing so obtaining the encryption keys and messages sent from those devices. However, such actions have onerous warrant requirements and are costly and time-consuming to deploy. That is not unintentional; the law was written to ensure that invading the privacy of an individual was neither an easy nor a quick task, and that it would only be done where absolutely justified, not on a whim. The proposed changes are an attempt to dismantle these checks and balances, to make the invasion of privacy both quick and easy.</p>
<h2 id="end-to-end-encryption">End-to-End encryption</h2>
<p>In recent years the use of end-to-end encryption has become commonplace. End-to-end encryption is neither new nor anything special in cryptographic terms. PGP (Pretty Good Privacy), over which one of the most significant crypto-wars battles was fought, is a tool that provides end-to-end encryption over email; it has been around since the early 90’s and is still popular today. The challenge for end-to-end encryption is not a mathematical one, but the efficient and secure distribution of keys. Closed systems such as WhatsApp or Signal make key distribution easier, since everyone is using the same protocol and app to communicate. In more open settings the distribution of keys can be prohibitively difficult, resulting in the use of a simpler approach, whereby the communication between each client and the server is encrypted, but not between the end points themselves.</p>
<p>This is simpler and more flexible to implement because each client device need only share a single key with the server. However, crucially it requires total trust in the server to protect the privacy of the message.</p>
<p>Unfortunately, trust in those servers proved to be misplaced when it became apparent that intelligence agencies had compelled many operators to provide access to the plaintext messages, often in a way in which the operator was not even allowed to disclose that messages were being intercepted. When the extent of this interception was revealed by Snowden, there was a significant push back from the public; suddenly privacy and trust became major differentiators for end-users. The solution was for the operators to deploy end-to-end encryption. In such setups the message content is encrypted between the two end points, i.e. between the sender and the receiver. The server still processes the message, but cannot read its contents. This allowed operators to regain the trust of end-users, since they could no longer be compelled to breach their users’ privacy: they themselves did not have access to the messages.</p>
<p>Intelligence agencies could still compel operators to reveal who is communicating with whom, but could not gain access to messages without resorting to the existing key disclosure legislation.</p>
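The difference between the two models can be sketched with a toy example. A one-time XOR pad stands in for a real cipher such as AES-GCM; the message and keys are invented, and this is illustrative only, not production cryptography.

```python
import secrets

def xor(data, key):
    """Toy cipher: XOR each byte with a same-length random pad."""
    return bytes(a ^ b for a, b in zip(data, key))

msg = b"meet at noon"

# --- Client-server model: the server holds the key -------------------
server_key = secrets.token_bytes(len(msg))  # shared with the server
ct_cs = xor(msg, server_key)
# The server can decrypt, so it can be compelled to hand over plaintext.
assert xor(ct_cs, server_key) == msg

# --- End-to-end model: only the two end points share the key ---------
e2e_key = secrets.token_bytes(len(msg))     # sender and receiver only
ct_e2e = xor(msg, e2e_key)
# The server relays ct_e2e but holds no key: it can still reveal *who*
# talked to whom (metadata), but not *what* was said.
assert xor(ct_e2e, e2e_key) == msg
```

The structural point is simply where the key lives: move it off the server and onto the end points, and the operator has nothing useful to hand over beyond metadata.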
<h2 id="when-a-backdoor-is-not-a-backdoor">When a backdoor is not a backdoor</h2>
<p>The government has been at pains to stress that their recent proposal does not advocate backdoors. So what is a backdoor? In essence a backdoor is a hidden way to gain unauthorised access to a system or encryption scheme. They are particularly controversial because any method that allows unauthorised access could be exploited by criminals as well as authorised government agencies. Furthermore, there is no way to know, or have oversight of, the extent to which the backdoor is being used.</p>
<h2 id="what-the-government-is-asking-for">What the government is asking for</h2>
<p>The government is asking for something analogous to the technical capability notices proposed by the UK government. These notices are not warrants to intercept; instead, they require a service provider to maintain the technical capability to comply with any subsequently issued interception warrant. The problem is that a technical capability notice could require a service provider to maintain the capability to remove any electronic protection of messages. Compliance with this requirement would preclude the use of end-to-end encryption. When debated in the House of Lords this very issue was raised, without a satisfactory response being given by the UK government. Comments made to the media suggest that a key target of these capability notices is providers of communication services that utilise end-to-end encryption.</p>
<p>The most likely consequence of these capability notices is that end-to-end encryption will effectively be banned from use by service providers, forcing them to revert to older and more insecure client-server models. It becomes a matter of semantics as to whether banning the use of end-to-end encryption is equivalent to requiring a backdoor. The end result is the same: private communication becomes more susceptible to both government and criminal interception.</p>
<h2 id="would-banning-end-to-end-encryption-solve-the-problem">Would banning end-to-end encryption solve the problem?</h2>
<p>The banning of end-to-end encryption will primarily impact law-abiding citizens, who are the very people intelligence agencies are not supposed to be intercepting. Cryptography, and the mathematics it relies on, are common knowledge; it is too late to close Pandora’s box. Criminals and terrorists could simply switch to alternative communication channels, or apply their own encryption on top of the service provider: for example, encrypting the message in a separate app, sending it through the client-server architecture, and having the receiver decrypt the received, encrypted message in the same separate app. Whilst such a process would be possible for the public, it would be hugely inconvenient and require considerable effort to set up. For a terrorist it would be a minor inconvenience.</p>
<h2 id="consequence-of-the-ban">Consequence of the ban</h2>
<p>There are two likely consequences to these changes. Firstly, terrorists will modify their operating procedures to counter the new interception capabilities; it will be a minor inconvenience, but will have no lasting impact. Secondly, and more concerning, the general public will be exposed to easier and greater interception. They will be more vulnerable to breaches of service providers by cyber criminals, and overall they will have exchanged a portion of their privacy for almost nothing in return.</p>
<p>The fact that such a damaging policy could even be suggested should concern us all. More worryingly, it is justified on the grounds that giving up our liberties and privacy is a reasonable exchange for security. There are two critical flaws in this argument. Firstly, it incorrectly assumes we are gaining protection, when in fact we are not. Secondly, and more importantly, the very thing we are fighting to protect from terrorism is our way of life: our freedoms, our values, including our privacy. To even suggest we should sacrifice, to any degree, the very thing we are fighting for is abhorrent.</p>
<h4 id="references">References</h4>
<ul class="bibliography"><li><span id="SourcesW96:online">[1] A. Færaas, “Sources: We were pressured to weaken the mobile security in the 80’s - Aftenposten.” <a target="_blank" href="http://www.aftenposten.no/verden/Sources-We-were-pressured-to-weaken-the-mobile-security-in-the-80s-98459b.html#.UtBeNpD_sQs">http://www.aftenposten.no/verden/Sources-We-were-pressured-to-weaken-the-mobile-security-in-the-80s-98459b.html#.UtBeNpD_sQs</a>, Jan-2014.</span></li>
<li><span id="KeepingS52:online">[2] H. Corrigan-Gibbs, “Keeping Secrets – Stanford Magazine – Medium.” <a target="_blank" href="https://medium.com/@stanfordmag/keeping-secrets-84a7697bf89f">https://medium.com/@stanfordmag/keeping-secrets-84a7697bf89f</a>, Nov-2014.</span></li>
<li><span id="httpswww84:online">[3] P. Zimmermann, “Testimony of Philip R. Zimmermann to the Subcommittee on Science, Technology, and Space of the US Senate Committee on Commerce, Science, and Transportation.” <a target="_blank" href="https://www.philzimmermann.com/EN/essays/Testimony.html">https://www.philzimmermann.com/EN/essays/Testimony.html</a>, Jun-1996.</span></li>
<li><span id="NewYorkT82:online">[4] M. Geuss, “New York Times provides new details about NSA backdoor in crypto spec | Ars Technica.” <a target="_blank" href="https://arstechnica.com/security/2013/09/new-york-times-provides-new-details-about-nsa-backdoor-in-crypto-spec/">https://arstechnica.com/security/2013/09/new-york-times-provides-new-details-about-nsa-backdoor-in-crypto-spec/</a>, Nov-2013.</span></li>
<li><span id="Wannacry37:online">[5] I. Thomson, “Wannacry: Everything you still need to know because there were so many unanswered Qs • The Register.” <a target="_blank" href="https://www.theregister.co.uk/2017/05/20/wannacry_windows_xp/">https://www.theregister.co.uk/2017/05/20/wannacry_windows_xp/</a>, May-2017.</span></li>
<li><span id="Ozgovern14:online">[6] R. Chirgwin, “Oz government says UK’s backdoor will be its not-a-backdoor model • The Register.” <a target="_blank" href="https://www.theregister.co.uk/2017/06/12/australia_considers_copying_uk_investigatory_powers_act/">https://www.theregister.co.uk/2017/06/12/australia_considers_copying_uk_investigatory_powers_act/</a>, Jun-2017.</span></li>
<li><span id="Australi52:online">[7] S. Sharwood, “Australian Federal Police accessed metadata without warrant, broke law • The Register.” <a target="_blank" href="https://www.theregister.co.uk/2017/04/28/australian_federal_police_breached_metadata_law/">https://www.theregister.co.uk/2017/04/28/australian_federal_police_breached_metadata_law/</a>, Apr-2017.</span></li>
<li><span id="Spiesdos12:online">[8] J. Leyden, “Spies do spying, part 97: Shock horror as CIA turn phones, TVs, computers into surveillance bugs • The Register.” <a target="_blank" href="https://www.theregister.co.uk/2017/03/07/wikileaks_cia_cyber_spying_dump/">https://www.theregister.co.uk/2017/03/07/wikileaks_cia_cyber_spying_dump/</a>, Mar-2017.</span></li></ul>
<p><a href="https://stateofit.com/crypto-wars/">Crypto Wars - A hidden war fought for nearly 30 years</a> was originally published by Chris Culnane at <a href="https://stateofit.com">State of IT</a> on June 14, 2017.</p>
<h1 id="threshold-crypto-library">Threshold Crypto Library</h1>
<p>Back in 2013/2014 I was working on the <a href="https://www.computer.org/cms/Computer.org/ComputingNow/issues/2016/09/msp2016040064.pdf">vVote Verifiable Voting System</a>, which involved implementing a number of threshold cryptographic protocols. At the time there was very little by way of examples or frameworks with which to learn or play with threshold crypto. Recently I had some time available to take what I had learnt over those years, and since, and put together a library of threshold cryptographic protocols. The library is <a href="https://bitbucket.org/chrisculnane/thresholdcrypto">open source</a> and written in Java. It is not intended as a library for commercial use; it is more something for those interested in threshold cryptography, and fellow academics, to play around with.</p>
<p>It includes an abstract communication framework that allows a diverse range of distributed communication protocols to be run. It is still very much a work in progress, and any contributions or feedback are welcome. I should also note that the entire vVote system is also <a href="https://bitbucket.org/vvote/">open source</a>.</p>
<h2 id="protocols-implemented">Protocols Implemented</h2>
<p>A number of different protocols have been implemented, some with caveats around their real world use. The <a href="https://www.sharelatex.com/project/578db1b9ff27fa4474fd4461">documentation</a> provides an overview of each protocol as well as instructions for running the samples. The currently implemented protocols are as follows:</p>
<ul>
<li>Distributed Coin Toss</li>
<li>Distributed Pedersen Commitment</li>
<li>Distributed Threshold ElGamal Key Generation (Feldman)</li>
<li>Distributed Threshold BLS Key Generation (Feldman)</li>
<li>Threshold ElGamal Decryption</li>
<li>Distributed Threshold ElGamal Plaintext Equivalence Tests</li>
</ul>
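<p>All of these protocols rest on the same (t, n)-threshold idea: a secret is split among n parties so that any t of them can jointly reconstruct or use it, while fewer than t learn nothing. The following is a minimal conceptual sketch of Shamir secret sharing, which underlies the Feldman key generation listed above. It is written in Python for brevity and is not the library’s (Java) API:</p>

```python
import random

# Conceptual sketch of (t, n) Shamir secret sharing over a prime field.
# Illustrative only; the library itself is Java and its APIs differ.
P = 2**127 - 1  # a Mersenne prime, used as the field modulus

def share(secret: int, t: int, n: int, rng=random.SystemRandom()):
    """Split `secret` into n shares; any t of them suffice to reconstruct it."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        # Evaluate the degree-(t-1) polynomial with constant term `secret`.
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(12345, t=3, n=5)
assert reconstruct(shares[:3]) == 12345   # any 3 of the 5 shares work
assert reconstruct(shares[1:4]) == 12345
```

<p>Feldman’s scheme builds on this by having the dealer also publish commitments to the polynomial coefficients, so each party can verify that its share is consistent.</p>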
<h2 id="communication-layer-framework">Communication Layer Framework</h2>
<p>The basic goal of this framework is to provide an abstract communication layer, which can be instantiated with various different underlying communication channels. This allows users of the Communication Layer Framework to be agnostic of the underlying channel, and even to change channels without requiring any modification of their code. For example, during development it could use an in-memory channel, before moving towards a socket based channel.</p>
<p>Additionally, the underlying message structure is also abstracted away. This allows modification of the underlying channel without impacting the higher-level application. For example, the Communication Layer can be switched from using JSON to XML simply by passing a different instantiation of the relevant CommunicationLayerMessage classes.</p>
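<p>A rough sketch of that design (in illustrative Python with invented class names, not the library’s actual Java types): protocol code depends only on an abstract layer, while the channel and the message codec can each be swapped independently:</p>

```python
import json
from abc import ABC, abstractmethod
from queue import Queue

class MessageCodec(ABC):
    """Abstracts the wire format (e.g. JSON vs XML)."""
    @abstractmethod
    def encode(self, msg: dict) -> bytes: ...
    @abstractmethod
    def decode(self, raw: bytes) -> dict: ...

class JsonCodec(MessageCodec):
    def encode(self, msg): return json.dumps(msg).encode()
    def decode(self, raw): return json.loads(raw.decode())

class Channel(ABC):
    """Abstracts the transport (in-memory, sockets, ...)."""
    @abstractmethod
    def send(self, raw: bytes): ...
    @abstractmethod
    def receive(self) -> bytes: ...

class InMemoryChannel(Channel):
    """Development channel; a socket-backed Channel could replace it unchanged."""
    def __init__(self): self._q = Queue()
    def send(self, raw): self._q.put(raw)
    def receive(self): return self._q.get()

class CommunicationLayer:
    """Protocol code talks only to this class, never to a channel or codec."""
    def __init__(self, channel: Channel, codec: MessageCodec):
        self.channel, self.codec = channel, codec
    def send(self, msg: dict): self.channel.send(self.codec.encode(msg))
    def receive(self) -> dict: return self.codec.decode(self.channel.receive())

layer = CommunicationLayer(InMemoryChannel(), JsonCodec())
layer.send({"round": 1, "commitment": "abc"})
assert layer.receive() == {"round": 1, "commitment": "abc"}
```

<p>Swapping InMemoryChannel for a socket-backed Channel, or JsonCodec for an XML codec, would not touch the protocol code at all.</p>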
<p>There is separate <a href="https://www.sharelatex.com/project/578f254c37108bf42e40617d">documentation</a> for the Communication Layer Framework.</p>
<h2 id="contributing">Contributing</h2>
<p>Currently the source code is still structured as an Eclipse project - it is only me working on it at the moment, so that suffices. When I get the time, or if others express an interest in working on it, I will move it to a more collaboration-friendly build system - probably Gradle, although I’d be open to suggestions.</p>
<p><a href="https://stateofit.com/threshold-crypto-library/">Threshold Crypto Library</a> was originally published by Chris Culnane at <a href="https://stateofit.com">State of IT</a> on October 18, 2016.</p>
<h1 id="about-the-blog">About the Blog</h1>
<p>After many years of thinking about writing a blog, I finally decided to do it, primarily because I am increasingly finding that I have something to say about what is going on. The blog will focus on information security, privacy and electronic voting - the areas I currently do research on. There may be the occasional post about life as an academic and travelling, but they will be the exception, not the norm.</p>
<h2 id="blogging-platform">Blogging Platform</h2>
<p>Rather than using WordPress or any other CMS, I have decided to host the blog myself and use <a href="https://jekyllrb.com">Jekyll</a> to create a static website from my posts. Why a static website? Simply because the overhead required to maintain a dynamic site or CMS is too great. Likewise, I have not enabled comments, due to the amount of time it would take to moderate spam and the security vulnerabilities they could introduce.</p>
<p><a href="https://stateofit.com/about-the-blog/">About the Blog</a> was originally published by Chris Culnane at <a href="https://stateofit.com">State of IT</a> on October 18, 2016.</p>