Privacy implications of using digital signatures to enable APIs

The following is part of my forthcoming position paper for the W3C Workshop on Privacy and Advanced APIs. Because my paper focuses on the implementation of geolocation, this section had to be cut. However, I think it is relevant to the discussion about privacy and packaged Web applications, which is why I am publishing it here.

When it comes to privacy, it is obviously insufficient to simply define an API in terms of an Interface Definition Language (IDL), such as WebIDL or OMG IDL, within specifications. IDLs are limited in that they only allow one to express simple inputs, outputs, and data type constraints. Nevertheless, implementations exist that are based on specifications providing only IDL definitions, which are agnostic to privacy. To overcome these limitations, some implementers leverage digital signatures as the means of enabling privacy-sensitive APIs in an application. For example, if application “X” is signed by company “Y”, then application “X” is allowed to access API “Z”.
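The signature-to-API model described above can be sketched as a simple policy lookup. This is a minimal illustration, not any platform's real implementation; the signer names and API names are hypothetical, and it assumes the cryptographic verification of the signature has already happened elsewhere and yielded the signer's identity (or None for an unsigned application).

```python
from typing import Optional, Set

# Hypothetical policy table: which signers may enable which
# privacy-sensitive APIs. Names are illustrative only.
TRUSTED_SIGNERS = {
    "Company Y": {"geolocation", "contacts"},
}

def allowed_apis(signer: Optional[str]) -> Set[str]:
    """Return the privacy-sensitive APIs enabled for an application.

    An unsigned application (signer is None), or one signed by an
    unknown party, gets no privileged APIs -- mirroring how a runtime
    reduces privileges when signature validation fails.
    """
    if signer is None:
        return set()
    return TRUSTED_SIGNERS.get(signer, set())

# "If application X is signed by company Y, allow X to access API Z":
allowed_apis("Company Y")  # signed by a trusted party: APIs enabled
allowed_apis(None)         # unsigned: no privileged APIs
```

Note that the end-user appears nowhere in this decision: the policy is entirely an agreement between the platform and the signer, which is exactly the privacy problem discussed below.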

Java digital signature OCSP validation failure
When OCSP validation of a signature fails, Java treats the application as unsigned and reduces its privileges, meaning some APIs will not be available.

Such an approach to privacy is limited in that it hands control of privacy matters over to a third party (the signer). It implicitly assumes that the end-user trusts the signer, either unquestioningly or via an End User License Agreement (EULA), as the authority to enable an API, without necessarily informing the end-user as to what is going on “under the hood”. Such a model is commonly seen in the Java application space.

Feature Requests

Others have extended the digital-signatures-to-enable-APIs model by having software developers explicitly declare what functionality an application will use (let’s call them “feature requests”). Upon installation, the end-user is presented with a dialog informing them of the capabilities the application will use, and asking if they wish to proceed. An example is Chrome’s browser extensions, seen on the right.

Install LastPass on Google Chrome
Chrome's browser extensions show the capabilities of a packaged application, but lack information about the consequences.
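In Chrome extensions, these feature requests are declared in the extension’s manifest.json file; the installation dialog is generated from the declared permissions. A rough, illustrative fragment (the extension name is made up, and the exact warning strings Chrome derives from each permission may vary):

```json
{
  "name": "Example Extension",
  "version": "1.0",
  "permissions": [
    "history",
    "http://*/*"
  ]
}
```

Here the "history" permission is what surfaces as “can access your browsing history”, and the broad host pattern is what surfaces as “access your data on all websites” — the manifest names capabilities, not consequences.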

From a privacy perspective, this model is significantly better than simply enabling APIs based on digital signatures. However, it is also problematic in that it often does not provide any meaningful information about, for instance, what “can access your browsing history” coupled with “access your data on all websites” actually means. It can be argued that this model unfairly puts the consequences of consent on the end-user, by entering them into an agreement with an application without recourse (i.e., “Yes website/application X, you can access my history data even though I don’t know what you will do with it.”).

W3C Workshop on Privacy and Advanced Web APIs

The W3C is hosting a workshop on Privacy for Advanced Web APIs and is currently calling for position papers. Although I don’t know what an “advanced Web API” is, it’s great to see members of the W3C taking an active interest in privacy, and emphasizing that the architectural design of APIs has a fundamental role to play in protecting the privacy of individuals. The W3C has opened the workshop up to the public: it would be great to get a diverse range of people together, particularly from the academic community, to discuss the role and limits of APIs in protecting privacy. I don’t know how much marketing drive there was behind the workshop, but hopefully a variety of people will submit papers.

Papers are due on the 7th of June. I’m currently putting a position paper together, which I will post here once I’m done.