Blog has moved (with new posts)

I have been posting on the new site https://www.gdpr360.com/blog.

Recent blogs: separate WP29 submissions on DPOs (constructive criticism) and portability (explaining why it’s easy despite what lawyers think), and damages case law (awards for distress up 50-fold in three years in two of the UK jurisdictions). More soon on implementing the GDPR, given that yesterday was the exact half-way point.

Other material is posted on LinkedIn at https://www.linkedin.com/in/sritchieprivacylawprivacyit/

This blog’s posts will remain but the blog will be discontinued.

Business, Data Privacy, GDPR

“Circle the Wagons, they’re coming for the Information Tribunal”

I refer to the latest excellent post on the Panopticon blog http://www.panopticonblog.com/2015/07/24/circle-the-wagons-they-are-coming-for-the-information-tribunal/

I confess to being amused by this. Of course the UK is nothing if not relentlessly consistent. Not content with sabotaging the Directive and then the regulator for 17 years (or with further restricting poor defendants’ access to counsel even in the criminal “justice” system), it now attempts to constrain access to the civil tribunals in freedom of information, even in respect of public interest statute, and in data protection.

However… the worm is about to turn. I hope all this goes through exactly as the Cabinet Office plans. The reason I’m amused is the old saw: beware of unintended consequences. It’s a bit like squeezing eggs in the hope they’ll get smaller (yes, in my disreputable student youth I actually observed such “experiments”). The same applies to the economics of information within the free market.

This classic government anti-free-market move may significantly worsen the (economic) impact of data privacy on the UK, by way of increasing costs both to the economy generally and to the government in particular. To avoid undue influence on the GDPR trilogues I won’t say more until it’s enacted (or not – same difference), but you saw it here first.

Prime Minister Cameron, forgive your Cabinet Office: not for the first time, it knows not what it does to the market.


English Court strikes down UK data retention law

“11. The extent of the State’s powers to require the retention of communications data and to gain access to such retained data are matters of legitimate political controversy both in the UK and elsewhere… To take one example from abroad, on 2 June 2015 the US Congress passed one statute (the USA FREEDOM Act) restricting the data retention powers previously conferred by another statute passed in 2001 (the USA PATRIOT Act). It is not our function to take sides in this continuing debate, nor to say whether in our opinion the powers conferred by DRIPA are excessive or not. We have to decide the comparatively dry question of whether or not they are compatible with EU law as expounded by the CJEU in Digital Rights Ireland.” Bean LJ, R(Davis et al) v Home Secretary (approved judgment kindly published by Robin Hopkins of 11KBW on the Panopticon blog http://www.panopticonblog.com/2015/07/17/dripa-2014-declared-unlawful/ )

It seems only yesterday that the CJEU struck down the Data Retention Directive. The UK responded by re-legislating its own statute, the Data Retention and Investigatory Powers Act 2014. The ensuing judicial review has just succeeded. This had a number of interesting twists.

First, it was brought by two Members of Parliament: one Conservative (David Davis MP – he’ll be popular on the government benches) and one Labour, the trademark contrarian Tom Watson MP.

Secondly, when it came to the Divisional Court it was heard not merely by a High Court judge sitting alone: alongside Collins J sat the Court of Appeal judge Bean LJ, who delivered the joint judgment of the Court.

Next, as carefully noted in the judgment, lacking the support of any written constitution it is impossible at common law alone for the English judiciary to strike down primary legislation. However, it is possible to review that same legislation against articles of the new EU Charter of Fundamental Rights (introduced by the Treaty of Lisbon), almost as US judges review US legislation against provisions of the US written constitution. As the judgment makes clear, Article 8 – an EU citizen’s fundamental privacy right – was a key factor, as was the reasoning in the CJEU case Digital Rights Ireland that struck down the Data Retention Directive. It seems a pattern is beginning to emerge.

Most interestingly of all, this is the second time in just a few months that the Charter has been wielded in this way by the English judiciary. In Vidal-Hall v Google the Court of Appeal also used the very same Article 8 privacy right to strike down s.13(2) Data Protection Act 1998 (http://www.bailii.org/ew/cases/EWCA/Civ/2015/311.html, paras 97ff) and, in passing (aided by its predecessor, Article 8 of the European Convention on Human Rights), to create a novel privacy tort of near-unlimited scope. It appears the English judiciary, after years of apparent indifference, is now consistently and actively grappling with privacy issues.

In theory R(Davis) will have very little practical effect: the particular statute was always going to be superseded in the near future. However, to recycle a popular phrase from the Leveson inquiry into phone hacking in the context of data privacy legislation, such strong signals being sent from the judiciary may have a “beneficially chilling” effect on the appetite of future Parliaments to enact unlawful, sometimes even odious, legislation. What works against Google seems to work equally well against legislatures.


Privacy Governance/Engineering project EGPLib

“Organizations today need to have both lawyers and engineers involved in privacy compliance efforts” (Dennedy, Fox, and Finneran, The Privacy Engineer’s Manifesto: Getting from Policy to Code to QA to Value, 2014, p90). This is echoed in Determann’s Field Guide to Data Privacy Law (2nd ed., Elgar, 2015) at 6.114 by Professor Determann’s recommendation that Counsel be integrated into the PbD process.

In the past, every motivated enterprise has had to “reinvent the wheel” of privacy engineering. However, for some time I’ve been working on a project integrating privacy governance (Legal/Compliance) and privacy engineering (IT) for any enterprise into a relatively “hard” template with software support. The objective is to simplify, structure, facilitate, and (so far as possible) automate a collaborative law/compliance/IT multi-disciplinary approach to Privacy-By-Design (“PbD”) engineering, putting multi-jurisdictional privacy impact / risk assessments at the heart of the architecture.

In this post I’ll work through a topical case study, define “privacy architecture” schematically and by reference to capability, explore practical usage, and define the architecture’s “privacy metadata” by reference to a linked data model document.

Don’t worry if this document takes you a little out of your own professional field’s comfort zone – being multi-disciplinary, that’s inevitable. The first section “Case Study” should be reasonably accessible to all – bear in mind difficult components are explained later.

Case Study – with breach table output

A simple but topical case study may be found from late last week, in the breaking news on the Xora fiasco (and the accompanying USD 500k Californian lawsuit for intrusion upon seclusion and consequential losses) http://arstechnica.com/tech-policy/2015/05/worker-fired-for-disabling-gps-app-that-tracked-her-24-hours-a-day/ . Even just looking at the supplier’s own marketing(!) video (helpfully supplied in the link) at the 16-second mark, there seems sufficient information right there to populate PrivacyImpactAssessment (“PIA”) metadata triggering a risk assessment “failure” with an estimated cost of breach. (As a trial lawyer I need hardly point out the same frame’s possible use as evidence for the case – and doubtless for the supplier’s interesting marketing future.)

Let us assume our enterprise is in a similar scenario but lacks last week’s hindsight. We wish to avoid similar fates by proactively defining a “privacy architecture” for the enterprise and conducting a risk assessment upon it. For simplicity, we store just one PII dataset, the Xora employee information; and one process/dataflow – the Xora web application itself. All we really need do is think carefully and identify the salient point: that the “data scopes” of the dataflow (in legal terms, subject matter) include information on “logged-out” employees – by contextual inference, persons who are not ordinarily acting in the capacity of employees. (For completeness, later the video also identifies tracking of employees’ statutory breaks – it’s unclear to me whether the surveillance applies to those periods as well, or whether surveillance within breaks would be legitimated by Californian statute; in any event I haven’t yet encoded any metadata for Californian statutes, so we’ll ignore that aspect.)

For detailed table and field definitions and annotations, it may be useful to refer as required to the subheading “Metadata Schema” below.

In the PrivacyImpactAssessment metadata table (picking out the most important fields for current purposes), from the information in the Xora marketing video I probably would classify the “data scopes” of the dataflow (chosen from a long overlapping list of legal subject matter) as “employment, surveillance, location, tracking” (at a stretch, possibly “eavesdropping” as well). I would enter “US-CA” (i.e. California) as my best-guess pseudo-ISO coding of the data subjects’ domicile(s), and nominally “US-CA” for the data storage jurisdiction(s). There are no transfer jurisdiction(s) as I assume we are not transferring the data outwith California.
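The entry just described can be pictured as a single PIA record. Here is a minimal sketch in Python (the project’s own API is Java; every field name below is purely illustrative, not EGPLib’s actual schema):

```python
# Hypothetical PrivacyImpactAssessment record for the Xora dataflow.
# Field names are illustrative only, not EGPLib's real schema.
xora_pia = {
    "dataflow_id": "XORA-WEBAPP",        # the Xora web application
    "data_scopes": {"employment", "surveillance", "location", "tracking"},
    "subject_jurisdictions": {"US-CA"},  # data subjects' domicile(s)
    "storage_jurisdictions": {"US-CA"},  # data storage jurisdiction(s)
    "transfer_jurisdictions": set(),     # no transfers outwith California
}
```

The empty transfer set mirrors the assumption above that the data never leaves California.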

Then the “application programming interface” (API) is run, requesting a risk assessment of the dataflow. By “API is run” I mean the project software’s “risk assessment component” is invoked, by way of being embedded into any software product, web service, etc. (It has to be made available in API form because only in that form can the enterprise directly inject its transactional-level breach reporting into the enterprise’s Operations IT systems by way of a PbD “layer”.)

Commencing the risk assessment, the API determines from the metadata we entered that there is only one jurisdiction of immediate interest, US-CA. The API now wants to check through all the laws matching California or its “super-jurisdictions” to see if any of them have anything interesting to say about our dataflow. It determines the full list of jurisdictions by first checking the super-jurisdictions registered against the Jurisdiction table’s entry for US-CA. As it happens California has only one super-jurisdiction registered: the United States. So, initially, the API selects all Federal and Californian statutes and torts as theoretical “candidate” laws for relevance to this dataflow.

As luck would have it, the US variant of the tort of intrusion upon seclusion is entered in the ApplicableLaw metadata table under the less-than-imaginative code “US-SECLUSION”, unsurprisingly registered against the jurisdiction-code “US”. It therefore is a candidate. Unlike some non-US variants of the tort, only one sub-jurisdiction is excluded from this law: Louisiana (because, until someone corrects me on the point, with the a priori exception of Louisiana as a civil law jurisdiction I am unaware of any State judiciary that has excluded the tort).
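That selection logic – walk the jurisdiction’s super-jurisdiction chain, then drop any law from which the sub-jurisdiction is excluded – can be sketched as follows. This is a hypothetical Python illustration only; the actual project is a Java API and its table layouts differ:

```python
# Illustrative candidate-law selection (not EGPLib's actual code).
# A law is a candidate for a jurisdiction if it is registered against that
# jurisdiction or one of its super-jurisdictions, and the jurisdiction is
# not on the law's exclusion list (e.g. Louisiana for US-SECLUSION).
SUPER_JURISDICTIONS = {"US-CA": ["US"], "US-LA": ["US"]}
APPLICABLE_LAWS = {
    "US-SECLUSION": {"jurisdiction": "US", "excluded": {"US-LA"}},
}

def candidate_laws(jurisdiction):
    in_scope = [jurisdiction] + SUPER_JURISDICTIONS.get(jurisdiction, [])
    return [
        code
        for code, law in APPLICABLE_LAWS.items()
        if law["jurisdiction"] in in_scope
        and jurisdiction not in law["excluded"]
    ]
```

On this sketch, `candidate_laws("US-CA")` yields the seclusion tort, while `candidate_laws("US-LA")` yields nothing, reflecting the Louisiana exclusion.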

As US is US-CA’s super-jurisdiction, from which US-CA is not excluded in respect of intrusion upon seclusion, the API therefore will recognize US-SECLUSION as a candidate law (if you prefer, metalaw) to be tested against this dataflow’s metadata “facts” (the opposite of the way lawyers normally think, but bear with me). One of the first (of many) tests the API applies for each candidate law is to match the “data scopes” (aka legal subject matter) coverage of the dataflow against the data scope coverage of the candidate law.

As it happens US-SECLUSION’s data scopes are classified (ultimately modelled on Professor Prosser’s Second Restatement) as “eavesdropping, film, surveillance, photographic, privatecorrespondence, sexualpractices, sexualorientation”. The API compares the law’s scopes with the PIA’s scopes determined by us earlier, and discovers one common element: “surveillance”. This causes the API to recognize the existence of a subject matter overlap between the dataflow and the tort of intrusion upon seclusion, so it provisionally decides the law may be applicable to this dataflow. (If no match is found, the API simply discards the tort as a candidate law applicable to this dataflow and moves on to check the next candidate law.)
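The scope-matching step is essentially a set intersection. A hedged sketch (again illustrative Python, not the project’s Java code):

```python
# Illustrative scope-matching step (not EGPLib's actual code): the tort
# remains a candidate only if its data scopes intersect the dataflow's.
US_SECLUSION_SCOPES = {
    "eavesdropping", "film", "surveillance", "photographic",
    "privatecorrespondence", "sexualpractices", "sexualorientation",
}
dataflow_scopes = {"employment", "surveillance", "location", "tracking"}

# Set intersection discovers the single common element: "surveillance".
overlap = US_SECLUSION_SCOPES & dataflow_scopes
law_is_candidate = bool(overlap)
```

An empty intersection would simply drop the tort from consideration for this dataflow.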

For brevity, I’ll stop there. Of course the testing process from end to end is a whole lot more complicated than that, as you will infer from the metadata schema set out below, but I trust you get a high-level feel for how the architectures are processed by the API.

The API’s breach table delivery from this particular exercise is just one record, which may be found at https://www.dropbox.com/s/w1yjfpo61tfv1sv/Breach16.xls (I’ve inverted/“transposed” the spreadsheet’s columns to rows so you can see each field of the record more clearly on Dropbox).

In summary, the idea is that the risk assessment process is a sausage machine. You just plug in your architecture, crank the handle, and then decide to which jurisdictions’ lawyers (if any) you need to run screaming – or not.

Metadata Schema

“Of particular importance to the engineers, lawyers, other privacy professionals and other disciplines is the terminology and establishing a common framework which allows disparate teams of people to operate” – Dr Ian Oliver, Privacy Engineering: a Dataflow and Ontological Approach, 2014, p244.

Anyone wishing to view EGPLib’s current metadata model (computer-generated, refreshed occasionally) can look at https://www.dropbox.com/s/npy1jrmxbpvox63/PBDDataModel-Logical-Annotated.txt. This comprises one part of the project’s “common language” – its frame of reference to objects both within and outwith the enterprise. Most other parts of the “common framework” are taxonomies of atomic elements, for example the “data scopes” encountered above.

Metadata quality (essential) is guaranteed by the API, which will decline to operate against non-compliant metadata (giving extremely specific contextual feedback as to which metadata it doesn’t like and precisely why – usually typos or user misunderstanding). For that reason the enterprise, auditors, insurers, regulators, etc can be confident that any relied-upon published compliance reports generated by the API are founded upon a semantically consistent and conceptually coherent privacy architecture.
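By way of illustration only (hypothetical Python, with field names and messages invented for this sketch rather than taken from EGPLib’s actual diagnostics), such a validation pass might look like:

```python
# Hypothetical validation pass: refuse to run against non-compliant
# metadata, and report precisely which entry is wrong and why.
KNOWN_SCOPES = {"employment", "surveillance", "location", "tracking",
                "eavesdropping", "film", "photographic"}

def validate_pia(record):
    errors = []
    # Flag any data scope that is not in the registered taxonomy.
    for scope in record.get("data_scopes", set()):
        if scope not in KNOWN_SCOPES:
            errors.append(
                f"unknown data scope '{scope}' - typo for a registered scope?")
    # A PIA record without data-subject jurisdictions cannot be assessed.
    if not record.get("subject_jurisdictions"):
        errors.append("no data-subject jurisdiction(s) entered")
    return errors  # an empty list means the record is accepted
```

A misspelled scope such as “surveilance” would thus be reported to the user rather than silently ignored.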

Regulators and insurers may be particularly interested in the enterprise-specific PrivacyImpactAssessment table, which allows the enterprise formally to classify dataflows of privacy interest and “drives” the rest of the architecture, including transactional PbD.

Compliance and Audit professionals may be reassured by the AcceptedRisks and CustomRules tables – custom rules being particularly useful for simply “dis-applying” laws or analytics with whose effects you disagree (at the price of “dis-apply” rules etc being automatically written into audit documents).

Legal and IT geeks may wish to pore over the Jurisdiction, ApplicableLaw, and Analytic tables, which model the (multi-jurisdictional) legal context against which the privacy architecture is evaluated, for both architectural risk and transactional breach.

The linked metadata model prima facie looks like a hybrid data architecture/design document; indeed, others theoretically could use it as such. However, it emerged as an internal “sanity-check” API deliverable to verify that the source code remains in sync with the live metadata tables; it is dynamically auto-generated and thus always up to date (providing I remember to refresh the web copy).

Privacy Architectures – EGPLib Schematic Definition

An enterprise-wide “Privacy (Governance) Architecture”:

  • Is expressed in and distinguishable by formal “privacy architecture metadata”
    • Populating tables PrivacyImpactAssessment, AcceptedRisks, and CustomRules*;
    • Engaging material parts of the enterprise (typically Compliance/IT/Legal)
      • Building on (or creating) IT’s pre-existing Data/Information Architectures
      • Articulating the enterprise’s privacy architecture to stakeholders
  • Is formally validated and used by Governance IT (built, web service, or off the shelf)
    • Against customizable metadata tables Jurisdiction, ApplicableLaw, Analytic*, thus
      • forwards-compatible with emerging statutes/torts (eg GDPR, coming-into-force sections of PIPEDA, etc)
    • Embedding the architecture as PbD into enterprise transactional-level IT
      • Implemented by IT via non-intrusive calls to “Application Programming Interface” (API) wrappers
      • Dynamically providing transaction-by-transaction breach identification
      • Retrofitting legacy IT as easily as embedding into new systems
      • Providing “notification list” capability for external data breaches (theft etc)
    • Providing on-demand quantified Privacy Architecture Risk/Impact Assessments
      • Predicting cost of anticipated breaches across multiple jurisdictions
      • Modelling
        • Current, future, and legacy IT projects and systems, against law
        • Effect of future (changes to) law, on current and legacy projects

* note that the asterisked (*) metadata and other tables are particularized under subheading “Metadata Schema” above, by reference to a hyperlinked data model

Don’t Panic! This sounds more complicated than it is. For example the metadata tables (where not gold-plated as SQL or XML) are articulated by default as ordinary spreadsheets which are then read as input (and tightly validated) by the API.

(Intentionally, this approach renders privacy architectures accessible to relatively small enterprises. Even those altogether lacking an IT department can set out their architecture in spreadsheet(s), and then can conduct risk assessment on that architecture without even needing to procure software: I or anyone else using the API could provide that as an automated web service)

Practical usage by the enterprise

“Typically, it is too difficult and time-consuming to determine the exact nature and details of formal and substantive compliance obligations in other countries, where laws may be presented in unfamiliar formats and languages.” – Lothar Determann, op. cit., 2.05.

The project and its API cannot provide legal advice – as a logical category mistake, no metadata codification of law could ever do that. What it can do is give heads-up, “canary-in-the-coalmine” quantified risk warnings against specific dataflows relative to specific jurisdictions. In turn, this implies that the enterprise should consider consulting Counsel in the identified jurisdiction(s) on dataflows whose risk assessment deliverables for those jurisdictions exhibit significant financial risk, whether public or private (the latter growing ever more prominent in proportion to class actions and the gradual de-coupling of tort remedy from proof of material damage). With or without legal advice, the enterprise logically then has about five clear non-exclusive options:

  • To re-engineer the architecture (together with dependent IT systems), and re-assess;
  • To alter relevant metadata on which the assessment is based, and re-assess; *
  • To “disapply” jurisdiction(s), law(s), or analytic(s) (in response to legal advice); *
  • Formally to authorize acceptance of specified risks in context; * or
  • To ignore any insights and carry on regardless.

The API may record in audit documentation (depending on exact context) any asterisked (*) options chosen. This provides maximum transparency for auditors and stakeholders, including regulators. Ultimately the API’s production of such documents, with notes recording the embedded “due diligence” enterprise derogations from the standard, is designed to minimize Court risk (in the context of aggravation versus mitigation, and thus quantum of awards).

Forum-shopping – race to the bottom?

Going beyond trivial examples such as the case study set out above, I would be remiss if I did not mention the opportunities for savagely anti-social misuse of such risk reduction technology. Want to plan the best times to shift your health data between jurisdictions, ahead of citizens acquiring a private right of action against you? No problem. When hiring Canadians, want to know how and when (currently lawfully) to discriminate against them based on their province of birth? No worries: something for legislators, regulators, and judges to ponder. Sadly, facilitating such arbitrage / risk reduction techniques necessarily facilitates such externalization / rent-seeking behaviors.

Governance framework and extensibility

Though a “parent” enterprise governance framework sits at the conceptual center of the project, for current purposes its only significance is that privacy governance is one of its “child” architectures (hence “EGPLib” – Enterprise Governance – Privacy Library).

For example, a possible next-step complementary risk assessment / transactional analysis library, indeed inheriting and reusing most of the same metadata tables and common reference framework with suitably altered taxonomies, might be Basel III-compliant financial governance (which on any level seems far simpler to implement than privacy).

Alternative to “schematic definition” view – capabilities

In terms of the API’s technical capabilities, people can, if they prefer, think of this project as a multi-jurisdictional law-based Java API facilitating:

  • semantically validated specification of enterprise Privacy by Design architectures
  • automated compliance/audit reporting and financial risk assessments for specified PbD architecture(s)
  • seamless embedding of specified PbD architectures (including dynamic compliance reporting) into enterprise IT systems, including legacy
  • automated preparation of breach reports/notification lists
  • web-publishable standardized presentation of privacy architectures to regulators and consumers alike
  • extensibility by enterprises or regulators to other purposes as they arise


The IT has been coming together for some time. Currently I hope to release the initial API code into open source in circa six weeks (such release is predicated on “critical-mass” interest from IT developers) to get it off my hands. The default population of the ApplicableLaw and Analytic metadata tables for the many jurisdictions on which I haven’t yet started obviously has to be an ongoing activity, but I’m happy to continue with that (my two primary drivers being “interesting” laws and course delegate jurisdictions).


Driven by the imminent EU General Data Protection Regulation, Preterlex www.preterlex.com last year commissioned what they believe to be the world’s first in-depth course on an architectural approach to corporate privacy governance. The preferred audience profile is a multi-disciplinary mix of IT, compliance, and legal professionals (of necessity the course imparts to each of them an improved understanding of the others’ fields and concerns). EGPLib is used to illustrate the case studies and a practical approach to privacy governance. Currently the course is available privately to companies, but Preterlex plans a series of publicly available sessions worldwide. I declare an interest as the course developer and initial primary presenter (any enquiries should be addressed to info@preterlex.com please, rather than myself).


As this is a cross-cutting project, comments and clarification requests are expected as well as welcome – please respond to the LinkedIn copy at .

Big Data, Data Privacy, EU, Privacy

Big Data viability? Vidal-Hall’s equity bombshell

Following on from my earlier Google v Vidal-Hall post, I thought I’d reverse-engineer the pleadings for the new tort of misuse of private information from three sources: the excerpt appended to the Court of Appeal judgment http://www.bailii.org/ew/cases/EWCA/Civ/2015/311.html; stray judicial remarks in that judgment; and Tugendhat J’s own supplementary remarks in the lower Court [2014] EWHC 13 (QB) http://www.bailii.org/ew/cases/EWHC/QB/2014/13.html. Initially this was merely out of academic curiosity. From the top…

Inferred ingredients/pleadings of the tort of misuse of private information

  • Defendant processed Plaintiff’s private information
  • Relating to which Plaintiff has reasonable expectation(s) of privacy
  • Wrongfully, further in such a way as
    • unjustifiably to infringe Plaintiff’s right to privacy [though per se this looks suspiciously like a Convention-related pleading]; further or alternatively
    • to misuse Plaintiff’s private information
  • Without Plaintiff’s foreknowledge; alternatively
  • Irrespective of Plaintiff’s foreknowledge of Defendant’s intentions;
  • Causing
    • damage to personal dignity, autonomy and integrity; further or alternatively
    • anxiety and distress

Common law remedy claimed: general damages, presumably small (though aggravated damages also were pleaded in Vidal-Hall).

Equitable remedy claimed: account of profits

Account of profits

Here’s the rub. Per Tugendhat J at 40: “There is a claim for an account of profits which, it is alleged, Google Inc made as a result of the misuse of each of the Claimant’s private information…”.

At first sight this seems innocuous. Of course there is no reason why an account of profits should not, as a free-standing equitable remedy (indeed one quite popular with the Googles of this world in IPR infringement and e-commerce disputes generally), be available for this tort.

That said, while of little legal interest (and ignored by the upper court as irrelevant to its deliberations), the possibility of an account of profits may have a devastating strategic commercial effect on Big Data projects past, present, and future in any re-identification context. By that I mean business intelligence or any other projects that seek to aggregate, from different sources, data about individuals/consumers. The reason is that an account of profits will, by definition, not only eliminate the profit made by any such unlawful project; by adding a substantial further cost – the forensic accounting exercise itself – it may render loss-making any project in relation to which consumers successfully pursue an action, either in substantial numbers or as a class action. In turn, the ability of companies running what in hindsight are unlawful projects to remain competitive, or even to continue to exist where unlawfully processed Big Data is the company’s raison d’etre, may be severely compromised.

If that is right and such a remedy is sustained, by automatically confiscating any profit this remedy alone would tend to destroy the cost-benefit analysis for any and all Big Data analytics projects contemplating unlawful re-identification of English consumers anywhere in the world. In turn that reverses the economics of compliance: the traditional “we’ll just pay the damages/fine and move on” may no longer be viable.

Other issues arising from equity

In any event, as an equitable remedy, an account of profits raises interesting issues. Normally plaintiffs cannot simultaneously claim damages and equitable relief from the same cause of action, except in the alternative. However, in this instance plaintiffs theoretically need not forgo a right to general damages, now that the same Court has struck down s.13(2) Data Protection Act 1998. Specifically, s.10 can secure injunctive relief, while s.13 now can secure general as well as special damages under the statutory provisions rather than at common law, which would free the tort of misuse of private information to furnish the remedy of an account of profits. For the avoidance of doubt, that is entirely theoretical: I can’t see Courts going for the “double” in respect of special damages – although general damages may be open. Regardless, it does facilitate flexible remedy-shopping based on plaintiff circumstances, and injunctions will be available whether damages or an account is claimed.

It is early days yet. But it seems the times we live in have suddenly become even more interesting.

Disclaimer: nothing said above is legal advice.

Data Privacy, EU, GDPR, Privacy, Uncategorized

Google, Vidal-Hall, and the future of misuse of private information

Whether or not appealed, the Court of Appeal result in Vidal-Hall et al v Google will be explored by many lawyers more competent than myself, notably the QCs and juniors hired by Google, the plaintiffs, and the intervening Information Commissioner http://www.bailii.org/ew/cases/EWCA/Civ/2015/311.html . However, I thought it might be useful to provide some tactical analysis and strategic thoughts specific to the future of the privacy “industry”. Of course nothing here constitutes any professional or legal advice.

What is the judgment about?

Briefly, the plaintiffs have the Court’s permission to serve their claim upon Google in California, the matter to be heard by English courts under English law; their claim is in a newly discovered tort called “misuse of private information”; and a small but significant part of English privacy statute has been struck down by the Courts. Note it’s all just procedural. The substantive case hasn’t actually started yet apart from at least one statement of case (which now likely will be amended anyway).

Data scopes and other aspects of the “new” tort

The scopes of privacy torts in other jurisdictions typically are defined narrowly in what could be called lists of data scopes. For instance the list for the Canadian version of the increasingly popular “intrusion upon seclusion” tort appears to comprise employment information, financial records, health records, personal diaries, private correspondence, sexual orientation, and sexual practices. These lists usually are finite, short, and closed.

In contrast, here the scopes of misuse of private information technically have not been defined at all: as the Court says at 43 and 51: “…We do not need to attempt to define a tort here… We are conscious of the fact that there may be broader implications from our conclusions, for example as to remedies, limitation and vicarious liability, but these were not the subject of submissions, and such points will need to be considered as and when they arise”.

However, from 18 onward the Court also explicitly relies upon the European Convention on Human Rights (“ECHR”): “The problem the courts have had to grapple with during this period has been how to afford appropriate protection to ‘privacy rights’ under article 8 of the Convention, in the absence… of a common law tort of invasion of privacy”. In turn this further suggests that the data scopes may well be keyed to the Convention and thus in a sense not closed at all; alternatively, that any “list” of data scopes will be very wide. Paradoxically this actually would simplify matters for the Courts and advocates, because the ECHR has the advantage of being relatively well understood.

It also suggests that tests for liability, if geared to existing ECHR jurisprudence, may be relatively low compared to the “highly offensive” liability test of some similar torts: the level of offense seems more likely relevant to quantum. If that is right then it would follow that low levels of harm, while resulting in low awards (as also confirmed, almost angrily, by the Court at 139: “…the damages may be small, but the issues of principle are large”), may have similar costs consequences to those of any other litigation – which, although not always so favorable to the successful party in England as elsewhere, may encourage individual as well as class actions.

Significantly, in respect of statutes the Court also declared the application of the EU Charter of Fundamental Rights (“Charter”) in this instance to be “horizontal”, that is to say it (unusually) applies to private bodies as well as public bodies. Although the Court did not repeat this reasoning explicitly for the ECHR-based tort, that may be taken as read simply because this case did not involve public bodies.

Immediate consequences

The procedural issues ain’t over until they’re over. Google may appeal; they have another week in hand. Given certain interesting points that might be made about the “tea-leaves”, Google may have placed themselves in a position where they have little further to lose – except (possibly) a more savage mugging by the Supreme Court. Tactically this is not a good place to be; strategically, however, Google may find it profitable to take it all the way, risking the odium for the sake of Fabian tactics alone: every month of delay potentially being of immense financial value.

Likewise Google might simultaneously try for a reference to the Court of Justice of the European Union (“CJEU”) on which interpretations of the Charter ought to apply in this case, but this might be very high risk: given their current high profile in Europe, they might achieve nothing but to facilitate or encourage another few hundred million potential additions to plaintiff classes in Europe.

The substantive case might not even be a High Court case (except for the international aspects): it hasn’t even been allocated to a track. But that may not matter: if the plaintiffs’ solicitor Mr Tench or others swing more cases into its slipstream, they might all be swept up into one of the English equivalents of a class action, and/or adjourned pending the “test case”. If the defendant runs a Fabian strategy, as is not unusual for such defendants, it could take years to come to trial.

Reading the tea-leaves

The parties cannot of course discuss this, and they’re the only people in a position to know or suspect anything. However I make some uninformed observations, which may be quite wrong.

Wherever the case may have started, because the plaintiffs forced the long-arm jurisdiction issue front and center from the beginning, the procedure probably had to be run up to the High Court Masters anyway (for reasons of international comity as much as expertise, it’s normal practice for contested international issues to be referred to the center, then referred back if the international issues are only procedural, as here). As I personally know to my cost (!), the Queen’s Bench Masters jealously guard a collective reputation as perhaps the most ferociously robust judges in England short of the Court of Appeal. Nevertheless the solid Master Yoxall, to whom it fell to hear the plaintiff’s application, gave leave to serve the claim outside the jurisdiction.

On appeal from the Master’s judgment to a full High Court judge, the matter was heard by Tugendhat J, who fortuitously is the jurisdiction’s leading media/privacy judge, and was extremely experienced in this field in his previous law practice. In perhaps the key underlying jurisprudence, the Douglas v Hello cases, Michael Tugendhat was the successful advocate, and as a judge he seems unafraid of grasping the nettle (eg disposing of the notorious super-injunctions a few years ago). Tugendhat J did however refine the claim by allowing some elements of the appeal, refusing to permit service of the claim for injunctive relief (overtaken by events, so that sets no precedent) or for breach of confidence.

Google’s appeal of several remaining points now has been heard by a full Court of Appeal, which unanimously dismissed the appeal. It may have been mere coincidence that the senior justice of the Court of Appeal, the Master of the Rolls, presided over the case: the authority of the Master of the Rolls being second only to the Lord Chief Justice. It was, after all, complex. Otherwise this may be a hint of how firmly the judiciary wishes to take this in hand: in which event one wonders how long they’ve been waiting for the right kind of case to roll down the track.

Why did Google appeal the matter in the first place?

In this instance there no longer seems anything particularly sensitive about the identity of the claimants (by inference from their lack of anonymity) nor, by extension, the private information: perhaps all the alleged damage had already been done. Normally defendants would compromise substantive cases as fast as possible, under the radar, because generating publicity and, worse, reaching the level at which a legal precedent is set is in neither side’s interest.

One exception is where the defendants think they have such a strong case that the prospect of shooting some plaintiffs to encourage the others outweighs any brand damage – particularly if the case is unfunded by external players and they can bleed the plaintiffs with the death of a thousand procedural cuts. For reasons given later I don’t think this is such a case.

Another, complementary, exception is where it’s a class action, or where the defendants sense a class action in the offing, which essentially is what the plaintiffs’ solicitor openly told the lower court: possibly keeping other clients in reserve while he tested the wind with this case. Though this does not seem to have endeared him to the judge, it otherwise seems rather a good tactic with such untested law.

Alternatively, Google simply may have thought it worth the risk to kill off the action entirely, or have it run under Californian law and/or in the Californian jurisdiction. In one sense they were unknowingly rolling double or nothing: they might not have expected the Court, given the historical English reluctance to address data privacy in common law, to sandbag them by confirming a tort in a procedural case.

Whatever their reasoning, it was in appealing the matter to the High Court that it all went wrong for Google. (The initial loss to Master Yoxall was a kind of “doesn’t matter” free pass for Google, who could have settled at that point; in any event that application was initiated by the plaintiffs, so it wasn’t down to defendant choices.)

Particulars of claim – inferences

Unusually, the Court of Appeal judgment appends partial pleadings. In and of themselves these may be interesting to any in-house Counsel, as they amount to a precedent of what we all might face (or be pleading) down the line. However, given costs considerations, the pleadings seemed to me more audacious than ordinarily might be contemplated for non-corporate clients. Of course that audacity has been richly rewarded here, and much of the inherent costs downside was minimised by traversing the weak points at the beginning. My point here, though, is that an observer might hazard a guess that the case is funded (ie by people other than the parties). If that is right, then the costs basis and thus the legal tactics of the case become quite different: the defendants will not be able to bleed the plaintiffs to death, so the case is unlikely to go away. On the other hand, Fabian tactics may still be useful to defendants in any case where deferring publicity is profitable.

Consequences for the General Data Protection Regulation?

Whatever the form of the final Regulation, there may be at least one rather ironic possibility of general European significance. The cosy December 2014 negotiations permitting Member States to, in effect, derogate from the Regulation in a Directive-like fashion in respect of their public bodies (only), thus throwing business and foreign governments alike to the wolves, may now be slightly derailed in practice. Every plaintiff lawyer may be tempted to try reasoning analogous to that identified here by the English Court of Appeal – Articles 7, 8 and 47 of the Charter – to strike down any unusually gratuitous exemptions generously self-legislated by their local Member State. After all, if a human rights argument can persuade an English Court, it might work anywhere.

In summary

In place of the old ramshackle combination of breach of confidence and human rights, the 57 million data subjects in England and Wales suddenly have acquired a newly discovered tort, whose territorial scope is limited only by the English domestic rules of international law, and whose subject-matter data scope is limited only by the articles of the European Convention on Human Rights, applied to public and private bodies alike; its elements are as yet unknown, and the Court declined to define them. Potentially this tort is both powerful and flexible, though not quite a floodgate.

One of the many curious things about the common law is that, unlike regulation, it is timeless: plaintiffs may seek remedy for linkable civil wrongs performed prior to “discovery” of those wrongs by the Courts. This is the way the common law has always worked: political lobbying is powerless, as is the regulatory revolving door. Despite the fact that they may take years to define, common law torts thus have no “transition period” (a purely statutory concept); indeed they apply “retrospectively”, and thus time is already running. We live in interesting times indeed.

Entirely independently, in England there is also the Data Protection Act 1998, whose prohibitions on compensation for non-pecuniary damage in its statutory torts have at last been struck down as unlawful, greatly to the benefit of consumers in the UK generally, not only those in England and Wales. Along with that, the statutory torts have become easier to plead, by elimination of the comically contorted pleading of one pound in ordinary damages in the schedule of loss as a device or “gateway” to legitimize a remedy in distress. Further, awards for distress are traditionally tiny in England: thus another constraint on quantum has been indirectly removed.

Notably the Information Commissioner intervened to support these outcomes, despite itself having no regulatory stake in private remedy. That may itself confirm several things: that the regulator sees private remedy as fully complementary to regulation; and indeed that the regulator has rather more understanding of the legal viability of the statutory torts than certain public statements on the regulator’s web site would suggest.

As with the Spanish Inquisition, at the beginning of the case few could have expected such outcomes from these long-arm jurisdiction procedural antics. In the event, it now hardly matters what happens in the substantive case. In this interim judgment, as well as in the data processing parallel, Vidal-Hall v Google thus bears a strange resemblance to the rather less polite procedural savaging of the defendant by the Court of Appeal in Ferguson v British Gas http://www.bailii.org/ew/cases/EWCA/Civ/2009/46.html, before that substantive case disappeared back to the County Court. Even if the substantive case is settled here, as with Ferguson this nominally procedural case law may remain topical for a long time. Many will not thank Google for this triple whammy: a newly discovered tort with international impact; dis-applied statutory constraints on remedy for statutory torts; and potential class actions in respect of either or both.

Article 29 Working Party, Big Data, Data Privacy, EU, GDPR, Privacy, Privacy Directive

Privacy-by-Design made easy: how to retrofit PbD to legacy systems

This post exhibits (sample but working) Java code used in a privacy governance law/architecture/IT course to demonstrate from my laptop how easy it is to embed a PbD API “wrapper” around pre-existing legacy systems (for the avoidance of doubt, naturally the same technique will work on new IT initiatives). This particular example is for the first case study in the course, the alpha-test version of which I also post here DRAFT preterlex privacy governance course – 31-33. If you read nothing else, read the third page.

The Java file is available from DropBox here: EGMonoPulley.java. The IT piece really doesn’t have to be any more complicated than that.
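For readers who cannot reach the file, the wrapper pattern itself can be sketched in a few lines. To be clear, none of the class or method names below come from the actual API (or from EGMonoPulley.java); this is a hypothetical, self-contained illustration of consulting a privacy check before a legacy call ever sees the data:

```java
/**
 * Hypothetical sketch of the PbD "wrapper" technique: the privacy check runs
 * before the legacy system ever sees the transaction. All names are
 * illustrative stand-ins, not the real API.
 */
public class PbDWrapperSketch {

    /** Stand-in for an untouched pre-existing legacy system. */
    static class LegacyOrderSystem {
        String placeOrder(String customerData) {
            return "ORDER-OK:" + customerData;
        }
    }

    /** The wrapper: same call shape as the legacy system, plus a pre-flight privacy check. */
    static class WrappedOrderSystem {
        private final LegacyOrderSystem legacy = new LegacyOrderSystem();

        // Stand-in for the real API's transaction privacy analysis.
        boolean transactionPermitted(String data) {
            return !data.contains("nationalId="); // toy rule, for illustration only
        }

        String placeOrder(String customerData) {
            if (!transactionPermitted(customerData)) {
                throw new IllegalStateException("PbD check failed: transaction blocked");
            }
            return legacy.placeOrder(customerData);
        }
    }

    public static void main(String[] args) {
        WrappedOrderSystem system = new WrappedOrderSystem();
        System.out.println(system.placeOrder("name=Jane"));
        try {
            system.placeOrder("name=Jane;nationalId=12345");
        } catch (IllegalStateException e) {
            System.out.println("Blocked: " + e.getMessage());
        }
    }
}
```

The point of the pattern is that the legacy class is untouched: callers are re-pointed at the wrapper, and all the privacy intelligence lives behind the one pre-flight call.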

As for the API, which abstracts all the hard work to itself, I’m afraid it would be misleading to release even the javadoc at this stage: for example, it is now in two packages that I have to merge back together, after a misconceived attempt to make life even simpler for implementers by further abstraction of connectors and the like. However I’ll comment on matters arising from the posted Java file. I apologize that, out of necessity, this commentary is in part technical.

Readers will note that the choice of data structure to hold the metadata repositories (as set out below) – spreadsheets – assumes that the governance architects (for a trivial example, my course attendees doing their case studies) will not necessarily have IT support or buy-in for their privacy governance activities, so they have to be self-sufficient. That said, the metadata spreadsheets amount to a small ten-table relational database, and others can build connectors to taste. For example, anyone can lob in XML connectors with ease, or even gold-plate it with SQL, once I’ve released it to open source.
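To show how little machinery a spreadsheet-backed repository actually needs, here is a minimal sketch (class and column names are my own invention, not the API’s): a spreadsheet tab exported to CSV becomes one "table", with rows keyed by the header line. Note the naive comma split does not handle quoted fields; a real connector would:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/**
 * Minimal CSV-backed metadata repository (illustrative sketch): one
 * spreadsheet tab exported to CSV becomes one "table"; each data row becomes
 * a map keyed by the header line.
 */
public class MetadataRepository {
    private final List<Map<String, String>> rows = new ArrayList<>();

    public static MetadataRepository fromCsv(Path csv) throws IOException {
        MetadataRepository repo = new MetadataRepository();
        List<String> lines = Files.readAllLines(csv);
        if (lines.isEmpty()) return repo;
        // First line is the header; does not handle quoted/escaped commas.
        String[] headers = lines.get(0).split(",", -1);
        for (String line : lines.subList(1, lines.size())) {
            String[] cells = line.split(",", -1);
            Map<String, String> row = new LinkedHashMap<>();
            for (int i = 0; i < headers.length; i++) {
                row.put(headers[i].trim(), i < cells.length ? cells[i].trim() : "");
            }
            repo.rows.add(row);
        }
        return repo;
    }

    /** All rows whose given column equals the given value. */
    public List<Map<String, String>> where(String column, String value) {
        List<Map<String, String>> out = new ArrayList<>();
        for (Map<String, String> row : rows) {
            if (value.equals(row.get(column))) out.add(row);
        }
        return out;
    }
}
```

A handful of such tables, plus simple lookups like `where`, is all a self-sufficient governance architect needs until someone gold-plates it with XML or SQL connectors.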

Enterprise metadata inputs are supplied by the enterprise: PIAs (mandatory), optional custom rules (usually specified by, or in consultation with, the Legal function to override legal analytics, change risk profiles, model future legislation, etc), optional accepted risks (usually specified by, or in consultation with, the Compliance/Audit function), an optional enterprise data-set dictionary, and optional internationalization (added after alpha-testing the associated course), which also can be used to redefine terms. These are all publishable in principle to regulators/data subjects/Courts without compromising any commercial sensitivities (other than, perhaps, the existence of the processing).
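The mandatory/optional split above can be captured structurally, so that an incomplete bundle fails fast. The following is a hypothetical sketch (names and shapes are mine, not the API’s), modelling each input as a simple list of entries:

```java
import java.util.List;
import java.util.Optional;

/**
 * Hypothetical sketch of the enterprise-supplied metadata bundle: PIAs are
 * mandatory, everything else is optional. All names illustrative only.
 */
public class EnterpriseMetadata {
    private final List<String> pias;                       // mandatory
    private final Optional<List<String>> customRules;      // Legal function
    private final Optional<List<String>> acceptedRisks;    // Compliance/Audit function
    private final Optional<List<String>> dataSetDictionary;
    private final Optional<List<String>> i18nOverrides;    // may also redefine terms

    public EnterpriseMetadata(List<String> pias, List<String> customRules,
                              List<String> acceptedRisks, List<String> dataSetDictionary,
                              List<String> i18nOverrides) {
        if (pias == null || pias.isEmpty()) {
            throw new IllegalArgumentException("At least one PIA is mandatory");
        }
        this.pias = List.copyOf(pias);
        this.customRules = Optional.ofNullable(customRules);
        this.acceptedRisks = Optional.ofNullable(acceptedRisks);
        this.dataSetDictionary = Optional.ofNullable(dataSetDictionary);
        this.i18nOverrides = Optional.ofNullable(i18nOverrides);
    }

    public List<String> pias() { return pias; }
    public Optional<List<String>> customRules() { return customRules; }
    public Optional<List<String>> acceptedRisks() { return acceptedRisks; }
}
```

Because the whole bundle is plain data, the "publishable in principle" property falls out for free: it can be serialized and handed to a regulator without exposing any code.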

Non-enterprise inputs at the moment are supplied by myself: metadata spreadsheets for jurisdictions (about 380 so far), laws (relatively few “interesting” or iconic statutes and torts done so far), and analytics (ditto).

Standard outputs: configurable-depth audit trail (text file or stdout), transaction privacy analyses, and breach/risk reports including quantified legal risks.
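The "configurable-depth" audit trail mentioned above can be sketched as follows. This is my own illustrative stand-in, not the API’s actual logger: each message carries a depth, and only messages at or below the configured depth are emitted, to stdout or any other writer (such as a text file):

```java
import java.io.PrintWriter;
import java.io.StringWriter;

/**
 * Sketch of a configurable-depth audit trail (illustrative names only):
 * messages deeper than the configured maximum are silently suppressed.
 */
public class AuditTrail {
    private final int maxDepth;
    private final PrintWriter out;

    public AuditTrail(int maxDepth) {                  // stdout variant
        this(maxDepth, new PrintWriter(System.out, true));
    }

    public AuditTrail(int maxDepth, PrintWriter out) { // file/other variant
        this.maxDepth = maxDepth;
        this.out = out;
    }

    public void log(int depth, String message) {
        if (depth <= maxDepth) {
            out.println("[depth " + depth + "] " + message);
        }
    }

    public static void main(String[] args) {
        StringWriter buffer = new StringWriter();
        AuditTrail trail = new AuditTrail(1, new PrintWriter(buffer, true));
        trail.log(1, "transaction analysed");  // emitted
        trail.log(3, "connector internals");   // suppressed: deeper than max
        System.out.print(buffer);
    }
}
```

Summary-level entries survive at shallow settings while connector noise is filtered out, which is exactly what a regulator-facing audit trail needs.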

I intend to release the API platform itself to open source later this year when I’ve finished adding functionality and it has a chance to stabilize. Hopefully this will be circa June 2015, by which time I understand my Preterlex privacy governance courses (of which the API is the final component) will be live after a final beta test. The API platform is much smaller than I expected last year, currently only about 70-80 classes, many of which are utility or data-source connector classes anyway. Most of the work is done by the metadata content. As it should be.

I welcome any expression of interest from IT privacy professionals with Java skills who might be minded to develop such an API further, or might wish to build front-end privacy products using such an API platform as a foundation.

Some Q&A

Why have I built this?

For the 12 years since I conceived the technology I’ve looked on with increasing disbelief as nobody else has built one. It dawned on me last year that its absence has crippled privacy law generally, and in particular the ability of companies to comply and regulators to regulate, leaving data subjects powerless. As Alan Rickman in full evil mode says, if you want to get something done, you have to do it yourself. So I’ve done it.

I’m hoping that from June 2015 much of the privacy legislation palaver and angst, for example in the GDPR negotiations, might now become redundant. We can do privacy-by-design easily and cheaply. We can self-notify easily and quickly. (We already knew data portability was technically easy, because of our rock-solid data feed formats and the 360-degree view of the customer about which I myself was speechifying ten years ago.) We don’t need Unsafe Harbor any more because, for the same reasons, compliance is easy (though I am most grateful to the member of the Article 29 Working Party who helpfully reminded me, and I paraphrase very unfairly, that Safe Harbor is all about politics and the safety of controllers rather than common sense and the safety of PII or data subjects). Etc, etc.

Why do I want to release the API platform into open source?

  1. As a technology platform (as distinct from the products which others are free to build on top of it – there’s room for plenty of snouts in the trough) it’s far too important to be proprietary.

  2. Like all interesting IT it will need ongoing development, refactoring, and even (cough) correction, and I have neither the time nor the inclination to maintain it myself in the long run. After all it took 11 years for me to twist my own arm to get it done. I don’t even want to run the project or administer the repository, so someone else can have that dubious glory.

  3. That will free me up to focus on finishing up the “metalaw” – the legal metadata, which as a lawyer I find fun and relatively easy and so I’ll continue to maintain (and thus inevitably the material logical data model API classes as well, but they’re easy too).

  4. It’ll also free me up to revert to some seriously disruptive legal technologies that I’ve had to mothball since last year. When that’s done I can return to spending enough time on real lawyering to reap the whirlwind 😉

I apologize for any mis-communication and welcome comments/queries. I’ll add to the Q&A as necessary.