
Recent judgments have demonstrated the difficulties claimants face in sustaining mass opt-out claims for data misuse. AI, however, may offer a solution.
Under CPR 19.8 a claim may be pursued on an opt-out basis by a representative who has the same interest as the other claimants. In Lloyd v Google and more recently Prismall v Google, the appellate courts have twice engaged with representative claims for data misuse, with both cases resulting in victories for the defendants. On each occasion the claimants failed because the class representative was unable to demonstrate a ‘lowest common denominator’ of harm uniting all the potential claimants.
To solve this problem, an AI system could be trained on the claimants' data to produce an individualised rating and valuation of the damage caused by the relevant data misuse. It may then be possible to plead a representative claim with sufficient specificity to include only the class of individuals whose claims clear a scoring threshold established by the deployers of the AI system. User-generated data, collected at scale, is highly valuable to the companies that collect it because of the insights it drives. This article suggests that the same data, once analysed by a specialised AI system, might also be used on the claimant side to overcome the evidential and logistical difficulties which have to date rendered mass opt-out data misuse claims largely unworkable.
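The class-definition mechanism described above can be reduced to a simple filtering step: a scoring model assigns each potential claimant a damage score, and only those above a chosen threshold fall within the represented class. The following Python sketch is purely illustrative; the claimant names, scores, threshold value, and the existence of any such scoring model are all assumptions for the purposes of this example, not part of any real system.

```python
# Illustrative sketch only: a hypothetical AI valuation model is assumed to
# have already assigned each potential claimant a damage score. The class is
# then defined as those whose score clears a pleaded threshold.

from dataclasses import dataclass

@dataclass
class Claimant:
    claimant_id: str
    damage_score: float  # output of the hypothetical scoring model

def select_class(claimants: list[Claimant], threshold: float) -> list[Claimant]:
    """Return only those claimants whose scored damage meets the threshold."""
    return [c for c in claimants if c.damage_score >= threshold]

# Invented example data.
claimants = [
    Claimant("A", 0.82),
    Claimant("B", 0.41),
    Claimant("C", 0.67),
]

eligible = select_class(claimants, threshold=0.5)
print([c.claimant_id for c in eligible])  # -> ['A', 'C']
```

The point of the sketch is that class membership becomes a mechanical, objective test applied uniformly to every individual, which is precisely the kind of 'same interest' uniformity that the Lloyd and Prismall claims were held to lack.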
Jacob’s article has been published in The Law Society Gazette, here.
A longer version of the article may be found here.
The views in these articles are entirely those of the author and are not attributable to any other Members of Chambers or clients.