The triad of human rights, democracy and the rule of law forms the core of western, liberal constitutions. All actions of government and legislators, and indeed societal reality, are measured against these principles. Given the foreseeable pervasiveness of artificial intelligence (AI) in modern societies, we must ask how this new technology should be shaped to maintain and strengthen the constitutional triad rather than weaken it.

Binding big tech and AI to the rule of law, democracy and human rights is necessary because, on the one hand, the capabilities of AI, based on big data and combined with the pervasiveness of the devices and sensors of the internet of things, will eventually govern core functions of society, from education via health, science and business right into the sphere of law, security, political discourse and democratic decision-making.

On the other hand, it is also high time to bind new technology to basic constitutional principles. The absence of such framing has already fostered a widespread culture of disregard for the law and put democracy in danger; the Facebook/Cambridge Analytica scandal was only the latest wake-up call in that respect.

It would be naive to ignore that for most in our societies today, the reality of how they use the internet and what the internet delivers to them is shaped by a few mega corporations, as it would be naive to ignore that the development of AI is dominated exactly by these mega corporations and their dependent ecosystems.

Five companies dominate the field

In particular, the activities of the ‘frightful five’ are shaping our experience of the internet and digital technologies like AI: Google, Facebook, Microsoft, Apple and Amazon. These corporations, together with a few others, shape not only the delivery of internet-based services to individuals. They are also extremely profitable, commanding ever higher stock market valuations, and therefore wield economic power which not only guarantees disproportionate access to legislators and governments but also allows them to hand out direct or indirect financial or in-kind support in every area of society relevant to opinion-building in a democracy: governments, legislators, civil society, political parties, schools and education, journalism and journalism education and, most importantly, science and research.

Today, the frightful five are present in all these fields: to gain knowledge and learn for their own purposes, but also, to put it diplomatically, to win sympathy and understanding for their concerns and interests.

Four sources of power

The accumulation of digital power, which shapes the development and deployment of AI as well as the debate on its regulation, is based on four sources of power.

First, deep pockets: money is the classic tool of influence over politics and markets. Not only can the digital mega players afford to invest heavily in political and societal influence, as already mentioned; they can also afford to buy up new ideas and start-ups in AI, or indeed in any other area of interest to their business model, and they are doing just that.



Second, these corporations increasingly control the infrastructures of public discourse and the digital environment decisive for elections. No candidate in a democratic process today can afford not to rely on their services. And their internet services increasingly become the only or main source of political information for citizens, especially the younger generation, to the detriment of the Fourth Estate, the classic journalistic publications whose ambition to hold power to account is so important to democracy.


Third, these mega corporations are in the business of collecting personal data for profit and of profiling each of us based on our behaviour, online and offline. They know more about us than we or our friends do, and they use this information, and make it available to others, for profit, surveillance, security and election campaigns. They benefit from the claim to empower people, even as they centralise power on an unprecedented scale.

Fourth, these corporations dominate the development and integration of systems into usable AI services. While their basic AI research may, in part, be publicly accessible, the far more resource-intensive work of systems integration and commercial AI applications takes place in a black box, with resources surpassing public investment in comparable research in many countries.

Resisting rules & regulation

At the same time, the internet giants are the only group of corporations in history that has managed to keep its output largely unregulated, to dominate markets while ranking among the most profitable on the stock exchange, to command important influence over public opinion and politics, and at the same time to remain largely popular with the general public. It is this context of a unique concentration of power, the experience of unregulated software and internet-based services, and the history of regulating technology by law, together with the potential capabilities and impacts of this new technology, that must inform the present debate about ethics and law for AI.

Famously, in his ‘Declaration of the Independence of Cyberspace’, John Perry Barlow rejected the idea that any law might suit the internet, claiming that traditional forms of government, those which, we would argue, can only be based on the rule of law, ‘have no sovereignty where we (the actors of cyberspace) gather’. It is no coincidence that this declaration was presented in 1996 at the World Economic Forum.

The doctrine of disruptive innovation, widespread in business schools, eventually legitimised even the disruption of the law. The heroes of the disruptive internet did not just speak out against governments and parliamentary law, or break intellectual property and transport law; it also became fashionable to game a tax system based on national jurisdiction, necessitating decisions by the European Commission such as the one requiring Apple to pay 13 billion euros of previously unpaid taxes in Ireland, or to mislead regulators, as happened in the Facebook/WhatsApp merger case, which led the European Commission to impose a fine of 110 million euros on Facebook.

Avoiding the law or intentionally breaking it, telling half-truths to legislators or trying to ridicule them, as we recently saw in Mark Zuckerberg's testimony at the Cambridge Analytica hearings, became a sport on both sides of the Atlantic, one in which digital corporations, digital activists, and digital engineers and programmers rubbed shoulders.

Their explicit or implicit claim that parliamentarians and governments do not understand the internet and new technologies such as AI, and thus have no legitimacy to set rules for them, is not matched by any self-reflection on how little technologists themselves understand democracy, the functioning of the rule of law, and the need to protect fundamental rights in a world in which technology increasingly tends to undermine all three pillars of constitutional democracy.