Part One | Political Lens: Between Laissez-Faire and the 5-Year Plan?

Riemann Network
Oct 7, 2020

State of Play

AI and Big Data have already proven to be powerful forces for good and bad, but both technologies pose inherent policy challenges that make some contemplate regulation:

The (self-)learning process of AI is opaque, as is Big Data’s ability to piece together conclusions from disparate sources. These processes sometimes behave like a black box, where the correlation between input and output is difficult to understand. Their actions are therefore no longer open to human scrutiny to the extent people were used to with “normal” machines. The report by the European Group on Ethics states that “it is impossible to understand how exactly AlphaGo managed to beat the human Go World champion.”

Biases and errors, introduced via the data pools AIs learn from, can become ingrained in an AI, leading to “online ads that show men higher-paying jobs; delivery services that skip poor neighbourhoods; facial recognition systems that fail people of colour; recruitment tools that invisibly filter out women”, according to Powles and Nissenbaum. This is not necessarily the AI’s fault but the designers’, and less a technical than a social problem. Proof of this can be found in platforms without AI: on Airbnb, for example, booking requests from guests with distinctively African-American names are about 16% less likely to be accepted (a short sketch of how such a gap can be measured follows after these points).

Opacity also makes it difficult to know how powerful an AI is, both in terms of its ability to do its job and in terms of its implications. Policy makers and experts advocating regulation are often ridiculed as exaggerating, but there are plenty of examples of AI specialists being surprised by the power or weakness of the AIs they have built. Probably the biggest case is Facebook: the company seemed genuinely surprised by the power of its news-feed AI, which had quickly polarised its users and facilitated political earthquakes.

AIs possess two forms of power. One is comparable to the power of the brain: AI algorithms can find novel solutions to all kinds of intellectual problems. AIs and Big Data can, however, also be part of an autonomous system (quasi-synonyms are robot, cyber-physical system, advanced mechatronics or embodied AI). These are physical systems guided by AI that also include sensors and mechanics and therefore become part of the physical world; examples are autonomous cars, cleaning robots or autonomous weapons. For now, many consider them the most dangerous. Not only can autonomous systems be opaque, hard to scrutinise, probably biased and difficult to assess in their power; their real-world decision-making independence and physical lethality are also deeply unnerving to many.
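As a minimal illustration of the bias point above (not from the original article), such a gap can be quantified by comparing acceptance rates across groups once outcome data exist. The Python sketch below uses invented field values and toy numbers chosen to echo the roughly 16% relative gap reported for Airbnb; it demonstrates the measurement idea only, not any platform’s real data.

```python
# Minimal sketch: measure an acceptance-rate disparity between groups.
# Group labels and sample numbers below are illustrative assumptions.
from collections import defaultdict

def acceptance_rates(applications):
    """Return per-group acceptance rates from (group, accepted) records."""
    totals, accepted = defaultdict(int), defaultdict(int)
    for group, was_accepted in applications:
        totals[group] += 1
        accepted[group] += int(was_accepted)
    return {g: accepted[g] / totals[g] for g in totals}

# Toy data: 50% acceptance for one group, 42% for the other.
sample = ([("white-sounding", True)] * 50 + [("white-sounding", False)] * 50
          + [("black-sounding", True)] * 42 + [("black-sounding", False)] * 58)

rates = acceptance_rates(sample)
relative_gap = 1 - rates["black-sounding"] / rates["white-sounding"]
print(rates)                                # {'white-sounding': 0.5, 'black-sounding': 0.42}
print(f"relative gap: {relative_gap:.0%}")  # relative gap: 16%
```

The same comparison works for any platform that records who applied and who was accepted, which is one reason audit access to outcome data features in the accountability options discussed below.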

Most current policy responses to these issues fall into two camps. One camp takes a laissez-faire approach, assuming that the AI market (and angered consumers) will keep companies from becoming too unsafe. The US has mostly followed that approach. Meanwhile, authoritarian states such as China have strengthened control over the most important aspects of AI research and development (more on the US and China in the geopolitical vignette). The EU seems to have decided to take the (probably best, but most complicated) middle road: preparing policy options that are sufficient to guarantee beneficial AI, but not, as MEP Giménez Barbat stated, to “give in to the temptation to legislate on non-existent problems.” This leaves many policy options:

Accountability: This includes democratic oversight of government AI; different forms of human oversight, verification and control depending on the sensitivity of the AI; the disclosure of AI decisions affecting humans; the setting up of appeal mechanisms (e.g. the GDPR establishes the right for individuals to know about and challenge automated decisions); control mechanisms ensuring the data privacy of AI data pools; and the registering/certifying of AIs with certain tasks or capabilities.


Transparency: In an effort to minimise the black-box problem, which has created much of the political and social backlash, researchers (e.g. at MIT and Google) are working to translate the decisions and reasoning of AIs for humans. This needs human input that could make AI more expensive, but it would also make AI more efficient and safer. Other options would be a “right to explanation”, or requiring developers to keep historical records of issues and of what went into and came out of the AI (a minimal sketch of such record-keeping follows after this list).

Safety and control: This has many facets, for example ensuring consistent AI decision making, preventing hacking, reducing possible errors and risks, and limiting damage in case something goes wrong. Another facet is limiting AI’s power through e.g. competition law and taxation (see next chapter). Some also call for banning or severely limiting R&D in fields of AI such as autonomous weapons, superintelligent AI and offensive cyber capabilities.

Public debate: This includes providing sufficient education about AI, paying for AI training for natural and social scientists and policy makers, facilitating science translation, and creating incentives for AI experts to work for NGOs, government and in politics.
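To make the record-keeping option under “Transparency” concrete, here is a minimal sketch, assuming a simple JSON-lines file and invented names such as log_decision and credit-model-v3, of how a developer might keep historical records of what went into and came out of an AI. It is an illustration of the idea, not a prescribed implementation.

```python
# Minimal sketch of an AI decision audit log (all names are illustrative).
import hashlib
import json
import time

def log_decision(logfile, model_id, inputs, output, explanation=None):
    """Append one tamper-evident record of an AI decision to a JSON-lines file."""
    record = {
        "timestamp": time.time(),     # when the decision was made
        "model_id": model_id,         # which model/version decided
        "inputs": inputs,             # what went into the AI
        "output": output,             # what came out
        "explanation": explanation,   # optional human-readable reasoning
    }
    # A digest over the record makes later tampering detectable.
    payload = json.dumps(record, sort_keys=True)
    record["digest"] = hashlib.sha256(payload.encode()).hexdigest()
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical example: record a loan-screening decision so it can be
# disclosed and, if need be, appealed later.
log_decision("decisions.jsonl", "credit-model-v3",
             {"income": 42000, "region": "NL-ZH"}, "rejected",
             explanation="income below learned threshold")
```

A log of this kind is what the disclosure and appeal mechanisms listed under “Accountability” (e.g. the GDPR right to challenge automated decisions) would draw on in practice.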

Policy should be based on ethical AI principles. These are, however, not technical solutions. The often-used example of an autonomous car deciding how to crash is an inherently political question, as there is no worldwide agreement on what ethical behaviour entails. Balancing different moral approaches is therefore a political task that cannot be delegated to entities like companies, programmers or AIs themselves (a list of ethical guidelines by expert groups is provided at the back).

Main Trends

AI has the potential to improve government and governance by increasing effectiveness and oversight. For example, police use predictive tools to assess where to patrol or which officers are likely to engage in misconduct.

Just like the social-media, internet-freedom and data-privacy debates today, AI is very likely to become a more central topic in future political discourse. Most likely this will be because it affects the daily lives of ordinary citizens and causes political scandals, job losses (see next chapter), a perceived loss of control, and societal divisions. This can lead to political crises and violence, especially if AI is seen as oppressive, business-controlled or technocratic.

AI will give rise to new policy actors and will rebalance the power of existing ones. New expertise will become essential, and new policy entrepreneurs will emerge. Technology companies seem to have a mixed track record as European policy actors. Some, like Google, which runs over 2,000 commercial AI processes, are again leading the industry and have already partnered up to contribute to the policy debate. New, powerful government actors central to verification and control might emerge too.

The policy cycle or process will change too, with AI enabling new ways to generate, formulate, decide on and implement policy. AI will help in finding policy gaps and political strategies, and in assessing probable policy impacts.

The already ongoing political battle between top-down, centralising and bottom-up, distributive AI development and usage will intensify. Authoritarian countries in particular want to centralise political power in AIs as a way to keep control of an ever more complex society. Europe’s strength and values lie in bottom-up AI, and it could make a “brand” of it. It could, for example, export the GDPR and other regulation; provide AI for SMEs, government watchdogs and news organisations; and invest in and become a market leader in less data-hungry, decentralised or blockchain-based AI.


The EU will lag behind in AI for some time yet, because it has a more complicated task than others. On the other hand, with a resilient and free economy, a balanced regulatory system, an interested public, intact societies and world-class research, it will be well placed in the medium term.

Key Uncertainties

The need for future regulation will depend a lot on the technological progress of AI. Some experts believe that advances in machine learning are plateauing and that AI will only develop slowly and incrementally from now on. Others see much more change coming, even revolutionary jumps such as superintelligent AIs that can be employed in many fields at the same time (see the technology chapter). A more powerful AI would also need new forms of control. One theoretical way would be to align AI values with ours. For value-aligned AI to become a reality, humanity would need to better understand its own and the AI’s intelligence, values, goals and modes of learning.

What will be the political division in 2030? State versus market might be replaced by community versus algorithm.

Which positions on AI will become mainstream on the left and right, and which will become populist positions? For now, they are incoherent.

Will mainstream politicians catch up with fringe parties’, populists’ and nationalists’ use of social media and Big Data analytics? As with previous media revolutions, non-status-quo powers seem better at exploiting them.

How will political belief systems and myths adapt to AI? The advent of general-purpose technologies has had profound impacts on ideologies before (e.g. the internet’s effect on liberalism or nationalism). Liberalism’s emphasis on freedom could be challenged by the good life in an AI-led world. Even humanism’s focus on humans could be challenged by an appreciation for AI.

Most importantly, will polities be able to find a way to harness the positive power of AI without becoming dependent on it? Maybe a future resilient deliberative democracy with a new generation in charge will integrate and embrace the power of AI. Maybe bogged-down democracies will be undermined by AI’s superior way of decision making. As Paula Boddington put it: ‘The quintessentially bad excuse of the twentieth century was, “I was only following orders,” the quintessentially bad excuse of the twenty-first century may yet be, “I was only following the dictates of a machine (or algorithm)”’.

Possible Disruptions

Political parties could be even more sidelined than today, should a direct, AI-controlled line between politicians and the electorate exist. Even political leaders themselves could fall victim to a more decentralised, real-time direct democracy simplified by Big Data and regulated by AI.

And what if…

the rights to the AI leading this decentralised democracy, with its vast knowledge and power, were in private-sector hands?
political deliberation were dominated by bots with superhuman abilities to persuade you of a political position?

Imagine a very powerful AI version of Siri that has a friendly relationship with, and listens to, most of your citizens. Wouldn’t that AI know which policy would satisfy the biggest group of people? One rather scary idea is that the perfection of data analysis might replace democracy organically. Instead of voicing your opinion directly in elections or direct democracy, you would voice it indirectly, by leaving data traces. Your public and private communication and the changes in your daily routine might say more about your political position and wishes than you could verbalise.

Written by Leopold Schmertzing

See follow-up articles for detailed analysis
