IN DIALOGUE WITH OUR ELONTECH EXPERTS | January-March 2019: Lio Ploum

Lio Ploum, Researcher at UCL (Université catholique de Louvain), ELONTech Advisor

1) Do disinformation and digital media constitute a challenge to democracy?

To answer the question, we would have to define democracy first. We have an intuitive notion but, when we study it closely, we realise that defining the word “democracy” is in itself a huge challenge. And even with a proper definition, we would still need to define “disinformation”.

What we witness is that digital media are challenging the notion of “truth” that was previously held by a few centralised media outlets. What was printed in the newspaper or seen on television was the “truth”; nobody questioned it. But this phenomenon is quite recent: before the French revolution, “truth” was mostly a monopoly of religions. Religions were a by-product of the technology of writing. Religions managed to write things down and people didn't question them. Religions were essential to maintaining an aristocratic society. If it was written, it was the truth. Thanks to the printing press, writing became more common. The media became the truth. Attending religious ceremonies was replaced by watching the news on television. Our sacred duty switched from “obey God's commandments” to “be informed, listen to the news”. Aristocracy was replaced by something we call “democracy” but which is mainly the power of a few people whom we believe we have chosen through elections.

But, after the printing press, we are currently witnessing the second major evolution of the technology known as “writing”: the Internet. We are in the middle of the transition, so it is very hard to predict what will happen, but there are two things we can be sure of:

1. Traditional media will be tomorrow what religion is today: something that still exists out of habit but is obsolete and decaying.
2. Democratic power will be tomorrow what aristocracy is today: something people don't really care about anymore.

 
2) Can we hold algorithmic decision-making systems accountable?

The main problem is that most modern algorithms are evolving and learning. That's why we often talk about “AI”. This means that, even in the simplest case, there are at least four elements involved in any decision taken by an algorithm:

1. The algorithm itself, as written by the programmer.
2. The learning data, fed by the software producer to “initialise” the algorithm.
3. The data encountered by the algorithm during its normal use.
4. The input by the user.

If an algorithm makes a really bad decision, it is nearly impossible to know “why” it made that decision. So whom do we hold accountable?
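To make this concrete, here is a minimal sketch; the function names and numbers are invented for illustration. The same algorithm, given the same user input, reaches opposite decisions purely because it was initialised with different learning data, so responsibility cannot be pinned on the code alone.

```python
# Hypothetical sketch: the same algorithm, written once by a programmer (element 1),
# reaches opposite decisions depending on the learning data it was initialised with
# (element 2) and the input supplied by the user (element 4).

def train_threshold(training_data):
    """A trivial 'learner': averages labelled examples to place a decision boundary."""
    approved = [x for x, label in training_data if label == "approve"]
    rejected = [x for x, label in training_data if label == "reject"]
    return (sum(approved) / len(approved) + sum(rejected) / len(rejected)) / 2

def decide(threshold, user_input):
    """Compare the user's input against a boundary nobody wrote explicitly."""
    return "approve" if user_input >= threshold else "reject"

# Two producers ship the *same* code initialised with different learning data.
model_a = train_threshold([(30, "reject"), (70, "approve")])  # boundary at 50
model_b = train_threshold([(50, "reject"), (90, "approve")])  # boundary at 70

print(decide(model_a, 60))  # approve
print(decide(model_b, 60))  # reject -- same code, same input, opposite outcome

# A real system is worse still: it also keeps absorbing the data it encounters
# during normal use (element 3), which nobody controls in advance.
```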

Maybe, in the future, algorithms will have their own “life”. They will be considered legal entities and we will be able to judge them. An algorithm could be deleted forever. We may start to think about algorithms the way we think about animals: entities that can help us but that could also harm us.

3) Is it possible or desirable to build moral principles into AI systems?

In fact, to some degree, a coder already builds their own moral system into any software they write. That's nothing new. When you write geotracking software for Google, you know perfectly well that it will be used to display targeted advertising.

But, in the long term, AI systems will build their own morals.

Just as we realised that we could not write software that drives a car but could write software that learns how to drive a car, we will realise that we can't write moral software but we can build software that learns about morality. The problem is that there's a huge difference between the moral principles we believe to be true and the moral principles we apply in everyday life. Software does not learn what we preach but what we really do. That's probably why we are scared of unethical software: it's simply our own reflection in the mirror.
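A deliberately crude sketch of this point; the hiring data and the “learner” below are invented for illustration. The stated policy is to hire on experience, but the learner picks up what was actually done.

```python
# Hypothetical example: a learner trained on past decisions reproduces what was
# actually done, not the principle the decision-makers claimed to follow.

past_decisions = [
    # (years_of_experience, lives_nearby, hired)
    # The stated policy was "hire on experience"; in practice, nearby candidates won.
    (5, True, True), (2, True, True), (6, False, False), (3, False, False),
]

def learn_rule(history):
    """Pick the feature that perfectly explains past outcomes (a crude 'learner')."""
    nearby_explains = all(hired == lives_nearby for _, lives_nearby, hired in history)
    return "lives_nearby" if nearby_explains else "experience"

print(learn_rule(past_decisions))  # "lives_nearby": it mirrors the practice, not the preaching
```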

4) What do you consider the next 5-year challenge/trend for research and innovation in your field of expertise?

Social choice theory demonstrated, decades ago, that there is no election system which can always elect the “true choice” of the voters. The biggest challenge in that area will be to think outside the box and build systems which are not “election systems” anymore. After all, why should we elect someone to take a decision in our name when we could take that decision in real time? Or we could ensure that only the people affected by a decision have a voice. The simplest example: why should men be able to decide the laws about abortion?
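A tiny worked example of the classic Condorcet paradox, one of the results behind this impossibility (the ballots are invented for illustration): three perfectly reasonable voters produce a collective preference that cycles, so there is no single “true choice” to elect.

```python
# Condorcet paradox sketch: pairwise majority preferences can cycle.
from itertools import permutations

ballots = [
    ["A", "B", "C"],  # voter 1 prefers A > B > C
    ["B", "C", "A"],  # voter 2 prefers B > C > A
    ["C", "A", "B"],  # voter 3 prefers C > A > B
]

def prefers(ballot, x, y):
    """True if this ballot ranks candidate x above candidate y."""
    return ballot.index(x) < ballot.index(y)

for x, y in permutations("ABC", 2):
    wins = sum(prefers(b, x, y) for b in ballots)
    if wins > len(ballots) / 2:
        print(f"A majority prefers {x} over {y}")

# Output: A beats B, B beats C, and C beats A -- a cycle with no stable winner.
```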

Thinking outside the box will require true interdisciplinary cooperation. Technologies like blockchains and AI allow us to think in different ways about sociology and politics. But most engineers don't know the first thing about sociology or anthropology, while sociologists have few gateways to understand why those technologies may change society. Maybe we need a new kind of scientist, which I call the “techno-sociologist”.

I like to illustrate this with an anecdote: as far as I know, nobody truly understands how Bitcoin works from a techno-sociological point of view.

5) What are the latest updates on “Liquid Democracy”? How would you comment on the impact of the disruption of institutions in general, up to this point?

Liquid democracy is neither a technology nor a clear idea. It is more of a broad concept, like universal basic income. Like basic income, liquid democracy is still an idea in its infancy. In the short term, nothing will change and no disruption will happen. People will try democracy.earth or work with colony.io and say “it's cool but it's not a revolution”.
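That said, the core mechanism usually associated with the term can be sketched in a few lines: each participant either votes directly or delegates their vote to someone they trust, and delegated votes flow along the chain. The names and delegation graph below are invented for illustration.

```python
# Hypothetical sketch of the delegation mechanism behind "liquid democracy".

delegations = {"alice": "bob", "carol": "bob", "dave": None, "bob": None}
direct_votes = {"bob": "yes", "dave": "no"}  # only non-delegators vote directly

def resolve(voter, seen=None):
    """Follow the delegation chain until someone who votes directly is found."""
    seen = seen or set()
    if voter in seen:          # a delegation cycle: the vote is lost
        return None
    seen.add(voter)
    target = delegations.get(voter)
    return resolve(target, seen) if target else direct_votes.get(voter)

tally = {}
for person in delegations:
    choice = resolve(person)
    if choice:
        tally[choice] = tally.get(choice, 0) + 1

print(tally)  # {'yes': 3, 'no': 1} -- bob carries alice's and carol's votes
```

The mechanism itself is simple; the hard questions (how to handle delegation cycles, revocation, privacy, and whether people actually want to delegate) sit around it rather than in it.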

Then, one morning, we will realise that we don’t really care about institutions. Like religion and aristocracy, those institutions will still be there. They will still have a niche market. We will not really care about them anymore.