Currently Global Privacy Officer at Wipro, Ivana Bartoletti is an Italian AI researcher specializing in privacy and personal data security. Author of An Artificial Revolution in Power, Politics, and Artificial Intelligence and co-founder of the Women Leading in AI Network, she is committed to an ethical revolution in AI. Interview.
Can you tell us more about your academic background?
My research has always focused on the technical and security issues surrounding technology. My work at Oxford University – more specifically at the Oxford Internet Institute – and at Virginia Tech, for example, allowed me to explore advances in the global sharing of information, along with all the related issues: privacy, data protection and human rights.
In 2018, I published my book An Artificial Revolution in Power, Politics, and Artificial Intelligence. I recently finished a report for the Council of Europe on the issue of respect for human rights. The question is how digital products, platforms and algorithms can be more respectful of internet users, because it turns out that the largest social networks rely on a scoring system that is not what we would call virtuous.
Has the recent resurgence of interest in generative AI also caught your attention?
Yes, and it must be said that these algorithmic paradigms are not new – I have no hesitation in saying so. What is new is that hundreds of millions of people are now using them, and they dramatically speed up access to information. But the same question arises: would you be willing to drive a car whose brakes had not been tested beforehand? That is what is happening with these new tools, even though they clearly help us with tasks in unprecedented ways.
The first risk remains that these AI systems rely exclusively on algorithmic processes, which can lead to failures if human oversight is not provided. The second risk, in my opinion, relates to information manipulation, and we have already seen examples of that with Midjourney. Tighter oversight must be ensured to prevent these AI systems from being used to spread false information or sow discord in public debate. The third risk concerns privacy and data protection at the European level. The European Data Protection Board has just set up a working group on ChatGPT and all AI systems of the same type.
Finally, the future European regulation on AI, the Artificial Intelligence Act – due to come into force at the end of 2024 – will also cover these new models to ensure that all risks are taken into account.
Italy recently lifted its ban on ChatGPT, in part because of its compliance with age verification rules for minors… Have you identified other issues that could qualify as GDPR violations?
Yes, and that decision alone is not enough: the Italian regulator used the means at its disposal to better regulate ChatGPT from the standpoint of transparency. It could have invoked more European rules, but that requires more in-depth investigations, particularly into data use.
In the end, should we be wary of AI, or can we find a balance? Is there a danger of depriving ourselves of its potential if it is controlled too broadly?
Yes, it is possible to find a balance, and we must at all costs distance ourselves from this dichotomy between AI and privacy, or AI and regulation. Because at the end of the day, if a product is not reliable and safe, companies will not use it. This explains the large gap between the rapid development of AI and its effective deployment. The European approach to the matter is, in my view, the most relevant: even though the race to AI is global, the fact remains that the most reliable and ethical products are the ones that will endure over the long term.
For generative AI, the risks have also been known for a long time. These systems mimic humans the way a parrot does, uncannily, but that does not give them a "soul" for all that. Journalists, reporters and politicians should stick to this reality and reject the idea of a conscious AI. It is better to educate the public about these issues than to instill fear.
Too many big tech executives today are playing with the idea of a doomsday scenario in which AI takes over. We grant these same giants so much power because they develop all the tools we commonly use today. We know these technologies need direction, and history has taught us very well that the market can never regulate itself. More and more companies in the sector are themselves arguing for this: regulation is necessary to avoid excesses, but also to avoid the risk of solutions staying on the shelf for lack of regulatory certainty.
Why is the AI Act so unique? Can Europe find this balance and impose it on the rest of the world?
The big question is: will the AI Act have the same impact as the General Data Protection Regulation? In the meantime, this regulation remains the first in the world to protect privacy in the face of artificial intelligence. It offers a framework based on the product itself and the risks it poses. Similar projects have recently been presented by the Cyberspace Administration of China (CAC) and the National Telecommunications and Information Administration (NTIA) of the US Department of Commerce, but the European peculiarity lies in its ambition to set rather demanding standards for artificial intelligence.
When the Internet was born, the ambition was to create a space of freedom and peace that would escape the control and censorship of power. That ambition remains the same today, but we did not anticipate how serious the excesses could be in terms of misinformation. We are at a crossroads in the relationship between humanity and technology, and it is in this context that European texts such as the Digital Services Act, the Digital Markets Act, the Data Governance Act and the Artificial Intelligence Act were born. We have realized that technology, data and algorithms play a dominant economic role, and it is only natural that countries take a sovereign view of these issues.
In your book An Artificial Revolution in Power, Politics, and Artificial Intelligence, published in 2018, you also advocate the need to protect human rights and democracy, going so far as to denounce the use of AI as a populist theme in political debate… Where does this lead us today?
This book has inspired many students and scholars in their work, and I am truly grateful for that. This danger fascinates me, and I have always been convinced that it is important to highlight it because it has a direct impact on society and the collective imagination. Populism around artificial intelligence is becoming commonplace, and companies are increasingly seizing on it. I saw this at this year's International Journalism Festival, an event held every year in Perugia, Italy.
It is also clear that there is an ongoing fear, and some companies have banned ChatGPT from their practices, fearing in particular that it could replace jobs. But it must be shown that a responsible paradigm can be invented: a safe space that makes it possible to get the most out of certain products or to help employees in a benign way. If we allow governments and corporations to use technology solely for profit, we risk losing our humanity. We now need a political response to this issue to ensure that democracy and human rights are protected.