05/06 Lecture: Acts of Democracy
Abstract
During the industrial era, as Gilles Deleuze conceived it, governance was based on restriction. Democracy’s instruments were regulation, prohibition, and censorship. But restrictions fail in digital reality. Banned websites easily leap to new domain names, and information labelled hazardous attracts more incoming links, not fewer. We have all seen it happen. Attempts to stifle political views, to block access to deepfakes, and to muzzle the outputs of language models all collapse.
So, how can democracy function today, in a world without prohibitions and with AI?
Three possibilities are proposed. These are acts of democracy meant for collaboration between ethicists and computer scientists. The first addresses recommendation algorithms: it asks how they can be re-engineered to promote curiosity and unfamiliar interests instead of reinforcing established preferences and polarizing echo chambers. The second reconceives the human relation between authenticity and freedom, and then applies that reconception to AI reality. The third defines and applies acceleration AI ethics.
The larger conclusion will be that technology is shifting the logic of governance in the area of society and information. We used to restrict information so that there could be democracy. Now, democracy is the act of generating information, and it is driven by ethics and AI engineering.
Speakers
James Brusseau (Pace University, NYC & University of Trento)
James Brusseau (PhD, Philosophy) is author of books, articles, and media in the history of philosophy and ethics. He has taught in Europe, Mexico, and currently at Pace University near his home in New York City. He is also a visiting professor in the Department of Information Engineering and Science at the University of Trento in Italy. His academic research explores the human experience of artificial intelligence in the areas of personal identity, authenticity, and freedom.
05/06 Lecture: Online Content Moderation: Technical, Social and Normative Challenges
Abstract
TBA
Speakers
Stefano Cresci (IIT-CNR)
Stefano Cresci is a Researcher at the Institute for Informatics and Telematics of the National Research Council (IIT-CNR) in Pisa, Italy. Stefano’s scientific interests broadly lie at the intersection of Web, network, and data science, with a specific focus on content moderation, coordinated online behavior, and human-centered AI. On these topics, he has published more than 100 peer-reviewed articles in venues such as PNAS, WebConf, ICWSM, CSCW, and WebSci. Currently, he leads a prestigious ERC project on data-driven and personalized content moderation. He has received multiple awards, including the IEEE Next-Generation Data Scientist Award and the ERCIM Cor Baayen Young Researcher Award.
06/06 Lecture: AI as a tool for democracy: AI methods supporting democratic governance
Abstract
In my first session, I’ll look at how AI can be used as a tool for democratic processes. AI methods could potentially improve participatory democracy in several ways: supporting better public discussions and helping to give citizens a voice in policymaking.
I’ll begin by looking at the Pol.is system, which provides some genuinely new ways of supporting public discussions, and I’ll focus on some very recent extensions to Pol.is that incorporate AI. Then I’ll consider whether AI methods can provide improved ways of running focus groups, or of analysing public submissions organised by governments.
06/06 Lecture: AI as a target for democracy: Democratic methods supporting AI governance
Abstract
In my second session, I’ll focus on AI as a target for democratic processes. Large-scale AI systems must be governed, and citizens arguably need a voice in this governance too. How can citizens and users shape the way AI systems operate?
I’ll begin by focussing on generative AI models, where a key form of governance is provided through the alignment process, which teaches a model about the kind of responses it should give. I’ll introduce the main alignment methods currently in use, and discuss a range of ideas about how citizens and users could participate. I’ll then introduce some newer ideas about governance of the AI content moderation tools used in social media platforms. These tools can also perhaps be aligned through democratic processes: I will introduce some work that our group at the Global Partnership on AI has been doing to explore this idea.
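The alignment process mentioned in the abstract rests on a simple data structure: pairwise preference comparisons collected from annotators. As a hedged illustration only (the responses, votes, and scoring method below are invented for this sketch and are not the speaker's own methods), here is a minimal Bradley-Terry style fit that turns a handful of comparisons into a ranking over candidate responses:

```python
import math

# Toy pairwise preference data: each tuple means the annotator
# preferred the first response over the second. All names are
# hypothetical placeholders.
comparisons = [
    ("polite_answer", "curt_answer"),
    ("polite_answer", "curt_answer"),
    ("curt_answer", "polite_answer"),
    ("polite_answer", "evasive_answer"),
    ("curt_answer", "evasive_answer"),
]

responses = {"polite_answer", "curt_answer", "evasive_answer"}
scores = {r: 0.0 for r in responses}

# Gradient ascent on the Bradley-Terry log-likelihood, where
# P(winner beats loser) = sigmoid(score_winner - score_loser).
lr = 0.5
for _ in range(200):
    for winner, loser in comparisons:
        p = 1.0 / (1.0 + math.exp(scores[loser] - scores[winner]))
        scores[winner] += lr * (1.0 - p)
        scores[loser] -= lr * (1.0 - p)

ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking[0])  # the response annotators preferred most overall
```

In production alignment pipelines the same comparison data trains a neural reward model rather than a scalar per response, but the democratic-participation question the abstract raises (who supplies the comparisons?) attaches to this data structure either way.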
Speakers
Alistair Knott (Victoria University of Wellington)
Alistair Knott is Professor of Artificial Intelligence at Victoria University of Wellington. He has been an AI and computational linguistics researcher for 30 years. He studied Philosophy and Psychology at Oxford University, then obtained MSc and PhD degrees in AI at the University of Edinburgh. He then moved to New Zealand, working first at Otago University and now at Victoria University of Wellington. Ali’s AI research has mostly been in computational modelling of cognition, in particular with the New Zealand-founded AI company Soul Machines, where a longstanding project is the development of a model of embodied baby cognition, BabyX. Currently, Ali’s work mostly focusses on the social impacts of AI, and on AI regulation. He co-founded Otago University’s Centre for AI and Public Policy, where he worked on Government uses of AI, and on the impact of AI on jobs and work. He now co-leads the Global Partnership on AI’s project on Social Media Governance, with Susan Leavy and Dino Pedreschi. Work on this project has had impacts on EU tech legislation, both in the AI Act and in the Digital Services Act. Most recently, the project founded the Social Data Science Alliance, an alliance of researchers that helps organise the programme of work done by external researchers within large online platforms under the provisions of the EU’s DSA. Separately from these initiatives, Ali has contributed to the Christchurch Call’s Algorithms Workstream, the Global Internet Forum to Counter Terrorism, and the Forum for Information and Democracy.
06/06 Lecture: Human-AI coevolution
Abstract
Human-AI coevolution, the process in which humans and AI algorithms continuously influence each other, increasingly characterises our society but is understudied in artificial intelligence and complexity science literature. Recommender systems and assistants play a prominent role in human-AI coevolution, permeating many facets of daily life and influencing human choices through online platforms. The interaction between users and AI results in a potentially endless feedback loop, wherein users’ choices generate data to train AI models, which, in turn, shape subsequent user preferences. This human-AI feedback loop has peculiar characteristics compared to traditional human-machine interaction, giving rise to complex and often “unintended” systemic outcomes. This talk will discuss relevant studies on the impact of AI-driven recommendations on human behaviour in social media, geographic mapping, online retail and chatbot ecosystems. Moreover, the talk will discuss the human-AI feedback loop and the challenges of measuring and modelling it.
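The feedback loop the abstract describes (user choices train the model, the model shapes later choices) can be made concrete with a deliberately minimal simulation. This sketch is illustrative only and is not the speaker's model: a popularity-based recommender starts from uniform counts, users mostly accept its suggestions, and the loop concentrates attention on a few items:

```python
import random

random.seed(0)

ITEMS = list(range(10))          # hypothetical catalogue of 10 content items
counts = {i: 1 for i in ITEMS}   # the "model": popularity counts learned from clicks

def recommend():
    # Recommend proportionally to learned popularity (pure exploitation).
    total = sum(counts.values())
    return random.choices(ITEMS, weights=[counts[i] / total for i in ITEMS])[0]

def user_choice(recommended):
    # A user who accepts the recommendation 80% of the time,
    # otherwise picks an item uniformly at random.
    return recommended if random.random() < 0.8 else random.choice(ITEMS)

for _ in range(5000):
    rec = recommend()
    chosen = user_choice(rec)
    counts[chosen] += 1          # the user's choice becomes training data

# Despite initially uniform preferences, the loop concentrates clicks:
top_share = max(counts.values()) / sum(counts.values())
print(f"share of clicks on the most-recommended item: {top_share:.2f}")
```

Even this toy version exhibits the "unintended systemic outcome" flavour of the abstract: no individual user intends concentration, yet the closed loop between recommendation and data collection produces it.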
Speakers
Luca Pappalardo (ISTI-CNR)
Luca Pappalardo is a Senior Researcher at the National Research Council of Italy (CNR) and an Associate Professor at Scuola Normale Superiore of Pisa, Italy. Luca is also a member of SoBigData.eu, the European research infrastructure on big data analytics and social mining. With a degree in Computer Science, Luca started exploring massive datasets of human movements, publishing several papers on human mobility analysis and modelling. In a natural evolution of his research trajectory, Luca shifted focus to the pressing issue of urban congestion, pioneering efforts centered on designing routing strategies that balance individual travel optimization with the collective well-being of a city. Luca is now broadening his scope to address the profound impact of AI on complex systems in realms like social media, conversational systems, online retail and, of course, urban environments. His overarching goal is to design next-generation algorithms that balance the individual needs of users with broader collective objectives using tools at the intersection of computer science, network science, and computational social science.
09/06 Lecture: AI, Data and Rights: the new grammar of European Digital Regulation
Abstract
This talk explores the transformation of the traditional dichotomy between data protection and data circulation, a tension that has long shaped European digital regulation. Far from being mutually exclusive, these two dimensions are increasingly intertwined in the age of artificial intelligence and large-scale data ecosystems. Rather than dissolving the paradox, the European Union is reframing it through a new legal grammar rooted in trust, innovation, and rights.
The presentation analyzes how recent EU initiatives — including the AI Act, the Digital Services Act, and the broader European Data Strategy — have shifted the focus from defensive regulation to enabling governance. Artificial intelligence plays a catalytic role, reshaping legal categories such as autonomy, responsibility, and transparency.
In the final part of the talk, special attention is given to electronic health data governance, where the promise of personalized, data-driven care must be balanced against risks of inequality, exclusion, and misuse. The One Digital Health paradigm emerges as a test case for a more integrated, cross-sectoral approach to data and fundamental rights.
Ultimately, the European response does not eliminate the challenges posed by the data paradox — it transforms them into a constitutional agenda for the digital age.
Speakers
Marco Orofino (University of Milano)
Marco Orofino is Full Professor of Constitutional and Public Law at the University of Milan. His research focuses on fundamental rights, digital regulation, data governance, and the constitutional implications of emerging technologies — including AI, health data, and platform regulation. He is the author of numerous publications on the legal frameworks shaping the European digital space and participates in national and EU-funded research projects. He also serves on the doctoral board in Public, International and European Law at the University of Milan.
09/06 Lecture: Cultivating pluralism in algorithmic monoculture
Abstract
How can large language models respect diverse and possibly conflicting preferences of users on a global scale? To advance this challenge, we establish four key results. First, we demonstrate, through a large-scale multilingual human study with representative samples from five countries (N=15,000), that humans exhibit significantly more variation in preferences than the responses of 21 state-of-the-art LLMs. Second, we show that existing methods for preference dataset collection are insufficient for learning the diversity of human preferences even along two of the most salient dimensions of variability in global values, due to the underlying homogeneity of candidate responses. Third, we argue that this motivates the need for negatively-correlated sampling when generating candidate sets, and we show that simple prompt-based techniques for doing so significantly enhance the performance of alignment methods in learning heterogeneous preferences. Fourth, based on this novel candidate sampling approach, we collect and will open-source Community Alignment, the largest and most representative multilingual and multi-turn preference dataset to date, featuring over 200,000 comparisons from annotators spanning five countries. We hope that the Community Alignment dataset will be a valuable resource for improving the effectiveness and cultural relevance of LLMs for a diverse global population.
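The third result above argues for negatively-correlated sampling of candidate responses via prompting. As a hedged sketch of the general idea (the perspective labels and prompt template below are invented for illustration and are not the actual prompts used in the study), one can condition each candidate on a different value perspective instead of sampling all candidates independently from the same prompt:

```python
# Illustrative perspective labels; a real system would choose these
# to span the salient dimensions of value variation being studied.
PERSPECTIVES = [
    "prioritise individual autonomy and personal choice",
    "prioritise community norms and social harmony",
    "prioritise caution and risk avoidance",
    "prioritise directness and practical efficiency",
]

def diversified_prompts(question: str) -> list[str]:
    """Build one prompt per perspective. Calling an LLM on each prompt
    (rather than N times on one prompt) yields a negatively-correlated
    candidate set that covers more of the preference space."""
    return [
        f"Answer the question below from a viewpoint that would {p}.\n\n"
        f"Question: {question}"
        for p in PERSPECTIVES
    ]

prompts = diversified_prompts("Should schools require uniforms?")
print(len(prompts))  # one candidate prompt per perspective
```

The design point is that homogeneity in the candidate set, not in the annotators, is what limits how much preference diversity a dataset can capture; diversifying the generation side is what makes the annotations informative.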
Speakers
Smitha Milli (Meta FAIR NYC)
Smitha Milli is a Research Scientist at Meta FAIR in the AI & Society group. They received their BS and PhD in Electrical Engineering & Computer Science from UC Berkeley, where they were supported by an NSF Graduate Research Fellowship and an Open Philanthropy AI Fellowship. Their research focuses on pluralistic alignment, i.e., ensuring that machine learning systems are effective for and inclusive of people who hold diverse and differing viewpoints. Their work has been discussed on live television, in policy outlets such as Tech Policy Press and the Knight First Amendment Institute, and in testimony to the House Financial Services Committee.
10/06 Lecture: Growing Trust: A Developmental Lens on Human-Robot Interaction
Abstract
TBA
Speakers
Cinzia Di Dio (Università Cattolica del Sacro Cuore)
TBA