24/06 Lecture: Ethics and AI
Abstract
We will present and discuss the issues, foundations, and applications of digital ethics. We will present the basics of Ethics by Design (fairness, privacy, democracy, etc., by design) and Responsible Innovation with AI. These recent developments in Moral Philosophy and AI will be situated against a background of economic and geopolitical trends.
Speakers
Jeroen van den Hoven (Delft University of Technology)
Jeroen van den Hoven is University Professor at Delft University of Technology and professor of Ethics and Technology. He is a permanent member of the European Group on Ethics in Science and New Technologies, which advises the president of the European Commission, and he serves as a commissioner on the Global Commission on Responsible AI in the Military Domain (GC REAIM) and as a member of the WHO Ethics and AI Expert Group. Van den Hoven is Founding Editor-in-Chief of Ethics and Information Technology (Springer Nature) and Scientific Director of the Delft Digital Ethics Centre (www.tudelft.nl/digitalethics). See also www.jeroenvandenhoven.eu
25/06 Lecture: Exploring the Artificial Intelligence Act
Abstract
In the module “Exploring the Artificial Intelligence Act,” we will delve into the logic, structure, and content of the AI Act. After analyzing the risk-based approach, we will highlight the strengths and weaknesses of the regulation. We will also explore its connection to AI ethical principles and the imperative to enhance the protection of fundamental rights, in accordance with the principles of digital constitutionalism. The lessons will be conducted interactively, with students encouraged to engage actively.
Speakers
Carlo Casonato (University of Trento)
Carlo Casonato, Professor of Comparative Constitutional Law at the Faculty of Law of the University of Trento, holds the Jean Monnet Chair in AI EU Law (T4F). He is the founder and editor-in-chief of the BioLaw Journal, director of the BioLaw Laboratory, and serves as the rector's delegate and vice-president of the Ethics Committee for Research at the University of Trento. He is also a member of the Commission for Ethics and Integrity in Research at the CNR (Italian National Research Council).
He has served as a Visiting Professor at the Illinois Institute of Technology (Chicago) and the Universidad del Pais Vasco, as well as a Visiting Fellow at the Universities of Oxford and Yale. He has been involved in the OECD Global Partnership on Artificial Intelligence (GPAI) and the National Committee for Bioethics. He is the Principal Investigator for numerous national and European research projects and is the author or editor of over 160 publications, including more than 20 books.
26/06 Lecture: Responsible AI
Abstract
TBA
Speakers
Riccardo Guidotti (University of Pisa)
Riccardo Guidotti was born in 1988 in Pitigliano (GR), Italy. He graduated cum laude in Computer Science at the University of Pisa (BS in 2010, MS in 2013) and received his PhD in Computer Science from the same institution with a thesis on Personal Data Analytics. He is currently an Assistant Professor (RTD-B) at the Department of Computer Science, University of Pisa, Italy, and a member of the Knowledge Discovery and Data Mining Laboratory (KDDLab), a joint research group with the Information Science and Technology Institute of the National Research Council in Pisa. He won an IBM fellowship and was an intern at IBM Research Dublin, Ireland, in 2015. His research interests include personal data mining, clustering, explainable models, and the analysis of transactional data.
Anna Monreale (University of Pisa)
Anna Monreale is an associate professor at the Computer Science Department of the University of Pisa and a member of the Knowledge Discovery and Data Mining Laboratory (KDD-Lab), a joint research group with the Information Science and Technology Institute of the National Research Council in Pisa. Her research interests include big data analytics, social networks, and the privacy issues arising from mining such social and human-sensitive data. In particular, she is interested in the evaluation of privacy risks during analytical processes and in the design of privacy-by-design technologies in the era of big data.
27/06 Lecture: Responsible Social Media Analysis: recommender algorithms, harmful content classifiers, and AI-generated content detection
Abstract
Social media platforms are one of the main gateways through which AI technologies reach the global public. In 2023, 59% of the world's population were social media users, and the average user spent over 2.5 hours per day on social media. Social media platforms run on AI: recommender systems are responsible for pushing content to users, and harmful content classifiers are responsible for moderating content that's deemed unacceptable. These AI tools are having profound impacts on the way information flows in the world. But we know far too little about these impacts. We don't know enough about how these systems are trained or tested, and we don't know enough about how they affect platform users, as individuals or as collectives. A new uncertainty is how AI-generated content diffuses through social media platforms.
Studying how AI systems impact citizens through social media is best done in international projects: the technologies in question are deployed by large multinational companies, and many aspects of regulation are international in scope. I'm involved, with Dino Pedreschi, in running a project on social media governance for the Global Partnership on AI – an international grouping of AI researchers. In this session, I'll outline three topics we are working on. Regarding recommender systems, our main focus is on how we can enable access for external researchers to the methods companies use themselves to study the effects of recommender systems on platform users. Regarding harmful content classifiers, our main focus is piloting an idea about how the training of these systems can be moved outside companies, into a more public and accountable domain. Regarding AI-generated content, our main focus is on how to build detection tools that can reliably identify such content. This work has involved advocacy with EU and US policymakers, and pilot studies in India.
Speakers
Alistair Knott (Victoria University of Wellington)
Alistair Knott is Professor of Artificial Intelligence at Victoria University of Wellington. He has been an AI and computational linguistics researcher for 30 years. He studied Philosophy and Psychology at Oxford University, then obtained MSc and PhD degrees in AI at the University of Edinburgh. He then moved to New Zealand, working first at Otago University and now at Victoria University of Wellington. Ali's AI research is in computational modelling of cognition, most recently with the New Zealand-founded AI company Soul Machines, where a longstanding project is the development of a model of embodied baby cognition, BabyX. Currently, Ali's work mostly focusses on the social impacts of AI and on AI regulation. He co-founded Otago University's Centre for AI and Public Policy, where he worked on government uses of AI and on the impact of AI on jobs and work. He now co-leads the Global Partnership on AI's project on Social Media Governance, with Dino Pedreschi. Ali has also contributed to the Christchurch Call's Algorithms Workstream, the Global Internet Forum to Counter Terrorism, and the Forum for Information and Democracy.
28/06 Lecture: Responsible Generative AI
Abstract
The rapid development and deployment of generative AI models and applications has the potential to revolutionise various domains, which makes it urgent to use these models in a responsible manner. Generative AI refers to creating new content in different modalities (digital text, images, audio, code, and other artefacts) based on already existing content. Text generation models such as GPT-4 and ChatGPT and text-to-image models such as DALL-E 3 and Stable Diffusion are popular generative AI models. Although these models have significant implications for a wide spectrum of industries, several ethical and social considerations are associated with generative AI models and applications. These concerns include bias, lack of interpretability, privacy risks, and fake and misleading content such as hallucinations. Thus, it is crucial to discuss these risks and their corresponding potential safeguards (if any) in addition to the technical details of these powerful models. In this tutorial, which is composed of lecture and hands-on parts, we aim to provide a brief technical overview of text and image generation models and point out the key responsible AI desiderata. In light of the risks of generative AI, we will further describe the technical considerations and challenges in achieving these desiderata in practice.
Speakers
Gizem Gezici (Scuola Normale Superiore)
Gizem Gezici is an Assistant Professor of Artificial Intelligence at Scuola Normale Superiore of Pisa, Italy. Her research focuses on the ethical dimensions of AI, with a particular interest in investigating the long-term impacts on sociotechnical systems. She has expertise in information retrieval (IR) and natural language processing (NLP), with a proven track record of analysing and improving search platforms by leveraging state-of-the-art IR and NLP techniques. Her research also delves into applying deep learning-based architectures, such as Large Language Models (LLMs), to downstream tasks in NLP. Prior to joining academia, she worked in an R&D centre where she applied cutting-edge IR and NLP approaches from academia to real-world search products on a large scale.
Fosca Giannotti (Scuola Normale Superiore)
Fosca Giannotti is Full Professor at Scuola Normale Superiore, Pisa, Italy, and a pioneering scientist in mobility data mining, social network analysis, and privacy-preserving data mining. She leads the Pisa KDD Lab (Knowledge Discovery and Data Mining Laboratory), a joint research initiative of the University of Pisa and ISTI-CNR, founded in 1994 as one of the earliest research labs on data mining. Her research focuses on social mining from big data: smart cities, human dynamics, social and economic networks, ethics and trust, and the diffusion of innovations. She is the author of more than 300 papers and has coordinated dozens of European projects and industrial collaborations. She is the former coordinator of SoBigData, the European research infrastructure on Big Data Analytics and Social Mining, an ecosystem of ten cutting-edge European research centres providing an open platform for interdisciplinary data science and data-driven innovation. Recently she became the recipient of a prestigious ERC Advanced Grant, "XAI – Science and technology for the explanation of AI decision making".
Giovanni Mauro (Scuola Normale Superiore)
Giovanni Mauro was born in 1995 in Catanzaro (CZ), Italy. He holds a BSc in Computer Science from the University of Pisa (during which he participated in a one-year Erasmus+ exchange at Universidad Autónoma de Madrid) and an MSc in Data Science from Universitat Politècnica de Catalunya – BarcelonaTech. Before starting his PhD in Artificial Intelligence, he worked as a data engineer and cooperated with the KDD-Lab at ISTI-CNR on projects regarding sport analytics and human mobility analysis. Currently, his main research interests are the development of algorithms for understanding and predicting human mobility flows, both in terms of daily mobility and in terms of housing relocation phenomena such as segregation and gentrification. He is a Research Associate at ISTI-CNR and collaborates with the Networks Research Unit at IMT Lucca. He is a Community Activist for the task "Sustainable Cities for Citizens" of the research infrastructure SoBigData++. Soccer, tennis, the sea, travelling, and motorcycling are his main passions.