EXUS as partner
Multi-Attribute, Multimodal Bias Mitigation in AI Systems
MAMMOth aims to address the critical issue of bias in artificial intelligence (AI). As AI continues to integrate into various sectors of our lives, from healthcare to finance, the risk of perpetuating and amplifying societal biases becomes increasingly evident.
The goal of Project MAMMOth is to address and mitigate the multi-discriminatory impacts of artificial intelligence (AI) across sectors by developing a fairness-aware, AI data-driven foundation. Recognizing that current bias mitigation methods do not adequately reflect real-world complexities, MAMMOth focuses on multi-discrimination mitigation for different data types, including tabular, network, and multimodal data. It aims to develop tools and techniques for detecting and mitigating bias, especially where bias emerges in complex ways, such as in network and visual data. The project emphasizes fairness definitions that go beyond single protected attributes, the study of bias in network and multimodal data, and the development of explainability methods that are accessible to affected communities and a broad audience.
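To illustrate what "fairness beyond single protected attributes" means in practice, the sketch below measures a demographic parity gap across intersectional subgroups (every combination of two protected attributes) rather than for each attribute alone. This is a minimal, hypothetical example for illustration only; the data, function name, and attribute choices are assumptions, not part of the project's actual toolkit.

```python
# Hypothetical sketch: multi-attribute (intersectional) bias detection.
# We compute the positive-outcome rate for every (gender, age_group)
# subgroup and report the largest gap between any two subgroups.
from itertools import product

def intersectional_parity_gap(records):
    """records: iterable of (gender, age_group, positive_outcome) tuples.
    Returns the max difference in positive-outcome rate across all
    non-empty intersectional subgroups."""
    genders = {r[0] for r in records}
    ages = {r[1] for r in records}
    rates = {}
    for g, a in product(genders, ages):
        group = [r for r in records if r[0] == g and r[1] == a]
        if group:  # skip empty intersections
            rates[(g, a)] = sum(r[2] for r in group) / len(group)
    return max(rates.values()) - min(rates.values())

# Toy loan-decision data (illustrative only).
data = [
    ("F", "young", 1), ("F", "young", 1),
    ("F", "old",   0), ("F", "old",   1),
    ("M", "young", 1), ("M", "young", 0),
    ("M", "old",   0), ("M", "old",   0),
]
gap = intersectional_parity_gap(data)  # 1.0: (F, young) vs (M, old)
```

Note that each single attribute alone shows a smaller gap than the intersection, which is why multi-attribute analysis can surface discrimination that single-attribute checks miss.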
MAMMOth engages with vulnerable and underrepresented groups in AI research to ensure that their needs guide the research agenda. It adopts a co-creation approach, integrating social science and ethical principles into its research methodology. The project plans to communicate its findings to academics, researchers, data scientists, and practitioners, offering training and dissemination activities. It also intends to integrate its research outcomes into open-source tools and frameworks to enhance the impact of open science.
Finally, MAMMOth aims to demonstrate the effectiveness of its solutions through pilots in key sectors such as finance, identity verification, and academic evaluation, ensuring that its developments are tested and applicable in real-world scenarios.
11/2022 - 10/2025
Project duration
3,304,975.00 €
Overall Budget
CL4-2021-HUMAN-01-24
Topic
Impact
Working with computer science and AI experts, the project creates tools for fairness-aware AI that ensure accountability with respect to protected attributes such as gender, race, and age. The project also engages with communities of vulnerable and/or underrepresented groups in AI research to ensure that user needs and pain points are truly at the centre of the agenda. The end goal is to develop pilot projects where bias is studied, evaluated, and mitigated for finance/loan applications, identity verification, and academic evaluation.
Partners
ETHNIKO KENTRO EREVNAS KAI TECHNOLOGIKIS ANAPTYXIS (Greece)
UNIVERSITAET DER BUNDESWEHR MUENCHEN (Germany)
COMPLEXITY SCIENCE HUB VIENNA CSH - VEREIN ZUR FORDERUNG WISSENSCHAFTLICHER FORSCHUNG IM BEREICH KOMPLEXER SYSTEME (Austria)
ALMA MATER STUDIORUM - UNIVERSITA DI BOLOGNA (Italy)
RIJKSUNIVERSITEIT GRONINGEN (Netherlands)
EXUS (Greece)
TRILATERAL RESEARCH LIMITED (Ireland)
IDNOW (France)
CSI CENTER FOR SOCIAL INNOVATION LTD (Cyprus)
ASSOCIACIO FORUM DONA ACTIVA 2010 (Spain)
VSI DIVERSITY DEVELOPMENT GROUP (Lithuania)
IASIS (Greece)
TRILATERAL RESEARCH LTD (UK)