Ongoing Projects

BBforTAI: Biometrics and Behavior for Unbiased and Trustworthy AI with Applications (2022-2024)

Title: Biometrics and Behavior for Unbiased and Trustworthy AI with Applications
Type: Spanish National R&D Program
Code: PID2021-127641OB-I00
Funding: ca. 273 Keuros
Participants: Univ. Autonoma de Madrid and Univ. Politecnica de Madrid
Period: January 2022 – December 2024
Principal investigator(s): Julian Fierrez and Aythami Morales

The development of Artificial Intelligence (AI) has enabled great advances in fields such as Computer Vision and Natural Language Processing. Nowadays, these automatic systems are increasingly used in decision-making processes that affect our daily lives. However, voices from both academia and industry are raising concerns about unforeseen effects and behaviors of AI agents, not considered during the design phases, which arise as a consequence of the training schemes of such systems. Recent research has demonstrated that biases in state-of-the-art AI systems dealing with human data exacerbate existing inequalities and reinforce gender, racial, and other stereotypes in domains such as the labor market, education, justice, and online advertising. This situation has led researchers to study the fairness and trustworthiness of AI systems, in order to prevent these systems from reproducing biases that discriminate against demographic groups on the basis of gender, race, or other protected attributes.

In this context, BBforTAI aims to investigate the prevention and mitigation of bias in AI, focusing on four realistic use cases: (i) e-learning; (ii) e-health; (iii) hiring tools; and (iv) media and content analytics. Most fairness research in AI focuses on (i) providing bias-free solutions and (ii) creating mechanisms to detect and reduce biases when they appear. In line with these two approaches, BBforTAI will work on (i) integrating ethical, legal, and social aspects as design criteria of AIs (usually known as ELSA by design), and (ii) developing bias-preventing solutions that consider the possible presence of bias in the training data. For this purpose, the work will build on the state of the art in different research areas related to fairness in AI, such as bias detection in machine learning algorithms, discrimination-aware learning approaches, and explainability and transparency in machine learning systems.
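As a rough, minimal illustration of the discrimination-aware learning idea (a generic sketch, not the actual BBforTAI method), the Python snippet below combines a standard task loss with a penalty on the gap between per-group average losses; the demographic group labels, model outputs and weighting factor are hypothetical placeholders.

    # Minimal sketch of a discrimination-aware training loss (illustrative only).
    # Assumes each training sample carries a demographic group label; the penalty
    # grows when the average loss diverges across groups.
    import torch
    import torch.nn.functional as F

    def discrimination_aware_loss(logits, labels, groups, fairness_weight=1.0):
        """Cross-entropy plus a penalty on the spread of per-group losses.

        logits: (N, C) model outputs; labels: (N,) class labels;
        groups: (N,) demographic group ids (hypothetical placeholder).
        """
        per_sample = F.cross_entropy(logits, labels, reduction="none")
        task_loss = per_sample.mean()

        # Average loss per demographic group present in the batch.
        group_means = torch.stack(
            [per_sample[groups == g].mean() for g in torch.unique(groups)]
        )

        # Fairness penalty: gap between the worst- and best-treated groups.
        fairness_penalty = group_means.max() - group_means.min()
        return task_loss + fairness_weight * fairness_penalty

In practice the weight given to the fairness penalty has to be balanced against task accuracy, and the project's own approaches (e.g., the Sensitive Loss work listed below) are considerably more elaborate.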

More specifically, the general objectives of BBforTAI can be grouped into four main points:

  1. Develop new methods for measuring and combating biases in multimodal AI frameworks. The aim is to extend existing bias analysis and explainability studies in AI, as well as to develop bias prevention methods general enough to be applied to different data-driven learning architectures regardless of the nature of the data.
  2. Develop new methods for improving trust in multimodal AI systems, by integrating the developments of objective 1 with recent advances in secure and privacy-preserving AI.
  3. Develop core technologies for incorporating the main advances in the previous points into the value chain of practical application areas of social importance. In this regard, we propose four case studies based on pillars of the welfare society: (i) e-learning platforms; (ii) multimodal biometrics for e-health; (iii) equal opportunities in access to the labour market; and (iv) media and content analytics.
  4. Cooperate with ELSA and social/human experts to generate new knowledge on the behavior of people in the contexts portrayed in BBforTAI. The project aims to contribute to legal developments, standards, and best practices for the use of AI systems, as well as to analyze human behavior patterns in the four case studies of the previous point.

Selected Publications:

  • I. Serna, A. Morales, J. Fierrez and N. Obradovich, “Sensitive Loss: Improving Accuracy and Fairness of Face Representations with Discrimination-Aware Deep Learning”, Artificial Intelligence, April 2022. [PDF] [DOI] [Dataset] [Tech]
  • R. Daza, D. DeAlcala, A. Morales, R. Tolosana, R. Cobos and J. Fierrez, “ALEBk: Feasibility Study of Attention Level Estimation via Blink Detection applied to e-Learning”, in AAAI Workshop on Artificial Intelligence for Education (AI4EDU), Vancouver, Canada, February 2022. [PDF] [DOI] [Dataset]
  • I. Serna, D. DeAlcala, A. Morales, J. Fierrez and J. Ortega-Garcia, “IFBiD: Inference-Free Bias Detection”, in AAAI Workshop on Artificial Intelligence Safety (SafeAI), CEUR, vol. 3087, Vancouver, Canada, February 2022. [PDF] [DOI] [Dataset] [Code]

TRESPASS: Training in Secure and Privacy-preserving Biometrics (2020-2024)

Title: Training in Secure and Privacy-preserving Biometrics
Type: H2020 Marie Curie Innovative Training Network
Code: H2020-MSCA-ITN-2019-860813
Funding: ca. 502 Keuros
Participants: UAM, Univ. Applied Sciences H-DA (Germany), Univ. Groningen (Netherlands), IDIAP (Switzerland), Chalmers Univ. (Sweden), Katholieke Univ. Leuven (Belgium)
Period: January 2020 – December 2023
Principal investigator(s): Massimiliano Todisco (Julian Fierrez and Aythami Morales for UAM)

Objectives:

  • Driven by rising security challenges, the global market for biometric technologies is growing at a fast pace. Biometrics covers all processes used to recognise, authenticate and identify persons based on biological and/or behavioural characteristics.
  • The EU-funded TReSPAsS-ETN project will deliver a new type of security protection (through generalised presentation attack detection (PAD) technologies) and privacy preservation (through computationally feasible encryption solutions).
  • The TReSPAsS-ETN Marie Sklodowska-Curie European Training Network will couple specific technical and transferable skills training (including entrepreneurship, innovation, creativity, management and communications) with secondments to industry.
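As a simplified illustration of how presentation attack detection (PAD) is usually framed (a generic sketch on placeholder data, not TReSPAsS-ETN deliverable code), the snippet below trains a binary classifier to separate bona fide presentations from attacks using pre-extracted features and thresholds the resulting PAD score; the synthetic features, classifier choice and threshold are assumptions made for illustration.

    # Minimal sketch of score-based presentation attack detection (illustrative only):
    # a binary classifier separates bona fide samples from presentation attacks
    # using pre-extracted features, and a threshold turns the score into a decision.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Placeholder features: in a real system these would come from a face/voice
    # feature extractor applied to bona fide and attack presentations.
    bona_fide = rng.normal(loc=0.0, scale=1.0, size=(200, 16))
    attacks = rng.normal(loc=0.8, scale=1.0, size=(200, 16))

    X = np.vstack([bona_fide, attacks])
    y = np.concatenate([np.zeros(200), np.ones(200)])  # 0 = bona fide, 1 = attack

    clf = LogisticRegression(max_iter=1000).fit(X, y)

    # PAD score: probability that a new presentation is an attack.
    new_presentation = rng.normal(size=(1, 16))
    attack_score = clf.predict_proba(new_presentation)[0, 1]

    THRESHOLD = 0.5  # chosen to trade off false accepts vs. false rejects
    decision = "attack" if attack_score > THRESHOLD else "bona fide"
    print(f"PAD score: {attack_score:.3f} -> {decision}")

Real PAD systems replace the synthetic features with descriptors extracted from actual biometric presentations and set the decision threshold according to the desired trade-off between false accepts and false rejects.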

Selected publications:

  • J. Hernandez-Ortega, J. Fierrez, L. F. Gomez, A. Morales, J. L. Gonzalez-de-Suso and F. Zamora-Martinez, “FaceQvec: Vector Quality Assessment for Face Biometrics based on ISO Compliance”, in IEEE/CVF Winter Conf. on Applications of Computer Vision Workshops (WACVw), Waikoloa, HI, USA, January 2022. [PDF] [DOI] [Code] [Tech]
  • L. F. Gomez, A. Morales, J. R. Orozco-Arroyave, R. Daza and J. Fierrez, “Improving Parkinson Detection using Dynamic Features from Evoked Expressions in Video”, in IEEE/CVF Conf. on Computer Vision and Pattern Recognition Workshops (CVPRw), June 2021, pp. 1562-1570. [PDF] [DOI]
  • O. Delgado-Mohatar, R. Tolosana, J. Fierrez and A. Morales, “Blockchain in the Internet of Things: Architectures and Implementation”, in IEEE Conf. on Computers, Software, and Applications (COMPSAC), Madrid, Spain, July 2020. [PDF] [DOI]

Web: https://www.trespass-etn.eu/


AI4Food: Artificial Intelligence for the Prevention of Chronic Diseases through Personalized Nutrition (2021-2024)

Title: Artificial Intelligence for the Prevention of Chronic Diseases through Personalized Nutrition
Type: CAM Synergy Program
Code: Y2020/TCS6654
Funding: ca. 310 Keuros UAM (620 Keuros in total)
Participants: Univ. Autonoma de Madrid and IMDEA-Food Institute
Period: July 2021 – June 2024
Principal investigator(s): Javier Ortega-García (UAM technical lead: Aythami Morales)

Objectives:

  • AI4Food will develop a series of enabling technologies to process, analyze and exploit a large number of biometric signals indicative of individual habits, together with phenotypic and molecular data.
  • AI4Food will integrate all this information and develop new machine learning algorithms to generate a paradigm shift in the field of nutritional counselling.
  • AI4Food technology will allow a more objective and effective assessment of individual nutritional status, helping experts to propose changes towards healthier eating habits, moving from general solutions to personalized ones that are more effective and sustained over time for the prevention of chronic diseases.
  • AI4Food will also advance knowledge on three questions using these new technologies: 1) WHICH are the sensor-dependent and sensor-independent biomarkers that work best for nutritional modelling of human behavior and habits? 2) WHEN, that is, under what circumstances (e.g., user habits, signal quality, context, phenotypic and molecular data), and 3) HOW can we best leverage those signals and context information to improve nutritional recommendations?