Center for AI Certification

The Center for AI Certification contributes to the advancement of AI technologies and their responsible use. The center conducts fundamental research on data lineage, fairness, interpretability, and robustness of AI models. It supports the responsible and secure deployment of AI technologies and contributes thought leadership and technical expertise to the evolving landscape of AI standards and regulations. Regulatory frameworks such as the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AI Act) further emphasize the importance of AI model conformity.

The activities of the center encompass legal and standards analysis, metric formalization, software development, toolbox creation, and data collection. As AI technologies become integral to ever more sectors, the demand for safe, certified AI models is rising, and the center will play a key role in providing validation and certification services for them.

Idiap's expertise and reputation in key domains such as security, human interaction, health, and robotics, combined with the center's alignment with international standards, including ISO/IEC JTC 1/SC 42 (Artificial Intelligence), position the center as a leader in AI testing and certification. Its commitment to international standards and to collaboration across all research groups will ensure that the center remains at the forefront of AI technology development and certification.

Selected projects

  • SAFER - reSponsible fAir FacE Recognition

This project addresses the issues of fairness and ethics in face recognition. Strategies to assess and close the fairness gap are investigated both at training time and at scoring time. The generation of large-scale, diverse synthetic datasets is explored to close the ethics gap.

  • SOTERIA - uSer-friendly digiTal sEcured peRsonal data and prIvacy plAtform

This project aims to drive a paradigm shift in data protection and to enable the active participation of citizens in their own security, privacy, and personal data protection. It will develop, and test in three large-scale real-world use cases, a citizen-driven, citizen-centric, cost-effective, and marketable service that enables citizens to control their personal data easily and securely.

  • FairML - Machine Learning Fairness with Application to Medical Images

This project addresses three important challenges in the domain of machine learning (ML) fairness for medical imaging: creating novel ways to train ML models for medical imaging tasks that can be automatically adjusted to become more useful (maximize performance) or more fair at the group or individual level; quantifying the fairness boundaries of ML models and of the associated development data; and building systems whose joint performance with humans in the decision loop is fair towards various individuals and demographic groups. A minimal sketch of such a group-fairness measurement is given after this project list.

  • AI4EU - A European Excellence Centre for Media, Society and Democracy

The project will build a comprehensive European AI-on-demand platform to lower barriers to innovation, boost technology transfer, and catalyse the growth of start-ups and SMEs in all sectors through open calls and other actions. The platform will act as a broker, developer, and one-stop shop providing and showcasing services, expertise, algorithms, software frameworks, development tools, components, modules, data, computing resources, prototyping functions, and access to funding.

  • BEAT - Biometrics Evaluation and Testing

The reliability of biometric technologies remains difficult to compare: there are no Europe-wide standards for evaluating their accuracy, their robustness to attacks, or their privacy-preservation strength. A dedicated open online platform fills this gap by transparently evaluating biometric systems, designing protocols and tools for vulnerability analysis, and developing standardization documents for Common Criteria evaluations.
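
Several of the projects above (SAFER, FairML, BEAT) revolve around the same kind of question: how well, and how evenly across demographic groups, does a model perform at a given operating point? The short Python sketch below illustrates one such group-fairness measurement on a face-verification-style task. It is a minimal illustration on simulated data; the group names, score distributions, and threshold are assumptions made for this sketch and do not correspond to any Idiap dataset, metric standard, or toolbox.

    import numpy as np

    # Illustrative only: simulated verification scores for two demographic
    # groups. Group names, score distributions, and the threshold below are
    # assumptions for this sketch, not Idiap data or tooling.
    rng = np.random.default_rng(0)
    groups = {
        "group_A": (rng.normal(0.80, 0.05, 5000), rng.normal(0.30, 0.05, 5000)),
        "group_B": (rng.normal(0.74, 0.05, 5000), rng.normal(0.30, 0.05, 5000)),
    }

    def error_rates(genuine, impostor, threshold):
        """False non-match rate (FNMR) and false match rate (FMR) at a threshold."""
        fnmr = float(np.mean(genuine < threshold))   # genuine pairs wrongly rejected
        fmr = float(np.mean(impostor >= threshold))  # impostor pairs wrongly accepted
        return fnmr, fmr

    threshold = 0.6  # a single shared operating point, as in deployment
    fnmr_per_group = {}
    for name, (genuine, impostor) in groups.items():
        fnmr, fmr = error_rates(genuine, impostor, threshold)
        fnmr_per_group[name] = fnmr
        print(f"{name}: FNMR={fnmr:.3f}, FMR={fmr:.3f}")

    # A simple group-fairness indicator: the largest FNMR difference
    # between demographic groups at the shared threshold.
    fairness_gap = max(fnmr_per_group.values()) - min(fnmr_per_group.values())
    print(f"FNMR fairness gap: {fairness_gap:.3f}")

Measuring both groups at the same shared threshold mirrors real deployments, where a single decision rule is applied to everyone, so the reported gap reflects the disparity users would actually experience.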

Are you interested in the center's expertise? Schedule a meeting:

Contact us