Reasoning & Explainable AI

The Reasoning & Explainable AI group aims to develop systems capable of complex, abstract, and flexible inference.

We operate at the interface between neural and symbolic AI methods, aiming to enable the next generation of explainable, data-efficient, and safe AI systems. Our research investigates how combining latent and explicit data representation paradigms can deliver better inference over data.
Our current research areas include:

  • Inference & Explanations
    • Natural language inference
    • Abstractive inference
    • Explanation generation
    • Explainable question answering
    • Scientific inference & explanations
  • Neuro-symbolic models
    • Multi-hop reasoning
    • Semantic control
    • Semantic probing
  • Extraction & Representation
    • Sentence & discourse representation
    • Open information extraction
    • Knowledge graphs
    • Scalable knowledge-based inference
  • AI applications in cancer research

Group News

Four new researchers join the effort to shape the future of the AI institute — Feb 26, 2021

Idiap is expanding its research capacity by hiring four new senior researchers, two women and two men, whose goal is to work on topics with great potential in AI and to continue progress in areas that have already contributed to the institute's reputation.

Group Job Openings

Several Openings for Cross-Disciplinary Senior Researcher positions — Mar 01, 2021
With the growth of the institute and increased federal funding to support its activities, Idiap is opening several additional permanent senior research scientist positions.

Current Group Members

FREITAS, André
(Research Scientist)