AI for media: less fake news, more ethics

The European project AI4MEDIA will start in September and gather 30 partners, including Idiap and HES-SO Valais-Wallis. Researchers and media actors want to contribute to more ethical artificial intelligence in the field of media.

Billions of tweets, online articles, and shared videos are available online. To identify suspicious content, algorithms are indispensable tools for automated analysis. These screening technologies, based on artificial intelligence, often lack transparency about their underlying processes and about the databases of examples used to train them to recognize undesirable content. The diversity of media – videos, texts, and pictures – makes the task even more challenging and highlights the need for a more ethical approach, which cannot depend solely on the tech giants’ good will. The European project AI4MEDIA aims to ensure a transparent and ethical approach. The project gathers 30 partners; the Idiap Research Institute and HES-SO Valais-Wallis are the only Swiss ones.

The ability to understand and trust

Idiap’s Social Computing research group will bring crucial expertise by evaluating the public’s understanding of, and trust in, these AI-based technologies. “Whether it be a person or an organization, users must trust the tools they use,” explains Professor Daniel Gatica-Perez, head of the group. “This implies that the technology must be transparent about where it comes from and how it works. Only then can users really evaluate its reliability.” His research group is currently involved in a study of trust in media, supported by the Swiss Initiative for Media Innovation, and in another national study measuring the social and psychological impact of COVID-19-related confinement. For these studies, the team uses an app whose platform was developed at Idiap.

A sensitive issue

“Media production seems easier to access and use than medical data, but its automated analysis requires just as much work,” warns Professor Henning Müller from the Research Institute of Information Systems of the HES-SO Valais-Wallis. “When we develop an algorithm, the constraints are as much ethical as technical. The aim is not to create censorship, but rather a context-checking tool.” Thanks to the European dimension of the project, integrating various sources, languages, and cultures will help dissipate biases as much as possible. It would be possible, for example, to propose a kind of label certifying the reliability of a website’s information, as is done in the medical field. “The aim is to offer artificial intelligence with a human touch, more centred on our needs, while maintaining quality and reliability standards for the production of media content,” concludes Professor Müller.

More information