Enhanced Medical Multimedia Data Access

This project builds upon the results obtained in the first two years of activity to further enhance access to large databases of medical images accompanied by medical reports. The goal is achieved through the improvement of the two main functionalities developed during the first part of the project:

Multimodal Relevance Feedback (MRF): modeling of user feedback using both medical images and related textual annotations to improve the performance of retrieval systems.

Automatic Metadata Extraction (AME): automatic assignment of predefined metadata to images for which such metadata are missing, exploiting the hierarchical structure of the data.

The MRF will be improved as follows: the approach developed in the first part of the project requires the users to explicitly select among the keywords proposed by the system during successive Relevance Feedback iterations. The continuation of the project aims at identifying such keywords implicitly, i.e. by inferring the best keywords from the images that the users have actually selected as relevant. This reduces the burden on the users and makes the interaction with the system faster (selecting a relevant image out of N samples is faster than finding the best keyword out of N propositions).

The AME will be improved as follows: the approach proposed during the first year of the project assigns a single metadata item to images on the basis of multiple visual cues. The continuation of the project will extend the current algorithm to exploit the natural hierarchical structure of the data. This will make it possible to assign metadata with higher confidence, thereby reducing the possibility of mistakes and ultimately improving the search results. It will also make it possible to assign multiple metadata items to an image in a single step, eventually providing a richer set of query keywords.

The approaches proposed to realize the above functionalities are both based on the joint modeling of images and associated medical reports.
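As a minimal sketch of why a hierarchy yields multiple metadata items in a single step: if the metadata terms form a tree, assigning a leaf term to an image implies all of its ancestor terms as well. The term names and parent links below are invented examples, not the actual ImageCLEFmed vocabulary.

```python
# Hypothetical metadata hierarchy, stored as child -> parent links.
# Assigning the leaf term to an image implies every ancestor term too.
HIERARCHY = {
    "chest-xray": "radiograph",
    "radiograph": "x-ray",
    "x-ray": "imaging",
}

def expand_metadata(leaf):
    """Walk up the parent links, collecting the leaf plus all its ancestors."""
    terms = [leaf]
    while terms[-1] in HIERARCHY:
        terms.append(HIERARCHY[terms[-1]])
    return terms

# One classification step ("chest-xray") produces four query keywords.
keywords = expand_metadata("chest-xray")
```

The same tree structure is what allows assignment with higher confidence: an error at a leaf can still leave the coarser ancestor terms correct.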
In the MRF case, the approaches are based on data exploration techniques that modify the a-priori probability (in a Bayesian framework) of an image being relevant, based on the feedback provided by the users. In the AME case, the association between visual characteristics and hierarchical metadata is modeled using discriminative classifiers and Error Correcting Output Codes. The above approaches will be tested on the 2009 version of the ImageCLEFmed database, the largest publicly available collection of medical images associated with medical reports (around 75,000 samples). The approaches will be assessed not only from a technical point of view (in terms of search improvement for the MRF and correct metadata assignment rate for the AME), but also from a user point of view: the systems will be tested at the University Hospital of Geneva and at the University Hospital of Rome 'Sant'Andrea' by medical personnel, who will evaluate their effectiveness as a support in routine medical activities.
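The Bayesian prior-modification idea for MRF can be illustrated with a toy sketch: each image carries a prior probability of relevance, and feedback from the user re-weights that prior through a similarity-based likelihood. The feature representation, the Gaussian kernel likelihood, and all numbers below are illustrative assumptions, not the project's actual model.

```python
import numpy as np

def update_relevance_priors(priors, features, relevant_ids, bandwidth=1.0):
    """Posterior is proportional to prior times likelihood; here the likelihood
    of image i is a Gaussian kernel on its distance to the user-selected
    relevant examples (an assumed, illustrative choice)."""
    relevant = features[relevant_ids]                         # (R, d)
    # squared distance from every image to every relevant example
    d2 = ((features[:, None, :] - relevant[None, :, :]) ** 2).sum(-1)
    likelihood = np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=1)
    posterior = priors * likelihood
    return posterior / posterior.sum()                        # renormalise

# Toy usage: 5 images in a 2-D feature space, uniform prior,
# user marks image 0 as relevant; nearby images gain probability mass.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0], [2.5, 2.5]])
uniform = np.full(5, 0.2)
post = update_relevance_priors(uniform, feats, [0])
```

After the update, images close to the selected one dominate the posterior, which is the mechanism by which implicit feedback (clicks on relevant images) can steer retrieval without explicit keyword selection.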
Application Area - Health and bioengineering, Social and Human Behavior
Idiap Research Institute
Hasler Stiftung (Hasler Foundation)
Jan 01, 2010
Dec 31, 2011