New software to create holograms is now freely available

For their master’s thesis at Idiap, Rémi Clerc developed new software to process photos and generate 3D pictures from less data. The aim was to bring together several pieces of code that can generate an image dataset, train and test models on transformed pictures, and run those models on a microcomputer. The result is now freely available on GitHub.

Rémi graduated in February 2021 after doing their master's thesis in the Computational Bioimaging group at Idiap. Their thesis was entitled 'Deep Learning Methods for Digital Holography in an Embedded System' and focused on two things: designing a deep learning model and training method to solve an inverse problem of digital holography, and building an end-to-end embedded imaging setup on which the model would run directly. The software is now available under the BSD 2-Clause license.

We met them to learn more about it.

“Digital Holography is about reconstructing 3-D images from an acquired 2-D image. You usually cannot do that, because when you take a picture with a camera, you only record half of the data that the light carries: the intensity, but not the phase. In Digital Holography, techniques are used to acquire or infer that other half of the data, thus allowing the 3-D image to be reconstructed.
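The point about losing half of the data can be sketched in a few lines. This is an illustration, not code from the thesis: a complex light field carries amplitude and phase, but a sensor records only the intensity, so fields that differ only in phase produce the same picture.

```python
import numpy as np

# A complex light field: amplitude and phase together carry the full information.
rng = np.random.default_rng(0)
amplitude = rng.random((4, 4))
phase = rng.uniform(-np.pi, np.pi, (4, 4))
field = amplitude * np.exp(1j * phase)

# A camera sensor records only the intensity |field|^2 ...
intensity = np.abs(field) ** 2

# ... so the phase is gone: shifting the phase leaves the recorded image unchanged.
same_picture = np.allclose(intensity, np.abs(field * np.exp(1j * 0.7)) ** 2)
```

Recovering the object behind the image therefore means inferring the missing phase information, which is exactly the inverse problem the thesis attacks with deep learning.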

In my thesis, I tackled the reconstruction of a very thin object using only half of the data a camera captures, under specific lighting conditions, with Deep Learning performing the reconstruction from the incomplete data. I used a mathematical model to create a large dataset to train my Neural Networks: a home-made Auto-encoder architecture, and SRCNN, a Neural Network originally designed for Super-Resolution applications (imagine in CSI: Miami when they zoom in on a blurry license plate and say "enhance!"; that would be Super-Resolution). Both models performed quite well at this task, but SRCNN outperformed my home-made network.
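Generating training data from a mathematical model could look roughly like the following sketch. The function names (`propagate`, `make_training_pair`) and all parameter values are hypothetical, not taken from the released code; the propagation step uses the standard angular-spectrum model of free-space light propagation common in digital holography.

```python
import numpy as np

def propagate(field, wavelength, distance, pixel_size):
    """Angular-spectrum free-space propagation (a standard holography model)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel_size)
    fx2, fy2 = np.meshgrid(fx ** 2, fx ** 2)
    # Transfer function of free space; evanescent components are suppressed.
    arg = 1.0 / wavelength ** 2 - fx2 - fy2
    kernel = np.exp(2j * np.pi * distance * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

def make_training_pair(rng, n=64):
    """One synthetic (hologram, object) pair for supervised training."""
    target = np.zeros((n, n))
    r, c = rng.integers(8, n - 8, size=2)
    target[r - 3:r + 4, c - 3:c + 4] = 1.0       # a thin, simple object
    field = np.exp(1j * np.pi * target)           # the object modulates the phase
    # The camera only sees the intensity of the propagated field.
    hologram = np.abs(propagate(field, 650e-9, 5e-3, 2e-6)) ** 2
    return hologram, target

rng = np.random.default_rng(1)
x, y = make_training_pair(rng)   # network input and ground-truth target
```

Repeating `make_training_pair` many times yields an arbitrarily large dataset of (input, ground truth) pairs, which is what makes a purely simulated training set practical for this inverse problem.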

In the past few years, micro-computers such as the popular Raspberry Pi have become powerful enough to run a Machine Learning model; some, like the NVIDIA Jetson Nano, are even specifically designed to run such models. We took advantage of these advances by building an imaging setup containing only a light source, the sample we want to observe, a camera, and the NVIDIA Jetson Nano, which receives the inputs from the camera and reconstructs them on the fly using the Deep Learning model we trained.
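The on-device part of such a setup reduces to a simple capture-and-reconstruct loop. The sketch below is an assumption about the overall shape of that loop, not the project's actual code: `capture_frame` stands in for a real camera driver, and a trivial placeholder replaces the trained network.

```python
import numpy as np

def capture_frame(rng, n=64):
    """Stand-in for the camera driver running on the embedded board."""
    return rng.random((n, n))

def reconstruct(hologram, model):
    """Feed one recorded frame to the reconstruction model."""
    return model(hologram)

# Minimal on-device loop: grab a frame, reconstruct it, hand off the result.
rng = np.random.default_rng(2)
identity_model = lambda x: x   # placeholder for the trained network
for _ in range(3):
    frame = capture_frame(rng)
    result = reconstruct(frame, identity_model)
```

Keeping the loop this small is what lets the whole pipeline, from light source to reconstructed image, fit on a single inexpensive board.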

Setups as simple as this one are great for making science more accessible, because they are cheap and easy to use. It is in that spirit of accessibility that we made the software open-source, with the code to generate the dataset, train the models, and run them on the Jetson Nano, bundled with a comprehensive tutorial on how to set everything up.”



More Information