I have experience in medical image processing with a strong focus on machine learning. My research interests center on the design and development of algorithms for processing and analyzing three-dimensional (3D) computed tomography (CT) and magnetic resonance (MR) images. I am also interested in computer vision topics such as segmentation, recognition, and reconstruction.
We are constantly looking for students with an interest in medical image analysis, as well as in the use of machine learning and computer vision in novel and established clinical and forensic applications. This page lists specific open student projects at the Master's and Bachelor's level. Students who come with their own research ideas are also welcome to get in contact! Please be aware that work on student projects is usually not financially covered on our side.
Tooth-associated infections, usually the consequence of a bacterial infection, are very common. These pathologies are typically located around the roots of the teeth. They can vary in size from a simple widening of the periodontal space up to lesions several millimeters in diameter or more, and they may be completely surrounded by bone or perforate the adjacent anatomical borders. Furthermore, they can potentially affect each of the roughly 30 roots per jaw. Manually locating these lesions often requires a large amount of work, depending on the number of investigated teeth, the quality of the data set, and the education and experience of the examining doctor. The aim of this project is to train deep convolutional neural networks (DCNNs) to automatically recognize all infected teeth in 3D Cone Beam Computed Tomography (CBCT) images.
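A typical preprocessing step for such a DCNN is to extract fixed-size 3D patches around candidate root positions in the CBCT volume and feed those patches to the classifier. The following is only a minimal NumPy sketch of that step; the volume, the candidate coordinates, and the patch size are all placeholder assumptions, not part of the project description.

```python
import numpy as np

def extract_patch(volume, center, size=32):
    """Extract a cubic patch of shape (size, size, size) centered on a voxel.

    The volume is zero-padded so that patches near the border stay full-sized.
    """
    half = size // 2
    padded = np.pad(volume, half, mode="constant")
    z, y, x = (c + half for c in center)  # shift coordinates into padded frame
    return padded[z - half:z + half, y - half:y + half, x - half:x + half]

# Toy CBCT-like volume and hypothetical candidate root positions.
volume = np.random.rand(96, 96, 96).astype(np.float32)
candidate_roots = [(10, 48, 48), (90, 20, 70)]
patches = np.stack([extract_patch(volume, c) for c in candidate_roots])
# patches has shape (2, 32, 32, 32), ready to feed a 3D DCNN classifier
```

In practice the candidate positions would come from a tooth/root detection stage, and the patch size would be chosen to cover the periapical region at the scanner's resolution.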
To begin answering fundamental questions about how the brain works, we need to look at brain structure at the cellular level. Reconstructing cell morphology and building a connectivity diagram requires that all instances of neuronal cells are segmented. In contrast to semantic segmentation, instance segmentation does not only assign a class label to each pixel of an image but also distinguishes between instances within each class; e.g., each individual cell in an electron microscopy image is assigned a unique ID. This work will investigate a promising direction for segmenting all instances simultaneously by automatically encoding the individual instances as pixel-wise embeddings.
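The core idea behind pixel-wise embeddings is that a network maps pixels of the same instance to nearby points in an embedding space, and a post-processing step clusters those points into instance IDs. The sketch below illustrates only that clustering step with a naive greedy scheme on hand-made embeddings; the embedding values, the threshold, and the function name are illustrative assumptions, not the method the project will use.

```python
import numpy as np

def cluster_embeddings(emb, fg_mask, threshold=0.5):
    """Assign instance IDs by greedily grouping foreground pixels whose
    embedding lies within `threshold` of an existing cluster's embedding."""
    ids = np.zeros(fg_mask.shape, dtype=int)  # 0 = background
    means = []                                # one reference embedding per instance
    for y, x in zip(*np.nonzero(fg_mask)):
        e = emb[y, x]
        dists = [np.linalg.norm(e - m) for m in means]
        if dists and min(dists) < threshold:
            ids[y, x] = int(np.argmin(dists)) + 1
        else:
            means.append(e)
            ids[y, x] = len(means)
    return ids

# Toy example: two "cells" whose pixels embed at (0, 0) and (5, 5).
emb = np.zeros((6, 6, 2))
emb[:, 3:] = 5.0                 # pixels of the second cell
fg = np.ones((6, 6), dtype=bool)
ids = cluster_embeddings(emb, fg)
```

A learned approach would instead train the embedding with a discriminative loss and use a more robust clustering method (e.g., mean shift), but the ID-assignment logic is the same in spirit.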
Deep convolutional neural networks (DCNNs) have recently shown outstanding performance on image classification and object detection tasks due to their powerful multiscale filters. However, the filters typically used in DCNN architectures are only translationally invariant, which is not optimal when the problem is rotation equivariant, as is the case in, e.g., cell detection and tracking tasks. By explicitly encoding the expected rotational symmetry of the objects in the image, the complexity of the problem is decreased, leading to a reduction in the size of the required model.
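One simple way to build in this symmetry is group pooling: correlate the image with all four 90-degree rotations of each filter and pool over the orientations, so the pooled response does not change when the input is rotated by 90 degrees. The following is a minimal NumPy sketch of that idea (a plain loop-based correlation, random toy data); real rotation-equivariant networks use steerable or group-convolution layers instead.

```python
import numpy as np

def correlate2d_valid(img, kernel):
    """Plain 'valid'-mode 2D cross-correlation."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def oriented_response(img, kernel):
    """Max response over the four 90-degree rotations of the kernel."""
    return max(correlate2d_valid(img, np.rot90(kernel, k)).max()
               for k in range(4))

# The pooled response is invariant to 90-degree rotations of the input:
img = np.random.rand(9, 9)
kernel = np.random.rand(3, 3)
r0 = oriented_response(img, kernel)
r90 = oriented_response(np.rot90(img), kernel)
```

Because rotating the input simply permutes which filter orientation responds, the maximum over orientations is unchanged, which is exactly the reduction in problem complexity the text refers to.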
Computed tomography (CT) is a widely used medical imaging modality that generates a volumetric image representing the interior structure of a subject. To reconstruct a three-dimensional (3D) CT image, a series of two-dimensional (2D) X-ray projections is acquired from different views of the subject. While the Filtered Backprojection (FBP) method yields an analytical solution for reconstructing a 3D CT image from these 2D X-ray projections, it also relies on a large number of them, which correlates with the amount of ionizing radiation the subject is exposed to. In order to decrease the amount of ionizing radiation, and consequently the subject's risk of developing cancer, new CT reconstruction approaches that yield decent image quality even at a low radiation dose need to be investigated. Recent research in low-dose CT reconstruction has employed deep convolutional neural networks (CNNs) to find solutions to this problem. The goal of this project is to explore and evaluate deep learning based low-dose CT reconstruction approaches and to investigate new solutions to this demanding problem.
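To make the FBP baseline concrete, here is a deliberately minimal 2D parallel-beam sketch in NumPy: a disc phantom is forward-projected at a set of angles, each projection is ramp-filtered in Fourier space, and the filtered views are smeared back over the image. All choices (nearest-neighbor sampling, 60 views, a plain |f| ramp) are simplifying assumptions for illustration, not a clinical reconstruction.

```python
import numpy as np

def make_disc(n=64, radius=8):
    """Binary disc phantom centered in an n-by-n image."""
    ys, xs = np.mgrid[:n, :n] - n // 2
    return (xs**2 + ys**2 <= radius**2).astype(float)

def project(img, theta):
    """Parallel-beam projection at angle theta (nearest-neighbor sampling)."""
    n = img.shape[0]
    c = n // 2
    ys, xs = np.mgrid[:n, :n] - c
    xr = np.cos(theta) * xs + np.sin(theta) * ys + c
    yr = -np.sin(theta) * xs + np.cos(theta) * ys + c
    sampled = img[np.clip(np.round(yr).astype(int), 0, n - 1),
                  np.clip(np.round(xr).astype(int), 0, n - 1)]
    return sampled.sum(axis=0)          # integrate along each ray

def fbp(sinogram, thetas):
    """Filtered backprojection: ramp-filter each view, then smear it back."""
    n = sinogram.shape[1]
    c = n // 2
    ramp = np.abs(np.fft.fftfreq(n))    # simple ramp filter in Fourier space
    ys, xs = np.mgrid[:n, :n] - c
    recon = np.zeros((n, n))
    for view, theta in zip(sinogram, thetas):
        filtered = np.real(np.fft.ifft(np.fft.fft(view) * ramp))
        # detector coordinate of every pixel (matches project()'s convention)
        t = np.cos(theta) * xs - np.sin(theta) * ys + c
        recon += filtered[np.clip(np.round(t).astype(int), 0, n - 1)]
    return recon * np.pi / len(thetas)

phantom = make_disc()
thetas = np.linspace(0.0, np.pi, 60, endpoint=False)
sino = np.array([project(phantom, t) for t in thetas])
rec = fbp(sino, thetas)
inside = phantom > 0.5                  # reconstruction should be bright here
```

Re-running the last block with far fewer angles makes the streak artifacts of sparse-view (low-dose) acquisition visible, which is precisely the regime where CNN-based reconstruction methods are expected to help.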
The application of Bayesian theory to the deep learning framework has recently attracted the attention of both the computer vision and the medical imaging community and is a growing field of research. By extending the mathematically grounded theory of neural networks with Bayesian theory, the model gains the ability to capture the uncertainty present in the data and in the model's weights. With this, not only can performance comparable to current state-of-the-art results be reached in applications such as classification, segmentation, and regression, but the quality of the predictions can also be assessed by their predictive uncertainty. The ability to reason about data and model uncertainty [1,2] is of crucial importance for many applications related to decision making.
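One widely used practical approximation of this idea is Monte Carlo dropout: dropout is kept active at test time, the network is run T times on the same input, and the spread of the predictions is reported as predictive uncertainty. The toy NumPy "network" below (one hidden layer, random placeholder weights instead of trained ones) only illustrates that sampling loop; it is not the specific method of references [1,2].

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer regressor; the weights would normally come from training.
W1 = rng.normal(size=(3, 16))
W2 = rng.normal(size=(16, 1))

def stochastic_forward(x, p_drop=0.5):
    """One forward pass with dropout kept ON (Monte Carlo dropout)."""
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop      # fresh dropout mask per pass
    h = h * mask / (1.0 - p_drop)            # inverted dropout scaling
    return (h @ W2).ravel()

def predict_with_uncertainty(x, T=200):
    """Mean prediction and predictive std over T stochastic passes."""
    samples = np.stack([stochastic_forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

x = np.array([[0.5, -1.0, 2.0]])
mean, std = predict_with_uncertainty(x)
```

The reported std is what allows a downstream decision-making system to flag inputs on which the model is unsure, rather than silently returning a point prediction.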