Deep Learning Methods for Biological Image Translation and Registration

  • Ruixiong Wang

Student thesis: Doctoral Thesis, Doctor of Philosophy (PhD)

Abstract

Microscopy is one of the most prevalent techniques in biological research for observing the microscopic world. Driven by advances in microscope technology, biological image processing techniques have become increasingly relevant. Recent achievements in machine-learning-based image processing make invaluable contributions to biological research, and offer a promising avenue for bridging biological research and state-of-the-art engineering techniques.

Biological images nevertheless have limitations in practical applications. The work in this doctoral thesis aims to leverage deep learning methods to find solutions that mitigate such limitations and extend the applications of microscopy. I focus specifically on the tasks of biological image translation and registration.

I start with an example application of microscopy: observing specimens raised in culture dishes, which can exhibit random displacement. In such a situation, registering a series of images makes it possible to track detailed changes over time. A primary challenge for biological image registration is that the rotation angle and translation distance are completely arbitrary. I provide a model that extracts features of interest (FOIs) from pairs of images and calculates the correlation among these FOIs. I introduce a feature-detector-free model, based on feature point matching, that predicts the parameters of the affine transformation matrices used to transform the moving images into the fixed images.
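The final step of the pipeline above, applying predicted rotation and translation parameters to warp a moving image into the fixed frame, can be sketched as follows. This is a minimal illustration in NumPy, not the thesis's implementation: the function names (`affine_matrix`, `warp`) and the nearest-neighbour sampling are assumptions for the sketch, and real registration code would typically use bilinear interpolation.

```python
import numpy as np

def affine_matrix(theta, tx, ty):
    """Build a 2x3 affine matrix from a rotation angle (radians)
    and a translation (tx, ty) in pixels."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty]])

def warp(moving, theta, tx, ty):
    """Warp the moving image toward the fixed frame by inverse mapping
    with nearest-neighbour sampling (out-of-bounds pixels set to 0)."""
    h, w = moving.shape
    A = affine_matrix(theta, tx, ty)
    # Invert the predicted transform in homogeneous coordinates,
    # so each output pixel looks up its source location.
    A_inv = np.linalg.inv(np.vstack([A, [0.0, 0.0, 1.0]]))
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = A_inv @ coords
    sx = np.rint(src[0]).astype(int)
    sy = np.rint(src[1]).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(moving)
    out.ravel()[valid] = moving[sy[valid], sx[valid]]
    return out
```

For example, `warp(img, 0.0, 1.0, 0.0)` shifts every pixel one column to the right; a learned model would supply `theta`, `tx`, and `ty` from the matched feature points.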

Furthermore, fluorescent microscopy is a special type of optical microscopy that labels specific structures of specimens for observation. However, the process of sample labelling is time-consuming and laborious, and the labels can occasionally damage the sample. In particular cases, fluorescent images are necessary but labelling experiments are infeasible. Image translation approaches constitute one possible solution. I propose a deep-learning network based on generative adversarial networks that translates bright-field images into fluorescent images and simultaneously provides semantic segmentation for quantifying cell/nuclei health states.
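A network of this kind is typically trained with a composite generator objective: an adversarial term, a reconstruction term against the real fluorescent image, and a term for the segmentation head. The sketch below illustrates that composition in NumPy under stated assumptions; the loss weights (`lam_rec`, `lam_seg`) and the choice of L1 plus binary cross-entropy are illustrative conventions (as in pix2pix-style models), not the thesis's exact formulation.

```python
import numpy as np

def l1_loss(pred, target):
    """Mean absolute error between two images."""
    return np.mean(np.abs(pred - target))

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy for probabilities in [0, 1]."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def generator_loss(d_on_fake, fake_fluo, real_fluo, seg_pred, seg_mask,
                   lam_rec=100.0, lam_seg=1.0):
    """Composite generator objective: fool the discriminator, reconstruct
    the fluorescent target, and predict the cell/nuclei mask."""
    adv = bce(d_on_fake, np.ones_like(d_on_fake))   # adversarial term
    rec = lam_rec * l1_loss(fake_fluo, real_fluo)   # translation fidelity
    seg = lam_seg * bce(seg_pred, seg_mask)         # segmentation head
    return adv + rec + seg
```

The design choice here is that the translation and segmentation tasks share one generator, so gradients from the segmentation head also shape the translated image.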

Subsequently, I extend the achievements of the previous chapters to a more challenging scenario: multimodal image registration, which enhances observation of the same target from multiple perspectives. Image translation is proposed to overcome the challenges posed by differing image modalities, and an enhanced registration model is employed to facilitate feature alignment. The feasibility and effectiveness of the model are demonstrated on cytological and histological datasets.
Date of Award: 18 Jun 2024
Original language: English
Awarding Institution
  • University of Bristol
Supervisors: Alin Achim (Supervisor), Stephen Cross (Supervisor) & Mark A Jepson (Supervisor)

Keywords

  • Biomedical Imaging
  • Biological Imaging
  • Microscopy Imaging
  • Deep Learning
  • Image Translation
  • Image Registration
  • Multimodal Imaging
