Computed Tomography (CT) segmentation with deep learning (part 2)

In the last post, I talked about the problem of obtaining data for studies of x-ray CT imaging. In this post I’m going to focus on two pieces of work addressing this issue: the work of Kim & Kim (2014) and the work of Stanislav Žabić (2013), which was followed up by Daniela Muenzel et al. (2014). Both works attack the problem of taking a high-dose CT image and producing a set of synthetic ‘reduced-dose’ CT images (which we’ll call RDCT from now on). As I mentioned in the last post, this is useful because it lets you develop methods for analyzing CT scans at various dose levels without actually having to perform multiple CT scans on the same subject (with the ethical issues that would raise). If your goal is to train an image segmentation algorithm, it’s also useful because you can train and validate on a much larger data set than the one you started with.

Kim and Kim

I like this paper because they do a good job of reviewing the recent work, pointing out some of the challenges, and offering a new set of techniques for low-dose estimation.

According to the authors, there are two types of CT -> RDCT conversion methods: (1) those based on the raw sinograms from the machines, and (2) those based on the reconstructed 3D volumes (voxels). For method (1), they cite early work such as that of Mayo et al. (1997), which was based on simply adding Gaussian noise to the sinogram, along with some more recent work. They note that this type of method has the drawback that raw sinograms usually aren’t available. As a fellow radiology researcher, I can wholeheartedly confirm this. Some more recent work has used synthetic sinograms, but the realism is limited.
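To make the sinogram-domain idea concrete, here’s a minimal Python sketch of what adding Gaussian noise to a sinogram to mimic a lower dose can look like. This is a generic illustration, not Mayo et al.’s exact method: the assumption that noise variance scales inversely with dose, and the calibration constant base_sigma, are mine.

    import numpy as np

    def simulate_low_dose_sinogram(sinogram, dose_fraction, base_sigma=1.0, rng=None):
        # Sketch of sinogram-domain dose reduction: add zero-mean Gaussian noise
        # whose standard deviation grows as the simulated dose fraction shrinks,
        # assuming noise variance is roughly inversely proportional to dose.
        # base_sigma is a hypothetical calibration constant, not a value from the paper.
        rng = np.random.default_rng() if rng is None else rng
        extra_sigma = base_sigma * np.sqrt(1.0 / dose_fraction - 1.0)
        return sinogram + rng.normal(0.0, extra_sigma, size=sinogram.shape)

    # Example: simulate a quarter-dose sinogram from a full-dose one.
    full_dose = np.random.rand(720, 512)   # placeholder sinogram (views x detector bins)
    quarter_dose = simulate_low_dose_sinogram(full_dose, dose_fraction=0.25)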

So we’re left with option (2) – working with reconstructed volume data. This is the problem that the paper addresses.

CT noise model

If you want to simulate RDCT, you need a noise model, and you need to know its parameters. The central idea of the paper is to measure the noise characteristics from the CT data itself, and then to amplify them to increase the noise by any desired amount. The noise model they use is fairly simple and has roots in work done in the 1970s by Robert F. Wagner and Kenneth Hanson. It represents the noise in the CT image as a quantity called the noise power spectrum (NPS). There are two other variables in the model:

  1. Noise-equivalent quanta (NEQ), which relates to the number of photons that have to be detected in order to produce an SNR of 1.
  2. Modulation transfer function (MTF), which describes how strongly each spatial frequency in the scanned object is attenuated by the photo-detection and reconstruction system.

Given these variables, the model simply says that the NPS at each frequency is inversely proportional to the NEQ, proportional to the frequency, and proportional to the squared magnitude of the MTF at that frequency. We don’t have to worry about estimating the NEQ or MTF from the physical parameters of the machine (which may be unavailable), since the paper offers a method for estimating them from CT phantom data. The authors also give the NEQ curves estimated from the machine they used, which could be useful for anyone looking to validate the methods in the paper. They then go on to present a set of filters that they use for spatial noise reconstruction; a generic sketch of that kind of noise synthesis follows below.
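In symbols, the relationship described above is roughly NPS(f) ∝ f · MTF(f)² / NEQ (I’m paraphrasing the proportionality; see the paper for the exact constants). A standard way to turn a target NPS into actual noise, and roughly the spirit of their spatial noise reconstruction, is to filter white Gaussian noise in the Fourier domain. The Python sketch below is a generic version of that idea, not the paper’s exact filters, and the toy NPS at the end is entirely made up.

    import numpy as np

    def synthesize_noise_with_nps(shape, nps_2d, rng=None):
        # Generic sketch: generate a 2D noise field whose power spectrum
        # approximately matches nps_2d by shaping white Gaussian noise
        # with sqrt(NPS) in the Fourier domain.
        rng = np.random.default_rng() if rng is None else rng
        white = rng.normal(size=shape)
        shaped = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(nps_2d))
        return np.real(shaped)

    # Toy example: a made-up NPS that rises linearly with radial frequency,
    # loosely mimicking the f * MTF(f)^2 / NEQ shape described above.
    ny, nx = 256, 256
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    radial_freq = np.sqrt(fx**2 + fy**2)
    toy_nps = radial_freq / radial_freq.max()
    noise_slice = synthesize_noise_with_nps((ny, nx), toy_nps)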

Žabić-Muenzel

Of these two papers, the first presents an RDCT estimation method and the second presents a validation of it. The validation is interesting because they actually took multiple CT scans of the same subject (don’t worry though, it wasn’t a person, it was a pig). The first paper argues that previous approaches to RDCT estimation had some problems; in the authors’ words:

All state of the art approaches for the single energy simulation have one thing in common: a low dose scan is simulated using a monochromatic noise model and synthetic noise is added on to a higher dose scan. We believe that all previously published papers make approximations to the realistic noise models that break down for very low dose simulations. We will discuss these approximations and provide evidence for our concerns regarding the very low dose scans in Sec. VII and show that our model is accurate even in those conditions. Finally, we complete our model by including a realistic non-Gaussian detector noise component, which additionally distinguishes our approach to any other method published to date.
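To make the quoted criticism concrete: the conventional recipe it describes adds synthetic Gaussian noise directly to a reconstructed high-dose scan, scaled under the assumption that noise variance is inversely proportional to dose. Here’s a minimal sketch of that recipe as I read it, not Žabić’s own method (which adds a non-Gaussian detector noise component precisely because this approximation breaks down at very low doses); all values below are placeholders.

    import numpy as np

    # Naive image-domain simulation of a reduced-dose scan (the approach the
    # quote criticizes): if noise variance scales as 1/dose, going from full
    # dose to a fraction r needs extra noise with sigma_extra = sigma_hd * sqrt(1/r - 1).
    rng = np.random.default_rng(0)
    hd_volume = rng.normal(0.0, 10.0, size=(32, 128, 128))  # placeholder high-dose volume
    sigma_hd = 10.0      # assumed noise level of the high-dose scan
    r = 0.2              # simulate a 20% dose scan
    sigma_extra = sigma_hd * np.sqrt(1.0 / r - 1.0)
    rd_volume = hd_volume + rng.normal(0.0, sigma_extra, size=hd_volume.shape)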

In the second paper, Muenzel demonstrates ‘good agreement’ between the simulated and actual RDCTs, although to my semi-trained eye the simulations look quite a bit noisier, which makes me question the validity of the results; I’ll have to look at it in more detail.
