CT segmentation with deep learning (part 3)

In the previous posts (#1 and #2) I talked about synthetically generating X-ray CT and reduced-dose CT (RDCT) images for training a neural-network segmentation algorithm. In this post I’m going to shift gears a bit and, instead of talking about the literature, talk about the first step in actually implementing this stuff (or any stuff, really): writing code.

In the most common form of X-ray CT imager (helical scan), a gantry containing an X-ray source and an X-ray detector rotates around the subject, obtaining 1D X-ray projections from various angles. The projections for a given slice can then be used to reconstruct the 2D image of that slice. The gantry rotates pretty rapidly, and moves forward and back along the subject to obtain a full 3D volume. There are other types of CT imagers, such as cone-beam imagers, but helical CT is by far the most common method used.

Computationally, the process of scanning a slice of the subject is the Radon transform, and the process of reconstructing the slice from the scan is called, well, the inverse Radon transform. In medical-imaging lingo this is also called ‘filtered back projection’ (strictly, FBP adds some modifications to make it less sensitive to imaging errors, but it’s basically the same thing). The more rotational angles we image from, the higher the resolution of each 1D projection, and the higher the X-ray dose (higher dose means more photons, which means less noise), the better the approximation to the actual subject. In the limit of infinite rotation angles and resolution, and zero noise, the reconstruction matches the imaged volume perfectly – every voxel represents the precise radiodensity of that point in the imaged subject.
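To make the round trip concrete, here’s a minimal sketch in Python using scikit-image’s `radon` and `iradon` functions, with the standard Shepp-Logan head phantom as the test image (my choice here, just for illustration):

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Standard CT test image, downscaled for speed
image = rescale(shepp_logan_phantom(), 0.5)  # 200 x 200

# Forward Radon transform: one 1D projection per angle -> the sinogram
theta = np.linspace(0.0, 180.0, 360, endpoint=False)
sinogram = radon(image, theta=theta)

# Inverse Radon transform, i.e. filtered back projection (ramp filter)
reconstruction = iradon(sinogram, theta=theta, filter_name="ramp")

# With 360 angles the reconstruction is already quite close to the original
error = np.sqrt(np.mean((reconstruction - image) ** 2))
```

`iradon`’s `filter_name` argument also accepts `"shepp-logan"`, `"cosine"`, `"hamming"`, `"hann"`, or `None` (plain unfiltered back projection), which is a quick way to see why the filtering step matters.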

For a variety of reasons, though, FBP is actually a pretty lousy algorithm to use in practice. Nevertheless, most commercial CT platforms continue to use it, and since most of them don’t give easy access to the raw scanner data on which you could try your own reconstruction algorithms, FBP is pretty much what you’re stuck with if you’re a radiology researcher. The silver lining is that you can simulate the output of a commercial CT machine pretty easily, since FBP is simple to implement in software. There’s a MATLAB toolbox for it, but really, it’s not that hard to roll your own.

Just for fun, I actually downloaded that toolbox and gave it a whirl on the Siemens test pattern. It’s not really a useful test pattern for CT imaging but hey. Below are the reconstructions for 36, 360, 360 + noise, and 720 rotational angles (original is on the right).
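The same sweep over angle counts and noise is easy to reproduce with any test image. A sketch (Shepp-Logan phantom standing in for the Siemens pattern, and the noise level chosen arbitrarily):

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

def fbp(image, n_angles, noise_sigma=0.0, seed=0):
    """Scan with n_angles projections, optionally add Gaussian noise to
    the sinogram, and reconstruct with filtered back projection."""
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(image, theta=theta)
    if noise_sigma > 0:
        rng = np.random.default_rng(seed)
        sinogram = sinogram + rng.normal(0.0, noise_sigma, sinogram.shape)
    return iradon(sinogram, theta=theta, filter_name="ramp")

image = rescale(shepp_logan_phantom(), 0.5)

# The same sweep: 36, 360, 360 + noise, and 720 rotational angles
results = {}
for n_angles, sigma in [(36, 0.0), (360, 0.0), (360, 2.0), (720, 0.0)]:
    recon = fbp(image, n_angles, sigma)
    results[(n_angles, sigma)] = np.sqrt(np.mean((recon - image) ** 2))
```

The reconstruction error drops as the angle count rises and climbs again when sinogram noise is added, which is exactly the qualitative behavior in the images above.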


The sinogram for r=360 is:


And here’s a closeup of the central regions to see the detail a bit better:


It might be interesting to produce a lot of randomly-generated noisy reconstructions and train a net (something like a stacked convolutional autoencoder) to reproduce the ‘ground truth’ from the reconstruction. I might do that in a future post.
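As a sketch of what generating those training pairs might look like – random overlapping-ellipse phantoms as ground truth, with the sizes, counts, and noise level all arbitrary choices of mine:

```python
import numpy as np
from skimage.transform import radon, iradon

def make_training_pair(rng, size=128, n_ellipses=5, n_angles=90, noise_sigma=1.0):
    """Return one (noisy FBP reconstruction, ground truth) pair from a
    random phantom built of overlapping ellipses."""
    yy, xx = np.mgrid[0:size, 0:size]
    truth = np.zeros((size, size))
    for _ in range(n_ellipses):
        cx, cy = rng.uniform(size * 0.3, size * 0.7, 2)  # center, kept inside
        a, b = rng.uniform(size * 0.05, size * 0.2, 2)   # semi-axes
        mask = ((xx - cx) / a) ** 2 + ((yy - cy) / b) ** 2 <= 1.0
        truth[mask] += rng.uniform(0.2, 1.0)
    # Simulate a noisy, low-angle scan, then reconstruct with FBP
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(truth, theta=theta)
    sinogram += rng.normal(0.0, noise_sigma, sinogram.shape)
    noisy = iradon(sinogram, theta=theta, filter_name="ramp")
    return noisy, truth

rng = np.random.default_rng(42)
noisy, truth = make_training_pair(rng)
```

The net would then be trained to map `noisy` back to `truth`, with as many freshly generated pairs as you have patience for.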

