Creative Commons License Copyright © Michael Richmond. This work is licensed under a Creative Commons License.

How to "clean" raw images


The general plan

So, we have acquired some images at the telescope. How exciting! We can hardly wait to start measuring the brightness of our target on them. Oh, boy!

But when we display the first image, we see something that looks, well, ugly:

How can we make any measurements from an image such as this? It's almost impossible to figure out where the stars are.

Our plan will be to create a pair of "master" calibration frames, which we will then use to clean this raw image. The result will be a much nicer image, in which (we hope) the stars and galaxies and other objects appear more clearly. Even better than that -- in the clean image, measurements we make will more accurately reflect the number of photons arriving at our telescope from those celestial sources.

Here's the outline:

  1. create a "master" dark frame
  2. create a "master" flatfield frame
  3. subtract the "master" dark from the raw image
  4. divide the dark-subtracted image by the "master" flatfield image
  5. result: a clean image!


Create a "master" dark frame

Our first step will be to create an image which depicts -- with as little random noise as possible -- the electrons which are knocked free in the silicon due to thermal motions of the atoms, rather than by photons from the sky.

For CCDs, it's usually a good idea to create a separate "master dark" frame for each exposure time used for one's target objects. In other words, if we acquired images of several objects during the night with exposure times of, say, 10 seconds, 60 seconds, and 90 seconds, then we ought to create separate "master dark" frames with exposure times of 10 seconds, 60 seconds, and 90 seconds. That means we'd have to take a set of dark frames with exposures of 10 seconds, then a second set of dark frames with exposures of 60 seconds, then a third set of dark frames with exposures of 90 seconds. Phew. That would be a lot of work.

But for (at least some) CMOS cameras, such as the ASI 6200MM that we'll use at the RIT Observatory this semester, the dark current is so low that we can usually use just a single set of dark frames with one exposure time to correct raw frames with different exposure times. We could take one set of dark frames with an exposure time of 30 seconds, use it to create a single master dark frame, and apply that to ALL of our raw images.

We could acquire just a single dark image with the appropriate exposure time, and subtract it from our target frames; but that wouldn't be a great idea. The problem is that every dark frame we take has a combination of two features: the average number of electrons knocked free per pixel, plus some random fluctuations in that number. If we examined the values in one particular pixel in 10 consecutive dark frames, for example, we might see these counts:



        89, 91, 88, 95, 92, 85, 88, 90, 93, 89


    Q:  What is the average of these values?






The average of these values is 90 counts, with a scatter of about 3 counts around that average. Our best estimate of the number of electrons in a pixel due to the dark current would be 90 -- so that's the value we should subtract from our target images.
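We can check this arithmetic with a few lines of Python (using numpy here; this is just an illustrative snippet, not part of any particular reduction package):

```python
import numpy as np

# the ten dark-frame values for one pixel, from the list above
counts = np.array([89, 91, 88, 95, 92, 85, 88, 90, 93, 89])

print(np.mean(counts))   # -> 90.0
print(np.std(counts))    # roughly 2.7: the scatter around the average
```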

"Can we just take the average of a set of dark frames?" you ask. Well, we could, but we might run into problems. Suppose that in just one frame, a pixel is struck by a cosmic ray, so that the list of values in consecutive images looks like this:



        89, 91, 88, 95, 92, 385, 88, 90, 93, 89


    Q:  What is the average of these values?





Whoops! Now the average value is 120 counts -- which is NOT the value we expect in a typical frame. The AVERAGE is not a very robust mathematical quantity; it can be influenced strongly by a few very high or very low values.
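The same quick check in Python, with the cosmic-ray value included, shows how badly one outlier skews the average:

```python
import numpy as np

# the same pixel, but one frame was struck by a cosmic ray (the 385)
counts = np.array([89, 91, 88, 95, 92, 385, 88, 90, 93, 89])

print(np.mean(counts))   # -> 120.0: one outlier drags the average far from 90
```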



     Q:   Is there a better statistic than the average for this
                sort of calculation of a "typical" value?








There are several more robust statistical quantities we could choose. Let's pick the MEDIAN, which is simply the middle item after the values have been sorted numerically.



        89, 91, 88, 95, 92, 385, 88, 90, 93, 89


    Q:  What is the MEDIAN of these values?





The median is 90.5 if we average the two middle values (90 and 91), or either 90 or 91 if we simply pick one; the exact answer depends on how you deal with an even number of input values. Either way, it's a much better choice to represent the typical pixel value.
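For what it's worth, numpy's median follows the convention of averaging the two middle values:

```python
import numpy as np

counts = np.array([89, 91, 88, 95, 92, 385, 88, 90, 93, 89])

# sorted: 88, 88, 89, 89, 90, 91, 92, 93, 95, 385
print(np.median(counts))   # -> 90.5: the cosmic ray barely matters
```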

So, our method for creating a "master dark" frame will be to acquire a set of dark images, then compute the pixel-by-pixel MEDIAN of them all.
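Here is a minimal sketch of that pixel-by-pixel median in Python with numpy. The tiny 2x2 "frames" are simulated stand-ins; with real data, each frame would be read from a FITS file (for example, with astropy.io.fits):

```python
import numpy as np

# three simulated 2x2 dark frames, stacked along a new "frame" axis
dark_frames = np.array([
    [[89, 91], [ 88, 95]],
    [[92, 85], [ 88, 90]],
    [[93, 89], [385, 91]],   # one pixel struck by a cosmic ray
])

# pixel-by-pixel median: one value per pixel, taken down the stack
master_dark = np.median(dark_frames, axis=0)
print(master_dark)   # -> [[92. 89.] [88. 91.]] -- the 385 is rejected
```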

In the case of our example night of observing, the raw image looks like this:

and the "master dark" looks like this:


Create a "master" flatfield frame

We can follow exactly the same procedure to create a "master" flatfield frame, which will correct for changes in sensitivity to light across the field of view. In order to decrease the contributions of random noise, we can acquire a set of 10 or 20 flatfield frames for a given filter, and then combine them using the median, rather than the average. The result will be a "master flatfield" image which we can apply to our dark-subtracted target images.

But there's one small item we must not forget. The flatfield frames should represent only the response of the camera to light coming from the sky -- NOT to electrons generated by thermal motions. So, before we combine the individual flatfield images to create the master, we must first subtract the appropriate "master dark" image from each.



   Step 1: subtract master dark image from each raw flatfield image

   Step 2: combine dark-subtracted flatfield images into a "master flatfield"
                 using the median for each pixel
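Those two steps might look like this in Python with numpy. The arrays here are small simulated stand-ins for real flatfield frames (reading the FITS files, e.g. with astropy.io.fits, is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
master_dark = np.full((4, 4), 90.0)   # pretend master dark: 90 counts everywhere

# ten simulated raw flatfields: bright sky light + dark current + random noise
raw_flats = [30000.0 + master_dark + rng.normal(0, 50, (4, 4)) for _ in range(10)]

# Step 1: subtract the master dark from each raw flatfield
dark_sub_flats = [flat - master_dark for flat in raw_flats]

# Step 2: median-combine, pixel by pixel, into a master flatfield
master_flat = np.median(dark_sub_flats, axis=0)
```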


Because the flatfield frames are exposed to very high levels of light, the contributions of the dark current are small by comparison. It's difficult to tell the difference between a raw flatfield image and a dark-subtracted one simply by viewing them.



  Q:  Which of the images above is the raw, single flatfield?
         Which is the dark-subtracted single flatfield?









If one combines enough dark-subtracted flatfield images, on the other hand, one can often see the effect of a lower noise level quite clearly.



  Q:  Which of the images above is the single dark-subtracted flatfield?
         Which is the master flatfield?











Subtract the "master dark" frame

We're ready to start cleaning! The first step is easy: just subtract the master dark image from the raw target image.



    (dark-subtracted image)  =  (raw target) - (master dark)
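As a concrete (made-up) numerical example in Python:

```python
import numpy as np

raw_target  = np.array([[500., 620.],
                        [480., 510.]])   # raw target image (counts)
master_dark = np.array([[ 90.,  92.],
                        [ 88.,  91.]])   # master dark frame

dark_sub = raw_target - master_dark      # pixel-by-pixel subtraction
print(dark_sub)   # -> [[410. 528.] [392. 419.]]
```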

Here's the raw image,

and here's the dark-subtracted version:

Much better! Hooray!


Dividing by the "master flatfield" frame

If we stretch the contrast on our dark-subtracted image, we will see some additional nasty features appear. In many cases, the most obvious defects will be the shadows of specks of dust in the optical path. In our example case, there aren't any super-obvious dust shadows, but there is a general trend for the center of the image to be brighter than average, and the corners and edges to be fainter.

If we divide by the master flatfield frame, we can push down the central bright area, and pull up the outer faint areas. The result is a more uniform version of the image.
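A minimal sketch of this division, again with made-up numbers. One detail not mentioned above, but common in practice, is to normalize the master flatfield by its own median first, so that the correction removes the center-to-edge pattern without changing the overall count level:

```python
import numpy as np

dark_sub    = np.array([[410., 528.],
                        [392., 419.]])     # dark-subtracted target image
master_flat = np.array([[31500., 30000.],  # brighter toward the center ...
                        [29400., 29100.]]) # ... fainter toward the edges

norm_flat = master_flat / np.median(master_flat)   # typical value becomes 1.0
clean = dark_sub / norm_flat                       # the cleaned image
print(clean)
```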

It looks even better if we set the contrast at a lower, more reasonable level.

Take a look for yourself: use AstroImageJ to display the clean image, and compare it side-by-side with the raw one.


For more information

