# Dark Subtraction and Flatfielding

Before we can start to make measurements of position or brightness on a CCD image, we must "clean it up." CCDs leave (at least) two kinds of instrumental artifacts on every raw image. The goal of the "cleaning" process is to remove these artifacts so that the resulting image is an accurate record of the amount of light which struck the chip.

#### Dark Current

Recall that atoms in the silicon lattice vibrate and jostle each other due to their thermal energy. As the atoms bump into each other, they can excite electrons into the conduction band, even in the absence of light. The result is a pool of electrons which appears in every pixel. Astronomers call this signal dark current. Some pixels collect many more of these thermal electrons due to local imperfections in the crystal lattice: we call them hot pixels.

The level of the dark current depends on the temperature of the silicon. The colder the chip, the smaller the thermal motions, and the fewer the electrons knocked free by collisions.

Here's an example of a 30-second dark frame taken at the RIT Observatory. Note the hot pixels, which stand out from the general low background.

If we increase the contrast level, we see a small gradient in dark current across the chip:

The gradient is due to the way a CCD is read out: charge is transferred along columns, row by row. Pixels close to the amplifier are read out first, while those which are far from the amplifier have to wait a little longer. As they wait, thermal motions knock free extra electrons -- which gives them slightly higher dark current.

Exercise:
1. In which corner of the image above is the amplifier located?

Dark current is an additive source of signal: the number of electrons measured in each pixel is

```
      measured electrons  =  (thermal electrons)  +  (photo-electrons)
```
We are interested in the amount of light which struck the CCD, which gives rise to photo-electrons. So, we want to subtract the contribution of the thermal electrons:

```
      photo-electrons     =  (measured electrons)  -  (thermal electrons)
```

So, we need to

1. make an accurate estimate of the number of thermal electrons in a particular pixel
2. subtract that estimate from the measured number of electrons for the pixel

Astronomers refer to this procedure as "subtracting the dark current", or dark subtraction for short.
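As a concrete illustration, with small NumPy arrays standing in for images (the pixel values here are invented), the subtraction happens pixel by pixel:

```python
import numpy as np

# Hypothetical 2x2 raw image and a matching dark frame, in electrons.
raw  = np.array([[410.0, 395.0],
                 [388.0, 902.0]])   # one "hot pixel" at lower right
dark = np.array([[ 12.0,  10.0],
                 [  9.0, 505.0]])   # the hot pixel shows up in the dark, too

# photo-electrons = measured electrons - thermal electrons
photo = raw - dark
```

Note that the hot pixel's large thermal signal is removed along with the ordinary dark current, as long as the dark frame records it accurately.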

#### Variations in sensitivity

Another problem with CCDs is that the response of the silicon to light may change slightly from place to place in the crystal, due to variations in chemical composition, electrode size and shape, or just plain dirt and dust on the front surface. So, even if 100 photons strike pixel A, and 100 photons also strike neighboring pixel B, we may read out 55 electrons from pixel A and 56 electrons from pixel B. It seems like a small difference -- and usually is, for real chips -- but can make a significant difference in photometry.

```
      56 electrons - 55 electrons
      -----------------------------  ≈  2 percent
              56 electrons
```

Here's an example of a flatfield exposure from the RIT Observatory, taken in the V band. White areas represent low pixel values (with pure white being 23,569 or fewer counts), and dark areas represent high pixel values (with pure black being 24,195 or higher counts).

The amplitude of these variations is roughly

```
      24,195 - 23,569
      ---------------  ≈  3 percent
          24,195
```

The little white donuts are diffraction patterns of dust particles on the glass window in front of the CCD chip. The big light donuts are again diffraction patterns, but this time due to dust particles on the glass filters (which are much farther from the focal plane than the camera's window).

How can we account for such small changes in sensitivity across the chip?

The solution is to expose the chip to a uniform level of light and record its response. If we know that the same amount of light strikes each pixel, but measure different pixel values, we can determine the relative sensitivity of each pixel. The usual method is to normalize the pixel values to the mean level over the entire frame.

```
      pixel    pixel value     image mean      normalized pixel value
      ---------------------------------------------------------------
        A        24,335          25,000      24,335/25,000  =  0.9734
        B        26,103          25,000      26,103/25,000  =  1.0441
```
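The same normalization can be sketched with NumPy; the extra pixel values below are invented so that the frame mean comes out to exactly 25,000:

```python
import numpy as np

# Toy 2x2 flatfield frame whose mean level is exactly 25,000 counts.
flat = np.array([[24335.0, 25665.0],
                 [26103.0, 23897.0]])

mean = flat.mean()              # 25000.0
sensitivity = flat / mean       # relative sensitivity of each pixel
```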

Then, before we start to measure the properties of stars on a target image, we can correct each pixel's value by dividing the measured value by that pixel's relative sensitivity:

```
                                  A                    B
      ------------------------------------------------------------
      measured              5957 electrons       6405 electrons

      relative sensitivity     0.9734               1.0441

      divide                5957/0.9734          6405/1.0441
      ============================================================
      corrected value          6,120                 6,134
```

So, one must create a flatfield frame for a camera/filter combination, and then divide by the flatfield to correct for pixel-to-pixel variations in sensitivity.
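Using the numbers from the table above, the correction is a single element-wise division (a sketch; a real pipeline performs the same division over entire 2-D frames):

```python
import numpy as np

measured    = np.array([5957.0, 6405.0])   # pixels A and B, in electrons
sensitivity = np.array([0.9734, 1.0441])   # from the normalized flatfield

# divide each measured value by that pixel's relative sensitivity
corrected = measured / sensitivity
```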

How can we take a picture with uniform light across the entire frame? There are three main techniques:

- dome flats: take a picture of the inside of the dome, or (better) a sheet of white plastic/paper hanging from the dome and illuminated by a set of lamps
- twilight sky flats: take a picture of the twilight sky, either at sunset or sunrise, when the light levels are high enough to hide stars
- night sky flats: combine lots and lots of night-time images of the sky (which do show stars); you need LOTS, because the average signal level in the background is usually low

We will usually take twilight sky flats.

#### Making a master dark

It is dangerous to acquire a single dark exposure and use it alone to reduce other data. There will be (small) random noise fluctuations in each pixel, plus, in a few pixels, (big) cosmic ray hits. We can improve the quality of the final, cleaned image by combining a number of dark frames to create a master dark frame.

The XVista program median will combine a number of images to create a single output image. It goes pixel-by-pixel through the input images, making a list of the values at each position in all the images:

```
     at  row=120  col=95      values are   353 374 394 355 368 381 374
```

It selects the median pixel value from the list, by sorting the values and then picking the middle one:

```
                       sorted values are   353 355 368 374 374 381 394
                                                       ^
                                                       |
                                        median value --+
```

In this way, any pixels which are much higher or lower than normal (due to cosmic ray hits) are discarded, and random variations are smoothed out.
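NumPy's median function reproduces this selection; applied along the stacking axis of a set of frames, it performs the same operation at every pixel position at once:

```python
import numpy as np

# Values at row=120, col=95 in seven input dark frames (from the example above).
values = np.array([353, 374, 394, 355, 368, 381, 374])

med = np.median(values)   # sorts the values and picks the middle one

# For whole images: np.median(np.stack(frames), axis=0)
```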

One uses the median program like so:

```
      median dark1.fts dark2.fts ... dark9.fts outfile=master_dark.fts
```
In other words, one provides a list of input images on the command line, plus the name of the file into which the output image should be placed.

The amount of dark current in a pixel depends on the exposure time: the longer the exposure, the more time for electrons to be knocked free by thermal motions. One should therefore correct each image with a dark frame of exactly the same exposure time. If one has target frames of several different exposure times, say, 10 sec, 30 sec, and 60 sec, one must take dark frames with corresponding exposure times: dark frames of 10 sec, 30 sec, and 60 sec. For each exposure time, one should make a separate master dark frame, and use it only with corresponding target images.
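One simple way to keep this bookkeeping straight is to group the dark frames by exposure time before combining them. A sketch, with invented file names:

```python
from collections import defaultdict

# (filename, exposure time in seconds) for each dark frame -- names are invented
darks = [("dark01.fts", 10), ("dark02.fts", 10),
         ("dark03.fts", 30), ("dark04.fts", 30),
         ("dark05.fts", 60)]

# Collect the frames for each exposure time into their own list;
# each list would then be median-combined into its own master dark.
by_exptime = defaultdict(list)
for name, exptime in darks:
    by_exptime[exptime].append(name)
```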

#### Making a master flat

Just as we combine many individual dark images to create a master dark frame, we must also combine a number of individual flatfield images to create a master flatfield frame. As before, each image has random variations, plus possible cosmic-ray hits; but there's another possible source of noise in flatfield frames of the twilight or night sky: stars!

There's also another step involved: we must subtract the dark current from each individual flatfield frame before combining them to create the master flat.

Since different flatfield frames are very likely to have slightly different levels of illumination (especially twilight sky frames, which are taken as the sky grows darker or lighter), one cannot simply take the median or average of the individual frames. Instead, one should multiply or divide each individual flatfield image so that it has the same overall light level as all the other flatfield frames.

Putting this together, we have the sequence

1. subtract master dark frame from each flatfield image
2. calculate mean light level in each flatfield image
3. scale each flatfield image to a common mean level
4. combine the flatfield images to create a master flatfield frame
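The four steps above can be sketched as a single NumPy function (an illustrative sketch, not the XVista implementation; it assumes the flats are 2-D arrays of equal shape, and the function name is mine):

```python
import numpy as np

def make_master_flat(flats, master_dark):
    """Dark-subtract, scale to a common mean level, and median-combine."""
    # 1. subtract the master dark frame from each flatfield image
    subtracted = [f - master_dark for f in flats]
    # 2. calculate the mean light level in each flatfield image
    levels = [s.mean() for s in subtracted]
    # 3. scale each image to a common (here, the average) level
    target = np.mean(levels)
    scaled = [s * (target / lev) for s, lev in zip(subtracted, levels)]
    # 4. median-combine the scaled images into the master flat
    return np.median(np.stack(scaled), axis=0)
```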

#### Correcting each target image

Now, how do we remove the dark current and variations in sensitivity from images of targets in the night sky? The procedure is first to remove the additive noise, then the multiplicative factor:
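In array form, the whole correction for each target image looks like this (a minimal sketch with invented values; the master flat is assumed to be already normalized to a mean of 1):

```python
import numpy as np

target      = np.array([[1100.0, 1150.0]])   # raw target image, in electrons
master_dark = np.array([[ 100.0,  100.0]])   # matching exposure time
master_flat = np.array([[   1.0,   1.05]])   # normalized sensitivity map

# first the additive correction (dark), then the multiplicative one (flat)
clean = (target - master_dark) / master_flat
```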