How does "flatfield noise" scale with flatfield light level?

Michael Richmond
Apr 17, 2008 (revised Apr 18, 2008)


Executive summary

How much light from lamps inside the spacecraft must reach the focal plane in order to create a sufficiently precise "P-flat" ("lamp-flat")? I ran some simple pixel-level simulations of isolated stars to look at the relationship between the exposure levels for individual flatfield frames and the precision of stellar photometry. I find that if we build up a total signal of about 100,000 photoelectrons per pixel in the master "lamp" flatfield frame, the random noise added to a single stellar measurement due to shot-noise fluctuations in the master flatfield will be of order 0.001 mag.

How the scatter in single stellar measurements affects the determination of cosmological parameters is outside the scope of this document.

Details of the simulation with a uniform detector

When we create a "P-flat" using lamps inside the spacecraft, we are interested in measuring the sensitivity of the detector on small spatial scales. Our ability to compare the sensitivity of one pixel to that of its neighbor depends largely on the random fluctuations in the number of photons which strike the detector. The larger the number of photons collected, the smaller these fluctuations and the more precisely we can remove pixel-to-pixel variations in sensitivity via flatfielding.

But how many photons must we collect in order to incur a specified random error in the measurement of a single STAR? This is quite a different question than asking about the statistics of a single pixel, since the light from a star is spread over a number of pixels in a non-uniform manner.

To address this question, I made some simple pixel-level artificial images with the following properties.

First, I ran a test with no flatfielding at all. I generated 16 stars in a regular grid across the detector, each one centered at the same sub-pixel location. I found that moving these stars to different sub-pixel locations caused my star-generating routines to change the total counts by only about 0.00034 mag, so the choice of position should not have mattered; but to be safe, all stars fell in exactly the same sub-pixel spot.
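
The setup above can be sketched in a few lines. The Gaussian PSF, frame size, and grid spacing here are my own assumptions for illustration; the original code may differ.

```python
import numpy as np

def add_star(img, x0, y0, total_counts, fwhm=2.0):
    # Gaussian profile as a stand-in PSF; (x0, y0) fixes the
    # sub-pixel position of the star's center
    sigma = fwhm / 2.3548          # convert FWHM to Gaussian sigma
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    psf = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * sigma ** 2))
    img += total_counts * psf / psf.sum()

# 4 x 4 grid of equal stars, all centered at the same sub-pixel spot
frame = np.zeros((256, 256))
for i in range(4):
    for j in range(4):
        add_star(frame, 32.0 + 64 * i, 32.0 + 64 * j, total_counts=850_000)
```

Pinning every star to the same sub-pixel offset removes the small (0.00034 mag) position-dependent variation from the comparison.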

The stars were set to be equal in brightness, with a peak value of about 30,000 counts, just below the saturation threshold. The total number of electrons collected per star was about 850,000, so the random photon noise should have introduced a fractional error of about

                         sqrt(850,000)            1
   fractional error  =  ---------------  =  -------------  =  0.00108
                            850,000         sqrt(850,000)

I typically generated 10 frames of 16 stars each, for a total of 160 measurements. The scatter in the measurements of these stars was about 0.0010 mag, which is a good check that my code is doing the right thing so far.
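
That sanity check can be reproduced numerically: the predicted fractional error is 1/sqrt(N), and a quick Monte Carlo with Poisson draws should match it.

```python
import numpy as np

n_electrons = 850_000

# predicted fractional shot-noise error: sqrt(N)/N = 1/sqrt(N)
predicted = 1.0 / np.sqrt(n_electrons)

# Monte Carlo check: scatter of many Poisson draws around the mean
rng = np.random.default_rng(0)
draws = rng.poisson(n_electrons, size=10_000)
measured = draws.std() / draws.mean()
```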

We can call these artificial frames the "raw star" images. Since there are no imperfections in my simulation, however, these ought to be clean.

Next, I created flatfield frames. Since the sensitivity across each detector was uniform, I simply generated a uniform source of photons falling on each pixel ... but with random fluctuations due to the shot noise in the signal striking each pixel. For very low flatfield light levels, I used a single image. To build up larger light levels, I created multiple flatfield frames and combined them using the median technique (i.e., find the median pixel value at each location and place that into the output "master" frame). The result was a "master" flatfield image which had a uniform level overall, but small fluctuations on small scales.
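
A minimal sketch of the median-combination step, assuming a uniform illumination level and frame size of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
mean_level = 20_000        # photons per pixel in each individual flat
n_frames = 5

# each flat: uniform illumination plus per-pixel Poisson shot noise
flats = rng.poisson(mean_level, size=(n_frames, 128, 128)).astype(float)

# master flat: per-pixel median across the stack, normalized to unity
master = np.median(flats, axis=0)
master /= master.mean()
```

In real data the median (rather than the mean) also rejects outliers such as cosmic-ray hits in individual frames.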

I verified that those small pixel-to-pixel fluctuations obeyed the same statistical relationship shown above for shot noise.

Now, I "degraded" the "raw star" frames by dividing them by the master flatfield. In this simulation, that step simply ADDED noise. In real life, of course, the hope is that although flatfielding adds a small amount of random noise, it removes systematic errors which would otherwise plague the photometry. By building the master flatfield from a very large number of collected photons, we can keep the added random noise at a negligible level.
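
The noise added by the division can be seen directly. Here the "raw" frame is just a constant level rather than a star field, which is enough to isolate the flatfield's contribution; the 100,000 photons/pixel level is the target figure from the summary.

```python
import numpy as np

rng = np.random.default_rng(2)

# a perfectly clean "raw" frame: just a constant level
raw = np.full((128, 128), 1000.0)

# master flat built from ~100,000 photons per pixel, normalized
level = 100_000
master = rng.poisson(level, size=(128, 128)) / float(level)

flattened = raw / master

# the division added fractional noise of roughly 1/sqrt(100,000)
added_noise = flattened.std() / flattened.mean()
```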

I then measured the properties of the stars in these "flatfielded" frames. I knew the input brightness of each star, so I could easily compute the difference

       diff  =  (output counts per star) - (input counts per star)

I then computed the mean and standard deviation of this "diff" statistic, using all the stars in a set of frames. The standard deviation is the important part: I will call it the "scatter" in measurements of single stars from this point forward.
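
The "diff" statistic and its scatter amount to a mean and a standard deviation over the set of stars. A sketch with made-up measurements (the numbers are hypothetical, not from the actual simulation):

```python
import numpy as np

# hypothetical measurements of four stars, each with a known input
input_counts = 850_000.0
output_counts = np.array([850_210.0, 849_400.0, 851_100.0, 849_900.0])

diff = output_counts - input_counts
mean_diff = diff.mean()
scatter = diff.std(ddof=1)    # the "scatter" in single-star measurements
```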

We can divide this scatter into two components. Since the flatfield noise and the star's own photon noise are independent, they add in quadrature:

    (total scatter per star)^2  =  (flatfield noise per star)^2  +
                                       (star's own photon noise per star)^2

Since my earlier simulations (without any flatfielding) provided the star's own contribution to this noise, I could compute the contribution from the flatfields alone:

    (flatfield noise per star)  =  sqrt[ (total scatter per star)^2  -
                                         (star's own photon noise per star)^2 ]
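
Because the two noise sources are statistically independent, the separation is a subtraction in quadrature. A minimal sketch, using the star's shot noise computed earlier and a representative total scatter:

```python
import numpy as np

# total measured scatter after flatfielding, and the star's own
# shot-noise contribution from the no-flatfield test
total_scatter = 0.0040
star_noise = 0.00108

# independent noise sources add in quadrature, so the flatfield
# term is recovered by subtracting in quadrature
flat_noise = np.sqrt(total_scatter**2 - star_noise**2)
```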

In the table below, I present my results. The values for "flatfield noise per star" are expressed in fractional terms, so a value of 0.0010 corresponds to 0.1 percent, or about 0.0010 magnitudes.

  Photons per pixel            flatfield contribution
   in master flatfield      to scatter per stellar measurement
  --------------------------------------------------------------
       4 x 10^3                     0.0040
       8 x 10^3                     0.0034
       2 x 10^4                     0.0020
       4 x 10^4                     0.0014
       8 x 10^4                     0.0009
       2 x 10^5                     0.0006
       4 x 10^5                     0.0004

Do these results depend on the exact size or shape of the PSF? I suspect that they must, but I made only a simple test of the idea. I created artificial images of stars which had the SAME TOTAL NUMBER OF PHOTONS, but different PSF sizes, with FWHM ranging from 0.1 pixels to 4.0 pixels. The light was therefore concentrated within very different areas on the chip. In real life, this would cause great differences in the measurements of the sources, due to the contribution of background sky. In my simulation, however, there was no background sky nor any readout noise.

I ran the same procedure: create a grid of stars, divide by a noisy master flatfield, and compare the output magnitudes to the input magnitudes. I found only a weak dependence on the size of the PSF: the smallest PSF, with FWHM = 0.1 pixel, had the largest scatter, but only by 15 percent or so compared to the broader PSFs.

Details of the simulation with a non-uniform detector

At Mike Lampton's suggestion, I repeated the simulations with two small changes, the main one being that the detector's pixel-to-pixel sensitivity was no longer perfectly uniform.

Once again, I ran simulations with different levels of light in the individual flatfield exposures. In the end, I compared the output stellar brightness to the input stellar brightness, and separated the scatter in differences into components due to the photon noise in the starlight itself, and noise due to the imperfect flatfield frame.

The results are similar to those for a perfectly uniform detector, but not quite as good: there is more residual scatter for a given flatfield exposure level.

  Photons per pixel            flatfield contribution
   in master flatfield      to scatter per stellar measurement
  --------------------------------------------------------------
       4 x 10^4                     0.0019
     1.2 x 10^5                     0.0012
       4 x 10^5                     0.0006
     1.2 x 10^6                     0.0005