General plan and timescale for making a SNAP flatfield

Michael Richmond
Mar 14, 2004 (revised Mar 15, 2004)

What sequence of exposures is required to determine a "flatfield" for the SNAP camera? I use the term "flatfield" as a shorthand for "the set of corrections necessary to remove instrumental signatures from stellar magnitudes." There are many corrections to make because the SNAP camera is large and complex, and because the goal of the SNAP mission requires that we eliminate systematic errors at the level of one or two percent.

A brief discussion of the manner in which one might characterize the instrumental errors is given in

http://spiff.rit.edu/richmond/snap/pipeline/oct22_2003/oct22_2003.html

In this document, I try to estimate the time required to complete a series of exposures which will permit us to estimate the instrumental effects. It is likely that "final" calibrations will include measurements from the regular observations of the SNAP fields over a period of many months. However, if we wish to make preliminary reductions of the data during the first few months on orbit, we will need a first guess at the systematic errors.


Degeneracies and how to avoid them

Because the SNAP camera contains so many pieces and covers such a large area, it is not simple to identify the source of a particular error unambiguously. For example, suppose that we take one image, shift the telescope so that stars move from one chip on the focal plane to another, and then take a second image. Consider a star which moves from chip A to chip B, both of which are covered by the same filter.


           First image                         Second image

        chip A        chip B                chip A       chip B
       ---------    ----------            ----------    ----------
       |       |    |        |            |        |    |        |
       |   *   |    |        |            |        |    |   *    |
       |       |    |        |            |        |    |        |
       |       |    |        |            |        |    |        |
       ---------    ----------            ----------    ----------

The star appears 0.05 mag brighter in the first image than in the second image. Why? There are several possibilities: chip A may simply be more sensitive than chip B; the filter may transmit more light over chip A than over chip B; the sensitivity may vary with position within a single chip or filter; or the shutter may have exposed the two positions for slightly different lengths of time.

With just this single pair of measurements of this single star, we cannot figure out which effect(s) is responsible for the difference in instrumental magnitude. However, if we take many exposures which include many stars, we can disentangle the various effects ... if we plan those exposures properly. The key is to acquire images which emphasize one particular source of error at a time, and use those images to determine the size of each source.
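
To show the sort of bookkeeping I have in mind, here is a small sketch in Python. It is only an illustration, not the SNAP reduction: the model keeps just one offset per chip and one true magnitude per star, and all the numbers are invented. The point is that once many stars are measured on more than one chip, a simple least-squares fit recovers the offsets (up to one overall zero point, which we must pin by hand).

    # Minimal illustration: measured magnitude = true magnitude + chip offset.
    # Many stars, each observed on two different chips, break the degeneracy.
    import numpy as np

    rng = np.random.default_rng(0)
    n_stars, n_chips = 50, 4
    true_mag = rng.uniform(15.0, 18.0, n_stars)          # invented star magnitudes
    true_offset = np.array([0.00, 0.05, -0.03, 0.02])    # invented chip offsets (mag)

    rows, obs = [], []
    for j in range(n_stars):
        for i in rng.choice(n_chips, size=2, replace=False):
            row = np.zeros(n_stars + n_chips)
            row[j] = 1.0                 # coefficient for star j's magnitude
            row[n_stars + i] = 1.0       # coefficient for chip i's offset
            rows.append(row)
            obs.append(true_mag[j] + true_offset[i] + rng.normal(0.0, 0.01))

    # Pin the overall zero point: a heavily weighted equation that effectively
    # forces the first chip's offset to zero.
    pin = np.zeros(n_stars + n_chips)
    pin[n_stars] = 1.0e3
    A = np.vstack(rows + [pin])
    b = np.array(obs + [0.0])

    solution, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("recovered chip offsets:", np.round(solution[n_stars:], 3))

In the real problem we would also need terms for each filter, for position within each chip and filter, and for the color of each star, but the structure of the solution is the same.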


Assumptions

I make the following assumptions about the properties of the camera and spacecraft.


First draft of a calibration/flatfield sequence

There are two components in this plan: a series of short (10-second) exposures, and a series of intermediate-length (100-second) exposures.

I have chosen an exposure time of 10 seconds for the "short" series. This balances two factors: the exposures must be short enough that stars with very good spectrophotometric calibration do not saturate, but long enough that errors in the shutter motions are negligible. As we learn more about the hardware, the exposure time may change. A value of 10 seconds permits us to measure stars of approximately V=15 without saturation. There are roughly 630 stars per square degree in the SNAP North field with both B and R magnitudes between 15 and 18; this gives us over 20 stars in each filter with good signal in a 10-second exposure.
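
As a rough check on those star counts (the sky area seen through a single filter is a placeholder of my own, not a design value):

    # Rough star-count check; the per-filter area is an assumed placeholder.
    stars_per_sq_deg = 630       # stars with 15 < B,R < 18 in the SNAP North field
    area_per_filter = 0.05       # square degrees seen through one filter (assumed)

    print("expected stars per filter: %.0f" % (stars_per_sq_deg * area_per_filter))
    # prints roughly 30 for this assumed area, consistent with "over 20"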

The short exposures should be conducted as follows:

  1. point telescope near center of SNAP field

  2. move stars within a single filter, using a 5x5 grid of small telescope offsets. The total time: 4 x (10 + 30) = 160 sec per grid point, thus (25 x 160) = 4,000 seconds.

  3. move stars within a single chip, using a 5x5 grid of small telescope offsets. The total time: 4 x (10 + 30) = 160 sec per grid point, thus (25 x 160) = 4,000 seconds.

  4. move stars from filter to filter, so that stars land on each of the 180 chip-plus-filter units. The total time: 4 x (10 + 30) = 160 sec per pointing, thus (180 x 160) = 28,800 seconds.

At the end of this sequence, many stars will have been measured on each of the 180 different chip-plus-filter units. We can examine variations in sensitivity within each filter and each chip, and see large-scale variations across the focal plane. We should also understand the action of the shutter.

The total time for this entire sequence is 36,800 seconds. There will be 3 long slews involved, one each time we move stars from the chips of one quadrant of the focal plane to those of another. I suspect that these slews may take longer than the nominal 30 seconds. Even so, it appears that the total time will be less than one day.
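
The bookkeeping is simple enough to write down once and reuse; here is a sketch in Python which just reproduces the arithmetic of the list above (the exposure time, the 30-second slews, and the repeat counts are the same numbers used there):

    # Time budget for the short-exposure series, reproducing the arithmetic above.
    def block_time(n_pointings, n_exp=4, t_exp=10.0, t_slew=30.0):
        """Seconds for one block: n_exp exposures of t_exp, each with a t_slew move."""
        return n_pointings * n_exp * (t_exp + t_slew)

    intra_filter     = block_time(25)     # 5x5 grid within a single filter
    intra_chip       = block_time(25)     # 5x5 grid within a single chip
    filter_to_filter = block_time(180)    # land stars on all 180 chip+filter units

    total = intra_filter + intra_chip + filter_to_filter
    print("short series: %d seconds = %.2f days" % (total, total / 86400.0))
    # short series: 36800 seconds = 0.43 days

The same function, with t_exp=100, reproduces the times for the intermediate-length series below.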

Now for the series of intermediate-length images. An exposure time of 100 seconds will saturate stars of V=15, but will give good signal for stars in the range from magnitude 17 to magnitude 20. Stars at the faint end of this range will be unsaturated on ordinary SNAP survey exposures, while those at the bright end allow us to tie to the very short exposures.

The main goal of this set is to place stars at least once on all 9 of the filters, so that we have a measure of each star's magnitude in each filter. We include 5x5 grids for intra-filter and intra-chip variations again, since the number of stars on each chip will be much larger than in the short exposures. We can compare the variations across each grid in the short vs. intermediate set to gauge the accuracy of the corrections.

  1. point telescope near center of SNAP field

  2. move stars within a single filter, using a 5x5 grid of small telescope offsets. The total time: 4 x (100 + 30) = 520 sec per grid point, thus (25 x 520) = 13,000 seconds.

  3. move stars within a single chip, using a 5x5 grid of small telescope offsets. The total time: 4 x (100 + 30) = 520 sec per grid point, thus (25 x 520) = 13,000 seconds.

  4. move stars from filter to filter, so that stars fall on each of the 9 filters at least once. The total time: 4 x (100 + 30) = 520 sec per pointing, thus (18 x 520) = 9,360 seconds.

The total time for this set of intermediate-length exposures is thus 35,360 seconds.
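
As mentioned above, one check on the corrections is to compare the intra-filter (or intra-chip) variations measured by the 5x5 grids in the short and intermediate sets. Here is a sketch of that comparison, using made-up 5x5 correction maps in place of real measurements:

    # Compare two 5x5 maps of magnitude corrections for the same filter;
    # the arrays here are random stand-ins, not real measurements.
    import numpy as np

    rng = np.random.default_rng(1)
    corr_short        = rng.normal(0.0, 0.010, (5, 5))               # from 10-sec grid
    corr_intermediate = corr_short + rng.normal(0.0, 0.003, (5, 5))  # from 100-sec grid

    diff = corr_intermediate - corr_short
    print("mean offset    : %+.4f mag" % diff.mean())
    print("rms difference : %.4f mag" % diff.std())
    print("worst grid cell: %.4f mag" % np.abs(diff).max())

If the rms difference turns out to be much larger than the expected photometric noise, the corrections are not yet under control.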

It might be a good idea to roll by 90 degrees and repeat this series of measurements so that we can tie all four quadrants together. I want to give this matter more thought ...


A note on "ordinary" flatfield images via lamps

There is a plan to include lamps on board the spacecraft so that we can acquire "ordinary" flatfield images: pictures taken of a uniform source of illumination. In our case, the idea is to shine light from the lamps near the focal plane onto the back of the shutter, so that it bounces back onto the detectors.

We can certainly use such lamp flats to investigate the pixel-to-pixel sensitivity variations within a single chip, as well as to detect defects and gross variations in overall quantum efficiency between chips. However, if we were to rely solely on these lamp flats to correct our measurements, we would incur systematic errors on large scales across the focal plane, because starlight and lamplight will not strike the focal plane in the same way. Starlight will come through the optics; there may be vignetting across the field, and there will certainly be effects due to the vanes of the secondary. In addition, rays of starlight will strike each chip within a particular, narrow range of angles from the normal. Light from the lamps will not pass through the optics; it will bounce off the back of the shutter and reach the chips over a much wider range of angles.

I suspect that the variations in sensitivity across the focal plane due to the optics may be corrected with a relatively simple, low-order polynomial. The lamp flats cannot tell us anything about these variations.
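
To make that concrete, here is a sketch of the sort of low-order correction I mean: fit a second-order polynomial in focal-plane position to star-based magnitude residuals, then evaluate it anywhere on the focal plane. The positions and residuals below are invented for illustration.

    # Fit a low-order 2-D polynomial to star-based residuals across the focal plane.
    import numpy as np

    # Star positions (normalized to [-1, 1]) and magnitude residuals (invented).
    x     = np.array([-0.9, -0.5,  0.0,  0.4,  0.8, -0.3,  0.6, -0.7,  0.2,  0.9])
    y     = np.array([ 0.1, -0.8,  0.3,  0.9, -0.4,  0.5, -0.6, -0.2,  0.7,  0.0])
    resid = np.array([ 0.012, -0.008, 0.001, 0.015, -0.011,
                       0.004, -0.003, -0.009, 0.008, 0.013])

    # Second-order polynomial terms: 1, x, y, xy, x^2, y^2.
    A = np.column_stack([np.ones_like(x), x, y, x*y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(A, resid, rcond=None)

    def large_scale_correction(xp, yp):
        """Correction (mag) to apply at normalized focal-plane position (xp, yp)."""
        return np.array([1.0, xp, yp, xp*yp, xp**2, yp**2]) @ coeffs

    print("correction at field center: %+.4f mag" % large_scale_correction(0.0, 0.0))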


Conclusions

The two sequences of exposures described above, with short and intermediate exposure times, require a total of about 72,000 seconds; that's less than one day.

It may be necessary to include another set of calibration exposures which include 90-degree rolls, made whenever the spacecraft is going to execute a roll anyway. It will also be very useful to compare measurements made in the ordinary "lawn-mower" survey mode just before a roll with those made just after a roll, because the perpendicular motions of stars through the rows of detectors will tie together the photometric solutions of adjacent columns.


For more information

The idea of using stars as constant sources to determine the errors in photometry across a focal plane is not new; a number of authors have suggested or demonstrated it.

The basic idea is the same in every case: find some sources of light and take repeated images while shifting the telescope by small amounts. The change in the intensity of the sources as they move through the field can be inverted to determine spatial variations in the detector's sensitivity.

Ralph asks specifically about any differences between the approach I have described above and that used by van der Marel to make "L-flats" for HST. Let's see .... I believe that HST L-flats are measured for one chip at a time. Each chip covers a very small region of the sky. For SNAP, we must account for variations not only within each chip (or filter), but between chips and across the entire focal plane. That makes our job much more difficult. Van der Marel's paper describes two ways of describing variations in the sensitivity: polynomials and "chessboards". We might use either one; chessboards might be better suited to variations within an individual chip or filter. We have additional terms -- color-dependent effects -- which I suspect the HST users ignore. There are additional differences in computational details which I don't think are important for us to consider.
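
For what it's worth, here is my reading of the "chessboard" idea in code (the chip size is a placeholder, and this is only the general scheme, not van der Marel's exact implementation): divide one chip into N x N patches, give each patch its own constant offset, and let each stellar measurement pick out one patch in the least-squares fit.

    # "Chessboard" parameterization: N x N patches per chip, one offset each.
    import numpy as np

    N = 4                     # patches per side
    chip_size = 3500.0        # chip size in pixels (placeholder value)

    def patch_column(x_pix, y_pix):
        """Column of the design matrix selected by a star at (x_pix, y_pix)."""
        col = min(int(N * x_pix / chip_size), N - 1)
        row = min(int(N * y_pix / chip_size), N - 1)
        return row * N + col

    # A star at pixel (1200, 3100) contributes to exactly one patch.
    design_row = np.zeros(N * N)
    design_row[patch_column(1200.0, 3100.0)] = 1.0
    print(design_row.reshape(N, N))

A polynomial, by contrast, would replace those N x N columns with a handful of smooth terms in x and y, as in the sketch in the lamp-flat section above.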