On August 28, 1995, Frank Sheldon wrote in sci.astro:

> I am putting together a presentation for my astronomy club on
> measuring distances in space. In regard to distance determined by
> heliocentric parallax, my various sources describe the accurate
> effective range of this technique as from 100 light years to 250
> light years. That's a pretty broad spread! Does anyone know the
> maximum range of distance using HP? What is the accuracy at this
> maximum range?

This is a complicated question, since there are a number of different
techniques for deriving parallaxes; each has different advantages,
weaknesses, and precisions. For example, one can

  - use a long-focus refractor to take a series of photographic
    plates over many decades (e.g. Sproul Observatory)

  - use a long-focus refractor with a special detector incorporating
    a diffraction grating (e.g. MAP, at U of Pittsburgh)

  - place a satellite with a special-purpose astrometric detector
    into orbit for a few years (e.g. Hipparcos)

Let's look at the general case. Assume that you have an instrument
that can measure the parallax of a star to a precision of X
arcseconds (let's ignore proper motions and other complications).
Among the different methods, X might range from 0.05 to 0.001
arcseconds. Now, the distance to a star, D (in parsecs), is simply
related to its parallax, P (in arcseconds), via

        D = 1/P

Imagine that you measure a parallax P for some particular star, and
you estimate that the (1-sigma) uncertainty in your measurement is X
(meaning there is a 68 percent chance that the true parallax lies
within the range P-X to P+X). You might expect that the (1-sigma)
lower and upper limits on the distance to the star would then be

        D(min) = 1/(P+X)
        D(max) = 1/(P-X)

Note that these limits are asymmetric: if P = 0.05 arcsec and
X = 0.02 arcsec, then D = 20 parsecs, but the 1-sigma range runs
from 1/0.07 = 14 parsecs out to 1/0.03 = 33 parsecs.

However, there is a deeper problem with this argument. An error in
your measuring process can act in either of two ways:

  1. it can cause a star which is _truly_ nearby (at a true distance
     D1 < D) to appear at the greater apparent distance D

  2. it can cause a star which is _truly_ far away (at a true
     distance D2 > D) to appear at the smaller apparent distance D

Now, if there were equal numbers of stars at distances D1 and D2,
then equal numbers would get scattered into the apparent distance D.
BUT the volume of space inside which a star might be lurking is
larger in the second case: for equal offsets in distance, the volume
of a shell from radius D to D2 is larger than that of a shell from
radius D1 to D. So it is more likely that a _truly_ distant star
suffers an inaccurate measurement, and appears to be closer than it
truly is, than that a _truly_ nearby star should appear to be
farther than it truly is. This introduces a "bias" into measurements
of parallax, causing one to UNDER-estimate the distances to stars
most of the time.

If you are working with a large sample of stars, you can try to
account for this bias when working out their group properties.
However, if you want to know the distance to _one, single star_,
then you can't remove the bias. This is sometimes called the
"Lutz-Kelker bias", after the astronomers who first studied it.

As an illustration, let me introduce a specific case. Suppose that
you have a device that measures parallaxes with a precision of
X = 0.02 arcseconds. "Hey," you might say, "with this device, I can
measure the distances to stars as far away as 1/X = 50 parsecs
(= 163 light years)."

I have written a program to simulate the results of using such a
device; it assumes that stars are uniformly distributed in the solar
neighborhood (which is probably not far from true out to several
hundred parsecs). The simulation considers measurements of about 70
stars in each of a series of shells, moving outwards from radius 5
parsecs to 200 parsecs, and compares the measured parallaxes to the
true ones.
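Here is a rough sketch of the idea in Python. This is a minimal
illustration, NOT my actual program: it places all the stars in a
given shell at exactly the same true distance, and the random seed
is an arbitrary choice, so its percentages will wobble a few points
relative to the table below.

    import numpy as np

    rng = np.random.default_rng(0)

    X = 0.02           # 1-sigma parallax precision, in arcseconds
    N_PER_SHELL = 70   # stars measured in each shell

    for d_true in (5, 10, 15, 20, 50, 100):   # true distances, parsecs
        p_true = 1.0 / d_true                 # true parallax, arcsec
        p_meas = p_true + rng.normal(0.0, X, N_PER_SHELL)

        # A negative measured parallax yields no distance at all,
        # so count such stars as failures.
        good = p_meas > 0
        d_meas = 1.0 / p_meas[good]

        # Fraction of stars whose true distance lies within 20% of
        # the distance inferred from the measured parallax.
        n_ok = np.sum(np.abs(d_true - d_meas) < 0.2 * d_meas)
        print(f"D = {d_true:3d} pc: {100 * n_ok / N_PER_SHELL:3.0f}% "
              "have error < 20%")

Even this crude version shows the basic effect: the fraction of
trustworthy distances collapses long before the naive 1/X = 50
parsec limit.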
Here's a table showing the fraction of measured stars which have a
true distance within 20% of the measured distance:

    D (parsecs)    fraction with distance error < 20%
    -----------    ----------------------------------
          5                      94 %
         10                      69 %
         15                      40 %
         20                      32 %
         50                      16 %
        100                      12 %

So, if you wanted to know the distance to a single, specific star,
and be confident (at the 1-sigma = 68% level) that your distance was
within 20% of the true distance, you could NOT go out to 1/X = 50
parsecs. Instead, you'd only be able to go out to about 10 parsecs.
In fact, I believe the Lutz-Kelker criterion is that one shouldn't
pay much attention to distances beyond

        D(limit) = 1/(6*X)

In my example, X = 0.02 arcsec, so 6*X = 0.12 arcsec and
D(limit) ~ 8 pc; this agrees roughly with my simulation.

Let's get down to the nitty-gritty: the VERY best ground-based
parallaxes have uncertainties of about 0.001 arcsec, and the
Hipparcos satellite will have a similar precision. Then

    naive expectation:   can go out to 1/0.001 = 1000 pc
                         (= 3260 light years)

    on second thought:   only out to 1/(6*0.001) = 1/0.006 ~ 170 pc
                         (= 550 light years)

Most parallaxes measured _so far_ have larger uncertainties than
quoted here, and so reach reliably only to smaller distances.

-----
Michael Richmond              "This is the heart that broke my finger."
richmond@astro.princeton.edu
http://astro.princeton.edu/~richmond/