Remembrance of Gigabytes Past

Posted on February 28th, 2008 by blue collar scientist

Martin Rundkvist of Aardvarchaeology writes about the battle between scientism and antiscientism in what some of my colleagues have described, a little less than generously I think, as a “soft science” - archaeology. Martin, whom I met at TAM 5.5, makes the case that interpreting data is necessary even in the “hard” physical sciences like astronomy. One part in particular brought back memories (emphasis mine).

Could it be that the anti-scientism archaeologists believed that their work was fundamentally different from natural science because it involved interpretation? Well, in fact, yes. They tended to hint that natural scientists just read their conclusions off of their source material using fancy instruments, and that this would never work with cultural source material. The truth, as anybody who’s ever done real scientific research knows, is that all data must be interpreted in order to be understood and generate knowledge. Hundreds of gigabytes of observational data on quasars from a radio telescope is not astronomical knowledge. It is the necessary raw material of such knowledge. And the first interpretation of such a dataset that is published will not be accepted as knowledge until it has been thoroughly discussed and perhaps repeatedly (though ultimately unsuccessfully) challenged.

Let me tell you, Martin may be more right than he knows.

Replace “hundreds of gigabytes” with “several gigabytes,” “radio telescope” with “optical telescope,” and “quasars” with “asteroids” and you’ll have the position I was in back in 1999. I was a nobody, working at a small, privately owned observatory in southern Arizona. The observatory was using a commercial 0.4 meter Schmidt-Cassegrain, popularly referred to as the “POS,” the “telescope that caught on fire,” and “that damn telescope.” It was all we had while a 0.8 meter was constructed and installed, and it suffered from 30% downtime and a huge suite of mechanical and control system idiosyncrasies.

My job was writing software to automate this telescope and its successor, the 0.8 meter. Very early in the game I adopted Bob Denny’s ACP and PinPoint software, because these tools solved a lot of very difficult - or at least tedious to code - problems for me. They took care of things like sending commands to the telescope firmware over COM ports and analyzing images. However, an efficient high-level control system was not, at the time, included, so that is largely what I set out to write.

The telescope was a major problem. It wouldn’t point accurately. Most telescopes with pointing problems like this run a layer of software that models the pointing errors. Up to a couple dozen terms go into the model, and when you need the telescope to point somewhere, you tell the model where; it invisibly calculates the corrections to be made and passes them on to the telescope. In this way, your somewhat balky, misbehaving telescope is supposed to work well. But this telescope wouldn’t even point inaccurately in a consistent way: of the couple dozen coefficients that went into the error model, several were effectively stochastic. We needed a 60 percent improvement in pointing, but we could only get about ten percent or so. The solution was to take an image after every telescope movement, compare the stars in the picture with the positions in a star catalog, figure out where we were “really” pointed, and make an empirical correction - a small, second slew - afterwards. It added several seconds to every observation.
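In outline, that correction loop is simple. Here’s a minimal sketch of the idea in Python - the telescope and camera objects and the plate_solve() helper are hypothetical stand-ins for whatever mount-control and image-solving layers you have, not the actual code I wrote back then:

    import math

    # Sketch of the post-slew correction loop described above. The telescope,
    # camera, and plate_solve() are hypothetical stand-ins.

    def corrected_slew(telescope, camera, target_ra, target_dec, tol_arcsec=10.0):
        """Slew, plate-solve a short exposure, then apply a small offset slew."""
        telescope.slew_to(target_ra, target_dec)
        image = camera.expose(seconds=10)
        actual_ra, actual_dec = plate_solve(image)  # match image stars to a catalog

        # Pointing errors in arcseconds; the RA error is scaled by cos(dec)
        # to get a true angle on the sky.
        d_ra = (target_ra - actual_ra) * 3600.0 * math.cos(math.radians(target_dec))
        d_dec = (target_dec - actual_dec) * 3600.0

        if abs(d_ra) > tol_arcsec or abs(d_dec) > tol_arcsec:
            telescope.offset_slew(d_ra, d_dec)  # the small, second slew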

The optical mounting was substandard, and allowed the telescope mirror to flop around by several millimeters depending on where in the sky the telescope was pointed. This led to more pointing errors, and to football-shaped stars in our images. The tube was made of steel, and as the air temperature dropped over the course of the night, it shrank - which threw the telescope out of focus.

I jury-rigged a huge bolt to the optical chassis to keep the mirror from flopping around, and installed an ingenious focuser whose firmware - running on a BASIC Stamp (remember those?) - had a temperature sensor and a focus-loss model built in. I wrote all kinds of little routines into the control system to check whether one of this telescope’s more than two dozen failure modes had occurred, and to recover from it if so. Even so, the telescope more often than not failed in some novel way halfway through the night, bringing observing to a halt.
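The focus-loss model amounts to a simple temperature compensation. A sketch of the general idea, with made-up coefficients - in practice you fit them from focus runs at different temperatures:

    # Sketch of a temperature-compensated focus model. The numbers here are
    # illustrative, not the actual firmware coefficients.

    REFERENCE_TEMP_C = 20.0      # temperature at which the reference focus was measured
    FOCUS_AT_REFERENCE = 5000    # focuser position (motor steps) at that temperature
    STEPS_PER_DEGREE = -35.0     # the steel tube shrinks as it cools, shifting focus

    def compensated_focus_position(temp_c):
        """Focuser position that keeps the telescope in focus at temp_c."""
        return round(FOCUS_AT_REFERENCE + STEPS_PER_DEGREE * (temp_c - REFERENCE_TEMP_C))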

The whole thing frankly sucked. The only good thing about it was that we were learning what could go wrong with the 0.8 meter. We benefited a lot from that knowledge.

In the meantime, we were hearing a lot - mostly from the Minor Planet Circulars, where asteroid discoveries are announced - about an outfit called LINEAR. They had a budget in the millions, and were using expensive and classified Air Force technology. They had at first one, then two, telescopes designed from the ground up for automated observing of Earth satellites, which they had adapted to hunting for asteroids. They were using a huge, wide-angle CCD camera that we could not have bought even if we could have afforded it (it was classified technology). They had a bunch of experienced software developers, in contrast to the single inexperienced one at my observatory (me). And they were sweeping up dozens of new asteroids a night. We didn’t really want to compete with these guys; we just wanted an efficient, automated observing capability.

After months of development, we ran the telescope for a full night in completely unattended mode. Then we did it again. And again. It gradually dawned on us that we had a robot on our hands.

We decided to pull a little stunt. As a way of pointing out to our colleagues that we were automated, we decided to spend a night observing as many known asteroids as we possibly could. We set up a target list of about 400 asteroids, and at the appointed time we unleashed the telescope. We watched it observe the first half-dozen or so, then went and watched a movie, because the only thing more boring than operating a telescope in person to observe an asteroid is watching a computer do the same thing. Nowadays, people would say “big deal.” At the time, nothing like this had been done on a budget under millions - and ours was in the thousands.

To make useful observations of asteroids, you need to take more than one picture. In our case, we took three images, each separated by about 20 minutes. That’s about 1,200 images by the end of the night.
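The arithmetic is 400 targets times three visits. To get the ~20 minute revisit spacing without the telescope sitting idle, you interleave: run through a block of targets, then run through it again. A toy sketch of that scheduling idea, with a made-up per-observation time budget:

    # Toy sketch of interleaved scheduling for a three-visit cadence.
    # The per-observation cost is made up; the real scheduler was messier.

    EXPOSURE_AND_SLEW_S = 60                    # rough seconds per observation
    BLOCK = (20 * 60) // EXPOSURE_AND_SLEW_S    # ~20 targets fill ~20 minutes

    targets = ["asteroid_%03d" % i for i in range(400)]  # stand-in names
    schedule = []
    for start in range(0, len(targets), BLOCK):
        block = targets[start:start + BLOCK]
        schedule.extend(block * 3)   # three passes through the block, ~20 min apart

    print(len(schedule))   # -> 1200 observations for the night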

The following day we had almost two gigabytes of images sitting on our hard drive. Now, this was the late ’90s, and I think the biggest hard drive we had was a two gig drive. I vaguely remember archiving all this data the next day to four or five CD-ROMs.

But it’s only data. Once you have data, you have to reduce it, and that’s what Martin is talking about.

Pictures of asteroids taken in visible light are not very useful for anything except determining the asteroid’s position. The pictures are taken with a fancy digital camera - with cooling modules and extreme sensitivity and so on, but basically just a plain old digital camera - and therefore the picture is made up of pixels. Each pixel covers a certain amount of sky. In our case, each pixel was about 2.8 arcseconds on each side.[1]
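Where does a number like 2.8 arcseconds come from? It’s the standard small-angle plate-scale relation: pixel size divided by focal length, converted to arcseconds. The pixel size and focal length below are illustrative values chosen to land near our scale, not our actual optics:

    # Plate scale: arcsec/pixel = 206265 * pixel_size / focal_length.
    # These particular numbers are illustrative, not the observatory's.

    ARCSEC_PER_RADIAN = 206265.0

    def plate_scale(pixel_microns, focal_length_mm):
        """Sky angle covered by one pixel, in arcseconds."""
        return ARCSEC_PER_RADIAN * (pixel_microns * 1e-6) / (focal_length_mm * 1e-3)

    print(round(plate_scale(24.0, 1760.0), 1))   # -> 2.8 arcsec per pixel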

So you want to measure the position of the asteroid, but your camera’s blocky pixels make doing this precisely difficult. This is where centroiding comes in. The idea is that an image of a star - or an asteroid, which looks a lot like a star, only it moves - is actually a smeared-out, blurry disk, with a brighter center and gradually fading edges. A star image like this will typically have a diameter of four or five pixels. Now the brightest part of the star image is where the star “is,” but just by looking you can’t narrow that down very well because of all this smeared-out light and the ‘bigness’ of the pixels.

It turns out that you can model the star image and calculate from the model where the brightest part really is, to a resolution much finer than that of your sensor. Most astronomers use a model called the point spread function, but there are other choices as well. By using such a model, we could take our images with their 2.8 arcsecond pixels and measure asteroid positions to about 0.3 arcseconds. Pretty slick.
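To make that concrete, here is a toy version of the idea. Real PSF fitting models the blur profile and fits it to the pixels; the flux-weighted centroid below is the simplest cousin of that approach, but it already locates a star to a fraction of a pixel:

    import numpy as np

    # Toy centroiding: a flux-weighted centroid over a small cutout around
    # a star. Real PSF fitting is more elaborate, but the payoff is the
    # same - a sub-pixel position.

    def centroid(stamp, sky_level):
        """Flux-weighted centroid of a cutout; returns (x, y) in pixels.

        Assumes the cutout actually contains a star brighter than the sky.
        """
        flux = np.clip(stamp.astype(float) - sky_level, 0.0, None)
        ys, xs = np.indices(stamp.shape)
        total = flux.sum()
        return (xs * flux).sum() / total, (ys * flux).sum() / total

At a 2.8 arcsecond pixel scale, locating the peak to about a tenth of a pixel gets you to roughly 0.3 arcseconds - about what we achieved.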

Turns out, that’s the first layer of interpretation the investigator imposes on the data set. Which method you use to centroid the star and asteroid images can influence both the accuracy and the precision of the positional measurements.

The next step is to take the list of positions from our 1,200 images and send them off to Gareth Williams of the Smithsonian Astrophysical Observatory at Harvard. Gareth would take our positions and use them to calculate orbits. How? Would he compute a Väisälä orbit? Well, if the asteroid was a new discovery, he probably would - but otherwise not. If it were a known asteroid, he would add the observations to an existing list of prior observations and compute a much more precise orbit that makes fewer assumptions. Would he just generate some Keplerian elements as a result? Yeah, probably - unless the asteroid was “interesting,” for example if it were going to pass close to Earth or some other planet in a way most asteroids never do. If that happened, would he include Newtonian perturbations? Certainly. Would he deal in relativistic effects? Probably not, but maybe. Would N-body problems come up? Maybe.
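What Keplerian elements buy you, in miniature: given a semi-major axis and eccentricity (plus an epoch), two-body physics tells you where along the ellipse the asteroid sits at any time. This toy sketch solves Kepler’s equation and works in the orbital plane only - it ignores the three orientation angles, all perturbations, and everything “interesting”:

    import math

    # Gauss's gravitational constant, in AU, days, and solar masses.
    GAUSSIAN_K = 0.01720209895

    def position_in_plane(a_au, e, mean_anomaly_epoch_deg, days_since_epoch):
        """In-plane (x, y) of a heliocentric two-body orbit, in AU."""
        n = GAUSSIAN_K / math.sqrt(a_au ** 3)              # mean motion, rad/day
        M = math.radians(mean_anomaly_epoch_deg) + n * days_since_epoch

        # Solve Kepler's equation M = E - e*sin(E) by Newton's method.
        E = M
        for _ in range(20):
            E -= (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))

        x = a_au * (math.cos(E) - e)
        y = a_au * math.sqrt(1.0 - e * e) * math.sin(E)
        return x, y

    # e.g. a main-belt-ish orbit, one year after epoch:
    print(position_in_plane(2.5, 0.1, 0.0, 365.25))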

How to deal with all this imposes another layer of interpretation.

The end result of all this work is an idea of where the asteroid is and where it is going to be. An orbit can be visualized in 3-D as though it were a garden hose. The asteroid is a grain of sand somewhere in the hose - you’re not sure exactly where, but it’s definitely within the hose, at a certain position along the hose’s length. If the asteroid is not well observed, it might be best to visualize the hose as a big, fat fire hose - the uncertainty is bigger. But if it is a well-known asteroid, it might be a pebble in aquarium tubing. This lack of exact knowledge about the asteroid’s orbit is known as ‘orbital uncertainty,’ and it arises from the measurement errors made back when you reduced the data from your images. Only a few asteroids’ orbits are known so well that the orbital uncertainty is always less than their diameters. But almost everything has an orbital uncertainty low enough that we know for certain it can’t possibly hit anything for the foreseeable future (which often extends a hundred years or more).
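You can see the hose fatten in a toy Monte Carlo: perturb the measured positions by a plausible 0.3 arcsecond error, refit the motion, and watch the predicted along-track position spread as you extrapolate. Everything here - the observation times, the motion rate, the linear “orbit” - is made up purely for illustration:

    import numpy as np

    # Toy illustration of the garden hose: measurement error becomes a
    # growing along-track uncertainty. All numbers are made up.

    rng = np.random.default_rng(0)
    obs_times = np.array([0.0, 1.0, 30.0])   # days: a discovery night plus a follow-up
    true_motion = 0.25                       # deg/day along the orbit
    noise = 0.3 / 3600.0                     # 0.3 arcsec measurement error, in degrees

    predictions = []
    for _ in range(1000):
        measured = true_motion * obs_times + rng.normal(0.0, noise, size=obs_times.size)
        rate, offset = np.polyfit(obs_times, measured, 1)  # fit position = rate*t + offset
        predictions.append(rate * 3650.0 + offset)         # extrapolate ten years out

    print(np.std(predictions) * 3600.0)   # along-track scatter in arcsec after 10 years

More observations over a longer arc shrink the fitted rate’s error, and the hose narrows - which is exactly why follow-up observations of new discoveries matter so much.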

The bottom line is that Martin is right - data from the physical sciences is very heavily interpreted indeed. Even in the ultimate automated observing system, in which the telescope automatically generates data and the software automatically reduces it, the methods of interpreting the data would be imposed by the programmer at design time. There’s no free lunch.

What is really cool about the physical sciences, though, is that despite all of this interpretation, you can have some reasonable level of certainty about your results. There is simply no other plausible explanation for the phenomenon we call “asteroids” than to believe that there are big chunks of rock (etc) orbiting the sun in very specific, well-measured paths, and that these bodies respond to well-defined physical laws.

Today, the only asteroid work we do is the occasional interesting near-Earth object and the occasional discovery of a new main-belt asteroid. But the same system observes variable stars, exoplanet transits, active galactic nuclei, and a bunch of other interesting things.

  1. That will seem grossly big to astronomers, but this telescope was optically terrible, and our seeing was also gross. We had a pretty good match between our PSF and our sampling.


