April 14, 2017: The Dragonfly Telephoto Array, Galactic Formation, and Dark Matter

Having moved to Denver last summer, I could not attend NEAF this past weekend as I had the previous two years.  However, that didn't stop me from "attending" the lectures.  Unlike previous years, when only select lectures were posted to YouTube a few months afterward, NEAF live-streamed all of the lectures on both Saturday and Sunday.  I watched eagerly.

Obviously, you can watch the lectures yourself at the links I just provided; but I want to share a truly extraordinary thing I learned about from one lecture in particular:  the existence, function, and discoveries of the Dragonfly Telephoto Array.  If you're a member of astronomy groups on Facebook like I am, or if you just frequent Cloudy Nights or other astronomy boards, you may have seen a picture like this:

[Image: the Dragonfly Telephoto Array in its earlier ten-lens configuration]

And your reaction to that photo, like mine, was probably, “WTF?  Why would anyone build a telescope this way?” This is the Dragonfly Telephoto Array. Or more precisely, this was the Dragonfly as it existed a couple of years ago, when it had “only” 10 lenses; it now has 48 lenses.

And lenses – as opposed to telescopes – is actually the correct term.  The Dragonfly is made up of commercial, off-the-shelf 400mm f/2.8 Canon telephoto lenses designed for use with digital cameras.  Each lens has an aperture of 143mm.  Doing the math, the array of 48 (assembled in two groups of 24, as shown at the top of the page) currently has just a smidge less light-gathering area than the largest refractor in the world, the 40-inch Yerkes.
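If you want to check that arithmetic, here's my own quick back-of-the-envelope sketch in Python, plugging in the figures quoted above (nothing here comes from the Dragonfly team's own documentation):

```python
import math

# Back-of-the-envelope check of the collecting-area comparison above.
lens_aperture_mm = 143.0              # 400 mm focal length / f2.8
n_lenses = 48

array_area_mm2 = n_lenses * math.pi * (lens_aperture_mm / 2) ** 2
yerkes_area_mm2 = math.pi * (40 * 25.4 / 2) ** 2     # 40-inch = 1016 mm

# A single objective with the same total area would be sqrt(48) ~ 6.9x
# the diameter of one lens: a hair under one meter.
effective_aperture_mm = lens_aperture_mm * math.sqrt(n_lenses)

print(f"Array:  {array_area_mm2 / 1e6:.2f} m^2")      # ~0.77 m^2
print(f"Yerkes: {yerkes_area_mm2 / 1e6:.2f} m^2")     # ~0.81 m^2
print(f"Equivalent single aperture: {effective_aperture_mm:.0f} mm")  # ~991 mm
```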

As you would expect, all 48 lenses point in the same direction; SBIG astro-cameras (CCDs) are attached to the back of each so that they all image the same area; and all of the images collected are then stacked together with software designed for that purpose. The two 24-lens arrays sit at a very nice high-altitude observing site in New Mexico, and the whole system is operated remotely/robotically.

Each lens costs $10,000, and a little bit of internet research shows that the cameras run about $2,000 each (although I would imagine the team gets a volume discount).  They also use a special adapter for focusing each lens, at about $1,000 apiece.  All told, with mount and materials, they were able to build what is effectively a world-class one-meter refractor for well under a million dollars.  The 40-inch Yerkes itself cost $350,000 in 1892 money, which is something like ten million or more today.
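Tallying up those rough per-unit prices (and ignoring the mount, computers, enclosure, and site costs, which I don't have numbers for), the optics-plus-cameras bill looks something like this:

```python
# Rough tally of the per-unit prices quoted above; no volume discounts,
# and the mount, computers, and observatory site are not included.
n_lenses = 48
lens_cost, camera_cost, adapter_cost = 10_000, 2_000, 1_000

total = n_lenses * (lens_cost + camera_cost + adapter_cost)
print(f"${total:,}")   # $624,000 -- well under a million dollars
```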

You may well ask, “What’s the point? Why are they using all these telephoto lenses for astronomy?”  According to their paper on the capabilities of the scope, Canon has developed new “nano-fabricated sub-wavelength corrugations on their antireflecting coatings”, used on the glass surfaces.

To put that into more understandable English, the good scientists at Canon have invented a special proprietary ( = secret) new coating made up of nanoscale structures, each significantly smaller than the wavelength of visible light.  In other words, as far as the light passing through these coatings is concerned, the lens surface is perfectly smooth – an order of magnitude smoother, optically, than anything else made to date.  The result is far less light scattered within the optical path, which removes a source of noise that limits how faint you can go.

The limiting magnitude for earthbound observing tops out at about 28th magnitude.  This is true no matter how large your telescope is and no matter how long your exposure time is.  The reason is noise inherent in the system – noise that comes in the form of scattered light.  Or, in other words, and incredibly apt for the point of this blog, this noise is light pollution inside the scope.  Even extremely well-made optical surfaces have some amount of roughness left in them, and this roughness scatters a small amount of light.  That small amount of light acts as a background noise floor below which you simply cannot go, and it prevents exceedingly dim objects from being detected.

Interestingly, this 28th magnitude limit has been true for decades, going way back to the days when the Palomar 200-inch (5-meter) reflector was the largest in the world, and to the days of film, when extremely long film exposures were taken.  It is still true today in our world of 10-meter telescopes and digital cameras – even with digital techniques, that noise can only be reduced so much, generally down to 28th magnitude.  Only one particular highly optimized telescope, the Burrell Schmidt, can get down to 29th magnitude.

However, part of that noise has to do with the fact that all of the world's premium telescopes are reflectors.  Secondary (and even tertiary) mirrors create an obstruction in the light path, and light diffracts around the edge of that obstruction – yet another source of scattered light.  Any dust in the system, i.e., on the mirrors, introduces even more scattering, because dust on a mirror reflects light right back into the optical path.

Refractors obviously do not suffer from any of these central-obstruction/diffraction problems.  Further, dust on a refractor's objective tends to scatter light backwards, out of the optical path, rather than into the image.

Because these lenses scatter so little light, they can reliably image down to 32nd magnitude.  That's roughly forty times fainter than current telescopes can go.  You can read about the Dragonfly optical system in their scientific paper.
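The "forty times fainter" figure falls straight out of the logarithmic magnitude scale; here's the one-line check:

```python
# Magnitudes are logarithmic: a difference of 5 magnitudes is a factor
# of 100 in brightness, so each magnitude is 100**(1/5) ~ 2.512x.
delta_mag = 32 - 28
flux_ratio = 100 ** (delta_mag / 5)    # same as 10**(delta_mag / 2.5)
print(f"{flux_ratio:.1f}x fainter")    # ~39.8x
```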

Notably, even one – just one – individual Canon telephoto lens can reach 32nd magnitude.  The deeper limit comes from the extreme lack of noise (light scattering), not from the aperture.  Obviously, a 5.6-inch telescope – even an exceedingly fast f/2.8 one – will still take a verrrrry long time to capture enough photons to get an image down to 32nd magnitude.  We're talking exposure times measured in weeks to collect enough light to get down that low with one lens.

The solution was simply to start stacking lenses and integrating the data they collect, which is exactly what the Dragonfly team did.  The team, led by Roberto Abraham of the University of Toronto and Pieter van Dokkum of Yale, started out with 3 lenses, then went to 8, then 10, then 24, and now 48.  An effective aperture of one meter has cut exposure times down to several hours instead of weeks.  And because the focal length of the system is a very short 400mm, the Dragonfly images a very wide field, roughly 2 degrees by 3 degrees, in a single pointing.
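For the field-of-view and exposure-time claims, here's a rough sketch.  I'm assuming a KAF-8300-class sensor of about 18.0 mm x 13.5 mm behind each lens (my assumption, based on the SBIG cameras mentioned earlier), which rounds to the "2 by 3 degrees" figure; the weeks-to-hours numbers are purely illustrative.

```python
import math

# Field of view of a single lens/camera unit. The sensor size is my
# assumption (roughly a KAF-8300-class chip, ~18.0 mm x 13.5 mm).
focal_length_mm = 400.0
sensor_w_mm, sensor_h_mm = 18.0, 13.5

fov_w = math.degrees(2 * math.atan(sensor_w_mm / (2 * focal_length_mm)))
fov_h = math.degrees(2 * math.atan(sensor_h_mm / (2 * focal_length_mm)))
print(f"Single-unit field: {fov_w:.1f} x {fov_h:.1f} degrees")   # ~2.6 x 1.9

# Time to reach a fixed depth scales roughly as 1 / (number of lenses
# exposing at once). The "4 weeks" below is an illustrative figure only.
single_lens_hours = 4 * 7 * 24
print(f"~{single_lens_hours / 48:.0f} hours with 48 lenses")     # ~14 hours
```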

The 24/48 lenses are not precisely aligned with one another.  This is deliberate: it allows the team to digitally reject any exceedingly small amount of ghosting that crops up in one lens but not the others.  Only those parts of the field that overlap in all 24/48 lenses end up in the final integrated image.
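Here's a toy illustration of that rejection step (not the team's actual pipeline, which also has to calibrate and register every frame): a ghost that shows up in only a couple of the frames simply disappears when you take a median across the stack.

```python
import numpy as np

# Toy combine step: `frames` stands in for 48 already-registered images,
# one per lens, of the same patch of sky.
rng = np.random.default_rng(0)
frames = rng.normal(loc=100.0, scale=5.0, size=(48, 100, 100))

# Fake a reflection "ghost" that appears in only two of the lenses.
frames[3, 40:60, 40:60] += 500.0
frames[17, 40:60, 40:60] += 500.0

mean_stack = frames.mean(axis=0)          # ghost leaks through, diluted
median_stack = np.median(frames, axis=0)  # ghost is rejected outright

print(f"{mean_stack[50, 50]:.0f} vs {median_stack[50, 50]:.0f}")  # ~121 vs ~100
```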

And finally, we get to the real “why” of all of this.  Yeah, yeah, going down another 4 magnitudes sure is nice, but what’s down there to see?  Just ever fainter and fainter galaxies?  A super-duper Hubble Mega Ultra Deep Field? That would be nice, to be sure, no question.

There are many other things that become visible when you get down to 32nd magnitude. Most galaxies have much smaller, and therefore much fainter, satellite galaxies orbiting them; and certain galactic structures – such as dust rings, structures that form where the galactic disk meets the halo – can only be seen at this level.  These substructures and satellite galaxies are important tests of current models of galactic formation.  Going down four magnitudes deeper should reveal other, non-galactic things as well, like dust rings around planets in our solar system and supernova light echoes.

Before Dragonfly came along, the prevailing theory of galactic formation and evolution was that galaxies are built from the bottom up.  Smaller galaxies tend to get sucked into, ripped apart, and absorbed by larger galaxies.  This destruction of the smaller galaxy leaves behind remnants – galactic gas tails and debris – in the outer galactic halo of the remaining larger galaxy.  This leftover stream of debris is, of course, quite dim indeed.

The original point of the Dragonfly project was to look for ultrafaint substructures in galaxies, including this galactic debris.  They looked at NGC 2841, a beautiful 10th-magnitude spiral galaxy in Ursa Major, and found something unexpected.  In normal optical photography, going down to 28th magnitude, a galaxy extends outward only so far.  When that galaxy is viewed at radio wavelengths, however, a halo of neutral gas surrounds it, extending out about twice as far.  That neutral atomic gas is different from molecular gas – molecular hydrogen, the stuff that collapses to form stars.  Neutral gas can't do that.

With the Dragonfly going down to 32nd magnitude, looking at NGC 2841 revealed that there were stars glowing in optical wavelengths way out in this halo of neutral gas.  Stars should not be able to form out there.  Yet there they were.  No current models for galaxy formation can account for this.

As the team had to get more and more serious about doing their research systematically, the project became less and less fun.  And after all, they had initially embarked on it to get back to basics, to get back to observing, and to have fun!  So they decided to take a break from "real" work and pointed the Dragonfly array at the Coma cluster of galaxies to see if they would find anything of interest there.

When they did, they found 47 previously unknown galaxies in that area.  These are extremely low surface brightness galaxies – galaxies that are both incredibly faint and very spread out on the sky.  In a word, diffuse.

One of these, dubbed Dragonfly 44, is roughly the size of the Milky Way (100,000 light years across), but with only 1% of the stars.  This makes no sense in relation to the current theory of galactic formation and evolution.  A galaxy this size, with almost no regular matter in it to create enough gravity to keep it together, would simply fly apart as it rotated.  The fact that it has not done so means that it is completely dominated by dark matter – over 99.99% of this galaxy is dark matter.

They know that the dark matter is there because they counted the number of globular clusters surrounding this galaxy and got a number similar to what they find for non-diffuse galaxies of similar size – about 100.  Because the globs are there, the mass needed to hold the globs in place must be present, too.

[Image: Dragonfly 44 in the context of its surroundings, at left, and close-up, at right.  Taken with the Gemini North telescope in Hawaii.]

And this throws a wrench into current explanations of galaxy formation.  There simply is no explanation for how such a large, diffuse galaxy as Dragonfly 44 could form.  As a result, theorists are having to go back, rewrite the astronomy textbooks (okay, just the graduate-level ones), and come up with new hypotheses to explain this.  Over 900 papers have already been written based on the Dragonfly data.

Two schools of thought have begun to emerge:  one, that these are failed giant galaxies – galaxies like our Milky Way or Andromeda that got their dark matter in place, but failed to attract enough normal matter to undergo extensive star formation.  The other is that these are inflated dwarfs – that the dark matter just isn’t there, and as a result, dwarf galaxies have spread apart as they have spun, or are being expanded by tidal forces from other nearby galaxies.

One of the problems the Dragonfly team has encountered is something called galactic cirrus.  As you might expect from the name, these are wispy clouds of dust drifting within our own galaxy, faintly reflecting the light of the surrounding stars.  Normally, when your images only reach brighter limits (short of 28th magnitude), this isn't a problem, because the cirrus is too dim to show up.  But at 32nd magnitude, they find it everywhere.  The team has to use existing infrared data from other sky surveys to find holes in the cirrus through which they can point the Dragonfly.

Another thing they'd like to do is get H-alpha filters for the array.  This would let them see the glowing hydrogen that traces the cosmic web of gas filaments in the universe, and so map it even more accurately.  They estimate that each 143mm H-alpha filter, sized to go over the front of a telephoto lens, would cost $10,000, though.  (Oof!)  But then again, a more accurate map of the cosmic web is just half a million dollars away!

The Dragonfly system is a genuine revolution in our ability to study the structure of the universe.  Where LIGO cost over a billion dollars to detect gravitational waves, the Dragonfly is discovering more with far less – only about a million dollars.  For more in-depth information summarizing what Dragonfly has been up to so far, you can read their surprisingly layperson-readable paper here.

Meanwhile, the Dragonfly team is thinking about replacing the current, relatively low-resolution CCD cameras with much higher-resolution (and more expensive) ones, to increase the level of detail they capture by a factor of 2-3 for other purposes.  They also want to scale up the array by a factor of 10, to FOUR HUNDRED EIGHTY lenses, which would be the equivalent of a 3.2-meter refractor.  Holy eyeballs, Batman!
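For what it's worth, the same square-root scaling from earlier puts a 480-lens array in the same ballpark as that 3.2-meter figure:

```python
import math
# 480 lenses of 143 mm apiece collect as much light as a single objective
# of 143 * sqrt(480) mm -- a bit over three meters across.
print(f"{143 * math.sqrt(480):.0f} mm")   # ~3133 mm
```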
