Institute of Astronomy

High Resolution Imaging Projects

High Resolution Imaging in the Visible with Faint Reference Stars on Large Ground-Based Telescopes.

Abstract: A research paper has now been published which describes how to combine Lucky Imaging with low-order AO using very faint reference stars, demonstrating a radically new approach to measuring low-order wavefront errors. After correcting for these errors, significant improvements can be delivered in image resolution in the visible on telescopes in the 2.5–8.2 m range on good astronomical sites. As a minimum the angular resolution may be improved by a factor of 2.5–3 under almost any conditions and, with further correction and image selection, even sharper images may be obtained routinely. Many of the assumptions about what may be achieved with faint reference stars have been re-examined to achieve this performance. The paper showed how a new design of wavefront curvature sensor combined with novel wavefront-fitting routines allows this performance to become routine. Simulations over a wide range of conditions match the performance already achieved in runs with earlier versions of the hardware described. In order to use faint reference sources in astronomy it is important to have an approach to wavefront detection and characterization that is flexible and continues to work stably even under conditions of very low light from the reference object. The key is to limit the number of Zernike terms that are corrected. Most of the power in atmospheric turbulence is in the lowest orders, so the greatest improvement comes from correcting those low orders; correcting too many orders simply adds noise and worsens performance. Combining the right approach to wavefront correction with Lucky Imaging (described below) allows a trade-off between the very sharpest images, achieved with a small percentage selection, and a slightly less sharp image profile but with much higher signal-to-noise (see figure below).

/sites/default/files/Sim_4m2_30e-jai_2019_0.jpg

Figure: Typical simulation results for a 4.2 m telescope looking at a high-density star field modelled on Baade's window with around 600 stars brighter than I~25 in each field of about 4.2 x 8.3 arcsec.

The pixel scale here is ~45 milliarcsec. It assumes 0.6 arcsec seeing and correction of 10 Zernike terms. The core resolution obtained is of the order of 45 mas, more than twice as sharp as that of the Hubble Space Telescope. These images are characteristic of the appearance of Lucky plus low-order adaptive optics images: the central near-diffraction-limited core is surrounded by a diffuse halo significantly narrower than the natural seeing. A copy of the published paper from the Journal of Astronomical Instrumentation is available at https://arxiv.org/abs/1911.05743.
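The reason correcting only the lowest Zernike orders captures most of the benefit is that the residual phase variance falls steeply as modes are removed. For Kolmogorov turbulence, Noll (1976) gives the residual variance after correcting the first J modes as roughly Delta_J ~ 0.2944 J^(-sqrt(3)/2) (D/r0)^(5/3) rad^2 (large-J approximation). The short sketch below is not taken from the published paper; the r0 value is an assumed illustration of that scaling.

import numpy as np

def noll_residual_variance(J, D_over_r0):
    """Approximate residual phase variance (rad^2) after correcting the first
    J Zernike modes, using Noll's (1976) large-J scaling for Kolmogorov
    turbulence: Delta_J ~ 0.2944 * J**(-sqrt(3)/2) * (D/r0)**(5/3)."""
    return 0.2944 * J ** (-np.sqrt(3) / 2) * D_over_r0 ** (5.0 / 3.0)

D_over_r0 = 4.2 / 0.15                                # 4.2 m aperture, r0 = 15 cm (assumed)
piston_removed = 1.0299 * D_over_r0 ** (5.0 / 3.0)    # Noll's uncorrected (piston-removed) variance
for J in (3, 10, 20, 50):
    resid = noll_residual_variance(J, D_over_r0)
    print(f"{J:2d} modes corrected: {resid:6.1f} rad^2 "
          f"(~{100 * resid / piston_removed:.1f}% of the uncorrected power)")

With 10 modes corrected only a few per cent of the turbulent power remains, which is why a low-order correction already sharpens the image so much.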

GravityCam: Project Summary.                                                           

Abstract: GravityCam will allow wide-field visible surveys at high angular resolution and high time resolution for the first time. It will detect large numbers of Earth-mass exoplanets and carry out an exceptionally sensitive survey for Kuiper belt and Oort cloud objects. It will also allow detection of bright pulses of light across wide fields of view.

GravityCam is an entirely new instrument designed to image large areas of the sky in the visible with angular resolution much better than is normally possible with ground-based instruments. It does this by taking images at high speed, typically 25 Hz. Each image is then shifted and added to give the output image. This technique, known as Lucky Imaging, improves the resolution by a factor of 2.5-3 over that normally obtained from the ground. Even higher resolution comes from post-processing.
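A minimal sketch of the shift-and-add step is given below. It is purely illustrative (the sharpness metric, the selection fraction and the use of NumPy are assumptions, not the GravityCam pipeline). A selection fraction of 1.0 corresponds to shifting and adding every frame, as described above; smaller fractions give the sharper "lucky" selections discussed later.

import numpy as np

def lucky_shift_and_add(frames, select_fraction=1.0):
    """Rank short-exposure frames by a simple sharpness metric (peak pixel
    value), keep the best fraction, re-centre each on its brightest speckle
    and sum them.  `frames` has shape (n_frames, ny, nx)."""
    sharpness = frames.max(axis=(1, 2))
    n_keep = max(1, int(select_fraction * len(frames)))
    best = np.argsort(sharpness)[::-1][:n_keep]

    ny, nx = frames.shape[1:]
    stacked = np.zeros((ny, nx))
    for i in best:
        peak_y, peak_x = np.unravel_index(np.argmax(frames[i]), (ny, nx))
        # Shift so the brightest speckle lands at the image centre.
        shifted = np.roll(frames[i], (ny // 2 - peak_y, nx // 2 - peak_x), axis=(0, 1))
        stacked += shifted
    return stacked / n_keep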

/sites/default/files/GravityCam_planets.gif

GravityCam will detect planets that are virtually undetectable by other methods. This figure shows the planets detected to date: for planets of Earth mass and below there are very few detections, even though we expect there to be vast numbers of them.

There are a number of scientific programs that GravityCam is particularly well-suited to. The detection of planets the mass of the Earth and below is extremely difficult by normal radial velocity or transit methods. GravityCam will allow up to 90 million stars in the bulge of the Galaxy to be tracked night after night looking for gravitational microlensing events.

/sites/default/files/microplanet.gif
/sites/default/files/microlunar.gif

Left: a gravitational microlensing profile showing a planet detection. Right: simulated profiles for an Earth-mass planet (blue line) and a Moon-mass object (red line).

Should the lensing star have a planet in orbit around it, the microlensing brightness profile is affected, allowing the mass, diameter and distance of the planet from the lensing star to be determined. We predict that we will detect several thousand new microlensing events over the six-month period when the bulge of the Galaxy is in the sky, and expect to detect very many low-mass planets in the region of 0.3–3 AU. The sensitivity allows planets down to the mass of the Moon to be detected.
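For context, the standard point-source, point-lens (Paczynski) magnification used to model such events is A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)), where u is the lens-source separation in units of the Einstein radius; a planet around the lens appears as a short-lived deviation from this smooth curve. The event parameters in the small example below are arbitrary, chosen only to illustrate the formula.

import numpy as np

def paczynski_magnification(t, t0, tE, u0):
    """Point-source point-lens magnification versus time t (days).
    t0: time of closest approach, tE: Einstein-radius crossing time (days),
    u0: impact parameter in units of the Einstein radius."""
    u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

t = np.linspace(-30, 30, 601)                      # days around the peak
A = paczynski_magnification(t, t0=0.0, tE=20.0, u0=0.1)
print(f"Peak magnification ~ {A.max():.1f}")       # u0 = 0.1 gives a peak of about 10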

Other programs use the capacity of GravityCam to build statistics on the fluctuations of each and every star in the field. Microlensing events are selected by abnormal brightening characteristics. They are then followed accurately for many weeks.

We will simultaneously survey for Kuiper belt and Oort cloud objects with great efficiency. We will see stars blink off and on again for very short periods of time, and the length of the occultation indicates the size of the occulting object. If the occultation dims a single frame by only 20%, the occultation must have lasted only about 20% of the frame exposure time, i.e. roughly 8 ms at a 25 Hz frame rate. These data are produced in parallel during the microlensing survey described above.
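A back-of-envelope version of that estimate, assuming a simple top-hat occultation shorter than one frame (the 25 Hz frame rate is taken from the description above):

frame_rate_hz = 25.0
frame_time_s = 1.0 / frame_rate_hz                 # 40 ms exposure per frame

def occultation_duration(depth_fraction, frame_time=frame_time_s):
    """For an occultation shorter than one frame, the fractional dimming of
    that frame is roughly (occultation duration / frame exposure time)."""
    return depth_fraction * frame_time

print(f"20% dip -> ~{1e3 * occultation_duration(0.20):.0f} ms occultation")   # ~8 ms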

The total number of star-hours of observations searching for such objects is around 1 million at present. Within six months we expect to have approximately 100 billion star-hours and so should be able to detect very large numbers of Kuiper belt and Oort cloud objects. Good statistical coverage of the incidence and size spectrum of such objects is key for assessing the risks of interstellar travel.

The high time resolution of GravityCam also allows parallel surveys of the sky for bright optical flashes, such as those from lasers. The brightness produced by phased laser arrays can be extremely high and GravityCam has the capacity to detect these. The high frame rate allows detection of pulses with high signal-to-noise even at very great distances.

The high angular resolution means that GravityCam has application in many other imaging areas, such as the detection of dark matter through the distortion of galaxy shapes at high redshift.

GravityCam would use the 3.6 m ESO New Technology Telescope on La Silla in Chile. This is a high-performance telescope well suited to mounting a wide-area detector array at the Nasmyth focus. The instrument will include an atmospheric dispersion corrector to maintain angular resolution when observing well away from the zenith.

The instrument will use wide-area CMOS imaging detectors with very high quantum efficiency and low read-out noise. These are currently under development by Teledyne e2v, the manufacturer of most of the visible detectors for projects such as Hubble, Gaia, LSST and the Dark Energy Survey. The instrument will produce ~500 TB of raw data per night; real-time processing reduces that data rate dramatically.
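As a rough consistency check on that figure (the mosaic size, bit depth and night length below are illustrative assumptions, not the instrument specification), a ~250-megapixel mosaic read at 25 Hz with 16-bit samples over a 10-hour night gives a raw data volume of the same order as the ~500 TB quoted:

pixels = 250e6            # assumed mosaic size (illustrative only)
bytes_per_pixel = 2       # 16-bit samples (assumed)
frame_rate_hz = 25
night_hours = 10

raw_bytes = pixels * bytes_per_pixel * frame_rate_hz * night_hours * 3600
print(f"Raw data per night ~ {raw_bytes / 1e12:.0f} TB")    # ~450 TB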

The technical challenges of building GravityCam are not particularly severe. The detector manufacturer will probably build the main camera cryostat. Much of the effort will be focused on software development to make an efficient and reliable data pipeline. Our estimates are that the total project might cost in the region of $20 million including operations for two years after the end of commissioning. It should take about 2.5-3 years to substantially complete and begin to take data.

In order to take GravityCam forward, the next step is to undertake a Phase A study to develop a more detailed and properly costed plan for the instrument. That study should be completed in about 6-9 months and we estimate that it will cost about $200,000. The main costs here are travel to Chile and to ESO headquarters in Germany, outline design and costing of the optical components plus a relatively detailed plan for the software development. The software structure must allow easy reconfiguration for a variety of future programs. Both optical and software design would be done by well-qualified consultants.

GravityCam is a project between the Institute of Astronomy of the University of Cambridge, UK, the Open University STEM Faculty with Colin Snodgrass, the Centre for Electronic Imaging under Prof Andrew Holland, and the University of St Andrews under Dr Martin Dominik.

There is a slightly more detailed account of this project here, plus a copy of the full published paper here.

AOLI: Adaptive Optics Lucky Imager.

AOLI is an instrument for near-diffraction-limited imaging in the visible on ground-based telescopes. The technique has already been demonstrated on the Palomar 5 m, giving 35 mas resolution in I-band (>3 times sharper than Hubble). Our goal is to provide this performance with close to all-sky coverage, using faint reference stars (I~17-18) and reaching science targets as faint as Hubble. AOLI achieves these faint reference-star limits by using a novel low-order curvature wavefront sensor with photon-counting EMCCD detectors. We have just completed our second commissioning run on the WHT, which was very successful in that virtually all aspects of the instrument performed as intended. However, exceptionally poor observing conditions (seeing often worse than 3-4 arcsec, with high humidity and little sky time) greatly limited the amount of data taken. Nevertheless valuable experience was gained and the prospects are excellent for using AOLI for science measurements in a later semester.

About Lucky Imaging

Lucky Imaging is a remarkably effective technique for delivering near-diffraction-limited imaging on ground-based telescopes. The basic principle is that the atmospheric turbulence that normally limits the resolution of ground-based observations is a statistical process. If images are taken fast enough to freeze the motion caused by the turbulence, a significant number of frames, caught when the statistical fluctuations are minimal, are very sharp indeed. By combining these sharp images we can produce a much better image than is normally possible from the ground. We have routinely taken Hubble-resolution images (0.15 arcsec resolution) on Hubble-sized telescopes (~2.5 m). More recently we have used the same techniques behind a low-order adaptive optics system to give even higher resolution on telescopes that are too big to have a significant chance of conventional Lucky Imaging without adaptive optics assistance.

Lucky Imaging is not a new idea. The name "Lucky Imaging" came from Fried (1978), though the first calculations of the Lucky Imaging probabilities were carried out by Hufnagel in 1966 (see the reference pages for copies of the Hufnagel papers, which are otherwise difficult to find). These principles have been used quite extensively by the amateur astronomy community, who have been able to take very high quality images of bright objects such as Mars and the other planets. There is more information about amateur Lucky Imaging here.
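Fried's (1978) much-quoted estimate for the probability that a single short exposure is "lucky" (essentially diffraction limited) is P ~ 5.6 exp[-0.1557 (D/r0)^2], valid for D/r0 greater than about 3.5. A quick evaluation (the r0 values below are assumed, representative of a good site) shows why the technique works well on ~2.5 m telescopes but needs adaptive optics assistance on larger apertures:

import numpy as np

def lucky_probability(D, r0):
    """Fried (1978) estimate of the probability that a short exposure is
    'lucky' (near diffraction limited); valid for D/r0 >~ 3.5."""
    return 5.6 * np.exp(-0.1557 * (D / r0) ** 2)

for D in (2.5, 4.2):                    # telescope diameters in metres
    for r0 in (0.30, 0.45):             # Fried parameter in metres (assumed values)
        print(f"D = {D} m, r0 = {r0} m -> P ~ {100 * lucky_probability(D, r0):.2g}%")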

Recent Publications

A recent paper by Craig Mackay looks at ways in which the overall efficiency of Lucky Imaging might be improved. One of its perceived disadvantages is that it requires a relatively large percentage of the recorded images to be discarded. By changing the lucky image selection criteria to do some of the selection in the Fourier plane, much higher selection percentages may be used. The paper will be published soon in the Monthly Notices of the Royal Astronomical Society and is entitled "High-Efficiency Lucky Imaging". The abstract of the paper is as follows:

"Lucky Imaging is now an established observing procedure that delivers near diffraction-limited images in the visible on ground-based telescopes up to ~2.5 m in diameter.  Combined with low order adaptive optics it can deliver resolution several times better than that of the Hubble Space Telescope.  Many images are taken at high speed as atmospheric turbulent effects appear static on these short timescales.  The sharpest images are selected, shifted and added to give a much higher resolution than is normally possible in ground-based long exposure time observations.  The method is relatively inefficient as a significant fraction of the frames are discarded because of their relatively poor quality.  This paper shows that a new Lucky Imaging processing method involving selection in Fourier space can substantially improve the selection percentages.  The results show that high resolution images with a large isoplanatic patch size may be obtained routinely both with conventional Lucky Imaging and with the new Lucky Fourier method.  Other methods of improving the sensitivity of the method to faint reference stars are also described."

A copy of the paper may be found here.
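The details of the Fourier-plane selection are defined in the paper itself; the sketch below is only a schematic interpretation of the idea (for each spatial frequency, combine the frames with the strongest signal at that frequency rather than selecting whole frames), and its specifics are assumptions rather than the published algorithm.

import numpy as np

def fourier_select_combine(frames, keep_fraction=0.5):
    """Schematic 'selection in Fourier space': for every spatial frequency,
    average only the frames whose Fourier amplitude at that frequency is in
    the top `keep_fraction`, then transform back to the image plane."""
    spectra = np.fft.fft2(frames, axes=(1, 2))           # shape (n_frames, ny, nx)
    amplitudes = np.abs(spectra)
    n_keep = max(1, int(keep_fraction * len(frames)))

    # Per-frequency ranking: keep the n_keep largest amplitudes at each frequency.
    order = np.argsort(amplitudes, axis=0)[::-1]         # best frames first, per frequency
    mask = np.zeros_like(amplitudes, dtype=bool)
    np.put_along_axis(mask, order[:n_keep], True, axis=0)

    combined = np.where(mask, spectra, 0).sum(axis=0) / n_keep
    return np.fft.ifft2(combined).real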

A second paper, by Peter Aisher et al., has recently been published that looks at various strategies that may be used in fitting wavefronts when multiple near-pupil-plane images have been recorded. The abstract is as follows:

"Increasing interest in astronomical applications of non-linear curvature wavefront sensors for turbulence detection and correction makes it important to understand how best to handle the data they produce, particularly at low light levels. Algorithms for wavefront phase retrieval from a four-plane CWFS are developed and compared, with a view to their use for loworder phase compensation in instruments combining adaptive optics and Lucky Imaging. The convergence speed and quality of iterative algorithms is compared to their step-size, and techniques for phase retrieval at low photon counts are explored. Computer simulations show that at low light levels, preprocessing by convolution of the measured signal with a Gaussian function can reduce by an order of magnitude the photon flux required for accurate phase retrieval of low-order errors. This facilitates wavefront correction on large telescopes with very faint reference stars."

A copy of the paper may be found here.

AOLI: Adaptive Optics Lucky Imager for the WHT 4.2 m and GTC 10.4 m Telescopes

AOLI is a major collaboration between the Institute of Astronomy of the University of Cambridge, the IAC in La Laguna, Tenerife, the ING on La Palma, and the Universities of Cartagena and Cologne. The acronym AOLI stands for "Adaptive Optics Lucky Imager". This project aims to build a camera able to deliver diffraction-limited images in the visible range. The instrument is designed first for the 4.2 m William Herschel Telescope, on the island of La Palma (Canary Islands), but is later intended for use on the 10.4 m Gran Telescopio de Canarias (GTC).

Obtaining optical diffraction-limited images is almost impossible from the ground because of the lack of efficient adaptive-optics systems for wavelengths below 1.2–1.6 microns. The atmospheric turbulence rapidly degrades the wavefronts entering the telescope, which results in seeing-limited images with no spatial information below 0.8–1 arcsec resolution. Until recently, optical diffraction-limited images were only delivered by the Hubble Space Telescope.

By combining Lucky Imaging techniques with low-order adaptive optics it is possible to obtain diffraction-limited images in the visible on ground-based telescopes. AOLI is intended to deliver a resolution approximately 3 times that of the Hubble Space Telescope from the ground on the 4.2 m WHT and approximately 8 times that of the Hubble Space Telescope on the 10.4 m GTC.

Observing and Results

A recent observing trip (July 2007) to the Palomar 200-inch telescope was extremely successful. The images we obtained, with about 35 milliarcsec FWHM, are the highest-resolution direct images ever obtained in the visible either from the ground or from space, about twice the resolution of the Hubble Space Telescope.

M13

The images shown below are of the core of the globular cluster M13. The blinking image shows what the telescope delivers on its own, followed by what it delivers with the adaptive optics system and LuckyCam.

https://www.ast.cam.ac.uk/images/research/instrumentation/lucky_imaging/m13lucky.gif

The images below show a direct comparison between the conventional image taken under conditions of good seeing (0.65 arcsec), the Hubble image from the ACS (centre) and our Lucky/AO image (right). The Hubble picture goes fainter because the exposure is longer and the wavelength shorter (where CCDs have a much higher sensitivity). The ACS image has been "drizzled" to improve its appearance. The Lucky image is as taken. The markedly better resolution of the Lucky image is clear. This is exactly what is predicted, simply because the Palomar 5.1 m telescope is about twice the size of the 2.4 m Hubble.
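That factor of two follows directly from the diffraction limit, theta ~ 1.22 lambda/D. A quick check at an assumed representative I-band wavelength of 770 nm:

import numpy as np

RAD_TO_MAS = 180 / np.pi * 3600 * 1000

def diffraction_limit_mas(wavelength_m, diameter_m):
    """Rayleigh criterion, theta = 1.22 * lambda / D, in milliarcseconds."""
    return 1.22 * wavelength_m / diameter_m * RAD_TO_MAS

for name, D in [("Hubble (2.4 m)", 2.4), ("Palomar Hale (5.1 m)", 5.1)]:
    print(f"{name}: ~{diffraction_limit_mas(770e-9, D):.0f} mas at 770 nm")

This gives roughly 80 mas for Hubble and roughly 38 mas for the Hale telescope, consistent with the ~35 mas FWHM quoted above.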

Cat's Eye Nebula (NGC 6543)

A planetary nebula is formed when the central star evolves from a red giant to its final white dwarf phase. During a relatively short period in the life of the star, possibly 10,000 years in total, gas is ejected from the surface of the dying star. From the expansion velocity of these filaments we can be fairly sure that the age of the bright inner shells is probably only about 1000 years. The nebula is about 3000 light years from Earth.

The image below shows the Cat's Eye Nebula (NGC 6543) as imaged conventionally by the Palomar 200-inch telescope (the green light is oxygen emission, the red is hydrogen emission, and the blue is near-infrared radiation), followed by the Cat's Eye Nebula as imaged with the Lucky Camera behind an adaptive optics system on the Palomar 200-inch telescope. The resolution in the Lucky image is lower than Hubble's as the image covers four times the area of the M13 images above, but it is still a good demonstration of what can be done from the ground. These images are all at slightly lower resolution than those of the globular cluster but nevertheless show the considerable improvement over conventional ground-based imaging that the AO system produces with LuckyCam.

https://www.ast.cam.ac.uk/images/research/instrumentation/lucky_imaging/catseye_lucky.gif
