Photography came to be used in astronomical studies when it was observed that, unlike the human eye, a film can accumulate light over several seconds to produce clearer images. In 1839, Louis Daguerre, the French artist who invented the first practicable photographic process, took the first photograph of the Moon. In America, John William Draper also photographed the Moon and, in addition, the solar spectrum. Another American, George P. Bond, was the first to photograph star fields for determining their relative brightness.
In the 1850s, a German scientist, Justus von Liebig, made a new kind of mirror: glass with a thin film of silver. German astronomers soon used it in telescopes, and this led to the silver- or aluminium-coated mirrors in use today. A refractor telescope at Harvard was used to take some of the earliest photographs of the Moon. Photographic time exposures exceeded the eye's sensitivity and recorded very faint objects. In 1884, the first roll-film camera was invented.
Normal reflecting telescopes have a limited field of view. An optician, Bernhard Schmidt (1879–1935), made a wide-angle photographic telescope in 1930. This is a reflector of a special kind, with a thin glass corrector plate, for photographing larger areas of the sky. Such telescopes are now installed in several observatories.
In a later development, photo-multipliers recorded the light captured by a primary mirror. The new device converted light into a measurable electric current, so a star's energy output and its variation could be measured. The data became useful in determining the distances to remote galaxies and mapping their distribution in the universe. An optical grating, called a diffraction grating, with microscopic grooves, is used to disperse the reflected light and distinguish the absorption and emission lines.
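How a grating spreads light into a spectrum can be sketched with the standard grating equation, d sin θ = mλ: each wavelength leaves the grooves at its own angle. The groove spacing and wavelengths below are illustrative values, not figures from the text.

```python
import math

# Grating equation: d * sin(theta) = m * lam
# d   - groove spacing (metres); 600 grooves/mm is an illustrative value
# m   - diffraction order
# lam - wavelength of light (metres)
d = 1e-3 / 600
m = 1

angles = {}
for lam_nm in (400, 550, 700):          # blue, green, red
    lam = lam_nm * 1e-9
    theta = math.degrees(math.asin(m * lam / d))
    angles[lam_nm] = theta
    print(f"{lam_nm} nm is diffracted to {theta:.1f} degrees")
```

Because each wavelength emerges at a different angle, dark absorption lines and bright emission lines land at distinct positions on the detector, which is what lets astronomers tell them apart.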
A more profound change occurred with the introduction of the charge-coupled device (CCD), a solid-state sensor, for imaging. A film does not clearly bring out the subtle differences between bright and faint objects. A CCD, in contrast, captures every photon and registers it, converting an optical image into an electronic image in a silicon integrated circuit. Our eyes can take in observations for only about a thirtieth of a second, and they are only one per cent as effective in collecting light as an electronic detector. A CCD can take in observations for hours.
Invented by researchers at Bell Labs (Willard S. Boyle and George Elwood Smith) in 1969, a CCD is made up of tiny, light-sensitive capacitors, with an array of electrodes sandwiched in the thin surface of a semiconducting material. Light falling on each pixel (picture element) displaces some electrons and generates an electric charge proportional to the number of photons (particles of light) hitting it. (Technically speaking, the photons knock electrons into the conduction band.) The CCD stores the charge; the corresponding electronic signal is then read out to form an image. The charge is discarded row by row after the read-out.
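The photon-to-charge process can be sketched as a toy simulation: charge builds up in each pixel in proportion to the light falling on it, and the rows are then read out one by one and discarded. All numbers here are illustrative, not taken from any real detector.

```python
# Photon flux per pixel per second on a tiny 3x3 "chip";
# the bright centre pixel stands in for a star.
flux = [
    [1, 2, 1],
    [2, 9, 2],
    [1, 2, 1],
]

exposure_seconds = 60

# Exposure: each pixel accumulates charge proportional to
# the photons it collects over the whole exposure.
charge = [[f * exposure_seconds for f in row] for row in flux]

# Read-out: rows are shifted off the chip one at a time,
# digitised, and the charge is then discarded.
image = []
while charge:
    image.append(charge.pop(0))

for row in image:
    print(row)    # the star shows up as the largest pixel value
```

Because the charge grows steadily for as long as the shutter stays open, a long exposure makes even a faint source climb well above the background, which is what gives the CCD its advantage over the eye.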
The images can be stored in digital format and their resolution improved. The resolution of the image depends on the number of pixels, and that number has been doubling every two years. A typical infrared array in 1985 had only 900 pixels; a decade later, the number went up to more than a million. In recently built telescopes, the CCDs have as many as 250 million pixels in a 30-cm square, and one-billion-pixel arrays are on the way. Advanced techniques eliminate the 'noise' in the system and improve clarity.
In what is known as drift scanning, the electrons keep pace with the photons that produce them as the star image drifts across the CCD, so that there is no blurring of the image. Incidentally, the first astronomical image improved by a CCD was that of Uranus (1975).
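Drift scanning can be sketched as a toy model: the star image drifts down the chip one row per step, and the accumulated charge is clocked down one row per step in the same direction, so the same charge packet keeps collecting the same star's light. The chip size and photon count below are illustrative.

```python
ROWS = 4
ccd = [0] * ROWS          # accumulated charge per row (1-D for simplicity)
photons_per_step = 5      # light collected from the star in one step
star_row = 0              # row currently lit by the drifting star image

readout = []
for step in range(ROWS):
    ccd[star_row] += photons_per_step   # exposure during this step
    readout.append(ccd.pop())           # bottom row read out and discarded
    ccd.insert(0, 0)                    # empty row enters at the top
    star_row += 1                       # image drifts down one row

print(readout)   # prints [0, 0, 0, 20]
```

Because the charge packet moves in lock-step with the drifting image, the star's entire exposure arrives at the read-out register as a single packet instead of being smeared across several rows.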