
Modern technologies for retinal scanning and imaging: an introduction for the biomedical engineer

Abstract

This review article is meant to help biomedical engineers and nonphysical scientists better understand the principles of, and the main trends in, modern scanning and imaging modalities used in ophthalmology. It is intended to ease the communication between physicists, medical doctors and engineers, and hopefully to encourage “classical” biomedical engineers to generate new ideas and to initiate projects in an area which has traditionally been dominated by optical physics. Most of the methods involved are applicable to other areas of biomedical optics and optoelectronics, such as microscopic imaging, spectroscopy, spectral imaging, opto-acoustic tomography and fluorescence imaging, all of which have potential biomedical applications. Although all described methods are novel and important, the emphasis of this review has been placed on three technologies introduced in the 1990s and still undergoing vigorous development: Confocal Scanning Laser Ophthalmoscopy, Optical Coherence Tomography, and polarization-sensitive retinal scanning.

Introduction

In the past few decades, the use of light has played an important role in revealing structural and functional information from the human retina in a non-destructive and non-invasive manner. Ophthalmic optics as an active research area has been expanding steadily, providing scientists and doctors with priceless multidisciplinary information and enabling new diagnostic and therapeutic methods. New scanning and imaging technologies have had a tremendous impact on ophthalmology, where information about the fovea and the optic nerve is essential.

The anatomy of the human eye and its optical properties

The anatomy of the human eye is shown in Figure 1. The eyeball measures about 24 mm in diameter and is filled with jelly-like vitreous humor. The light entering the eye passes through the iris and the pupil and is focused by the cornea and the crystalline lens onto the retina in the region of the macula, whose most sensitive part is the fovea, the spot of sharpest vision. The retina converts the photon energy of the incoming light into electrical activity, which is transferred to the optic disc and along the optic nerve to the brain. The fibers carrying the electrical signal from the fovea to the optic disc are called Henle fibers in the vicinity of the fovea; the axons of the nerve fibers form the thicker retinal nerve fiber layer (RNFL), mainly in the area surrounding the optic nerve. Both the Henle fibers and the RNFL change the polarization state of light – an optical property known as birefringence [1–5]. Birefringent (optically anisotropic) materials delay the vertical (s-) and the horizontal (p-) components of light differently, and hence exhibit a refractive index that depends on the polarization state and propagation direction of the impinging light. Upon reflection by diffuse birefringent reflectors, such as the fovea and the optic disc, the p- and s-components are delayed differently, and they can be detected separately in a polarization-sensitive (PS) detection system. The thickness of the RNFL is not constant over the retina. Another birefringent part of the human eye is the cornea; its collagen fibrils in fact constitute the main part of the birefringence of the eye, ca. six times higher than the birefringence of the fovea. It has also been shown that corneal birefringence varies greatly among people and, within a single cornea, significantly with position [6]. The layer underneath the retina is called the choroid, which lies just above the sclera. The choroid contains numerous tiny blood vessels responsible for the retina’s metabolism. Deeper layers of the retina can today be examined with new technologies, most of which are based on scanning the fundus of the eye. They can be polarization-insensitive or polarization-sensitive; both types will be discussed in the upcoming sections.

Figure 1

The human eye. The object being observed is projected onto the macula, the central part of which is the fovea, the spot of the sharpest vision.

Fundus photography

Fundus photography was introduced in the 1920s and has been used extensively since the 1960s – first as a standard photographic technique based on 35 mm film, and later as digital photography. Of main interest is photography of the optic nerve, which allows the evaluation of structural relationships within the nerve. It also allows the practitioner to examine fine details not easily seen on clinical examination, as well as the evolution of such details over time. Additional techniques such as stereo disc photography and red-free RNFL photography have led to substantial enhancement of fundus photography.

In analogy with the indirect ophthalmoscope, the objective lens forms a real intermediate image of the illuminated fundus in front of a pinhole mirror. Behind the pinhole mirror, a second intermediate image is formed by the main objective lens. With a movable focusing lens, the rays are parallelized, thus enabling the use of high-resolution cameras. The maximum resolution of fundus cameras is considered to be ca. 6 μm, but it can only be obtained for a small field of view (FOV), and only if the pupil is dilated. To capture reflection-free fundus images with a large FOV, a small aperture stop is needed, which, in turn, reduces the resolution (to approximately 10 μm for a FOV of 50°). Normally, the maximum FOV for a fundus camera is 50°. Only with special mydriatic cameras (for work with pupil dilation) can a larger FOV of up to 60° be realized. Typical FOV graduations are 20° to 50°. Peripheral areas of the retina which lie outside the central FOV can be registered when the patient looks in different directions, changing the line of sight. With special Auto Mosaic (or Montage) software, the individual images can then be stitched together, forming a panoramic image which can span an angular range of up to 110°. Table 1 shows a comparison between fundus photography and other retinal imaging technologies with respect to FOV, resolution, and size of the features of interest. It can be seen that the large FOV of fundus cameras comes at the cost of lower resolution and an inability to detect microscopic structures, such as very small blood vessels, cone photoreceptors etc. Also, no information from deeper retinal layers can be obtained. A good comparative analysis of fundus camera systems has been reported in [7].

Table 1 Comparison between retinal imaging technologies

The cost of fundus photography continues to be significantly lower than that of the newer techniques based on retinal scanning. Its main advantages are the easy interpretation, full color (helping to distinguish between cupping and pallor), and better detection of disc hemorrhages, peripapillary atrophy etc. Disadvantages include the lack of a quantitative description and hence inter-observer variability, the need for the highest photographic quality (not always easily achievable), and difficult serial comparison because of the limited ability to detect subtle changes with a photograph. Another drawback of fundus photography is the need for high light intensity for illumination of the retina, on the order of 10-100% of the maximum permissible levels [8], typically delivered by a flash. Figure 2 shows three fundus images (courtesy of Carl Zeiss Meditec) taken with the FF450 Fundus Camera from Zeiss, whose standard configuration is equipped for color imaging, fluorescein angiography, and filter-based red-free, red and blue imaging.

Figure 2

Fundus images taken with the FF450 Fundus Camera from Carl Zeiss Meditec, Inc. Left: color image; Middle: fluorescein angiography image; Right: zoom in the macular region. Courtesy of Carl Zeiss Meditec, Inc.

Hyperspectral imaging of the fundus

Hyperspectral imaging (HSI) originated from remote sensing and has been explored for various applications by NASA. It is an emerging imaging modality for medical applications [9]. HSI acquires a three-dimensional dataset called a hypercube, with two spatial dimensions and one spectral dimension. In biology and medicine it is being used in image-guided surgery, tissue optics, cancer diagnostics, kidney disease, retinal diagnostics etc. HSI can deliver nearly real-time images of biomarker information, such as oxyhemoglobin and deoxyhemoglobin, providing an assessment of tissue pathophysiology based on the spectral characteristics of different tissues. HSI has been successfully applied to the diagnosis of hemorrhagic shock, the assessment of peripheral artery disease and the diabetic foot, and the identification of many other abnormalities.
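
For readers less familiar with the hypercube format, the following minimal Python sketch (array sizes and variable names are arbitrary, chosen for illustration only) shows how such a dataset is typically organized and queried:

```python
import numpy as np

# Illustrative hypercube: 512 x 512 spatial pixels, 50 spectral bands (arbitrary sizes)
n_y, n_x, n_bands = 512, 512, 50
wavelengths_nm = np.linspace(450, 700, n_bands)   # spectral axis
hypercube = np.zeros((n_y, n_x, n_bands))          # (y, x, lambda) data cube

# A single spectral band is an ordinary 2D image of the fundus
band_image = hypercube[:, :, np.argmin(np.abs(wavelengths_nm - 560))]

# A single spatial pixel yields a full reflectance spectrum
pixel_spectrum = hypercube[256, 256, :]
```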

Typically, HSI instruments are point or slit imagers that scan the object of interest temporally in order to produce a two-dimensional image, or use optical bandpass filters to scan the scene spectrally. Examples are the Hadamard encoding slit spectrometer [10], HSI imagers using liquid crystal and acousto-optic tunable filters [11], Fourier transform spectrometers [12], spectro-temporal scanners [13], and more recently volume holographic methods [14]. A tunable laser source coupled to a custom-built fundus camera has also been employed to sweep the working wavelength from 420 to 1000 nm in steps of 2 nm, eliminating the conventional Xenon flash lamp, with images being registered by a 1.3-megapixel charge-coupled device (CCD) camera to fill the spatial-spectral hypercube [15]. All these serial acquisition systems collect only a fraction of the full data cube at a single instant in time and trade off critical imaging parameters, such as image size, speed, resolution, or signal-to-noise ratio [16]. Various new HSI techniques have been developed lately to overcome these problems. Bernhardt utilized an HSI system with rotational spectrotomography to detect all available photons from an object while obtaining enough information to reconstruct the data cube [17]. Johnson et al. [18] used a computed tomographic imaging spectrometer (CTIS) to capture both spatial and spectral information in a single frame without moving parts or narrow-band filters, and with high optical throughput, which is well suited for imaging the constantly moving human eye. CTIS captures the spatial and spectral information of the retina by imaging the scene through a two-dimensional grating disperser which produces multiple, spectrally dispersed images of the retina that are recorded by a focal plane array (FPA). From the captured intensity pattern, computed-tomography algorithms are used to reconstruct the scene into a “cube” of spatial (x and y) and spectral (wavelength λ) information. The image cube in wavelength space is thus reconstructed from a single image [18]. The basic CTIS design uses just two lenses and a focal plane detector. The CTIS instrument concept originated in Japan [19] and Russia [20] and has been advanced to maturity by a group at the Jet Propulsion Laboratory in Pasadena [21] and one at the University of Arizona [22, 23].

Trade-off problems between image acquisition rate and signal throughput in scanning-based techniques also led to the development of image mapping spectroscopy (IMS) [16, 24, 25], which captures the whole data cube in a single snapshot without compromising image resolution, speed, or optical throughput, and without intensive post-processing. The IMS is based on the image mapping principle: the device is coupled to the back image port of a traditional retinal imaging camera [26], and the intermediate image at the entrance port is re-imaged onto a custom-fabricated image mapper which consists of hundreds of tiny mirror facets that have a two-dimensional tilt [27]. The image mapper cuts the intermediate image into strips and reflects them toward different locations of a CCD camera. Due to differences in the tilt angle of the mirror facets, blank regions are created between adjacent image strips at the detector plane. The strips of reflected light from the image mapper are further dispersed by means of a prism array and re-imaged onto their associated blank regions by an array of re-imaging lenses. Thus, each pixel on the CCD camera is encoded with unique spatial and spectral information from the sample. Finally, the hyperspectral datacube (x, y, λ) is calculated by a re-mapping algorithm [27]. The IMS is one of the first real-time, non-scanning techniques [26, 28].

In ophthalmology, HSI has been used to detect various retinal abnormalities. Among the most significant ones is age-related macular degeneration (AMD), which is a major cause of blindness in the elderly. Its prevalence increases exponentially with every decade after age 50 [29]. The cell protein cytochrome-c has been identified as a key signaling molecule in the degeneration processes and in apoptosis. Schweizer et al. [30] developed an HSI system to collect spectroscopic data, which provided information about the oxidative state of cytochrome-c during oxidative stress for the detection of AMD. Another group [25] applied CTIS to quantify the macular pigment (MP) in healthy eyes. They successfully recovered detailed spectral absorption curves for MP in vivo that correspond to physically realistic retinal distributions.

Retinal oximetry

The proper functioning of the retina depends on the availability of an adequate amount of oxygen. Therefore, measuring the amount of oxygen present in the retinal vessels is important in order to detect and monitor diseases such as glaucoma and diabetic retinopathy. The main chromophore of blood is hemoglobin, a protein contained in red blood cells (RBC). As light propagates through a blood sample, absorption and scattering take place. The absorption is due to the hemoglobin contained in the red blood cells, while the scattering is due to the discontinuities of refractive index between the RBCs and the plasma in which they are suspended. The absorption characteristics of blood can be expressed by the extinction coefficients of hemoglobin, which can be found in two states: oxygenated (HbO2) and deoxygenated (Hb). Generally, blood oxygen saturation is estimated based on the variation of the blood spectra with oxygen saturation. There are two primary vascular networks that provide the retina with nutrition: the choroid and the retinal vessels. The choroid lies beyond the outer retina, with a capillary bed in contact with the retinal pigment epithelium. Retinal vessels occupy the inner half of the neural retina, extending outward from the optic disc in all directions. As the wavelength of the illuminating light changes, light penetrates to different depths within the retina: wavelengths between 530 and 580 nm illuminate the retinal background and the retinal vessels, whereas longer wavelengths (λ > 600 nm) penetrate the retinal vessels and background, reaching the choroid at λ > 640 nm [31]. Assuming blood can be spectrally characterized as comprising fully oxygenated hemoglobin (HbO2) and deoxygenated hemoglobin (Hb), the oxygen saturation OS is defined as:

$$OS = \frac{C_{HbO_2}}{C_{HbO_2} + C_{Hb}}$$

where $C_{HbO_2}$ and $C_{Hb}$ are the molar concentrations of oxygenated and deoxygenated hemoglobin, respectively. Several study groups have employed existing spectroscopic techniques to measure retinal blood oxygen saturation, which involves detecting the difference in light absorption between oxygenated and deoxygenated hemoglobin using multiple-wavelength reflectance oximetry. As a result, numerous dual- and multiple-wavelength combinations sensitive to oxygen saturation have been utilized in various imaging systems. A good historical summary of such techniques is given in [31].
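
As a worked illustration of this definition, the short Python sketch below inverts a Beer-Lambert-type relation for optical densities measured at two wavelengths and then evaluates OS. The extinction coefficients, path length and optical densities are placeholder values, not calibrated data; practical retinal oximeters must additionally account for scattering, pigmentation and vessel geometry.

```python
import numpy as np

# Placeholder extinction coefficients [1/(mM*cm)] at two wavelengths (illustrative only)
#                    lambda_1  lambda_2
eps_hbo2 = np.array([0.30,     0.10])   # oxygenated hemoglobin
eps_hb   = np.array([0.30,     0.35])   # deoxygenated hemoglobin

path_cm = 0.01                          # assumed effective path length through the vessel
od = np.array([0.050, 0.020])           # measured optical densities (hypothetical values)

# Solve OD(lambda) = d * (eps_HbO2 * C_HbO2 + eps_Hb * C_Hb) for the two concentrations
A = path_cm * np.column_stack([eps_hbo2, eps_hb])
c_hbo2, c_hb = np.linalg.solve(A, od)

os_fraction = c_hbo2 / (c_hbo2 + c_hb)  # oxygen saturation as defined above
print(f"Estimated oxygen saturation: {100 * os_fraction:.1f}%")
```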

Modern retinal oximetry uses hyperspectral imaging methods to add a topographic component to the retinal oxygenation information [15, 31–37]. Khoobehi et al. [34] attached a fundus camera to an HSI system for monitoring relative spatial changes in retinal oxygen saturation. The integrated system can be adapted to measure and map relative oxygen saturation in retinal structures and the optic nerve head in nonhuman primate eyes. Another team [36] measured the intensities of different wavelengths of light that were transmitted through and reflected out of the arteries, veins, and the areas surrounding these vessels. A hyperspectral fundus imaging camera was used to capture and analyze the spectral absorptions of the vessels. Johnson and co-workers developed a snapshot HSI system with no moving parts or narrow-band filters in order to perform functional mapping of the human retina using chromophore spectra [18]. It was based on the CTIS design mentioned above. The hemoglobin spectral signatures provided both qualitative and quantitative oxygen saturation maps for monitoring retinal ischemia from either systemic diseases, such as diabetes, or from localized retinal arterial and vascular occlusions, which are leading causes of untreatable blindness. The results showed a clear distinction between veins, arteries, and the background. Regions within vessel capillaries agreed well with the 30 to 35% oxygen saturation difference expected between healthy veins and arteries. The saturation for most of the background spatial locations in between the capillary regions tended to be within the 90 to 100% range, which was consistent with the subjects being healthy. This system is capable of acquiring a complete spatial-spectral image cube of 450 to 700 nm with 50 bands in ca. 3 ms and without motion artifacts or pixel misregistration [18].

Confocal microscopy

In order to better understand the material in some of the following sections, we need to introduce the concept of confocal microscopic imaging, which was patented in 1955 by Marvin Minsky [38]. This technique has been successfully utilized in numerous instruments in different areas of science and engineering. A confocal microscope uses point illumination and a pinhole (also called a confocal filter) in an optically conjugate plane in front of the detector to eliminate out-of-focus signal (Figure 3). Only light reflected by structures very close to the focal plane can be detected. However, since much of the light returning from the specimen is blocked at the pinhole, the increased resolution comes at the cost of decreased signal intensity, i.e. either a more powerful light source or a longer exposure is needed. As only one point in the sample is illuminated and acquired at any given instant, 2D imaging requires scanning over a regular raster in the specimen. The achievable thickness of the focal plane is defined mainly by the wavelength of the light used divided by the numerical aperture (the range of angles over which the system can accept or emit light) of the focusing lens, but also by the optical properties of the specimen. The factors affecting axial (depth) resolution are the objective numerical aperture (NA) and the pinhole diameter: increasing the NA and/or decreasing the diameter of the pinhole will increase the z-resolution. The thin optical sectioning makes confocal microscopes particularly good at 3D imaging: by scanning many thin sections through the sample, one can build up a very clean three-dimensional image. The main advantages of confocal microscopy are the controllable depth of field, the suppression of out-of-focus information, and the ability to provide optical sections at different depths [39].

Figure 3

The principle of confocal microscopy. Only light reflected by structures very close to the focal plane can be detected.

Scanning Laser Ophthalmoscope (SLO)

The first attempt to introduce an ophthalmic imaging technique which would not suffer from the disadvantages of fundus photography was scanning laser ophthalmoscopy, first reported by Webb and co-authors [40, 41]. In the scanning laser ophthalmoscope (SLO), a narrow (ca. 1 mm) laser beam of safe intensity, comfortable for the patient, travels along the optical axis and is focused to a single point (ca. 10 μm in diameter) on the retina. The fundus image is produced by scanning the laser over the retina in a raster pattern and detecting the signal from each scanned point, to build a digital image. Beam deflection is achieved by a combination of two galvanometer scanners – one slow vertical scanner (~60 Hz) and one fast horizontal scanner (~15 kHz). Alternatively, more expensive acousto-optic deflectors can be used [41, 42]. Modulation of the scanning beam permits projection of graphics or text in the raster. An avalanche photodetector was initially used to enhance detector sensitivity. Early SLOs typically provided an output in standard TV format which could be viewed live on a TV monitor and recorded on videotape, or fed to a digital frame grabber [43, 44].

The ability to perform confocal imaging is a major advantage of the SLO [45, 46]. The confocal scanning laser ophthalmoscope (cSLO) was developed several years after the SLO as a new version, taking advantage of the principle of confocal microscopy, to achieve high contrast and depth resolution. By moving a confocal aperture between two end points, a number of tomographic slices can be acquired, to extract depth information [47, 48].

Another important development in scanning laser ophthalmoscopy is the introduction of color, to better match the images produced by fundus photography. Such devices, often called multi-spectral SLOs, use multiple separate lasers of different wavelengths in the illumination module, usually made coaxial by means of a set of dichroic combining mirrors. The source lasers are multiplexed to create interlaced images in a multispectral frame acquisition mode. Multispectral SLOs are usually confocal, and are useful in retinal vessel oximetry, reflectometry, angioscotometry, fundus perimetry etc. [44, 49–52]. Figure 4 shows a generalized diagram of a multispectral cSLO. The illumination module comprises several separate laser beams combined by dichroic mirrors. The lasers can be of any type, yet more recent designs tend to use diode lasers. The lasers can be multiplexed, or fired simultaneously. The polychromatic beam is made incident on a two-dimensional (X-Y) scanning mirror assembly that displaces it over a square area of several millimeters on the retina. Light reflected from the fundus traverses the incident path in the reverse direction up to a separating beam splitter, whereupon a portion is redirected towards the detector. A switchable band-pass optical filter may be placed here to block all wavelengths except that of the laser currently turned on, or the wavelength currently being acquired. Recent developments in liquid crystal technology have resulted in the design of electrically tunable tri-color optical filters (red 680 nm, green 550 nm, blue 450 nm) suitable for such applications. Because of laser safety issues [53–55], it is desirable to have only one laser turned on at a time. To obtain the information needed to build a color image, one can either acquire the monochromatic images consecutively and then merge them [56], or generate the color image by pulsing the lasers at such a rate that each point on the imaged area of the retina is illuminated by all colors, one after the other [44]. The latter approach decreases motion artifacts due to eye movements. The receiving path further contains the confocal pinhole and the photodetector, which can be a simple photodiode (covering the wavelengths of interest) or an avalanche photodiode. The pinhole allows passage of light reflected only from the focal plane and blocks scattered light that can blur the image. The result is a focused, high-contrast image. The retinal image in the figure was acquired with the Panoramic200 imaging SLO, courtesy of Optos, NA.

Figure 4

A generalized diagram of a multispectral confocal scanning laser ophthalmoscope (cSLO).

The advantages of the cSLO over traditional fundus photography include improved image quality, patient comfort, video capability, and effective imaging of patients who do not dilate well, such as diabetics. The cSLO has been used for detecting biomarkers of diabetic retinopathy [57], as well as age-related macular degeneration [58].

A typical cSLO device is the Heidelberg Retinal Tomograph (HRT), which generates up to 64 transaxial laser scans to reconstruct a high-resolution 3D image of the fundus using a 670-nm diode laser. The laser scans the retina in 24 milliseconds, starting above the retinal surface and capturing parallel image sections at increasing depths, which can be combined to create three-dimensional images of the retina. Images are aligned and compared using TruTrack™ technology, both within individual examinations and for detecting progression between examinations. The HRT II and HRT III have, along with optical coherence tomographs, become standard instruments for scanning the optic nerve head in glaucoma, and are widely used for imaging the RNFL [59, 60]. Figure 5 shows two 3D images of the retina reconstructed with the HRT, courtesy of Heidelberg Engineering.

Figure 5

Three-dimensional view of the retina reconstructed with the Heidelberg Retinal Tomograph (HRT). Left: 3-D view of optic nerve drusen. Right: 3-D image from a person with advanced glaucoma. Note the depth of the cup, steepness of the walls, and reduced rim tissue. Courtesy of Heidelberg Engineering.

The left panel is a view of optic nerve drusen. The right panel presents an image from a person with advanced glaucoma. Note the depth of the cup, steepness of the walls, and reduced rim tissue.

SLO image quality is often degraded by the effects of involuntary eye movements, especially in patients who cannot fixate properly (e.g. patients with diabetic retinopathy or central scotoma). Since the SLO builds images point-by-point from a flying laser spot, using retinal spatial information from a fixed frame of reference along with retinal eye tracking can significantly improve image quality. This was achieved in a compact Tracking SLO (TSLO) with a high-speed retinal tracker [61]. The TSLO employs active tracking by placing a dithered beam originating from a low-power LED onto the fundus and processing the backscattered reflectance signal by means of phase-sensitive detection. Feedback is accomplished in real time with a digital signal processor (DSP), thus achieving an overall system bandwidth of 1 kHz and significantly enhancing the imaging capabilities of the SLO. Further work in the field of confocal scanning laser ophthalmoscopy has led to the development of relatively simple, low-cost, compact, non-adaptive-optics, lens-based cSLO designs operating at a relatively large field of view (FOV) and throughput, while maintaining resolution adequate for visualizing para-foveal cone photoreceptors and nerve fiber bundles [62].

Adaptive Optics Scanning Laser Ophthalmoscope (AOSLO)

The scanning laser ophthalmoscopes were further improved by integrating additional technologies into them. The most significant one was adaptive optics (AO), which originated from astronomy [63, 64]. With adaptive optics, the performance of optical systems is improved by reducing the effect of wavefront distortions. It is used in astronomical telescopes and laser communication systems to remove the effects of atmospheric distortion. In retinal imaging systems, AO is used to reduce optical aberrations by measuring the distortions in a wavefront and compensating for them with a corrective device, such as a deformable mirror [65–70]. Ocular aberrations are distortions in the wavefront passing through the pupil of the eye. They diminish the quality of the image formed on the retina. Spectacles and contact lenses correct low-order aberrations, such as defocus and astigmatism. With retinal imaging, light returning from the eye is subject to similar wavefront distortions caused by spatial phase nonuniformities, which deteriorate the quality of the image and the ability to resolve microscopic retinal structures such as cells and capillaries. In order to achieve microscopic resolution, high-order aberrations, such as coma, spherical aberration, and trefoil, which are often not stable over time, must also be corrected.

The adaptive optics scanning laser ophthalmoscope (AOSLO) measures ocular aberrations using a wavefront sensor, most commonly the Shack-Hartmann sensor. In a Shack-Hartmann wavefront sensor, the nonuniformities in the wavefront are measured by placing a two-dimensional array of small lenses (lenslets) in a pupil plane conjugate to the eye's pupil, and a CCD chip at the back focal plane of the lenslets. The lenslets cause spots to be focused onto the CCD chip, and the position of each spot is calculated using a centroiding algorithm. These positions are compared with the positions of reference spots, and the displacements between the two are used to determine the local slope of the wavefront – an estimate of the phase nonuniformities causing the aberration. Once the local phase errors in the wavefront are known, they can be corrected by placing a phase modulator (wavefront compensator), such as a deformable mirror, at yet another plane in the system, conjugate to the eye's pupil. The phase errors can be used to reconstruct the wavefront, which can then be used to control the deformable mirror. AOSLO systems, although usually more complex than the “standard” cSLO, have proven to deliver excellent high-contrast imaging quality at high axial resolution [71–74].
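
The core of the Shack-Hartmann computation – locating each lenslet spot by its intensity centroid and converting the displacement from the reference position into a local wavefront slope – can be summarized in a few lines. This is a simplified sketch under idealized assumptions (regular lenslet grid, clean spots, no wavefront reconstruction step); the function and parameter names are illustrative, not those of any particular AOSLO implementation.

```python
import numpy as np

def spot_centroid(sub_image):
    """Intensity-weighted centroid (in pixels) of one lenslet sub-aperture."""
    total = sub_image.sum()
    ys, xs = np.indices(sub_image.shape)
    return np.array([(ys * sub_image).sum() / total,
                     (xs * sub_image).sum() / total])

def local_slopes(frame, n_lenslets, sub_px, ref_centroids, focal_len_px):
    """Local wavefront slopes for an n x n lenslet array.

    frame         -- CCD image recorded behind the lenslet array
    sub_px        -- sub-aperture size in pixels
    ref_centroids -- centroids recorded with a flat reference wavefront, shape (n, n, 2)
    focal_len_px  -- lenslet focal length expressed in pixel units
    """
    slopes = np.zeros((n_lenslets, n_lenslets, 2))
    for i in range(n_lenslets):
        for j in range(n_lenslets):
            sub = frame[i * sub_px:(i + 1) * sub_px, j * sub_px:(j + 1) * sub_px]
            displacement = spot_centroid(sub) - ref_centroids[i, j]
            # spot displacement divided by focal length ~ local tilt of the wavefront
            slopes[i, j] = displacement / focal_len_px
    return slopes
```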

With AO systems, the high magnification necessary to resolve small structures such as photoreceptors comes with smaller fields of view of 1-2° (ca. 400–500 μm), which also requires image stabilization. An image-based eye-tracking and stimulus delivery method has been implemented in an AOSLO [75–78]. In [74], the retinal image was stabilized to within 18 μm 90% of the time using a tracking AOSLO. This stabilization was sufficient for cross-correlation techniques to automatically align images. The detection system incorporated selection and positioning of confocal apertures, allowing measurement of images arising from different portions of the double-pass retinal point-spread function (PSF).

Scanning Laser Polarimetry (SLP)

The thickness of the RNFL is not constant across the retina. It can also change with time – as nerve fibers die with advancing glaucoma, the RNFL becomes thinner. This corresponds to a decreased amount of birefringence, which can be detected by a device called a scanning laser polarimeter (SLP). The SLP incorporates polarimetry into a scanning laser ophthalmoscope in order to detect the birefringence of the RNFL. The terms ellipsometry and polarimetry are often used interchangeably. Strictly speaking, ellipsometry measures the polarization state of light, whereas polarimetry often refers to measuring the angle of rotation caused by retardation when polarized light passes through, or is reflected by, an optically active substance. Birefringence in the retina was first observed by several investigators in the 1970s and early 1980s [79–81]. In the mid-to-late 1980s, human foveal birefringence was measured in vivo with Mueller-matrix ellipsometry [82]. In the early 1990s, the birefringence of the retinal nerve fibers was utilized by Dreher and collaborators [4] to measure the thickness of the nerve fiber layer, again using a retinal laser ellipsometer.

In the meantime, the theory of Mueller matrix ellipsometry had been developed in the late 1970s as a convenient automatic method to measure polarization states and polarization properties of optical media [83, 84]. It is well known that the polarization state of light can be described by the Stokes vector S = {S0, S1, S2, S3}, with S0 representing the intensity of the wave, while S1, S2 and S3 are linearly independent and fully describe the polarization state of the light. The transmission or reflection properties of an optical medium can be represented by the 4×4 Mueller matrix M [85, 86]. The change in polarization introduced to a light beam can be described as a multiplication of the Mueller matrix of the polarization-changing structure with the Stokes vector of the incident light. Thus, the action of the birefringent material (also called a retarder) can be described as:

$$S_{out} = M \times S_{in}$$
(1)

where S is the 4-element Stokes vector and M is the 4×4 Mueller matrix, whose values are functions of the azimuth θ and the retardance δ of the corresponding retarder. This also means that the birefringence represented by the Mueller matrix M can be measured by giving different values to the input Stokes vector S_in, measuring the output vector S_out each time, and then solving a set of equations to obtain M. Consequently, the Mueller matrix ellipsometer has two necessary components: a polarization-state generator (PSG) containing a linear retarder (compensator) C1, and a polarization-state detector (PSD) containing a second retarder (compensator) C2 and a linear analyzer (polarizer) A (Figure 6) [83]. It has been shown [84] that if the PSG contains a quarter-wave plate rotating at speed ω, the PSD contains a quarter-wave plate rotating synchronously at a speed of 5ω, and the light flux is linearly detected, then a periodic signal

$$F = a_0 + \sum_{n=1}^{12}\left(a_n \cos n\omega_f t + b_n \sin n\omega_f t\right)$$
(2)

is generated, with fundamental frequency $\omega_f = 2\omega$. From the Fourier amplitudes $a_0$, $a_n$ and $b_n$, which can be measured by performing a discrete Fourier transform of the signal, the 16 elements of the Mueller matrix can be determined directly [84]. This principle was used in the first SLP for measuring the thickness of the RNFL [3, 4]. A simplified diagram of an SLP is shown in Figure 6. The SLP sends a laser beam to the posterior retina and assesses the change in polarization (also called retardation) of the reflected beam. In the case of the RNFL, this birefringence is caused by neurotubules within the ganglion cell axons.
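
Equation (1) can be made concrete using the textbook Mueller matrix of a linear retarder with fast-axis azimuth θ and retardance δ. The sketch below (with arbitrarily chosen input polarization and retarder parameters) simply applies this matrix to a Stokes vector:

```python
import numpy as np

def retarder_mueller(theta, delta):
    """Standard Mueller matrix of a linear retarder (fast axis at theta, retardance delta)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    cd, sd = np.cos(delta), np.sin(delta)
    return np.array([
        [1, 0,                0,                0],
        [0, c**2 + s**2 * cd, c * s * (1 - cd), -s * sd],
        [0, c * s * (1 - cd), s**2 + c**2 * cd,  c * sd],
        [0, s * sd,           -c * sd,           cd],
    ])

# Horizontally polarized input light, S = {S0, S1, S2, S3}
s_in = np.array([1.0, 1.0, 0.0, 0.0])

# Example: retarder with 30-degree azimuth and quarter-wave (90 degree) retardance
M = retarder_mueller(np.deg2rad(30), np.deg2rad(90))
s_out = M @ s_in            # equation (1): S_out = M x S_in
```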

Figure 6

A simplified general diagram of a scanning laser polarimeter (SLP), consisting of a scanning laser ophthalmoscope (SLO), a polarization-state generator (PSG) and a polarization-state detector (PSD). The SLP sends a laser beam to the posterior retina and assesses the change in polarization of the reflected beam. This birefringence in the case of the RNFL is caused by neurotubules within the ganglion cell axons.

One such SLP device, developed specifically for the purpose of identifying glaucoma, is the GDx nerve fiber analyzer (developed by Laser Diagnostic Technologies and marketed later by Carl Zeiss Meditec) [87]. The laser scanning is based on the principle of the cSLO. The device generates a high-resolution image of 256×256 pixels created by measuring the retardation of the laser scan at each location. Thus, RNFL thickness maps are generated, representing the likelihood of glaucomatous RNFL loss. For each measurement, the GDx generates two images: a reflection image and a retardation image. The reflectance image is generated from the light reflected directly back from the surface of the retina, and is displayed as the Fundus Image on the device printouts. The retardation image is the map of retardation values and is converted into RNFL thickness based on a conversion factor of 0.67 nm/μm [5]. Figure 7 shows two images generated by the GDxVCC, courtesy of Carl Zeiss Meditec. The left image is the reflectance image, displayed as a color map. The right image is the retardation map converted to color-coded RNFL thickness, with thinner regions displayed in blue or green, and thicker regions displayed in yellow or red [87].

Figure 7

Images generated by the GDx VCC. Left: the reflectance image, displayed as a color map; Right: the retardation map converted to color-coded RNFL thickness, with thinner regions displayed in blue or green, while thicker regions are displayed in yellow or red. Courtesy of Carl Zeiss Meditec, Inc.

It should be pointed out that, in addition to the RNFL, the cornea and the eye lens also cause birefringence, commonly referred to as anterior segment retardation. Several methods have been proposed for compensation of the anterior segment birefringence in scanning laser polarimetry [88–93]. At first, a fixed corneal compensator (FCC) was used. It was a retarder of fixed magnitude (60 nm) and fixed fast axis orientation (15° nasally down). Later, a variable corneal compensator (VCC) was introduced to individually compensate corneal retardance in terms of retardance magnitude and azimuth [87, 88, 94]. This technique was implemented in the GDxVCC: first the uncompensated image is acquired, which includes the retardation from the cornea, lens and RNFL. The macular region (containing the fovea) of this image is then analyzed to determine the axis and magnitude of the anterior segment birefringence [88]. The macular region birefringence is uniform and symmetric due to the radial distribution of the Henle fiber layer, which is made up of parallel photoreceptor neuronal processes that are radial and horizontal to the retinal surface in the center of the fovea. However, in uncompensated scans, a non-uniform retardation pattern is present in the macula due to the birefringence from the anterior segment (Figure 8). The axis orientation (azimuth) and magnitude of the anterior segment birefringence can be computed by analyzing the non-uniform retardation profile around the macula: the axis of the anterior segment is determined by the orientation of the “bow-tie” birefringent pattern, and the magnitude is calculated by analyzing the circular profile of the birefringence in the macula. Once the axis and magnitude are known, the variable compensator VCC can be set to compensate for the anterior segment birefringence [87, 88]. Later, an enhanced corneal compensation (ECC) algorithm was introduced by Zeiss to the GDx technology. With it, a known large birefringence bias is introduced into the measurement beam path to shift the measurement of total retardation into a higher-value region. The birefringence bias is determined from the macular region of each measurement and then removed mathematically, point by point, to yield the true RNFL retardation [95]. In another study, the authors suggested an algorithm for calculating birefringence that uses the large areas of the macula available in the images, to achieve a better signal-to-noise ratio. The uncertainty of the calculated retardance was estimated, and an appropriate averaging strategy to reduce the uncertainty was demonstrated [90].
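
One way to picture the “bow-tie” analysis is as a sinusoidal fit to the retardation sampled on a circle around the fovea: in an uncompensated scan the anterior-segment contribution makes the profile vary roughly as cos(2(φ − φ0)), so the phase of that component estimates the corneal axis and its amplitude the magnitude. The sketch below illustrates this idea only; it is not the proprietary GDx algorithm, and its function name and inputs are hypothetical.

```python
import numpy as np

def anterior_segment_axis(phi, retardation):
    """Estimate axis and modulation magnitude of the macular 'bow-tie' pattern.

    phi         -- angular positions (radians) sampled on a circle around the fovea
    retardation -- measured retardation at those positions (uncompensated scan)
    """
    # Least-squares fit of r(phi) ~ a0 + a*cos(2*phi) + b*sin(2*phi)
    design = np.column_stack([np.ones_like(phi), np.cos(2 * phi), np.sin(2 * phi)])
    a0, a, b = np.linalg.lstsq(design, retardation, rcond=None)[0]
    magnitude = np.hypot(a, b)          # strength of the non-uniform (bow-tie) component
    axis = 0.5 * np.arctan2(b, a)       # orientation of the pattern, i.e. the axis estimate
    return axis, magnitude
```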

Figure 8

Compensating the anterior segment birefringence in the GDx VCC (macula and optic nerve head). Left: The uncompensated retardation image, which includes the retardation from the cornea, lens and RNFL. The retardation profile in the macula is due to the cornea, lens, and macula itself (Henle fiber layer). The axis of birefringence is shown as a dashed line. Once the axis and the magnitude values are known, the variable compensator VCC can be set to compensate for the anterior segment birefringence. Right: The resulting compensated image. The retardation profile in the macula is now uniform due to compensation.

Figure 9 shows a single exam printout from the GDxVCC, taken at the author’s institution (Wilmer Eye Institute at Johns Hopkins University School of Medicine). Its key elements are: a) the fundus image (top row; useful to check for image quality); b) the thickness map (second row) showing the thickness of the RNFL on a scale of 0 (dark blue) to 120 μm (red), with yellow-red colors for a healthy eye (the pink and white areas are present only in uncompensated cornea scans); c) the deviation maps (third row) revealing the location and severity of RNFL loss over the thickness map in serial comparison of thickness maps; d) the Temporal-Superior-Nasal-Inferior-Temporal (TSNIT) maps (bottom row), displaying the thickness values along the calculation circle starting temporally and moving superiorly, nasally, inferiorly and ending temporally, along with the shaded areas representing the 95% normal range for the patient’s particular age. The printout also includes parameters, such as the TSNIT average, Superior average, Inferior average, TSNIT standard deviation and Inter-eye Symmetry (Figure 9) [87].

Figure 9

GDxVCC – exam printout of a normal subject. Key elements: a) the fundus image (top row; useful to check for image quality); b) the thickness map (second row) showing the thickness of the RNFL on a scale of 0 (dark blue) to 120 μm (red), with yellow-red colors for a healthy eye; c) the deviation maps (third row) revealing the location and severity of RNFL loss over the thickness map in serial comparison of thickness maps; d) the Temporal-Superior-Nasal-Inferior-Temporal (TSNIT) maps (bottom row), displaying the thickness values along the calculation circle starting temporally and moving superiorly, nasally, inferiorly and ending temporally, along with the shaded areas representing the 95% normal range for the patient’s particular age. The printout also includes parameters, such as the TSNIT average, Superior average, Inferior average, TSNIT standard deviation and Inter-eye Symmetry.

It should be noted that with respect to RNFL thickness measurements, other modalities, such as cSLO and optical coherence tomography have proven to be successful alternatives to SLP.

Retinal Birefringence Scanning (RBS)

A special type of polarimetry called Retinal Birefringence Scanning (RBS) was developed in the author’s laboratory, mainly for detection of central fixation and eye alignment – important for identifying risk factors for amblyopia (“lazy eye”) [96–100]. RBS is a technique that uses the polarization changes in the light returning from the eye to detect the projection into space of the array of Henle fibers around the fovea. In RBS, polarized near-infrared light is directed onto the retina in a circular scan, with a fixation point in the center, and the polarization-related changes in the light retro-reflected from the ocular fundus are analyzed by means of differential polarization detection. Due to the radial arrangement of the birefringent Henle fibers, a bow-tie pattern of polarization states results, centered on the fovea, with the maximum and minimum areas of the polarization cross approximately 1.5° from the center of the fovea. Figure 10(a) and (b) show a birefringence image of the fovea taken with the GDxVCC before anterior segment compensation (courtesy of Carl Zeiss Meditec). The red dashed circle, with a diameter of 3° of visual angle, represents the scanning path, which can be centered on the fovea (during central fixation, as in Figure 10(a)) or fall to the side of the center of the fovea (during para-central fixation, as in Figure 10(b)). During central fixation, the concentric circle of light falls entirely on the radial array of Henle fibers and generates a characteristic birefringence signal at twice the scanning frequency fs (two peaks and two dips per scan), as shown in Figure 11(a). This leads to the appearance of a peak at 2fs in the power spectrum, shown in Figure 11(c). During paracentral fixation, the scan is decentered with respect to the center of the fovea, and the orientation of the radially arranged nerve fibers changes only once during each single scan, resulting in a main frequency component equal to the scanning frequency fs. Thus, spectral analysis of the back-reflected signal from the foveal region allows detection of central fixation for that particular eye.

Figure 10

Retinal Birefringence Scanning (RBS). A birefringence image of the fovea with the scanning circle (3° of visual angle). The circle can be centered on the fovea during central fixation as in (a), or to the side of the center of the fovea during para-central fixation – as in (b).

Figure 11

Signals produced by RBS: a) during central fixation and b) during para-central fixation. The power spectrum (c) contains two peaks – one at 2fs, characteristic of central fixation, and one at fs, characteristic of para-central fixation.

Figure 12 shows the basic design of an RBS system. A “scanning” near-infrared source of polarized light, typically a low-power laser diode, produces linearly (vertically) polarized light at wavelength λ (λ = 785…830 nm), which after collimation arrives at a non-polarizing beam splitter (NPBS). Half of the light continues towards a circular scanning system, which can consist of two plane mirrors. The incoming beam is converted into a circular scan, subtending an angle of approximately 3° at the subject’s eye. By the eye’s own optics, the beam is focused onto the retina, with the eye fixating on the image of a small light target appearing in the center of the scanning circle. The light follows the same path back out of the eye after being reflected from the ocular fundus. The NPBS redirects the retro-reflected light to a bandpass filter and a polarization-sensitive photodetector. The polarizing beam splitter (PBS) separates the light of changed polarization into two orthogonal components (s- and p-), each of which is detected by a separate photodetector. The vertical polarization component (s) is transmitted by the PBS towards the first photodetector, whereas the horizontal component (p) is reflected towards the second photodetector. The second component of the Stokes vector, S1, is obtained by taking the difference of the two polarization components [85, 86]. The difference signal is amplified, digitized and spectrally analyzed in software. Fast-changing and short-lasting spectral components indicative of intermittent central fixation can be detected using time-frequency methods, as described in [101].
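
A minimal sketch of the spectral test described above: the digitized differential polarization signal is windowed, its power spectrum computed, and the power at 2fs compared with the power at fs. The sampling rate, scanning frequency and synthetic test signal are arbitrary example values, not parameters of an actual RBS device.

```python
import numpy as np

def rbs_fixation_metric(signal, fs_scan, sample_rate):
    """Ratio of spectral power at 2*fs_scan to power at fs_scan.

    A ratio well above 1 suggests central fixation (two peaks and two dips per scan),
    a ratio well below 1 suggests para-central fixation.
    """
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
    p_fs = spectrum[np.argmin(np.abs(freqs - fs_scan))]
    p_2fs = spectrum[np.argmin(np.abs(freqs - 2 * fs_scan))]
    return p_2fs / p_fs

# Synthetic example: a signal dominated by the 2*fs component (central fixation)
sample_rate, fs_scan = 10_000.0, 100.0          # Hz, arbitrary values
t = np.arange(0, 1.0, 1.0 / sample_rate)
sig = np.sin(2 * np.pi * 2 * fs_scan * t) + 0.2 * np.sin(2 * np.pi * fs_scan * t)
print(rbs_fixation_metric(sig, fs_scan, sample_rate))   # much greater than 1
```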

Figure 12

Basic design of an RBS system. The light retro-reflected from the retina is of changed polarization, which is measured by a polarization-sensitive detector.

In a binocular configuration of the above-described system, by analyzing the frequencies in the RBS signal from each eye, the quality of binocular eye alignment can be measured, and thus strabismus (as a risk factor for amblyopia) can be detected. In a number of studies and in several prototypes, RBS has demonstrated reliable non-invasive detection of foveal fixation, as well as detection of eye misalignment [99, 102–105]. Also, a design with no moving parts was developed [106], and its ability to perform eye tracking after calibration was successfully tested [107]. A more recent study has led to the optimization of the parameters of the optical components used in RBS and to an improvement of the signal-to-noise ratio across a wide population [108]. RBS has also been shown to work for biometric purposes by identifying the position of the retinal blood vessels around the optic nerve [109], and for identification of Attention Deficit and Hyperactivity Disorder (ADHD) by assessing the ability of test subjects to stay fixated on a target [100].

Optical Coherence Tomography (OCT)

Optical Coherence Tomography (OCT) is an imaging technique that utilizes interferometry. The interferometer invented by Michelson sent a beam of light through a half-silvered mirror (beam splitter), splitting the beam into two paths. After leaving the beam splitter, the beams travelled out to the ends of long arms, where they were reflected by small mirrors and then recombined in an eyepiece, producing an interference pattern. If the two optical paths differ by a whole number of wavelengths, the interference is constructive, delivering a strong signal at the detector. If they differ by an odd number of half-wavelengths, the interference is destructive and the detected signal is weak.

It can be shown [110] that the intensity measured at the photodetector of a low-coherence interferometer is a sum of three components – the backscattered intensities received from the sample and reference arms, respectively, and the interference signal that carries the information about the structure of the sample and depends on the optical path delay between the sample and the reference arm:

$$I_d(\tau) = I_s + I_r + 2\sqrt{I_s I_r}\,\mathrm{Re}\{V_{mc}(\tau)\}$$
(3)

where

$$V_{mc}(\tau) = \frac{\langle E_s(t)\,E_r^*(t+\tau)\rangle}{\sqrt{I_s I_r}}$$
(4)

and τ is the time delay corresponding to the round-trip optical path length difference between the two arms:

$$\tau = \frac{\Delta L}{c} = \frac{L_s - L_r}{c} = \frac{2n(l_s - l_r)}{c}$$
(5)

with c being the speed of light, n the refractive index of the medium, and $l_s$ and $l_r$ the geometric lengths of the two arms. The normalized mutual coherence function $V_{mc}(\tau)$ in the above equation is a measure of the degree to which the temporal and spatial characteristics of the sample and reference arms match. Since a temporal coherence function is actually the Fourier transform of the power spectral density S(k) of the light source (Wiener-Khinchin theorem), the above equations can be rewritten as [110–112]:

$$I_d(\Delta L) = I_s + I_r + 2\sqrt{I_s I_r}\,S(k)\cos(k_0 \Delta L)$$
(6)

where $k_0 = 2\pi/\lambda_0$ is the average wave number and the relation $\lambda_0 = c/f_0$ is used to transform from the time domain to the path domain [110].

With OCT, as with the classical Michelson interferometer, light is split into two arms – a sample arm scanning the retina, and a reference arm, which is typically a mirror. After reflection (from the sample and from the reference mirror, respectively), the light is recombined and directed to the sensor, which can be a simple photodetector or a camera. Figure 13 shows a typical optical setup of an OCT system containing a moving reference mirror. Systems containing a movable mirror are also called time-domain (TD) OCT systems. A measurement beam emitted by the light source is reflected or backscattered from the object (the retina) with different delay times, depending on the optical properties of the layers comprising the object. A longitudinal (axial) profile of reflectivity versus depth is obtained by translating the reference mirror, thus changing the path length in the reference arm. For each point on the retina, the magnitude of the intensity of the resulting interference fringes is recorded for each position of the reference mirror, i.e. for each depth. In order to extract the depth-signal-carrying component, the detection electronics usually contains three main circuits: a) a transimpedance amplifier, b) a band-pass filter centered at the Doppler frequency, defined as $f_d = 2\nu/\lambda_0$ (ν being the speed of the moving mirror and $\lambda_0$ the central wavelength of the light source), and c) an amplitude demodulator to extract the envelope of the interferometric signal [113].
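
The detection chain described above can be emulated in software, which may help clarify the role of the Doppler frequency: the sketch below band-passes a synthetic interferogram around $f_d = 2\nu/\lambda_0$ and recovers its envelope with a Hilbert transform in place of an analog demodulator. All parameter values are illustrative assumptions, and SciPy is assumed to be available.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Assumed (illustrative) parameters of a time-domain OCT A-scan
lam0 = 830e-9              # center wavelength [m]
v_mirror = 20e-3           # reference mirror speed [m/s]
fs = 2e6                   # sampling rate of the detector signal [Hz]
f_d = 2 * v_mirror / lam0  # Doppler (carrier) frequency, ~48 kHz here

# Synthetic interferogram: carrier at f_d with a Gaussian coherence envelope plus noise
t = np.arange(0, 2e-3, 1 / fs)
envelope_true = np.exp(-((t - 1e-3) / 50e-6) ** 2)
detector = envelope_true * np.cos(2 * np.pi * f_d * t) + 0.01 * np.random.randn(t.size)

# a) band-pass filter centered at the Doppler frequency
b, a = butter(4, [0.5 * f_d, 1.5 * f_d], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, detector)

# b) envelope (amplitude) demodulation -> reflectivity vs. mirror position, i.e. depth
envelope = np.abs(hilbert(filtered))
depth = v_mirror * t       # mirror displacement maps to depth in the sample arm
```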

Figure 13

Typical optical setup of an OCT system containing a moving reference mirror (Time-Domain OCT). Free space design with no fiber optics.

Scanning the light beam on the retina enables non-invasive cross-sectional imaging with micrometer resolution. OCT is based on low coherence interferometry [114–117]. In conventional interferometry with long coherence length, which is the case with laser interferometry, interference occurs over a long distance (meters). In OCT, low coherence light is used. A low coherence light source consists of a finite bandwidth of frequencies rather than just a single frequency. Thanks to the use of broadband light sources (emitting over a broad range of frequencies), this interference is shortened to a distance of micrometers. Broad bandwidth can be produced by superluminescent light emitting diodes (SLDs) or lasers emitting extremely short pulses (femtosecond lasers). With no lateral X-Y scanning, the information from only one point on the retina is read, at a depth defined by the position of the reference mirror. Lateral (transverse) scanning provides a 2D image for the particular depth chosen. In some designs, instead of X-Y scanning, a camera functioning as a two-dimensional detector array is used as a sensor (full-field OCT). There are two types of designs that use a moving reference mirror – a free-space and a fiber-based design. A free-space design (as in Figure 13) can provide very high resolution images by using custom-designed lenses, compensating components in the reference arm, and dynamic focusing to prevent loss of contrast [118]. Instead of dynamic focusing, the more popular fiber-based systems reduce the effects of transverse (lateral) resolution loss by acquiring and subsequently fusing multiple tomograms obtained at different depths at the same transverse location [110, 119–122]. Figure 14 shows a generalized fiber-based TD OCT system.

Figure 14

A generalized design of a fiber-based Time-Domain OCT system.

OCT typically employs near-infrared (NIR) light. The use of relatively long wavelengths allows the light to penetrate deeper into the scattering medium. Confocal microscopy, as used in cSLOs, typically penetrates less deeply into the retina. The transverse resolution of optical coherence tomography is the same as for conventional microscopy, being determined by the focusing of the optical beam. The minimum size to which an optical beam can be focused is inversely proportional to the numerical aperture, i.e. the focusing angle of the beam [110, 123]:

$$\Delta x = \frac{4\lambda}{\pi}\cdot\frac{f}{d}$$
(7)

where λ is the wavelength, d is the spot size on the objective lens, and f is the focal length. High transverse resolution can be achieved by using a large numerical aperture and focusing the beam to a small spot size. In addition, the transverse resolution is related to the depth of focus or the confocal parameter b, which is two times the Rayleigh range $z_R$:

$$b = 2 z_R = \frac{\pi\,\Delta x^2}{2\lambda}$$
(8)

In other words, increasing the transverse resolution produces a decrease in the depth of focus. The signal-to-noise ratio (SNR) is given by the expression [123]:

$$SNR = 10\log\left(\frac{\eta P}{2 h\nu\,\mathrm{NEB}}\right)$$
(9)

where η is the quantum efficiency of the detector, hν is the photon energy, P is the signal power, and NEB is the noise-equivalent bandwidth of the electronic filter used to demodulate the signal. The axial resolution of OCT is primarily determined by the bandwidth of the low-coherence light source used for imaging. In this aspect, OCT is different from the cSLO, where the depth of focus can be limited by the numerical aperture of the pupil of the eye. For a source with a Gaussian spectral distribution, the axial resolution Δz is

$$\Delta z = \frac{2\ln 2}{\pi}\cdot\frac{\lambda_0^2}{\Delta\lambda}$$
(10)

where Δλ is the full width at half maximum (FWHM) wavelength range of the light source, and λ0 is the center wavelength [113]. Commercial “standard-resolution” OCT instruments use superluminescent diodes (SLDs) emitting light centered at 830 nm with 20–30 nm bandwidth, resulting in a ~10 μm axial resolution in the retina [124]. Ultrahigh-resolution OCT imaging (UHR OCT) [125, 126] achieves a better axial resolution of 2–3 μm, thereby enabling visualization of intraretinal structures. This advance was first demonstrated using ultrabroad-bandwidth, solid-state femtosecond Titanium:sapphire lasers [127, 128] instead of the traditional SLD. Ti:sapphire lasers are capable of providing an FWHM of 140–160 nm and in some cases over 250 nm. Further, a frequency-doubled Nd:YVO4 1.8 W laser (Excel, Laser Quantum) was reportedly integrated into the resonator layout, and a prototype of a prismless Ti:sapphire laser with 260 nm bandwidth at FWHM and 6.5-femtosecond pulse duration was developed for a wavelength range of 640–950 nm [129]. Femtosecond laser technology achieved unprecedented resolution, but is expensive and thus suitable mainly for fundamental research. More recently, cost-effective, broad-bandwidth SLD sources have been developed that approach the resolutions achieved by femtosecond lasers [130–133]. They comprise multiplexed SLDs consisting of two or three spectrally displaced SLDs, combined to synthesize a broad spectrum. With very wide-spectrum sources emitting over nearly a 100 nm wavelength range, OCT has achieved sub-micrometer resolution. Despite the disadvantage of spectrally modulated emission spectra producing sidelobes in the coherence function and image artifacts, multiplexed SLDs are the light source of choice for many commercial instruments, providing 5–8 μm axial resolution [124].
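
Plugging typical numbers into equations (7), (8) and (10) gives a feel for these trade-offs. The values below (an 830 nm source with 25 nm FWHM bandwidth, a 2 mm beam at the pupil and a 17 mm effective focal length of the eye) are only illustrative assumptions, not the specification of any instrument:

```python
import numpy as np

lam0 = 830e-9        # center wavelength [m]
dlam = 25e-9         # FWHM bandwidth of the source [m]
f = 17e-3            # approximate effective focal length of the eye [m]
d = 2e-3             # beam diameter at the pupil [m]

dx = (4 * lam0 / np.pi) * (f / d)                 # eq. (7): transverse resolution
b = np.pi * dx**2 / (2 * lam0)                    # eq. (8): confocal parameter (depth of focus)
dz = (2 * np.log(2) / np.pi) * lam0**2 / dlam     # eq. (10): axial resolution (in air)

print(f"transverse resolution ~ {dx * 1e6:.1f} um")   # ~9 um
print(f"depth of focus        ~ {b * 1e6:.0f} um")    # ~150 um
print(f"axial resolution      ~ {dz * 1e6:.1f} um (in air)")  # ~12 um
```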

Figure 15 shows pathology examples detected with the TD OCT instrument STRATUS OCT™, courtesy of Carl Zeiss Meditec. The left panel shows a macular hole with posterior vitreous detachment. The right panel presents pigment epithelial detachment. The structures of the retina are color-coded.

Figure 15

Pathology examples detected with the TD OCT instrument STRATUS OCT™, courtesy of Carl Zeiss Meditec. The left panel shows a macular hole with posterior vitreous detachment. The right panel presents pigment epithelial detachment. The structures of the retina are color-coded.

Optical coherence tomography in the Fourier domain (FD OCT, spectral radar, spectral domain OCT)

It can be shown that the cross spectral density function of two waves (in this case the reference and the sample wave) can be obtained as the Fourier transform of the cross-correlation function [110]:

S_{ij}(k) = \mathcal{F}\{\, r_{ij}(\Delta L) \,\}
(11)

where k = 2π/λ0 is the wavenumber, r_ij(ΔL) are the cross-correlation functions of the two waves, and ΔL = cτ, τ being the time delay corresponding to the round-trip optical path length difference between the two arms [116]. The amplitude of the spectrum of the backscattered light, I(k), can be measured for different wavenumbers k using a spectrometer. The inverse Fourier transform of the measured spectral intensity theoretically gives the same signal as obtained by low-coherence interferometry, providing a function of depth for each point, without a moving reference mirror [110, 134, 135]:

s_{ij}(z) = \mathcal{F}^{-1}\{\, r_{ij}(\Delta L) \,\} = \mathcal{F}^{-1}\{\, I(k) \,\}
(12)

In fact, similar to (6), the total interference spectrum I(k) for a scatterer at a distance z can be calculated as [110]:

\mathcal{F}^{-1}\{I(k)\} = \mathcal{F}^{-1}\{S(k)\} \otimes \left[\, \delta(z) + 0.5\,\hat{a}(z) + 0.125\,H\{\hat{a}(z)\} \,\right] = A \otimes (B + C + D)
(13)

where S(k) is the spectrum of the source. The useful signal C (the middle convolution term) is the scattering amplitude a(z), i.e. the strength of the scattering versus the depth of the sample. The first convolution is the Fourier transformation of the source spectrum located at z = 0, and the last convolution stands for the autocorrelation terms, describing the mutual interference of the scattered elementary waves [110].

Thus, compared to TD OCT, only the transverse scanning procedure remains in FD OCT. Figure 16 shows a typical fiber-optic implementation of Fourier domain OCT. As in TD OCT, a broad-bandwidth source is used. In contrast to TD OCT, the slow mechanical depth scan is replaced by a spectral measurement performed with a diffraction grating and a photodetector array (here a CCD). The signal is measured in the spectral domain, and the Fourier transform then delivers the scattering profile in the spatial domain. The interference spectrum I(k) for a single scatterer at a certain distance z1 from the reference plane is a cosine function multiplied by the source spectrum S(k). The Fourier transform yields a peak at the frequency that corresponds to the scatterer location. With FD OCT, the measurable axial range is limited by the resolution of the spectrometer. It has been shown [110] that the maximum resolvable depth is

Z_{\max} = \frac{\lambda_0^2}{4\, n\, d\lambda}
(14)

where dλ denotes the wavelength sampling interval of the spectrometer and n is the refractive index of the medium.
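
The principle described above can be illustrated with a toy simulation: a Gaussian source spectrum is modulated by a single scatterer, and a Fourier transform of the sampled spectrum recovers the scatterer depth without any moving reference mirror. All numbers (bandwidth, depth, pixel count) are assumptions chosen for illustration, and the background subtraction is a simplification of what real instruments do.

```python
import numpy as np

# Toy FD OCT A-scan: one scatterer at depth z1 behind the reference plane (all values assumed)
lam0, bw = 830e-9, 50e-9                      # center wavelength and source bandwidth [m]
z1 = 300e-6                                   # scatterer depth [m]
N = 2048                                      # number of spectrometer pixels
k0 = 2 * np.pi / lam0
k_span = 2 * np.pi * (2 * bw) / lam0 ** 2     # sampled wavenumber span (~2x the bandwidth)
k = np.linspace(k0 - k_span / 2, k0 + k_span / 2, N)

S = np.exp(-(((k - k0) / (k_span / 6)) ** 2))       # Gaussian source spectrum S(k)
I = S * (1 + 0.5 * np.cos(2 * k * z1))              # measured interference spectrum I(k)

# Subtract the source background, then Fourier transform the spectrum to get the depth profile
a_scan = np.abs(np.fft.fft(I - S))
dk = k[1] - k[0]
z_axis = np.fft.fftfreq(N, d=dk) * np.pi            # depth axis (factor pi from the double-pass term 2kz)
peak_z = z_axis[np.argmax(a_scan[:N // 2])]

z_max = np.pi / (2 * dk)                            # maximum resolvable depth, cf. Eq. (14) with n = 1
print(f"recovered depth ~ {peak_z * 1e6:.0f} um (true 300 um), z_max ~ {z_max * 1e6:.0f} um")
```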

Figure 16

A typical fiber-optic implementation of the Fourier domain OCT (FD OCT). The slow mechanical depth scan is replaced by a spectral measurement consisting of diffraction grating and photodetector array.

Figure 17 shows pathology examples detected with the CIRRUS HD-OCT™ FD OCT, courtesy of Carl Zeiss Meditec. The left panel shows age-related macular degeneration. The right panel presents a lamellar macular hole. Figure 18 shows photoreceptor disruption of the retina (right), observed in a section marked with a green line on the transversal image (left). The image was obtained with the SPECTRALIS® FD OCT from Heidelberg Engineering. This instrument has enhanced the role of FD OCT by integrating it with a cSLO. Courtesy of Heidelberg Engineering.

Figure 17

Pathology examples detected with the CIRRUS HD-OCT™ FD OCT, courtesy of Carl Zeiss Meditec. The left panel shows age-related macular degeneration. The right panel presents a lamellar macular hole.

Figure 18

Photoreceptor disruption of the retina (right), observed in a section marked with a green line on the transversal image (left). The image was obtained with the SPECTRALIS® FD OCT, courtesy of Heidelberg Engineering.

Swept Source Optical Coherence Tomography (SS OCT, Wavelength Tuning)

In Swept Source OCT (Figure 19) the wavelength-dependent intensity data are not acquired simultaneously by using a broadband light source and a spectrometer. Instead, the wavelength of the source is tuned, and a single photodetector records the wavelengths sequentially [136]. The light intensity at the photodetector at wavelength λ of the tunable laser can be calculated as [137]:

I = I_s + I_r + 2\sqrt{I_s I_r}\,\cos(2\pi\,\Delta\Phi)
(15)

where I_s and I_r are the intensities reflected from the sample and the reference arm, respectively, and ΔΦ is the phase difference between the two beams [110]:

\Delta\Phi = \frac{2L}{\lambda} = \frac{2Lk}{2\pi}
(16)

with k being the wavenumber corresponding to wavelength λ. The phase difference ΔΦ changes with the wavenumber, causing the intensity at the photodetector to change with a frequency [110]:

f = \frac{d\,\Delta\Phi}{dt} = \frac{d\,\Delta\Phi}{dk}\,\frac{dk}{dt} = \frac{L}{\pi}\,\frac{dk}{dt}
(17)
Figure 19

A Swept Source OCT system – typical design.

The above equation shows that the signal frequency at the detector is directly proportional to the tuning rate of the wavenumber, dk/dt, and to the path difference L. With a constant dk/dt (i.e., a linear wavenumber sweep), L can be calculated by means of a Fourier transform of the time-dependent intensity recorded at the photodetector. Fourier-transforming the time-dependent beat signal yields the sample depth structure. In other words, the magnitude of the beat signal defines the amplitude reflectance, while the beat frequency defines the depth position of the light-scattering sites in the sample [110].
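
A minimal numerical sketch of this relationship is given below: a single reflector at an assumed path difference L produces a beat signal whose frequency, recovered by an FFT, returns L via equation (17). The sweep rate, sampling rate and reflectivities are invented for illustration and do not describe any particular swept-source system.

```python
import numpy as np

# Toy swept-source OCT signal for a single reflector at path difference L (assumed values)
L = 500e-6                 # path-length difference [m]
dk_dt = 1.0e9              # wavenumber tuning rate [rad m^-1 s^-1]
fs = 2.0e6                 # detector sampling rate [Hz]
T = 1.0e-3                 # duration of one sweep [s]
t = np.arange(0, T, 1 / fs)
k = 7.57e6 + dk_dt * t     # linear wavenumber ramp around 830 nm (2*pi/830e-9 ~ 7.57e6 rad/m)

Is, Ir = 0.01, 1.0
I = Is + Ir + 2 * np.sqrt(Is * Ir) * np.cos(2 * L * k)   # Eqs. (15)-(16): intensity at the detector

# The beat frequency of I(t) encodes the depth: f = (L / pi) * dk/dt, Eq. (17)
spec = np.abs(np.fft.rfft(I - I.mean()))
freqs = np.fft.rfftfreq(len(I), d=1 / fs)
f_beat = freqs[np.argmax(spec)]
L_est = np.pi * f_beat / dk_dt

print(f"beat frequency ~ {f_beat / 1e3:.0f} kHz -> estimated L ~ {L_est * 1e6:.0f} um (true 500 um)")
```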

Polarisation Sensitive Optical Coherence Tomography (PS OCT)

Originally, the emphasis of OCT was on the reconstruction of 2D maps of changes in tissue reflectivity, with depth information. In 1992, however, Hee et al. [138] reported the first OCT system capable of also measuring changes in the polarization state of light (birefringence). In 1997, the first polarization-sensitive (PS) images of biological tissue (bovine tendon) were presented, also examining the effect of thermal damage on collagen birefringence [139]. A further theoretical contribution to the determination of depth-resolved Stokes parameters of backscattered light using PS OCT was made two years later by the same authors [140]. Thus, PS OCT became a functional extension that takes advantage of the additional polarization information carried by the reflected light. In the meantime it had become known that several ocular structures possess birefringent properties. In the retina these are the RNFL around the optic disc [4], which can help in the diagnostics of glaucoma [141], and the Henle fiber layer around the fovea [1], which can be used for the detection of macular defects. As reported in [142], the optic nerve head is surrounded by the birefringent scleral rim, which may be used as a landmark in studies of optic disc anatomy. In addition, a polarization-scrambling layer is located near the retinal pigment epithelium (RPE), which may become useful in the diagnostics of age-related macular degeneration (AMD) [143]. The main advantage of PS OCT is the enhanced contrast and specificity in identifying structures in OCT images by detecting induced changes in the polarization state of light reflected from the sample. Moreover, changes in birefringence may indicate changes in the functionality, structure or viability of tissues [144].

Birefringence changes the polarization state of light through a difference (Δn) in the refractive index for light polarized along and perpendicular to the optic axis of a material. The difference in refractive index introduces a phase retardation δ between orthogonal light components that is proportional to the distance z traveled through the birefringent medium [144]:

\delta = \frac{2\pi\,\Delta n\, z}{\lambda}

(18)

A simplified configuration of a PS OCT (time-domain) is shown in Figure 20. It is based on early open-air designs [138, 140, 144, 145]. Linearly polarized light (produced by either a laser diode or a superluminescent diode with a polarizer) is split into reference and sample beams by a non-polarizing beam splitter (NPBS). Light in the reference arm passes through a zero-order quarter-wave plate (QWPr) with its slow axis oriented at 22.5° to the incident horizontal polarization. After reflection from the reference mirror, the light returns through QWPr, now linearly polarized at 45°, providing equal reference beam power in the two orthogonal directions (vertical and horizontal). Light in the sample arm passes through another quarter-wave plate (QWPs), oriented at 45° to the incident horizontal polarization, and through focusing optics, producing circularly polarized light incident on the sample. Light reflected from the sample generally has elliptical polarization, determined by the birefringence of the sample. The reflected light passes through QWPs again. After recombination in the detection arm, the light is split into its horizontal (p) and vertical (s) linear polarization components by a polarizing beam splitter (PBS) and is then measured by the corresponding detectors. The two photodetector signals are demodulated separately to produce a two-channel scan of reflectivity versus distance. By using a PBS and quarter-wave plates, and by detecting in two orthogonal linear polarization modes, this design is made sensitive to phase retardation, and the measurements are independent of sample axis rotation in the plane perpendicular to the sample beam [138].
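
The detection principle can be checked with a short Jones-calculus sketch: circularly polarized light probes a sample modeled as a linear retarder, and the two detected amplitudes depend only on the sample retardation, not on the orientation of its optic axis. This is an idealized model of the configuration in Figure 20, not the cited implementation; the sign conventions and channel assignment depend on the chosen quarter-wave-plate orientation, and the sample values are arbitrary.

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def retarder(gamma, theta):
    """Jones matrix of a linear retarder: retardance gamma, fast axis at angle theta."""
    return rot(theta) @ np.diag([np.exp(-1j * gamma / 2), np.exp(1j * gamma / 2)]) @ rot(-theta)

qwp45 = retarder(np.pi / 2, np.pi / 4)         # quarter-wave plate at 45 deg in the sample arm
E_in = np.array([1.0, 0.0])                    # horizontally polarized input light

delta = np.deg2rad(30)                         # assumed single-pass sample retardation
for axis_deg in (0, 20, 45, 70):               # sample optic-axis orientation (should not matter)
    sample_roundtrip = retarder(2 * delta, np.deg2rad(axis_deg))   # double pass -> retardance 2*delta
    E_out = qwp45 @ sample_roundtrip @ qwp45 @ E_in                # sample-arm field reaching the PBS
    A_h, A_v = np.abs(E_out)                   # envelope amplitudes in the two detection channels
    delta_est = np.degrees(np.arctan2(A_h, A_v))
    print(f"axis {axis_deg:2d} deg -> measured retardation {delta_est:.1f} deg")
```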

Figure 20

A simplified configuration of a Polarization Sensitive OCT (time-domain).

Several groups have also reported fiber-based PS-OCT systems [146–148]. Compared to open-air systems, fiber-based PS-OCT systems are easier to construct. Yet, in a fiber-based system, maintaining the polarization state in the fiber is a challenge because of stress in the fibers and the non-circular shape of the fiber core. Further developments include Spectral Domain PS OCT where, just as in standard FD OCT, the reference mirror is stationary and the photodetectors (now a pair) are replaced by a pair of spectrometers. This led to a significant increase in speed [149, 150]. More recently, an even faster Swept Source PS OCT was reported [151], achieving a 350 kHz A-scan rate.

It should be noted that the cornea is also birefringent. This means that a beam probing the retina, be it initially of circular or linear polarization, will have elliptical polarization after passing through the cornea. A similar problem arises in scanning laser polarimetry, discussed earlier, where a variable retarder is used to help compensate for individual corneal birefringence [152]. An interesting approach was taken by Pircher et al. [147], who report a software-based corneal birefringence compensation that uses the polarization state of light backscattered at the retinal surface to measure corneal birefringence (in terms of retardation and axis orientation) and then compensates for it numerically.
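
A toy numerical version of such software compensation is sketched below. It assumes, for simplicity, that the instrument delivers round-trip Jones matrices at each depth (real systems, including the cited one, work with measured polarization states rather than full Jones matrices), and the corneal and RNFL retarder values are invented. The corneal contribution measured at the retinal surface is "square-rooted" and divided out of the deeper measurement.

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def retarder(gamma, theta):
    return rot(theta) @ np.diag([np.exp(-1j * gamma / 2), np.exp(1j * gamma / 2)]) @ rot(-theta)

# Unknown cornea and a deeper birefringent layer (e.g. RNFL) -- values are illustrative only
cornea = retarder(np.deg2rad(40), np.deg2rad(25))
rnfl_roundtrip = retarder(np.deg2rad(30), np.deg2rad(-10))

# What the instrument "sees" (cornea traversed twice; a linear retarder's Jones matrix is
# symmetric, so the forward and backward passes use the same matrix in the lab frame)
M_surface = cornea @ cornea                       # reflection at the retinal surface
M_deep = cornea @ rnfl_roundtrip @ cornea         # reflection below the birefringent layer

# Estimate the single-pass corneal matrix as the principal square root of M_surface
w, V = np.linalg.eig(M_surface)
cornea_est = V @ np.diag(np.sqrt(w)) @ np.linalg.inv(V)

# Numerically strip the corneal contribution from the deeper measurement
inv_c = np.linalg.inv(cornea_est)
rnfl_est = inv_c @ M_deep @ inv_c

print("compensation successful:", np.allclose(rnfl_est, rnfl_roundtrip))
```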

An international group [153] recently reported a PS OCT-based method to quantify the double-pass phase retardation induced strictly by the Henle fiber layer. In three patients, the study showed an elevated double-pass retardation of 20° to 23°, occurring at an average retinal eccentricity of ca. 1.8° (range 1.5° to 2.25°). The method was also able to determine the fast axis of retardation. These results are consistent with previous knowledge of the radial pattern of the Henle fibers.
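
For orientation, such double-pass retardation values can be converted into a single-pass retardance in nanometres (phase divided by 360°, times the wavelength); the probe wavelength used below is an assumption typical for retinal PS OCT, not a value taken from [153].

```python
# Convert a reported double-pass phase retardation (degrees) into an approximate
# single-pass retardance in nanometres: retardance = (phase / 360 deg) * wavelength.
wavelength_nm = 840.0   # assumed probe wavelength, typical for retinal PS OCT
for double_pass_deg in (20, 23):
    single_pass_nm = (double_pass_deg / 2) / 360 * wavelength_nm
    print(f"{double_pass_deg} deg double pass -> ~{single_pass_nm:.0f} nm single-pass retardance")
```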

Birefringence changes polarization in a predictable manner, which can be described by either the Mueller [154] or Jones [155] matrix of a linear retarder. A good review of PS OCT is given in [148].

Retinal identification using retinal scanning

The amount of birefringence can drop locally if a blood vessel is encountered during retinal scanning. This enables retinal identification for biometric purposes [109, 156]. A circular scan of 20° around the optic disc catches all major blood vessels entering the fundus through the disc. The blood vessels often displace the nerve fibers in the RNFL, and since they are not birefringent, a steep drop (‘blip’) in the signal, proportional to the size of the blood vessel, is observed. Similar drops are seen on GDx images, where the blood vessels appear as dark lines on the bright RNFL background. Birefringence-based retinal identification has certain advantages over the more traditional light-absorption method [157, 158], such as the use of near-infrared (NIR) light, which does not cause discomfort or pupil constriction in the test person the way the visible (green) light used in absorption methods does.
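
A minimal sketch of how such a birefringence-based vessel pattern might be turned into a rotation-tolerant biometric template is shown below; the scan signal, vessel positions, drop threshold and matching score are all invented for illustration and do not reproduce the cited systems.

```python
import numpy as np

def vessel_template(signal, rel_drop=0.3, window=45):
    """Binary mask of scan angles where the birefringence signal drops sharply ('blips')."""
    baseline = np.convolve(signal, np.ones(window) / window, mode="same")  # smoothed local baseline
    return (signal < (1 - rel_drop) * baseline).astype(float)

def match_score(template_a, template_b):
    """Best circular cross-correlation, so a rotated scan of the same eye still matches."""
    corr = np.fft.ifft(np.fft.fft(template_a) * np.conj(np.fft.fft(template_b))).real
    return corr.max() / max(template_a.sum(), 1.0)

# Synthetic circular scan (360 samples) with dips at hypothetical vessel positions
angles = np.arange(360)
scan = np.ones(360)
for center, width in [(30, 6), (100, 4), (200, 8), (280, 5)]:
    scan -= 0.6 * np.exp(-0.5 * ((angles - center) / width) ** 2)

enrolled = vessel_template(scan)
probe = vessel_template(np.roll(scan, 17) + 0.02 * np.random.randn(360))  # same eye, rotated + noisy
print(f"match score: {match_score(enrolled, probe):.2f}")  # close to 1 for the same eye
```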

Discussion and conclusions

As Table 1 shows, different imaging techniques allow different types of diagnostic information to be obtained from the retina. Compared to fundus imaging, scanning technologies are generally applied to smaller portions of the retina (smaller FOV), but allow higher resolution and depth penetration. TD OCT scanning technologies were generally slower than fundus imaging, but with the latest developments in FD OCT and SS OCT, image acquisition times have been shortened, allowing applications in pediatric patients as well. At the same time, with the move towards longer wavelengths (above 1 μm), depth penetration has increased significantly. In addition, increasing the bandwidth of the source and the speed of the axial scan has led to a significant improvement in depth (axial) resolution. It should be mentioned that the ability to image cellular-level structures, such as cone photoreceptors, depends on additional factors, such as image stabilization features, numerical aperture (influencing the lateral resolution), scanning speed, etc.

Retinal scanning technologies have revolutionized ophthalmic diagnostics in the last few decades, decisively improving the ability to detect and follow the progression of eye diseases such as glaucoma, AMD and amblyopia. Current research effort is directed towards improving speed and resolution, in order to enable data acquisition and 3D reconstruction of retinal substructures. Newer technologies are expected to deliver more affordable instrumentation and thus reduce health care costs. Polarization-sensitive technologies are expected to enhance contrast and specificity in identifying structures in OCT images by detecting induced changes in the polarization state of light reflected from the retina or cornea.

Retinal scanning methods are used not only for obtaining diagnostic information from the retina through imaging. A growing trend is to use such methods for screening – for example, screening for amblyopia [102–105] and screening for retinopathy of prematurity (ROP) through assessment of vascular characteristics [159]. Non-retinal applications of ophthalmic imaging technologies include non-contact biometry, identification and monitoring of intraocular masses and tumors, and elucidation of abnormalities in the cornea, iris, and crystalline lens, all at micrometer resolution [120]. The Duke group has recently developed a handheld OCT device for use in infants [160–162]. The same group reported intraoperative use of an OCT device for imaging during macular surgery [163].

A number of diagnostic instruments have been developed combining different modalities. Some of them are commercially available – e.g. the CIRRUS photo system of Carl Zeiss Meditec, combining their CIRRUS HD-OCT with a fundus camera, and the SPECTRALIS instrument from Heidelberg Engineering, combining SD OCT with cSLO.

Retinal birefringence scanning can be used in a variety of medical and non-medical applications, such as detection of central fixation, eye alignment, biometric devices, eye tracking etc.

Authors’ information

The author’s original background is in biomedical engineering, electrical and computer engineering, and signal processing. He has been with the Wilmer Eye Institute at Johns Hopkins since August 2000, developing instrumentation for retinal birefringence scanning for medical applications. In the last 14 years he acquired knowledge and developed skills in ophthalmology, optics, optoelectronics, modeling of polarization-sensitive systems, and prototype development in the field of ophthalmic optics. He is collaborating with researchers at other universities, working in the same area.

Abbreviations

ADHD:

Attention deficit hyperactivity disorder

AMD:

Age-related macular degeneration

AO:

Adaptive optics

AOSLO:

Adaptive optics scanning laser ophthalmoscope

CCD:

Charge-coupled device (here used for a camera)

cSLO:

Confocal scanning laser ophthalmoscope

CTIS:

Computed tomographic imaging spectrometer

DSP:

Digital signal processor

ECC:

Enhanced corneal compensation

FCC:

Fixed corneal compensator

FD OCT:

Fourier domain optical coherence tomography

FPA:

Focal plane array

FOV:

Field of view

FWHM:

Full width at half maximum (wavelength range)

fs:

Scanning frequency (RBS)

GDx:

SLP-based nerve fiber analyzer (developed by Laser Diagnostic Technologies and marketed later by Carl Zeiss Meditec)

GDxVCC:

The GDx with a variable corneal compensator

Hb:

Deoxygenated hemoglobin

HbO2:

Oxygenated hemoglobin

HRT:

Heidelberg Retinal Tomograph

HSI:

Hyperspectral imaging

IMS:

Image mapping spectroscopy

NA:

Numerical aperture

NEB:

The noise equivalent bandwidth of the electronic filter used to demodulate the OCT signal

NIR:

Near-infrared light

NPBS:

Non-polarizing beam splitter

OCT:

Optical Coherence Tomography

PBS:

Polarizing beam splitter

PS:

Polarization-sensitive

PSD:

Polarization-state detector

PSF:

Point-spread function

PS OCT:

Polarization-sensitive Optical Coherence Tomography

QWP:

Quarter-wave plate (having a retardance of λ/4)

RBC:

Red blood cells

RBS:

Retinal birefringence scanning

RNFL:

Retinal nerve fiber layer

ROP:

Retinopathy of prematurity

S0, S1, S2, S3:

Elements of the Stokes vector, describing the polarization state of light

SD OCT:

Spectral domain optical coherence tomography (same as FD OCT)

SLD:

Superluminescent light emitting diode

SLO:

Scanning laser ophthalmoscope

SLP:

Scanning laser polarimeter / scanning laser polarimetry

SS OCT:

Swept Source Optical Coherence Tomography

TD OCT:

Time domain optical coherence tomography

TSLO:

Tracking scanning laser ophthalmoscope

TSNIT:

Temporal-Superior-Nasal-Inferior-Temporal maps, displaying the thickness values of the retina around the optic nerve

UHR OCT:

Ultrahigh-resolution OCT

VCC:

Variable corneal compensator.

References

  1. Brink HB, van Blokland GJ: Birefringence of the human foveal area assessed in vivo with Mueller-matrix ellipsometry. J Opt Soc Am A Opt Image Sci 1988, 5: 49–57. 10.1364/JOSAA.5.000049

  2. Cope WT, Wolbarsht ML, Yamanashi BS: The corneal polarization cross. J Opt Soc Am 1978, 68: 1139–1141. 10.1364/JOSA.68.001139

  3. Dreher A, Reiter K: Scanning laser polarimetry of the retinal nerve fiber layer. Proceedings SPIE 1746: 34–41.

  4. Dreher AW, Reiter K, Weinreb RN: Spatially resolved birefringence of the retinal nerve fiber layer assessed with a retinal laser ellipsometer. Appl Opt 1992, 31: 3730–3735. 10.1364/AO.31.003730

  5. Weinreb RN, Dreher AW, Coleman A, Quigley H, Shaw B, Reiter K: Histopathologic validation of Fourier-ellipsometry measurements of retinal nerve fiber layer thickness. Arch Ophthalmol 1990, 108: 557–560. 10.1001/archopht.1990.01070060105058

  6. Knighton RW, Huang XR, Cavuoto LA: Corneal birefringence mapped by scanning laser polarimetry. Opt Express 2008, 16: 13738–13751. 10.1364/OE.16.013738

  7. DeHoog E, Schwiegerling J: Fundus camera systems: a comparative analysis. Appl Opt 2009, 48: 221–228. 10.1364/AO.48.000221

  8. Delori FC, Parker JS, Mainster MA: Light levels in fundus photography and fluorescein angiography. Vis Res 1980, 20: 1099–1104. 10.1016/0042-6989(80)90046-2

  9. Lu G, Fei B: Medical hyperspectral imaging: a review. J Biomed Opt 2014, 19: 10901. doi: 10.1117/1.JBO.19.1.010901 10.1117/1.JBO.19.1.010901

  10. Riesenberg R, Dillner U: HADAMARD imaging spectrometer with micro slit matrix. P Soc Photo-Opt Ins 1999, 3753: 203–213.

  11. Morris HR, Hoyt CC, Treado PJ: Imaging Spectrometers for Fluorescence and Raman Microscopy - Acoustooptic and Liquid-Crystal Tunable Filters. Appl Spectrosc 1994, 48: 857–866. 10.1366/0003702944029820

  12. Chao TH, Zhou HY, Xia XW, Serati S: Hyperspectral Imaging using Electro-Optic Fourier Transform Spectrometer. Opt Pattern Recognit Xv 2004, 5437: 163–170. 10.1117/12.548075

  13. Murguia JE, Reeves TD, Mooney JM, Ewing WS, Shepherd FD, Brodzik A: A compact visible/near infrared hyperspectral imager. Infrared Detectors Focal Plane Arrays Vi 2000, 4028: 457–468. 10.1117/12.391760

  14. Liu WH, Barbastathis G, Psaltis D: Volume holographic hyperspectral imaging. Appl Opt 2004, 43: 3581–3599. 10.1364/AO.43.003581

  15. Patel SR, Flanagan JG, Shahidi AM, Sylvestre JP, Hudson C: A prototype hyperspectral system with a tunable laser source for retinal vessel imaging. Invest Ophthalmol Vis Sci 2013, 54: 5163–5168. 10.1167/iovs.13-12124

  16. Kester RT, Bedard N, Gao L, Tkaczyk TS: Real-time snapshot hyperspectral imaging endoscope. J Biomed Opt 2011, 16: 056005. 10.1117/1.3574756

  17. Bernhardt PA: Direct Reconstruction Methods for Hyperspectral Imaging with Rotational Spectrotomography. J Opt Soc Am a-Opt Image Sci Vis 1995, 12: 1884–1901. 10.1364/JOSAA.12.001884

  18. Johnson WR, Wilson DW, Fink W, Humayun M, Bearman G: Snapshot hyperspectral imaging in ophthalmology. J Biomed Opt 2007, 12: 014036. 10.1117/1.2434950

  19. Okamoto T, Yamaguchi I: Simultaneous acquisition of spectral image-information. Opt Lett 1991, 16: 1277–1279. 10.1364/OL.16.001277

  20. Bulygin FV, Vishnyakov GN, Levin GG, Karpukhin DV: Spectrotomography - a New Method for Production of 2d-Object Spectrograms. Opt Spektrosk+ 1991, 71: 974–978.

  21. Johnson WR, Wilson DW, Bearman G: All-reflective snapshot hyperspectral imager for ultraviolet and infrared applications. Opt Lett 2005, 30: 1464–1466. 10.1364/OL.30.001464

  22. Descour M, Dereniak E: Computed-Tomography Imaging Spectrometer - Experimental Calibration and Reconstruction Results. Appl Opt 1995, 34: 4817–4826. 10.1364/AO.34.004817

  23. Descour MR, Volin CE, Dereniak EL, Gleeson TM, Hopkins MF, Wilson DW, Maker PD: Demonstration of a computed-tomography imaging spectrometer using a computer-generated hologram disperser. Appl Opt 1997, 36: 3694–3698. 10.1364/AO.36.003694

  24. Elliott AD, Gao L, Ustione A, Bedard N, Kester R, Piston DW, Tkaczyk TS: Real-time hyperspectral fluorescence imaging of pancreatic beta-cell dynamics with the image mapping spectrometer. J Cell Sci 2012, 125: 4833–4840. 10.1242/jcs.108258

  25. Fawzi AA, Lee N, Acton JH, Laine AF, Smith RT: Recovery of macular pigment spectrum in vivo using hyperspectral image analysis. J Biomed Opt 2011, 16: 106008. 10.1117/1.3640813

  26. Gao L, Smith RT, Tkaczyk TS: Snapshot hyperspectral retinal camera with the Image Mapping Spectrometer (IMS). Biomed Opt Exp 2012, 3: 48–54. 10.1364/BOE.3.000048

  27. Kester RT, Gao L, Tkaczyk TS: Development of image mappers for hyperspectral biomedical imaging applications. Appl Opt 2010, 49: 1886–1899. 10.1364/AO.49.001886

  28. Kester RT, Bedard N, Tkaczyk TS: Image mapping spectrometry - a novel hyperspectral platform for rapid snapshot imaging. Proc SPIE 2011., 8048: Accession number: WOS:000292737000018, Section: Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, abstract 80480J. (May 13, 2011) doi: 10.1117/12.884627

  29. Cheung LK, Eaton A: Age-related macular degeneration. Pharmacotherapy 2013, 33: 838–855. 10.1002/phar.1264

  30. Schweizer J, Hollmach J, Steiner G, Knels L, Funk RH, Koch E: Hyperspectral imaging - A new modality for eye diagnostics. Biomedizinische Technik (Biomedical engineering) 2012,57(Suppl.1):293–296.

  31. Alabboud I: Human retinal oximetry using hyperspectral imaging. Ph.D. Thesis. Edinburgh, United Kingdom: Heriot-Watt University, School of Engineering and Physical Sciences; 2009:280.

  32. Harvey AR, Lawlor A, McNaught AI, Williams JW, Fletcher-Holmes DW: Hyperspectral imaging for the detection of retinal diseases. Imaging Spectrom Viii 2002, 4816: 325–335. 10.1117/12.451693

  33. Lawlor J, Fletcher-Holmes DW, Harvey AR, McNaught AI: In vivo hyperspectral imaging of human retina and optic disc. Invest Ophthalmol Vis Sci 2002, 43: U1257-U1257.

  34. Khoobehi B, Beach JM, Kawano H: Hyperspectral imaging for measurement of oxygen saturation in the optic nerve head. Invest Ophthalmol Vis Sci 2004, 45: 1464–1472. 10.1167/iovs.03-1069

  35. Beach J, Ning J, Khoobehi B: Oxygen saturation in optic nerve head structures by hyperspectral image analysis. Curr Eye Res 2007, 32: 161–170. 10.1080/02713680601139192

  36. Hirohara Y, Okawa Y, Mihashi T, Yamaguchi T, Nakazawa N, Tsuruga Y, Aoki H, Maeda N, Uchida I, Fujikado T: Validity of retinal oxygen saturation analysis: Hyperspectral imaging in visible wavelength with fundus camera and liquid crystal wavelength tunable filter. Opt Rev 2007, 14: 151–158. 10.1007/BF02919416

  37. Hardarson SH: Retinal Oxymetry. Ph.D. Thesis. Reykjavík: University of Iceland, Faculty of Medicine, School of Health Sciences; 2013:47.

  38. Minsky M: Memoir on Inventing the Confocal Scanning Microscope. 1988.

  39. Laser Scanning Confocal Microscopy. [http://www.micro.magnet.fsu.edu/primer/techniques/confocal/]

  40. Webb RH, Hughes GW, Pomerantzeff O: Flying spot TV ophthalmoscope. Appl Opt 1980, 19: 2991–2997. 10.1364/AO.19.002991

  41. Webb RH, Hughes GW: Scanning laser ophthalmoscope. IEEE Trans Bio-Med Eng 1981, 28: 488–492.

  42. Webb RH: Optics for laser rasters. Appl Opt 1984, 23: 3680. 10.1364/AO.23.003680

  43. Plesch A, Klingbeil U, Rappl W, Schroedel C: Scanning Ophthalmic Imaging. In Laser Scanning Ophthalmoscopy and Tomography. Edited by: Nasemann JE, Burk ROW. Munich, Germany: Quintessenz; 1990:109–121.

  44. Vieira P, Manivannan A, Sharp PF, Forrester JV: True colour imaging of the fundus using a scanning laser ophthalmoscope. Physiol Meas 2002, 23: 1–10. 10.1088/0967-3334/23/1/301

  45. Webb RH, Hughes GW, Delori FC: Confocal scanning laser ophthalmoscope. Appl Opt 1987, 26: 1492–1499. 10.1364/AO.26.001492

  46. Sharp PF, Manivannan A: The scanning laser ophthalmoscope. Phys Med Biol 1997, 42: 951–966. 10.1088/0031-9155/42/5/014

  47. Vieira P, Manivannan A, Lim CS, Sharp P, Forrester JV: Tomographic reconstruction of the retina using a confocal scanning laser ophthalmoscope. Physiol Meas 1999, 20: 1–19. 10.1088/0967-3334/20/1/001

  48. Sharp PF, Manivannan A, Vieira P, Hipwell JH: Laser imaging of the retina. Br J Ophthalmol 1999, 83: 1241–1245. 10.1136/bjo.83.11.1241

  49. Elsner AE, Burns SA, Hughes GW, Webb RH: Reflectometry with a scanning laser ophthalmoscope. Appl Opt 1992, 31: 3697–3710. 10.1364/AO.31.003697

  50. Remky A, Beausencourt E, Elsner AE: Angioscotometry with the scanning laser ophthalmoscope. Comparison of the effect of different wavelengths. Invest Ophthalmol Vis Sci 1996, 37: 2350–2355.

  51. Lompado A, Smith MH, Hillman LW, Denninghoff KR: Multispectral confocal scanning laser ophthalmoscope for retinal vessel oxymetry. Proc. SPIE 3920. Spectral Imaging: Instrumentation, Applications, and Analysis, 67. (March 14, 2000) doi: 10.1117/12.379584

  52. Remky A, Elsner AE, Morandi AJ, Beausencourt E, Trempe CL: Blue-on-yellow perimetry with a scanning laser ophthalmoscope: small alterations in the central macula with aging. J Opt Soc Am A Opt Image Sci Vis 2001, 18: 1425–1436. 10.1364/JOSAA.18.001425

  53. Sliney D, Wolbarsht M: Safety with Lasers and Other Optical Sources. New York and London: Plenum Press; 1980.

  54. Klingbeil U: Safety aspects of laser scanning ophthalmoscopes. Health Phys 1986, 51: 81–93. 10.1097/00004032-198607000-00006

  55. de Wit GC: Safety norms for Maxwellian view laser scanning devices based on the ANSI standards. Health Phys 1996, 71: 766–769. 10.1097/00004032-199611000-00020

  56. Manivannan A, Kirkpatrick JN, Sharp PF, Forrester JV: Novel approach towards colour imaging using a scanning laser ophthalmoscope. Br J Ophthalmol 1998, 82: 342–345. 10.1136/bjo.82.4.342

  57. Wykes WN, Pyott AA, Ferguson VG: Detection of diabetic retinopathy by scanning laser ophthalmoscopy. Eye 1994,8(Pt 4):437–439.

  58. Manivannan A, Kirkpatrick JN, Sharp PF, Forrester JV: Clinical investigation of an infrared digital scanning laser ophthalmoscope. Br J Ophthalmol 1994, 78: 84–90. 10.1136/bjo.78.2.84

  59. Seymenoglu G, Baser E, Ozturk B: Comparison of spectral-domain optical coherence tomography and Heidelberg retina tomograph III optic nerve head parameters in glaucoma. Ophthalmologica 2013, 229: 101–105. 10.1159/000341574

  60. Chan EW, Liao J, Wong R, Loon SC, Aung T, TY W, Cheng C-Y: Diagnostic Performance of the ISNT Rule for Glaucoma Based on the Heidelberg Retinal Tomograph. Trans Vis Sci Technol (TVST) 2013, 2: 1–10.

  61. Hammer DX, Ferguson RD, Magill JC, White MA, Elsner AE, Webb RH: Compact scanning laser ophthalmoscope with high-speed retinal tracker. Appl Opt 2003, 42: 4621–4632. 10.1364/AO.42.004621

  62. LaRocca F, Dhalla AH, Kelly MP, Farsiu S, Izatt JA: Optimization of confocal scanning laser ophthalmoscope design. J Biomed Opt 2013, 18: 076015. 10.1117/1.JBO.18.7.076015

  63. Beckers JM: Adaptive optics for astronomy: principles, performance, and applications. Annu Rev Astron Astrophys 1993, 31: 13–62. 10.1146/annurev.aa.31.090193.000305

  64. Adaptive Optics. [http://en.wikipedia.org/wiki/Adaptive_optics]

  65. Liang J, Williams DR, Miller DT: Supernormal vision and high-resolution retinal imaging through adaptive optics. J Opt Soc Am A Opt Image Sci Vis 1997, 14: 2884–2892. 10.1364/JOSAA.14.002884

  66. Roorda A, Williams DR: The arrangement of the three cone classes in the living human eye. Nature 1999, 397: 520–522. 10.1038/17383

  67. Roorda A: Adaptive optics ophthalmoscopy. J Refract Surg 2000, 16: S602-S607.

  68. Roorda A, Williams DR: Retinal imaging using adaptive optics. In Customized Corneal Ablation: The Quest for SuperVision. Edited by: MacRae S, Krueger R, Applegate RA. Thorofare, NJ: SLACK, Inc; 2001.

  69. Roorda A, Romero-Borja F, Donnelly Iii W, Queener H, Hebert T, Campbell M: Adaptive optics scanning laser ophthalmoscopy. Opt Express 2002, 10: 405–412. 10.1364/OE.10.000405

  70. Zhang Y, Roorda A: Evaluating the lateral resolution of the adaptive optics scanning laser ophthalmoscope. J Biomed Opt 2006, 11: 014002. 10.1117/1.2166434

  71. Burns SA, Marcos S, Elsner AE, Bara S: Contrast improvement of confocal retinal imaging by use of phase-correcting plates. Opt Lett 2002, 27: 400–402. 10.1364/OL.27.000400

  72. Hammer DX, Ferguson RD, Bigelow CE, Iftimia NV, Ustun TE, Burns SA: Adaptive optics scanning laser ophthalmoscope for stabilized retinal imaging. Opt Express 2006, 14: 3354–3367. 10.1364/OE.14.003354

  73. Zhang Y, Poonja S, Roorda A: MEMS-based adaptive optics scanning laser ophthalmoscopy. Opt Lett 2006, 31: 1268–1270. 10.1364/OL.31.001268

  74. Burns SA, Tumbar R, Elsner AE, Ferguson D, Hammer DX: Large-field-of-view, modular, stabilized, adaptive-optics-based scanning laser ophthalmoscope. J Opt Soc Am A Opt Image Sci Vis 2007, 24: 1313–1326. 10.1364/JOSAA.24.001313

  75. Vogel CR, Arathorn DW, Roorda A, Parker A: Retinal motion estimation in adaptive optics scanning laser ophthalmoscopy. Opt Express 2006, 14: 487–497. 10.1364/OPEX.14.000487

  76. Arathorn DW, Yang Q, CR V, Zhang Y, Tiruveedhula P, Roorda A: Retinally stabilized cone-targeted stimulus delivery. Opt Express 2007, 15: 13731–13744. 10.1364/OE.15.013731

  77. Yang Q, Arathorn DW, Tiruveedhula P, Vogel CR, Roorda A: Design of an integrated hardware interface for AOSLO image capture and cone-targeted stimulus delivery. Opt Express 2010, 18: 17841–17858. 10.1364/OE.18.017841

  78. Sheehy CK, Yang Q, Arathorn DW, Tiruveedhula P, de Boer JF, Roorda A: High-speed, image-based eye tracking with a scanning laser ophthalmoscope. Biomed Opt Express 2012, 3: 2611–2622. 10.1364/BOE.3.002611

  79. Weale RA: On the birefringence of rods and cones. Pflugers Arch - Eur J Physiol 1971, 329: 244–257. 10.1007/BF00586618

  80. Delori FC, Webb RH, Parker JS: Macular birefringence. Invest Ophthalmol Vis Sci 1979, 19(Suppl): Issue 53, ARVO abstract.

  81. Hochheimer BF, Kues HA: Retinal polarization effects. Appl Opt 1982, 21: 3811–3818. 10.1364/AO.21.003811

  82. Klein Brink HB, Van Blokland GJ: Birefringence of the human foveal area assessed in vivo with Mueller-matrix ellipsometry. J Opt Soc Am A Opt Image Sci 1988, 5: 49–57. 10.1364/JOSAA.5.000049

  83. Hauge PS: Mueller matrix ellipsometry with imperfect compensators. J Opt Soc Am 1978, 68: 1519–1528. 10.1364/JOSA.68.001519

  84. Azzam RMA: Photopolarimetric Measurement of Mueller Matrix by Fourier-Analysis of a Single Detected Signal. Opt Lett 1978, 2: 148–150. 10.1364/OL.2.000148

  85. Shurcliff WA: Polarized Light: Production and Use. Cambridge, Massachusetts: Harvard University Press; 1962.

  86. Collett E, Schaeffer B: Polarized Light for Scientists and Engineers. 1st edition. Long Branch, New Jersey: The PolaWave Group; 2012.

  87. RNFL Analysis with GDxVCC: A primer and Clinical Guide. Dublin, CA: Carl Zeiss Meditec, Inc. /Laser Diagnostic Technologies; 2004.

  88. Zhou Q, Weinreb RN: Individualized compensation of anterior segment birefringence during scanning laser polarimetry. Invest Ophthalmol Vis Sci 2002, 43: 2221–2228.

  89. Knighton RW, Huang XR, Greenfield DS: Analytical model of scanning laser polarimetry for retinal nerve fiber layer assessment. Invest Ophthalmol Vis Sci 2002, 43: 383–392.

  90. Knighton R, Huang XR: Analytical methods for scanning laser polarimetry. Opt Express 2002, 10: 1179–1189. 10.1364/OE.10.001179

  91. Reus NJ, Zhou Q, Lemij HG: Enhanced imaging algorithm for scanning laser polarimetry with variable corneal compensation. Invest Ophthalmol Vis Sci 2006, 47: 3870–3877. 10.1167/iovs.05-0067

  92. Sehi M, Ume S, Greenfield DS: Scanning laser polarimetry with enhanced corneal compensation and optical coherence tomography in normal and glaucomatous eyes. Invest Ophthalmol Vis Sci 2007, 48: 2099–2104. 10.1167/iovs.06-1087

  93. Knighton RW, Huang XR: Corneal compensation in scanning laser polarimetry: characterization and analysis. Invest Ophth Vis Sci 2000,41(4):S92.

  94. Bagga H, Greenfield DS, Knighton RW: Scanning laser polarimetry with variable corneal compensation: identification and correction for corneal birefringence in eyes with macular disease. Invest Ophthalmol Vis Sci 2003, 44: 1969–1976. 10.1167/iovs.02-0923

  95. Toth M, Hollo G: Enhanced corneal compensation for scanning laser polarimetry on eyes with atypical polarisation pattern. Br J Ophthalmol 2005, 89: 1139–1142. 10.1136/bjo.2005.070011

  96. Hunter DG, Patel SN, Guyton DL: Automated detection of foveal fixation by use of retinal birefringence scanning. Appl Opt 1999, 38: 1273–1279. 10.1364/AO.38.001273

  97. Hunter DG, Sandruck JC, Sau S, Patel SN, Guyton DL: Mathematical modeling of retinal birefringence scanning. J Opt Soc Am A Opt Image Sci 1999, 16: 2103–2111. 10.1364/JOSAA.16.002103

  98. Guyton DL, Hunter DG, Patel SN, Sandruck JC, Fry RL: Eye Fixation Monitor and Tracker. U.S. Patent No. 6,027,216. 2000.

  99. Hunter DG, Nassif DS, Piskun NV, Winsor R, Gramatikov BI, Guyton DL: Pediatric Vision Screener 1: instrument design and operation. J Biomed Opt 2004, 9: 1363–1368. 10.1117/1.1805560

  100. Gramatikov B, Irsch K, Mullenbroich M, Frindt N, Qu Y, Gutmark R, Wu YK, Guyton D: A device for continuous monitoring of true central fixation based on foveal birefringence. Ann Biomed Eng 2013, 41: 1968–1978. 10.1007/s10439-013-0818-2

  101. Gramatikov B: Detecting fixation on a target using time-frequency distributions of a retinal birefringence scanning signal. Biomed Eng Online 2013, 12: 41. 10.1186/1475-925X-12-41

  102. Hunter DG, Piskun NV, Guyton DL, Gramatikov BI, Nassif D: Clinical performance of the Pediatric Vision Screener. Ft. Lauderdale, FL: ARVO Conference; 2004. Invest Ophthalmol Vis Sci. 2004;45:ARVO abstract accession number 3488

  103. Nassif DS, Piskun NV, Gramatikov BI, Guyton DL, Hunter DG: Pediatric Vision Screener 2: pilot study in adults. J Biomed Opt 2004, 9: 1369–1374. 10.1117/1.1805561

  104. Nassif DS, Piskun NV, Hunter DG: The Pediatric Vision Screener III: detection of strabismus in children. Arch Ophthalmol 2006, 124: 509–513. 10.1001/archopht.124.4.509

  105. Loudon SE, Rook CA, Nassif DS, Piskun NV, Hunter DG: Rapid, high-accuracy detection of strabismus and amblyopia using the pediatric vision scanner. Invest Ophthalmol Vis Sci 2011, 52: 5043–5048. 10.1167/iovs.11-7503

  106. Gramatikov BI, Zalloum OHY, Wu YK, Hunter DG, Guyton DL: Birefringence-based eye fixation monitor with no moving parts. J Biomed Opt 2006, 11: 034025. 10.1117/1.2209003

  107. Gramatikov BI, Zalloum OH, Wu YK, Hunter DG, Guyton DL: Directional eye fixation sensor using birefringence-based foveal detection. Appl Opt 2007, 46: 1809–1818. 10.1364/AO.46.001809

  108. Irsch K, Gramatikov B, Wu YK, Guyton D: Modeling and minimizing interference from corneal birefringence in retinal birefringence scanning for foveal fixation detection. Biomed Opt Express 2011, 2: 1955–1968. 10.1364/BOE.2.001955

  109. Agopov M, Gramatikov BI, Wu YK, Irsch K, Guyton DL: Use of retinal nerve fiber layer birefringence as an addition to absorption in retinal scanning for biometric purposes. Appl Opt 2008, 47: 1048–1053. 10.1364/AO.47.001048

  110. Chen Y, Bousi E, Pitris C, Fujimoto J: Optical Coherence Tomography: Introduction and Theory. In Handbook of Biomedical Optics. Edited by: Boas DA, Pitris C, Ramanujam N. Boca Raton, London, New York: CRC Press, Taylor and Francis Group; 2011.

  111. Pan Y, Birngruber R, Rosperich J, Engelhardt R: Low-coherence optical tomography in turbid tissue: theoretical analysis. Appl Opt 1995, 34: 6564–6574. 10.1364/AO.34.006564

  112. Schmitt JM: Optical Coherence Tomography (OCT): A review. IEEE J Select Topics Quantum Electron 1999, 5: 1205–1215. 10.1109/2944.796348

  113. Hee MR: Optical Coherence Tomography: Theory. In Handbook of Optical Coherence Tomography. Edited by: Bouma BE, Tearney GJ. New York, Basel: Marcel Dekker; 2002.

  114. Fercher AF, Mengedoht K, Werner W: Eye-length measurement by interferometry with partially coherent light. Opt Lett 1988, 13: 186–188. 10.1364/OL.13.000186

  115. Riederer SJ: Current technical development of magnetic resonance imaging. IEEE Eng Med Biol Mag 2000, 19: 34–41.

  116. Born M, Wolf E: Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light. Cambridge: Cambridge University Press; 1999.

  117. Bouma BE, Tearney GJ: Handbook of Optical Coherence Tomography. New York, Basel: Marcel Dekker; 2002.

  118. Bizheva K, Povazay B, Hermann B, Sattmann H, Drexler W, Mei M, Holzwarth R, Hoelzenbein T, Wacheck V, Pehamberger H: Compact, broad-bandwidth fiber laser for sub-2-microm axial resolution optical coherence tomography in the 1300-nm wavelength region. Opt Lett 2003, 28: 707–709. 10.1364/OL.28.000707

  119. Huang D, Swanson EA, Lin CP, Schuman JS, Stinson WG, Chang W, Hee MR, Flotte T, Gregory K, Puliafito CA, Fujimoto JG: Optical coherence tomography. Science 1991, 254: 1178–1181. 10.1126/science.1957169

  120. Izatt JA, Hee MR, Swanson EA, Lin CP, Huang D, Schuman JS, Puliafito CA, Fujimoto JG: Micrometer-scale resolution imaging of the anterior eye in vivo with optical coherence tomography. Arch Ophthalmol 1994, 112: 1584–1589. 10.1001/archopht.1994.01090240090031

  121. Hee MR, Izatt JA, Swanson EA, Huang D, Schuman JS, Lin CP, Puliafito CA, Fujimoto JG: Optical coherence tomography of the human retina. Arch Ophthalmol 1995, 113: 325–332. 10.1001/archopht.1995.01100030081025

  122. Drexler W, Morgner U, Kartner FX, Pitris C, Boppart SA, Li XD, Ippen EP, Fujimoto JG: In vivo ultrahigh-resolution optical coherence tomography. Opt Lett 1999, 24: 1221–1223. 10.1364/OL.24.001221

  123. Fujimoto JG: Optical Coherence Tomography: Introduction. In Handbook of Optical Coherence Tomography. Edited by: Bouma BE, Tearney GJ. New York, Basel: Marcel Dekker; 2002.

  124. Drexler W, Fujimoto JG: State-of-the-art retinal optical coherence tomography. Prog Retin Eye Res 2008, 27: 45–88. 10.1016/j.preteyeres.2007.07.005

  125. Drexler W, Morgner U, Ghanta RK, Kartner FX, Schuman JS, Fujimoto JG: Ultrahigh-resolution ophthalmic optical coherence tomography. Nat Med 2001, 7: 502–507. 10.1038/86589

  126. Drexler W: Ultrahigh-resolution optical coherence tomography. J Biomed Opt 2004, 9: 47–74. 10.1117/1.1629679

  127. Kowalevicz AM Jr, Schibli TR, Kartner FX, Fujimoto JG: Ultralow-threshold Kerr-lens mode-locked Ti:Al(2)O(3) laser. Opt Lett 2002, 27: 2037–2039. 10.1364/OL.27.002037

  128. Unterhuber A, Povazay B, Hermann B, Sattmann H, Drexler W, Yakovlev V, Tempea G, Schubert C, Anger EM, Ahnelt PK, Stur M, Morgan JE, Cowey A, Jung G, Le T, Stingl A: Compact, low-cost Ti:Al2O3 laser for in vivo ultrahigh-resolution optical coherence tomography. Opt Lett 2003, 28: 905–907. 10.1364/OL.28.000905

  129. Unterhuber A, Povazay B, Bizheva K, Hermann B, Sattmann H, Stingl A, Le T, Seefeld M, Menzel R, Preusser M, Budka H, Schubert C, Reitsamer H, Ahnelt PK, Morgan JE, Cowey A, Drexler W: Advances in broad bandwidth light sources for ultrahigh resolution optical coherence tomography. Phys Med Biol 2004, 49: 1235–1246. 10.1088/0031-9155/49/7/011

  130. Adler DS, Ko TH, Konorev AK, Mamedov DS, Prokhorov VV, Fujimoto JJ, Yakubovich SD: Broadband light source based on quantum-well superluminescent diodes for high-resolution optical coherence tomography. Quantum Electron+ 2004, 34: 915–918. 10.1070/QE2004v034n10ABEH002799

  131. Ko TH, Adler DC, Fujimoto JG, Mamedov D, Prokhorov V, Shidlovski V, Yakubovich S: Ultrahigh resolution optical coherence tomography imaging with a broadband superluminescent diode light source. Opt Express 2004, 12: 2112–2119. 10.1364/OPEX.12.002112

  132. Wojtkowski M, Srinivasan VJ, Ko TH, Fujimoto JG, Kowalczyk A, Duker JS: Ultrahigh-resolution, high-speed, Fourier domain optical coherence tomography and methods for dispersion compensation. Opt Express 2004, 12: 2404–2422. 10.1364/OPEX.12.002404

  133. Cense B, Chen TC, Nassif N, Pierce MC, Yun SH, Park BH, Bouma BE, Tearney GJ, de Boer JF: Ultra-high speed and ultra-high resolution spectral-domain optical coherence tomography and optical Doppler tomography in ophthalmology. Bulletin de la Societe belge d'ophtalmologie 2006, 123–132.

  134. Fercher AF, Hitzenberger CK, Kamp G, Elzaiat SY: Measurement of intraocular distances by backscattering spectral interferometry. Opt Commun 1995, 117: 43–48. 10.1016/0030-4018(95)00119-S

  135. Fercher AF, Drexler W, Hitzenberger CK, Lasser T: Optical coherence tomography - principles and applications. Rep Prog Phys 2003, 66: 239–303. 10.1088/0034-4885/66/2/204

  136. Chinn SR, Swanson EA, Fujimoto JG: Optical coherence tomography using a frequency-tunable optical source. Opt Lett 1997, 22: 340–342. 10.1364/OL.22.000340

  137. Fujimoto JG, Pitris C, Boppart SA, Brezinski ME: Optical coherence tomography: an emerging technology for biomedical imaging and optical biopsy. Neoplasia 2000, 2: 9–25. 10.1038/sj.neo.7900071

  138. Hee MR, Huang D, Swanson EA, Fujimoto JG: Polarization-sensitive Low-coherence reflectometer for birefringence characterization and ranging. J Opt Soc Am B 1992, 9: 903–908.

  139. de Boer JF, Milner TE, van Gemert MJC, Nelson JS: Two-dimensional birefringence imaging in biological tissue by polarization-sensitive optical coherence tomography. Opt Lett 1997, 22: 934–936. 10.1364/OL.22.000934

  140. de Boer JF, Milner TE, Nelson JS: Determination of the depth-resolved Stokes parameters of light backscattered from turbid media by use of polarization-sensitive optical coherence tomography. Opt Lett 1999, 24: 300–302. 10.1364/OL.24.000300

  141. Weinreb RN, Zangwill L, Berry CC, Bathija R, Sample PA: Detection of glaucoma with scanning laser polarimetry. Arch Ophthalmol 1998, 116: 1583–1589. 10.1001/archopht.116.12.1583

  142. Pircher M, Gotzinger E, Leitgeb R, Sattmann H, Findl O, Hitzenberger CK: Imaging of polarization properties of human retina in vivo with phase resolved transversal PS-OCT. Opt Express 2004, 12: 5940–5951. 10.1364/OPEX.12.005940

  143. Pircher M, Gotzinger E, Findl O, Michels S, Geitzenauer W, Leydolt C, Schmidt-Erfurth U, Hitzenberger CK: Human macula investigated in vivo with polarization-sensitive optical coherence tomography. Invest Ophthalmol Vis Sci 2006, 47: 5487–5494. 10.1167/iovs.05-1589

  144. De Boer JF, Srinivas SM, Nelson JS, Milner TE, Ducros MG: Polarization-Sensitive Optical Coherence Tomography. In Handbook of Optical Coherence Tomography. Edited by: Bouma BE, Tearney GJ. New York, Basel: Marcel Dekker; 2002:237–274.

  145. Hitzenberger C, Goetzinger E, Sticker M, Pircher M, Fercher A: Measurement and imaging of birefringence and optic axis orientation by phase resolved polarization sensitive optical coherence tomography. Opt Express 2001, 9: 780–790. 10.1364/OE.9.000780

  146. Cense B, Chen HC, Park BH, Pierce MC, de Boer JF: In vivo birefringence and thickness measurements of the human retinal nerve fiber layer using polarization-sensitive optical coherence tomography. J Biomed Opt 2004, 9: 121–125. 10.1117/1.1627774

  147. Pircher M, Gotzinger E, Baumann B, Hitzenberger CK: Corneal birefringence compensation for polarization sensitive optical coherence tomography of the human retina. J Biomed Opt 2007, 12: 041210. 10.1117/1.2771560

  148. Pircher M, Hitzenberger CK, Schmidt-Erfurth U: Polarization sensitive optical coherence tomography in the human eye. Prog Retin Eye Res 2011, 30: 431–451. 10.1016/j.preteyeres.2011.06.003

  149. Gotzinger E, Baumann B, Pircher M, Hitzenberger CK: Polarization maintaining fiber based ultra-high resolution spectral domain polarization sensitive optical coherence tomography. Opt Express 2009, 17: 22704–22717. 10.1364/OE.17.022704

  150. Zotter S, Pircher M, Torzicky T, Baumann B, Yoshida H, Hirose F, Roberts P, Ritter M, Schutze C, Gotzinger E, Trasischker W, Vass C, Schmidt-Erfurth U, Hitzenberger CK: Large-field high-speed polarization sensitive spectral domain OCT and its applications in ophthalmology. Biomed Opt Express 2012, 3: 2720–2732. 10.1364/BOE.3.002720

  151. Torzicky T, Marschall S, Pircher M, Baumann B, Bonesi M, Zotter S, Gotzinger E, Trasischker W, Klein T, Wieser W, Biedermann B, Huber R, Andersen P, Hitzenberger CK: Retinal polarization-sensitive optical coherence tomography at 1060 nm with 350 kHz A-scan rate using an Fourier domain mode locked laser. J Biomed Opt 2013, 18: 26008. 10.1117/1.JBO.18.2.026008

  152. Choplin NT, Zhou Q, Knighton RW: Effect of individualized compensation for anterior segment birefringence on retinal nerve fiber layer assessments as determined by scanning laser polarimetry. Ophthalmology 2003, 110: 719–725. 10.1016/S0161-6420(02)01899-7

  153. Cense B, Wang Q, Lee S, Zhao L, Elsner AE, Hitzenberger CK, Miller DT: Henle fiber layer phase retardation measured with polarization-sensitive optical coherence tomography. Biomed Opt Express 2013, 4: 2296–2306.

  154. Jiao SL, Yao G, Wang LHV: Depth-resolved two-dimensional Stokes vectors of backscattered light and Mueller matrices of biological tissue measured with optical coherence tomography. Appl Opt 2000, 39: 6318–6324. 10.1364/AO.39.006318

  155. de Boer JF, Srinivas SM, Malekafzali A, Chen ZP, Nelson JS: Imaging thermally damaged tissue by polarization sensitive optical coherence tomography. Opt Express 1998, 3: 212–218. 10.1364/OE.3.000212

  156. Agopov M: Retinal Identification. In Biometrics, Ch. 5. Edited by: Yang J. Rijeka, Croatia: InTech; 2011:99–112.

  157. Hill R: Apparatus and method for identifying individuals through their retinal vasculature patterns. US patent No. US 4,109,237. 1978.

  158. Hill R: Rotating beam ocular identification apparatus and method. US patent No. US 4,393,366. 1986.

  159. Maldonado RS, Yuan E, Tran-Viet D, Rothman AL, Tong AY, Wallace DK, Freedman SF, Toth CA: Three-Dimensional Assessment of Vascular and Perivascular Characteristics in Subjects with Retinopathy of Prematurity. Ophthalmology 2014. 2014 Jan 21. doi: 10.1016/j.ophtha.2013.12.004. [Epub ahead of print]

  160. Allingham MJ, Cabrera MT, O'Connell RV, Maldonado RS, Tran-Viet D, Toth CA, Freedman SF, El-Dairi MA: Racial variation in optic nerve head parameters quantified in healthy newborns by handheld spectral domain optical coherence tomography. J AAPOS 2013, 17: 501–506. 10.1016/j.jaapos.2013.06.014

  161. Cabrera MT, Maldonado RS, Toth CA, O'Connell RV, Chen BB, Chiu SJ, Farsiu S, Wallace DK, Stinnett SS, Panayotti GM, Swamy GK, Freedman SF: Subfoveal fluid in healthy full-term newborns observed by handheld spectral-domain optical coherence tomography. Am J Ophthalmol 2012, 153: 167–175. e163 10.1016/j.ajo.2011.06.017

  162. Chavala SH, Farsiu S, Maldonado R, Wallace DK, Freedman SF, Toth CA: Insights into advanced retinopathy of prematurity using handheld spectral domain optical coherence tomography imaging. Ophthalmology 2009, 116: 2448–2456. 10.1016/j.ophtha.2009.06.003

  163. Dayani PN, Maldonado R, Farsiu S, Toth CA: Intraoperative use of handheld spectral domain optical coherence tomography imaging in macular surgery. Retina 2009, 29: 1457–1468. 10.1097/IAE.0b013e3181b266bc

Acknowledgements

A part of the author’s work related to retinal scanning was funded by The Hartwell Foundation through two grants; 1) “Development of a Pediatric Vision Screener” - an Individual Biomedical Research Award), and 2) “Diagnosis and Management of Infant Retinal Disease” - a 2012 Biomedical Research Collaboration Award with Duke University (co-PI). The author would also like to thank Carl Zeiss Meditec and Heidelberg Engineering for providing images and permission to use them as examples in this review.

Author information

Corresponding author

Correspondence to Boris I Gramatikov.

Additional information

Competing interests

The author declares that he has no competing interests.

Authors’ contributions

BG did the literature search, conceived the review, and wrote the manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Gramatikov, B.I. Modern technologies for retinal scanning and imaging: an introduction for the biomedical engineer. BioMed Eng OnLine 13, 52 (2014). https://doi.org/10.1186/1475-925X-13-52

Keywords