Editor's Pick

Listening to Light and Seeing Through: Biomedical Photoacoustic Imaging

In today’s clinical practice, human vision remains the primary diagnostic tool for capturing the shapes and colors of tissue surfaces. Similarly, the camera, a very popular manmade device, can capture what our eyes see and store the images in memory. Both can “see” because they are optical detectors, sensitive to a certain spectrum of incoming light. But can we “see through” the body?

The ability to look inside living tissues and organs is important for diagnosing disease and monitoring treatment outcomes. However, optical visualization deep within biological tissue is hampered by strong light scattering, a situation analogous to driving through fog. Existing optical imaging methods, such as confocal or two-photon microscopy, optical coherence tomography, and diffuse optical tomography, suffer from either shallow imaging depth (~1 mm) or poor spatial resolution. Conventional medical imaging modalities, such as magnetic resonance imaging, X-ray computed tomography, ultrasound (US) imaging, and nuclear imaging, have been intensively investigated and are widely used in the clinic. However, none of these can render what our eyes see, because they do not use the optical spectrum as a contrast mechanism. Photoacoustic imaging (PAI) overcomes these limitations by delivering high-resolution optical contrast from depths of many millimeters to centimeters in highly scattering living tissues.

PAI has been extensively explored for biological and medical applications during the last decade. The underlying physical effect is the transduction of energy from light to sound, akin to the everyday conversion of lightning into thunder: upon seeing a flash of lightning, one hears the thunder arrive a few seconds later. If multiple observers (at least three) record the thunder at different locations, the origin of the lightning can be calculated from the temporal delays with a simple triangulation method. PAI adopts a similar reconstruction principle to form multidimensional (one-, two-, or three-dimensional) images of biological tissues.
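The triangulation idea above can be sketched in a few lines of code. The sensor positions, source location, and speed of sound below are illustrative assumptions, and the brute-force grid search merely stands in for the far more sophisticated reconstruction algorithms used in practice:

```python
import math

# Hypothetical 2-D example: locate an acoustic source from the absolute
# arrival times measured at three sensors (all numbers are illustrative).
C = 1.5  # speed of sound in soft tissue, mm/us (~1,500 m/s)

sensors = [(0.0, 0.0), (40.0, 0.0), (0.0, 40.0)]  # sensor positions, mm
source = (12.0, 25.0)                              # true source position, mm

def tof(p, s):
    """Time of flight from point p to sensor s: distance / speed."""
    return math.dist(p, s) / C

arrivals = [tof(source, s) for s in sensors]

# Inverse problem: brute-force grid search minimizing the sum of squared
# time-of-flight residuals over a 40 mm x 40 mm field of view.
best, best_err = None, float("inf")
for ix in range(401):
    for iy in range(401):
        p = (ix * 0.1, iy * 0.1)
        err = sum((tof(p, s) - t) ** 2 for s, t in zip(sensors, arrivals))
        if err < best_err:
            best, best_err = p, err

print(best)  # recovers the source location to within the 0.1-mm grid step
```

Practical PAI reconstruction (e.g., delay-and-sum beamforming or model-based inversion) applies the same time-of-flight principle across hundreds of detector elements simultaneously.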

The credit for the discovery of the basic physical phenomenon in 1880 belongs to Alexander Graham Bell, who confirmed the sound excitation from mechanically chopped sunlight, naming his invention the “photophone” or “spectrophone.” Today, the terms photoacoustic (PA) and optoacoustic are equally used to describe the effect of acoustic wave generation by transient light absorption. Spectroscopic PA sensing in biological samples was first demonstrated in the early 1970s. Extensive advances in laser, computer, and US technologies facilitated the development of PAI systems throughout the 1990s with the first noninvasive structural and functional images acquired in 2003 from the brains of living mice. Since then, PAI has gained tremendous popularity as a new and powerful addition to the arsenal of biological and medical imaging modalities.

The process of PA excitation includes the following basic steps: 1) illumination of the imaged object and absorption of the pulsed or modulated light, 2) a rapid temperature rise, and 3) thermoelastic expansion and emission of broadband US (referred to as PA waves). The amplitude of the emitted waves at each point in the medium is proportional to the amount of light absorbed at that point. The induced waves travel through the medium and can therefore be detected by US detectors placed around or within the imaged object. More importantly, because both scattered and unscattered light generate PA waves, the imaging depth of PAI in biological tissues extends beyond 5 cm. The spatial resolution of PAI is determined mainly by the acoustic detection parameters; it is therefore not directly affected by light scattering and remains high in deep tissue.
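The proportionality in the steps above is commonly summarized by the initial pressure relation p0 = Γ·μa·F, where Γ is the dimensionless Grüneisen parameter, μa the optical absorption coefficient, and F the local light fluence. A back-of-envelope sketch, with all numerical values being illustrative assumptions rather than measured data:

```python
# Initial photoacoustic pressure rise: p0 = Gamma * mu_a * F.
# All values below are illustrative assumptions for soft tissue.

gamma = 0.2      # Grüneisen parameter (dimensionless; ~0.2 is often quoted for tissue)
mu_a = 0.3       # optical absorption coefficient, 1/cm
fluence = 10.0   # local light fluence, mJ/cm^2

# Conveniently, 1 mJ/cm^3 equals 1 kPa, so the product reads directly in kilopascals.
p0_kpa = gamma * mu_a * fluence
print(f"initial pressure rise ~ {p0_kpa:.2f} kPa")
```

Because p0 scales linearly with the absorbed energy density μa·F, the detected PA signal maps optical absorption, which is the basis of PAI contrast.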

Preclinical applications of PAI have rapidly developed with imaging scanners, both experimental and commercial, found in many laboratories around the globe. PAI has been applied to image

  • single cells in vivo (e.g., red blood cells and melanoma cells)
  • vascular and lymphatic networks
  • angiogenesis
  • oxygen saturation of hemoglobin in microvessels
  • blood flow
  • metabolic rate
  • functional brain activity
  • drug delivery and treatment responses
  • molecular targeting with biomarkers and contrast agents
  • gene expression.

Furthermore, PAI is becoming a gold standard for many novel preclinical imaging applications.

Current clinical explorations mainly focus on imaging breast cancer and melanoma and on guiding sentinel node biopsy for breast cancer staging. However, a significant expansion of potential clinical applications is expected in the near future, including

  • prostate, thyroid, and head and neck cancer imaging
  • diagnosis of peripheral vascular and cardiovascular disease
  • monitoring early responses to neoadjuvant therapy
  • functional human neuroimaging
  • gastrointestinal tract imaging using endoscopic probes
  • intravascular imaging using catheters
  • monitoring of arthritis and inflammation
  • label-free histology
  • in vivo flow cytometry.

Our special issue on biomedical PAI captures some of the exciting recent progress. The review contributions and interviews broadly address the diversity of this vibrant research field, with topics ranging from technology development and image-reconstruction methods to contrast-enhancement approaches, applications in preclinical research and clinical imaging, and major commercialization efforts.


The work of Chulhong Kim was supported by research funds from a National IT Industry Promotion Agency (NIPA) IT Consilience Creative Program (NIPA2013-H0203-13-1001), a National Research Foundation (NRF) Engineering Research Center grant (NRF-2011-0030075), an NRF Pioneer Research Center Program (NRF-2014M3C1A3017229), and an NRF China-Republic of Korea Joint Research Project (NRF-2013K1A3A1A20046921) of the Ministry of Science, ICT and Future Planning, Republic of Korea. Daniel Razansky would like to acknowledge funding from the European Commission under starting grant ERC-2010-StG-260991 and initial training networks grant FP7-PEOPLE-2012-ITN-317526, the German-Israeli Foundation for Scientific Research and Development, and the Helmholtz Association of German Research Centers.