1 Introduction

As one of the most important information carriers in the physical world, high-dimensional continuous light signals can now be recorded at high quality in every dimension (space, time, angle, spectrum, and phase), thanks to the rapid development of imaging techniques. Benefiting from this improved ability to observe the physical world, both basic science and industrial technology have made milestone progress. Humans now demand full acquisition and perception of the physical world; however, observation in higher dimensions and across different scales challenges the capabilities of conventional imaging devices. On one hand, over the past several centuries the imaging model has been under continual development, and imaging elements such as light sources, optics, sensors, and light modulators have been improved and reinvented to achieve high-dimensional, high-resolution light modulation. On the other hand, in the information age, research areas such as digital signal processing, computer vision, machine learning, and big data processing are advancing in leaps, enabling larger and faster information computations. The co-evolution and collision of these two lines of research gave birth to, and continue to drive, computational imaging.

Computational imaging relies mainly upon coupled sampling and computational decoupling (i.e., reconstruction). Different from conventional imaging methods, computational imaging first couples the target information using new optical devices or materials, and designs new imaging mechanisms to modulate light in a specific way. The coupling is driven by demand, and the measurements contain the target information either directly or indirectly in diverse forms. Then, using the strong computation ability of modern computation resources, researchers design the corresponding reconstruction methods to restore the target visual information. Sometimes, further modulations, such as feedback loops, are introduced into the imaging systems to achieve the imaging goal. In computational imaging, optical systems are endowed with computational characteristics, such as modeling, optimization, coupling, and decoupling, which help overcome the shortcomings and limitations of conventional imaging devices, and enlarge the space for the processing that follows. Unlike conventional imaging methods which rely on readily available imaging devices, computational imaging techniques treat the whole imaging process systematically and target the capturing of high-dimensional, multi-scale, diverse, and high-quality visual information. In other words, computational imaging overcomes the limitations of conventional imaging methods and brings distinctively new insights and opportunities into the development of related fields, such as life and biomedical sciences, materials science, computer vision, and graphics.

2 History of computational photography

Imaging is one of the basic ways by which humans describe and understand the world. Each breakthrough in imaging techniques triggers substantial progress in related disciplines. The idea of combining computation with imaging emerged in the early stages of imaging history. For example, researchers recorded the interference or diffraction of outgoing light from the target object, and computationally reconstructed the phase and spectrum of the coherent light, and even the 3D structure of the target scene. Such information cannot be obtained from conventional intensity imaging methods. This kind of imaging method, based on first recording and then reconstructing computationally, has already been applied successfully in diverse imaging tasks, such as in the following two Nobel Prize works from the early years of imaging history: (1) molecular emission spectroscopy with an interferometer (Gebbie, 1961); (2) crystallography through X-ray diffraction (van Tilbeurgh et al., 1993). Later, researchers further exploited interference and diffraction by introducing specific phase modulations into the light path to cast spectrum, phase, and angle information into detectable intensity. Phase contrast microscopy (Zernike, 1955) is a typical example, which coordinates the phase plates on the illumination side and the objective side, and enables microscopic observation of transparent specimens. Since the invention of the computer, computation power has been increasing dramatically. It enables us both to capture large volumes of visual information and to reconstruct images rapidly, providing a stronger basis for research in computational imaging. For example, the synthetic aperture technique (Ryle, 1972) applied in radio telescopes and computed tomography (Brenner and Hall, 2007) make full use of high computation power and successfully reconstruct high-dimensional, high-resolution information from a large number of low-dimensional or low-resolution samples.
This computational imaging strategy is applied frequently and has contributed significantly to the imaging field; high-resolution nuclear magnetic resonance spectroscopy is a typical example (Ferguson and Phillips, 1967).

In the past few decades, computational imaging has grown rapidly due to the interaction and convergence of progress in multiple disciplines. Emerging digital image sensors, such as charge-coupled device (CCD) and complementary metal-oxide-semiconductor (CMOS) sensors, largely facilitate computational imaging with easy recording, computing, and transmission. The wide availability of various spatial light modulators inspires diverse computational imaging methods. For example, placing a spatial light modulator at different positions in the light path can achieve various phase (Vellekoop et al., 2010) or amplitude (Howard et al., 2013) modulations, and introducing optical gratings or prisms can code high-dimensional visual information. Light modulation techniques also promote the development of adaptive optics (Ji et al., 2010; Bifano, 2011), which overcomes the limitations of conventional optics in many extreme conditions. New light sources can also help in designing new imaging mechanisms, relieving the pressure on optical design by compensating from the illumination side. Recently, breakthroughs in femtosecond laser techniques (Assion et al., 1998) and the popularization of light-emitting diodes (LEDs) (Ding et al., 2014) have provided new tools for visual information encoding, such as ultra-high-speed imaging methods (Velten et al., 2013) and very low-cost LED-based microscopy (Zheng et al., 2013). Structured illumination techniques also play an important role in computational imaging. At the same time, the emergence of new computational theories, such as compressive sensing (Candès et al., 2006), provides a theoretical basis for the design of even more optimized computational imaging systems.

Overall, computational imaging has become a major trend across various imaging techniques and has already achieved great success. Table 1 lists most of the Nobel Prizes related to computational imaging. We can expect that future computational imaging techniques will extend the frontiers of our observation abilities, decrease the cost of various high-performance imaging setups, make fuller use of progress in related disciplines, introduce more new materials, and combine more imaging modalities. Developments in computational imaging will in turn advance diverse research areas.

Table 1 Nobel Prizes related to computational imaging

3 Advanced theoretical methods and computational imaging systems

In recent decades, the tidal wave of information technologies has resulted in the flourishing of computational imaging techniques. The progress in this field is pushing the envelope in our observation abilities either along a single dimension of light or across multiple dimensions jointly, and along the way bringing powerful tools to many disciplines, such as life sciences, medicine, and materials science. We will introduce some leading edge computational imaging methods and systems, from the perspective of the different dimensions of the light signal.

3.1 Spatial dimension

The spatial resolution of conventional imaging methods is limited in two main respects: (1) The resolution of an optical imaging system is always limited by diffraction; (2) Due to the limitation of the resolving power and spatial-bandwidth product of the objective lens, there exists a fundamental tradeoff between the field of view and the spatial resolution. In this section, we review the work carried out in two main areas in high-resolution computational imaging: computational super-resolution imaging and large-field-of-view, high-resolution imaging.

In 1873, the German scientist Ernst Abbe formulated the diffraction limit based on wave optics; i.e., there exists an upper limit on the resolution of an optical system. For a long time, this diffraction barrier blocked high-resolution imaging. Recently, by exploiting new imaging principles, researchers have broken the Abbe diffraction limit and achieved super-resolution imaging. Representative techniques include stimulated emission depletion (STED) (Hell and Wichmann, 1994; Hein et al., 2008), photoactivated localization microscopy (PALM) (Hess et al., 2006; Manley et al., 2008), stochastic optical reconstruction microscopy (STORM) (Rust et al., 2006), and structured illumination microscopy (SIM) (Gustafsson, 2005). The 2014 Nobel Prize in Chemistry was awarded jointly to these outstanding researchers for their breakthroughs in super-resolution imaging. Specifically, in STED, super-resolution images are generated by modulating the excitation illumination to shrink the effective emitting spot to sub-diffraction size. In contrast, STORM adopts photo-switchable molecules, activating and deactivating sparse subsets of molecules whose mutual distances exceed the Abbe diffraction limit; high-resolution imaging is achieved by localizing these sparsely activated molecules in each capture. In SIM, the illumination is modulated with structured patterns to encode the high-frequency components of the microscopic object. Illustrations and comparisons of the principles of these methods are shown in Fig. 1 (Schermelleh et al., 2010).

Fig. 1

Illustrations and comparisons of the super-resolution imaging techniques: (a) stimulated emission depletion (STED); (b) photoactivated localization microscopy (PALM) or stochastic optical reconstruction microscopy (STORM); (c) structured illumination microscopy (SIM) (Reprinted from Schermelleh et al. (2010), Copyright 2010, with permission from Elsevier)
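The localization step shared by PALM and STORM can be illustrated with a short simulation: a single activated molecule produces a diffraction-limited spot, yet its center can be estimated far below the pixel (and diffraction) scale. The sketch below is a toy version, assuming a noise-free Gaussian spot and centroid estimation; practical pipelines fit a Gaussian to noisy data, and all sizes here are illustrative.

```python
import numpy as np

def gaussian_spot(shape, center, sigma):
    """Render one diffraction-limited spot (approximated by a 2D Gaussian PSF)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((yy - center[0])**2 + (xx - center[1])**2) / (2 * sigma**2))

def localize(img):
    """Sub-pixel emitter position from the intensity-weighted centroid."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    s = img.sum()
    return (img * yy).sum() / s, (img * xx).sum() / s

# One sparsely activated molecule: the spot is several pixels wide
# (diffraction-limited), yet its center is recovered to sub-pixel precision.
true_pos = (10.3, 14.7)
img = gaussian_spot((32, 32), true_pos, sigma=2.0)
est = localize(img)
err = np.hypot(est[0] - true_pos[0], est[1] - true_pos[1])
```

With photon noise, the achievable precision scales roughly with the spot size divided by the square root of the collected photon count, which is why collecting many photons per molecule yields localization far below the diffraction limit.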

On the other hand, observation of many biological phenomena requires high-resolution imaging of microscopic objects over a large field of view, to study the cooperation among different components across a large range (Frenkel, 2010; Greenbaum et al., 2012). Macroscopic photography has similar requirements, with applications ranging from surveillance, astronomy, and Earth observation to entertainment. Representative works in large-field-of-view, high-resolution imaging include gigapixel imaging (Brady et al., 2012; Marks et al., 2012) and Fourier ptychographic microscopy (FPM) (Zheng et al., 2013). Brady et al. (2012) developed a high-resolution imaging system, AWARE-2, with a cascading optical design. In this gigapixel imaging system, a relay lens with high resolving power is followed by camera arrays that focus on and capture images of different fields of view. Each image comprises 960 million pixels, which are captured and transmitted in parallel. The high-resolution, large-field-of-view image is reconstructed through computational stitching in the spatial domain. Since this spatial-stitching design requires a large number of cameras, it is very costly. At a much lower cost, Zheng et al. (2013) proposed an alternative method, in which they adopted low-numerical-aperture objectives to capture images over a wide field of view; by changing the illumination angles, they captured different frequency bands of the microscopic objects and stitched these large-field-of-view but low-resolution images in the Fourier domain. The high-resolution, large-field-of-view microscopic images are reconstructed through a phase retrieval algorithm. Fig. 2 shows the optical setups of these two typical systems.

Fig. 2

Illustrations of two large-field-of-view, high-resolution imaging systems: (a) gigapixel camera system (Reprinted from Marks et al. (2012), Copyright 2012, with permission from Macmillan Publishers Ltd.); (b) Fourier ptychographic microscopy system (Reprinted from Zheng et al. (2013), Copyright 2013, with permission from Macmillan Publishers Ltd.)
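The Fourier-domain stitching at the heart of FPM can be sketched in a few lines. For clarity, this toy example assumes the complex low-resolution fields are known, so it shows only the bandwidth-extension step; in real FPM only intensities are measured, and the missing phase is recovered by an iterative retrieval loop. All sizes and radii are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
obj = (0.5 + rng.random((N, N))) * np.exp(0.3j * rng.standard_normal((N, N)))
spec = np.fft.fftshift(np.fft.fft2(obj))          # true object spectrum

fy, fx = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
pupil = fx**2 + fy**2 < 10**2                     # low-NA objective passband

# Tilting the LED illumination shifts the object spectrum across the fixed
# pupil, so each capture samples a different frequency band.
shifts = [(sy, sx) for sy in (-8, 0, 8) for sx in (-8, 0, 8)]

stitched = np.zeros_like(spec)
covered = np.zeros((N, N), dtype=bool)
for sy, sx in shifts:
    band = np.roll(spec, (-sy, -sx), axis=(0, 1)) * pupil  # what this capture sees
    back = np.roll(band, (sy, sx), axis=(0, 1))            # undo the known shift
    mask = np.roll(pupil, (sy, sx), axis=(0, 1))
    stitched[mask] = back[mask]                            # stitch in Fourier space
    covered |= mask
```

The union of the shifted passbands covers a far larger spectral region than a single low-NA capture, which is exactly the resolution gain FPM trades acquisition time for.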

3.2 Temporal dimension

Conventional cameras capture 2D images sequentially to record dynamic scenes, and their temporal resolution is limited by the sensitivity of the sensor, the data transfer speed, and storage. So far, the fastest commercial high-speed cameras reach a temporal resolution of around 1 µs. Recently, research in the ultra-fast imaging field has achieved picosecond temporal resolution by coding the temporal information into either the spatial or the spectral domain. Here, we present three representative works with outstanding performance.

Velten et al. (2012) introduced an ultra-fast pulsed laser into macroscopic imaging and electronically mapped the time-domain information into the spatial domain using a streak camera. The temporal resolution of this system reached 2 ps. The fundamental principle of this technique is illustrated in Fig. 3. Specifically, the laser scans the scene line by line (1D), and for each point in the line, the streak camera spreads the photons that arrive at different times into another spatial dimension, perpendicular to the scanned one, so each captured 2D streak image actually has one spatial dimension and one temporal dimension. By scanning the scene line by line and stitching the captured images, 2D spatial images with ultra-high temporal resolution can be computationally reconstructed. With this ultra-fast camera, light propagation is no longer instantaneous to the sensor, and the whole propagation process of light can be recorded, which is called ‘transient imaging’. From the transient images, one can separate different scatterings or reflections of light transport in the target scene, and achieve looking-around-corner effects (Velten et al., 2012). Another important application of such ultra-fast imaging techniques is depth capture. Depth imaging based on time of flight (TOF) is realized by emitting light pulses continuously and retrieving the depth from their traveling time, which is calculated from the phase shift between the emitted light and the received light. Recently, Heide et al. (2013) achieved 3D imaging with a high-frequency illuminated TOF camera.

Fig. 3

Femto-photography based on a streak camera: (a) system setup of femto-photography; (b) charge-coupled-device (CCD) imaging scheme; (c) streak camera image; (d) setup for non-line-of-sight imaging; (e) non-line-of-sight image (Reprinted from Velten et al. (2012), Copyright 2012, with permission from Macmillan Publishers Ltd., and from Velten et al. (2013), Copyright 2013, with permission from Springer Science+Business Media)
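The phase-based TOF depth recovery mentioned above reduces to a small amount of arithmetic. A common scheme samples the correlation between the received and emitted modulation at four phase offsets 90° apart; the sketch below assumes this four-bucket scheme, a 30 MHz modulation frequency, and noise-free samples (all illustrative values).

```python
import numpy as np

C = 299_792_458.0       # speed of light (m/s)
F_MOD = 30e6            # modulation frequency; 30 MHz is an assumed example value

def tof_depth(a0, a1, a2, a3, f_mod=F_MOD):
    """Depth from four correlation samples taken 90 degrees apart: the phase
    shift between emitted and received modulation encodes the round-trip time."""
    phase = np.arctan2(a1 - a3, a0 - a2) % (2 * np.pi)
    return C * phase / (4 * np.pi * f_mod)

# Simulate the four samples for a target at 2.5 m.
true_depth = 2.5
phi = 4 * np.pi * F_MOD * true_depth / C
samples = [1.0 + 0.5 * np.cos(phi - o) for o in (0, np.pi/2, np.pi, 3*np.pi/2)]
depth = tof_depth(*samples)
```

Because the phase wraps at 2π, depths are only unambiguous up to C / (2 · F_MOD), about 5 m at 30 MHz; practical cameras resolve the wrap-around by combining several modulation frequencies.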

Goda et al. (2009) invented serial time-encoded amplified imaging (STEAM), which maps the time-domain information to the spectral domain and uses an ultra-fast single-pixel detector to read out the spectral information. The fundamental principle of this imaging method is illustrated in Fig. 4. Each light pulse first goes through a 2D spectral separation setup, so that its different spectral components are mapped to different spatial positions on the object; in this way, the 2D information of the object is coded into the spectral dimension. By mapping the different spectral components back to the time domain and recording the sequential signals with an ultra-fast single-pixel detector, the spatial information of the object can be reconstructed computationally. STEAM is a continuous imaging system, which captures scene information continuously at a frame rate of 6.1 MHz with a shutter speed of 440 ps. This ultra-fast imaging system has been applied successfully to the detection of high-speed cell flows, combined with microfluidic techniques.

Fig. 4

Ultra-fast imaging system via spectral multiplexing: (a) serial time-encoded amplified imaging (STEAM) system (Reprinted from Goda et al. (2009), Copyright 2009, with permission from Macmillan Publishers Ltd.); (b) sequentially timed all-optical mapping photography (STAMP) system (Reprinted from Nakagawa et al. (2014), Copyright 2014, with permission from Macmillan Publishers Ltd.)
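The space→spectrum→time encoding chain used by STEAM can be sketched with a 1D toy model: a spatial disperser assigns each object pixel a wavelength, and group-velocity dispersion in a long fiber then assigns each wavelength an arrival time at a single fast photodiode. The dispersion value and channel spacing below are illustrative assumptions, not the parameters of the actual system.

```python
import numpy as np

# A 1D line of the object; a spatial disperser assigns wavelength lam[i] to
# pixel i, so the scene reflectivity is encoded in the pulse spectrum.
profile = np.array([0.2, 0.9, 0.4, 0.7, 0.1, 0.6])
lam = 1550e-9 + 0.2e-9 * np.arange(profile.size)   # 0.2 nm channel spacing

# Group-velocity dispersion maps wavelength to arrival time; -50 ps/nm total
# dispersion (expressed here in seconds per meter of wavelength) is assumed.
disp = -50e-12 / 1e-9
arrival = disp * (lam - lam[0])

# A single fast photodiode records the channels in order of arrival.
order = np.argsort(arrival)
trace = profile[order]

# Reconstruction: knowing the dispersion law, map arrival times back to pixels.
recovered = np.empty_like(profile)
recovered[order] = trace
```

The essential point is that the mapping from pixel to wavelength to time is known and invertible, so a one-dimensional time trace suffices to recover the spatial profile.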

Later, the Goda group invented sequentially timed all-optical mapping photography (STAMP). The system codes an ultra-fast temporal phenomenon into the spectral dimension, and then maps different spectral channels to different spatial positions of a high-resolution 2D sensor for burst-mode ultra-fast imaging (Nakagawa et al., 2014). Specifically, as shown in Fig. 4b, an ultra-short laser pulse is split by the temporal mapping device (TMD) into a series of discrete daughter pulses in different spectral bands, which are incident on the target as successive ‘flashes’ for stroboscopic image acquisition. The spatial information of the object at different times is carried by different spectral bands, dispersed to different spatial positions by dispersion optics, and detected with a single 2D sensor.

3.3 Angular dimension

The angular information of visual signals reveals various scene properties, such as the illumination, scene materials, and 3D structure. However, this information is largely lost in conventional imaging. In computational imaging, researchers have proposed various techniques to sample visual information from different angles and computationally reconstruct high-dimensional or high-resolution images. Fig. 5 shows some recent computational imaging systems based on multi-angle information, from coherent to partially coherent to incoherent light, from microscopic to macroscopic scales, and from multi-angle illumination to multi-angle sampling.

Fig. 5

Computational imaging systems with angular sampling. Coherent illumination: (a) 3D refractive index reconstruction (Reprinted from Choi et al. (2007), Copyright 2007, with permission from Macmillan Publishers Ltd.); (b) Fourier ptychographic microscopy (FPM) (Reprinted from Zheng et al. (2013), Copyright 2013, with permission from Macmillan Publishers Ltd.). Partially coherent illumination: (c) phase-space 4D sampling (Reprinted from Waller et al. (2012), Copyright 2012, with permission from Macmillan Publishers Ltd.). Incoherent illumination: (d) light field microscopy (Reprinted from Levoy et al. (2006), Copyright 2006, with permission from Springer Science+Business Media); (e) camera array-based light field microscopy (Reprinted from Lin et al. (2015), Copyright 2015, with permission from OSA)

In the reconstruction of multi-angle information under coherent illumination, the light field is modeled as a complex field. Combined with theories from wave optics, these methods are applied commonly in microscopy. For example, Choi et al. (2007) built a system for 3D refractive index reconstruction (Fig. 5a). Using coherent illumination at each angle, they retrieved the quantitative phase information through digital holography, and then adopted tomography to combine the phase information from different angles and reconstruct label-free 3D refractive index images of live cells. For thin microscopic samples, coherent illumination from different angles shifts the frequency content, and one can reconstruct high-resolution images by capturing and stitching different spatial frequencies. Zheng et al. (2013) proposed a gigapixel microscopic system (Fig. 5b), using an LED array to illuminate the microscopic sample from different angles. Their system bypasses the spatial-bandwidth limit of the objective lens and realizes low-cost gigapixel microscopy.

Under partially coherent illumination, the light field is usually described by the Wigner distribution function. With the phase-space 4D sampling system shown in Fig. 5c, Waller et al. (2012) used a spatial light modulator to sample dense angular information for the reconstruction of 4D phase-amplitude data. This provides a better understanding of non-linear light transport theory and has already been applied to 3D localization through a scattering medium.

When the illumination is incoherent, a 4D light field is commonly used to model the scene’s geometry (Levoy and Hanrahan, 1996). In a macroscopic scene, by illuminating the scene from different angles and applying photometric stereo methods, high-resolution 3D information of the scene can be retrieved. Micro-lens arrays (Ng et al., 2005) and camera array systems (Wilburn et al., 2004) have been used successfully for rapid capture of the 4D light field, enabling fast depth detection and refocusing. At the same time, these 4D light-field-capturing techniques have been applied to microscopic imaging (Levoy et al., 2006; Lin et al., 2015). Fast 3D reconstruction of fluorescent and transparent samples can be realized by combining 3D deconvolution algorithms and phase retrieval algorithms. Prevedel et al. (2014) applied light field microscopy to simultaneous whole-animal 3D imaging of neuronal activity.
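Digital refocusing from a captured 4D light field is a good example of how angular samples are combined computationally: each angular view is shifted in proportion to its angular offset and the views are averaged, with the shift slope selecting the virtual focal plane. The following is a minimal integer-shift sketch on a synthetic light field of a single point (real implementations interpolate sub-pixel shifts):

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-add refocusing of a 4D light field lf[u, v, y, x]: shift each
    angular view by alpha times its (u, v) offset, then average the views."""
    U, V, H, W = lf.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - U // 2)))
            dx = int(round(alpha * (v - V // 2)))
            out += np.roll(lf[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# A single scene point whose disparity is 2 px per unit of angular offset.
U = V = 5
H = W = 33
lf = np.zeros((U, V, H, W))
for u in range(U):
    for v in range(V):
        lf[u, v, 16 - 2 * (u - 2), 16 - 2 * (v - 2)] = 1.0

sharp = refocus(lf, alpha=2)    # alpha matches the disparity: point in focus
blurry = refocus(lf, alpha=0)   # plain view average: the point is blurred
```

Sweeping alpha moves the virtual focal plane through the scene, which is how post-capture refocusing and coarse depth estimation are obtained from a single light-field exposure.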

In summary, introducing multi-angle information on the illumination side and combining it with the corresponding reconstruction algorithms can overcome the limitations of conventional imaging, introduce more 3D information, compensate for optical aberrations, and realize high-performance imaging. Introducing multi-angle information on the sampling side can help couple 3D information into the 2D sensor, enabling reconstruction of 3D information from 2D measurements.

3.4 Spectral dimension

Current multi-channel imaging techniques are designed mostly to capture three colors: red, green, and blue. Although three-color imaging systems match the perception of the human vision system well, from the perspective of physics, real-world scenes contain abundant wavelengths and display rich spectral information. This spectral information reflects essential properties of the light source and the scene, making spectral imaging an important tool for both scientific research and engineering applications. Recently, high-resolution hyperspectral imaging has drawn increased attention and made great progress, from the extension of the spectral range, to improvement of the spatial resolution, to acceleration of the imaging speed. Hyperspectral imaging can capture abundant scene information in the spatial, temporal, and spectral dimensions. Benefiting from this rich encoded information, hyperspectral imaging has already been applied widely in military security, environment monitoring, biological science, medical diagnosis, and scientific observation (Backman et al., 2000; Delalieux et al., 2009; Wong, 2009; Kester et al., 2011). With the development of dynamic spectral imaging techniques, there are also many emerging applications in computer vision and graphics, such as object detection, image segmentation, image recognition, and scene rendering.

We now review some representative computational spectral imaging techniques. Computational spectral imaging couples the 3D spectral data cube into a 2D sensor and then computationally reconstructs the whole volume. Based on this principle, researchers have designed different spectral sampling systems, with various optical implementations and system structures. They can be classified into branches according to their sampling and reconstruction strategies, including computed tomography (Descour and Dereniak, 1995), interferometry (Chao et al., 2005), coded aperture (Willett et al., 2007), and hybrid-camera systems (Ma et al., 2014). In addition, compressive sensing has recently drawn much attention in the computational imaging field. Charles et al. (2011) and Chakrabarti and Zickler (2011) extensively exploited the sparsity of spectral data and proposed spectral dictionary-based imaging methods. Beyond coded-aperture-based hyperspectral imaging, researchers have further exploited the compressibility of spectral data with micromirror arrays, principal component imaging (Pal and Neifeld, 2003), feature-specific imaging (Neifeld and Shankar, 2003), fluorescence imaging (Suo et al., 2014), and spatial-temporal coding (Lin et al., 2014). These techniques compressively sense the 3D spectral data through different coupling and decoupling methods.
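The coupling/decoupling principle behind these compressive spectral imagers can be made concrete with a tiny example: a sparse spectrum is measured through a random linear code with fewer measurements than bands, and recovered with the iterative shrinkage-thresholding algorithm (ISTA), one of the simplest compressive sensing solvers. The matrix, sizes, and parameters are all illustrative; real systems use structured codes determined by the optics.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 64, 32, 4      # spectral bands, coded measurements, sparsity level

# A sparse spectrum (a few emission lines) and a random sensing matrix standing
# in for the optical coding (coded aperture plus disperser); both are synthetic.
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x_true         # m < n compressive measurements

# ISTA: a gradient step on the data-fit term, then soft thresholding,
# which enforces the sparsity prior.
step = 1.0 / np.linalg.norm(Phi, 2)**2
lam = 0.01
x = np.zeros(n)
for _ in range(3000):
    x = x - step * (Phi.T @ (Phi @ x - y))
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Despite having only half as many measurements as spectral bands, the sparse spectrum is recovered accurately, which is exactly the compressibility these imagers exploit.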

Recently, Bao and Bawendi (2015) proposed a novel spectrometer based on colloidal quantum dots, a highly controllable, tiny, and light-sensitive semiconductor material (Fig. 6). With quantum dot printing replacing conventional Bayer-pattern color filter arrays, the spectrometer is similar in size to a conventional three-color camera; the size is reduced dramatically without sacrificing resolution, generality, or efficiency. This is the first attempt at using nanometer-scale materials to build spectrometers and represents real progress in their miniaturization. It paves the way for high-performance, low-cost, small-volume micro-spectrometers, with broad applications in space exploration, personalized medical services, diagnostic platforms based on microfluidic chips, etc.

Fig. 6

A colloidal quantum dot spectrometer (Reprinted from Bao and Bawendi (2015), Copyright 2015, with permission from Macmillan Publishers Ltd.)

Micro-scale spectral sampling has also drawn considerable attention and been researched extensively (Fig. 7). Orth et al. (2015) built a gigapixel multispectral microscope based on micro-lens arrays. This system captures about 13 spectral channels for each point, with a total of up to 1.3 gigapixels, and can effectively observe the inner structure of biological samples such as cells. A multispectral microscope with such high spatial resolution may prove a substantial benefit to the development of biomedicine and drug research. For in vivo observation of biological specimens, fluorescence staining is commonly used to label the target object. In the presence of multiple fluorescent dyes, the spectrum of the target object provides effective classification information. Based on this observation, Jahr et al. (2015) invented hyperspectral light sheet microscopy. This imaging system captures spectral images of large biological samples by combining active scanning illumination with computational reconstruction. It not only achieves optical sectioning of the 3D data, but also maintains spatial resolution on the order of a single cell. This new technique provides great opportunities for in vivo imaging of biological samples.

Fig. 7

Research on hyperspectral microscopy: (a) setup for gigapixel multispectral microscopy (Reprinted from Orth et al. (2015), Copyright 2015, with permission from OSA); (b) setup for hyperspectral light sheet microscopy (Reprinted from Jahr et al. (2015), Copyright 2015, with permission from CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/))

3.5 Phase

Phase imaging is applied widely in the life sciences, especially in microscopy, because most microscopic samples, such as prokaryotes and bacteria, are almost transparent without fluorescent staining. These specimens absorb very little incident light, so the intensity of the in-focus image shows little spatial variation. Through phase imaging, we can retrieve the outline of transparent samples and achieve label-free cell imaging. The phase-contrast microscopy proposed by Zernike (1955) is one of the earliest phase imaging techniques, and provided a new tool for imaging transparent objects. In addition, quantitative phase imaging realizes accurate phase measurements of transparent microscopic samples. Combined with multi-angle or multi-focal-plane acquisition, one can also achieve label-free 3D refractive index imaging and nanometer-scale imaging (Levoy et al., 2006; Cotte et al., 2013; Kim et al., 2014).

Quantitative phase imaging techniques can be divided into two main types: iterative and non-iterative phase imaging (Fienup, 1982). The former uses the light transport model under coherent or partially coherent light, imposes constraints in either the spatial or the Fourier domain, and designs iterative reconstruction algorithms to retrieve phase information from intensity measurements. This type of method generally requires complex computations and multiple snapshots. The best-known method of this type is the Gerchberg-Saxton (GS) iteration algorithm (Fienup, 2013). Non-iterative phase imaging can likewise be divided by illumination, into imaging under coherent and under partially coherent light. Some typical imaging systems are shown in Fig. 8.

Fig. 8

Research on quantitative phase imaging. Coherent light: (a) Fourier plane phase modulation (Reprinted from Popescu et al. (2004), Copyright 2004, with permission from OSA); (b) digital holography (Reprinted from Cuche et al. (1999), Copyright 1999, with permission from OSA); (c) multi-angle digital holography (Reprinted from Choi et al. (2007), Copyright 2007, with permission from Macmillan Publishers Ltd.). Partially coherent light: (d) Shack-Hartmann imaging (Reprinted from Stoklasa et al. (2014), Copyright 2014, with permission from Macmillan Publishers Ltd.); (e) light transport function based imaging (Reprinted from Waller et al. (2010b), Copyright 2010, with permission from OSA); (f) white light diffraction tomography (Reprinted from Kim et al. (2014), Copyright 2014, with permission from Macmillan Publishers Ltd.)
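The GS iteration mentioned above is compact enough to state in full: given amplitude (square-root intensity) measurements in the object plane and the Fourier plane, it alternates between the two planes, each time keeping the current phase estimate and replacing the amplitude with the measured one. The sketch below runs it on a synthetic smooth-phase object; the sizes, seed, and test object are illustrative.

```python
import numpy as np

def gerchberg_saxton(amp_obj, amp_four, iters=500, seed=0):
    """Alternating projections between the object and Fourier planes."""
    rng = np.random.default_rng(seed)
    field = amp_obj * np.exp(1j * rng.uniform(-np.pi, np.pi, amp_obj.shape))
    for _ in range(iters):
        F = np.fft.fft2(field)
        F = amp_four * np.exp(1j * np.angle(F))          # Fourier-plane constraint
        field = np.fft.ifft2(F)
        field = amp_obj * np.exp(1j * np.angle(field))   # object-plane constraint
    return field

# Synthetic ground truth: a smooth amplitude with an unknown smooth phase.
N = 64
yy, xx = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
amp_obj = np.exp(-(xx**2 + yy**2) / (2 * 12.0**2))
phi_true = 2 * np.pi * (xx**2 + yy**2) / N**2
amp_four = np.abs(np.fft.fft2(amp_obj * np.exp(1j * phi_true)))

rec = gerchberg_saxton(amp_obj, amp_four)
resid = (np.linalg.norm(np.abs(np.fft.fft2(rec)) - amp_four)
         / np.linalg.norm(amp_four))
```

The recovered phase is only defined up to a global offset (and related ambiguities), so convergence is judged by how well the estimate reproduces the measured Fourier amplitude.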

Under coherent illumination, refined interference and diffraction patterns can be recorded to reconstruct highly accurate phase information. Typical examples include digital holography and phase-shifting interferometric methods (Cuche et al., 1999). Other methods couple the phase information into the intensity at the focal plane by modulating either phase or amplitude in the Fourier domain; with computational reconstruction, the accuracy and speed of quantitative phase imaging can be further improved (Popescu et al., 2004). One can also reconstruct the 3D refractive index of label-free transparent samples from multi-angle quantitative imaging under coherent light, assisting studies in biology (Choi et al., 2007). In spite of this progress, quantitative phase imaging under coherent illumination still faces several problems, including phase wrapping caused by the periodic nature of phase, laser speckle, low spatial resolution, and the high cost of high-power lasers.

Recently, more efforts have concentrated on phase imaging with partially coherent light. The Shack-Hartmann sensor can record the phase information of a target sample under partially coherent light with good quality (Stoklasa et al., 2014), and can further display the optical coherence of the signal. However, both its phase and spatial resolutions are low, limiting accuracy. Some Fourier-plane phase modulation methods designed for fully coherent light have been extended to partially coherent imaging, dramatically improving their applicability (Kim et al., 2014). When the light is partially coherent, the phase information cannot be detected at the focal plane, but is encoded in the intensity variation at defocused planes. Measuring multiple times along the axial direction and applying phase retrieval algorithms based on the transport-of-intensity equation lend new insights into quantitative phase reconstruction with partially coherent light (Teague, 1983; Waller et al., 2010b).
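For uniform in-focus intensity, the transport-of-intensity equation (Teague, 1983), ∇·(I∇φ) = -k ∂I/∂z, reduces to a Poisson equation for the phase, which can be solved non-iteratively with an FFT. The sketch below generates the axial intensity derivative from a known smooth phase and then inverts it; the wavelength and test phase are illustrative, and real data require estimating ∂I/∂z from defocused images.

```python
import numpy as np

N = 64
k = 2 * np.pi / 500e-9   # wavenumber, assuming 500 nm illumination
I0 = 1.0                 # uniform in-focus intensity

# Ground-truth smooth phase on a periodic grid (zero mean, so the Poisson
# equation below has a unique solution up to that constant).
i = np.arange(N)
phi = 0.3 * np.outer(np.sin(2 * np.pi * i / N), np.cos(4 * np.pi * i / N))

q = 2 * np.pi * np.fft.fftfreq(N)
q2 = q[:, None]**2 + q[None, :]**2

# Forward model: for uniform intensity the TIE reduces to
#   dI/dz = -(I0 / k) * laplacian(phi),
# evaluated here with a spectral Laplacian.
dIdz = np.real(np.fft.ifft2(q2 * np.fft.fft2(phi))) * (I0 / k)

# Inversion: solve the Poisson equation for phi with an FFT-based solver.
rhs = np.fft.fft2(dIdz) * (k / I0)
phi_hat = np.zeros_like(rhs)
nz = q2 > 0
phi_hat[nz] = rhs[nz] / q2[nz]
phi_rec = np.real(np.fft.ifft2(phi_hat))
err = np.abs(phi_rec - phi).max()
```

Because the inversion is a direct linear solve rather than an iteration, this is the sense in which TIE methods are "non-iterative"; their practical accuracy is limited by the noise in the finite-difference estimate of ∂I/∂z.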

Fast and high-accuracy phase reconstruction would enable the observation of dynamic transparent objects. Various computational reconstruction methods have reduced the number of required measurements under partially coherent light and realized single-shot quantitative imaging, e.g., by introducing chromatic aberrations (Waller et al., 2010a) or by using a volume holographic microscope (Waller et al., 2010b). These works greatly expand the application scenarios and provide new opportunities for label-free dynamic cell observation.

4 Discussions and conclusions

Computational imaging has achieved great success in the past years, and some of this work has already been applied in various fields. In this section, we discuss several trends in computational imaging.

Interdisciplinary integration: Compared with conventional imaging, computational imaging is far more flexible; thus, it is well poised to take advantage of new techniques and breakthroughs from related disciplines. New materials and techniques (in either computing or optics) are continually being invented or discovered, and some of them have already been applied in computational imaging. At the same time, increasing efforts have been made in research fields such as brain science, medical examination, and disease models and mechanisms, and the imaging demands from these areas have inspired researchers to seek new tools from related fields. Therefore, a cross-disciplinary research mode is an inevitable trend in computational imaging, and it has become increasingly prominent over the years. For example, the discovery of photo-switchable fluorescent proteins enabled a major breakthrough in super-resolution microscopy (Rust et al., 2006), and the smart application of quantum dots has largely reduced the cost and size of spectrometers (Bao and Bawendi, 2015). These multidisciplinary intersections are also reflected in the fusion of multiple imaging methods: by combining the mutually beneficial aspects of different imaging methods, the observation limit is continually being pushed. Photoacoustic imaging (Wang and Hu, 2012; Chaigne et al., 2014) and ultrasound-assisted time-reversal techniques are representative examples of this kind of cross-reinforcement strategy. Advances in other research fields, such as materials science and applied chemistry, will also open up more possibilities for computational imaging. In the future, a main objective in computational imaging will be fusing different imaging techniques in ingenious ways to achieve new milestones in imaging.

Multi-dimensional and multi-scale imaging: Complete capture and perception of the physical world is becoming increasingly important. In the field of imaging, we need effective techniques to observe light in different dimensions and at different scales. Highly multiplexed imaging was also listed as one of the eight technologies to watch by Nature Methods in 2016 (Strack, 2016). Computational imaging has achieved high-performance observation along various dimensions. In the spatial dimension, imaging systems with tens of millions or even a billion pixels have already become available. In the time dimension, high-speed imaging systems have reached femtosecond time resolution, and this dramatic improvement in imaging speed has opened up transient optics, an entirely new research area. Additionally, in the spectral and angular dimensions, imaging techniques are making the ‘invisible’ become ‘visible’ and the ‘obscure’ become ‘clear’. In the past few decades, spectral video capture systems, light field cameras, depth cameras, femtosecond spatial-temporal focusing techniques, and other new methods have benefited from breakthroughs in computational imaging theory and technology. So far, these imaging techniques offer outstanding performance in one or two dimensions, but are limited in the others. Capturing visual signals along multiple dimensions simultaneously is a worthwhile research direction, because it helps obtain a complete observation, and the high redundancy of high-dimensional visual signals makes it feasible. In addition, researchers from different disciplines are requesting multi-scale observation, i.e., both large observation ranges and high resolution. For example, observing multi-scale features (sub-cell, cell, tissue, organ, and system) of an organism’s structure will open up new possibilities in biomedical analysis, such as structure-function coupling of neural circuits and organ-targeting tumor metastasis. Multi-scale imaging along both spatial and other dimensions is thus of great importance and high research value.

Imaging under ultra-weak illumination: The demand for recording ultra-weak light signals comes from the in vivo imaging of biological samples. For example, during microscopy of live cells, very weak excitation light is favorable for avoiding damage to the cells, especially for some low-expression fluorescent proteins. Although researchers have made significant progress in developing high-end imaging sensors, such as the intensified charge-coupled device (ICCD) and the scientific complementary metal oxide semiconductor (sCMOS) sensor, ultra-weak signals still cannot be detected by existing sensors. A possible solution is to take advantage of single-pixel cameras (Gatti et al., 2004; Morris et al., 2015; Bina et al., 2013), which hold great potential for low-illumination imaging due to three advantages: the single-pixel photodiode has extremely high sensitivity; the photons are collected and recorded in a multiplexed manner; and the photodiode is much less expensive than full-frame sensors such as the ICCD and sCMOS.
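The multiplexed measurement principle behind single-pixel imaging can be sketched in a few lines of numpy. This is a deliberately simplified toy (full sampling, noiseless, least-squares recovery); practical systems use calibrated DMD patterns and compressive-sensing solvers so that far fewer measurements than pixels are needed. All names and sizes here are illustrative assumptions, not from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 8x8 "scene" the single-pixel camera will image.
n = 8
scene = np.zeros((n, n))
scene[2:6, 2:6] = 1.0            # a bright square on a dark background
x = scene.ravel()

# Each measurement: project one binary pattern (e.g., via a DMD) onto
# the scene and record the total transmitted light on one photodiode.
m = n * n                        # full sampling, for simplicity
patterns = rng.integers(0, 2, size=(m, n * n)).astype(float)
y = patterns @ x                 # one scalar photodiode reading per pattern

# Reconstruct by least squares; a compressive-sensing solver would
# allow m << n*n for sparse scenes.
x_hat = np.linalg.lstsq(patterns, y, rcond=None)[0]
assert np.allclose(x_hat.reshape(n, n), scene, atol=1e-6)
```

The key point for low-light operation is that every photodiode reading sums light from roughly half the scene pixels at once, so each measurement collects far more photons than a single pixel of a conventional sensor would.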

Looking through scattering media: Looking through a scattering medium is also of great importance in various fields, especially for in vivo biomedical imaging. For example, observing the cerebral cortex of the mouse brain is in high demand; however, the highly scattering medium severely degrades imaging clarity, even though observing deeper neural cells is crucial in the field of neural imaging. Because visible light is severely scattered in the medium, the achievable imaging depth is quite small. Two-photon and multi-photon confocal microscopy (Diaspro et al., 2005; Helmchen and Denk, 2005; Horton et al., 2013) have made some progress in this direction; however, the imaging depth is still limited. Overall, clear imaging through scattering media with decent spatial and temporal resolution remains a big challenge.