Towards A Wireless Image Sensor for Real-Time Fluorescence Microscopy in Cancer Therapy

We present a mm-sized, ultrasonically powered, lensless CMOS image sensor as a step towards wireless fluorescence microscopy. Access to biological information within tissue has the potential to provide insights guiding diagnosis and treatment across numerous medical conditions, including cancer therapy. Current clinical imaging techniques cannot obtain images continuously and lack wireless compatibility; complementing them with in-tissue sensing can improve continual detection of multicellular clusters deep within tissue. The proposed platform incorporates a 2.4×4.7 mm² integrated circuit (IC) fabricated in a TSMC 0.18 μm process, a micro laser diode (μLD), a single piezoceramic, and off-chip storage capacitors. The IC consists of a 36×40 array of capacitive transimpedance amplifier-based pixels, wireless power management and communication via ultrasound, and a laser driver, all controlled by a finite state machine. The piezoceramic harvests energy from the acoustic waves at a depth of 2 cm to power the IC and transfers 11.5 kbits/frame via backscattering. During Charge-Up, the off-chip capacitor stores charge to later supply a high-power 78 mW μLD during Imaging. As a proof of concept of the imaging front end, we image ex vivo distributions of CD8 T-cells, an indicator of the immune response to cancer, in the lymph nodes of mice with a functional immune system (BL6) challenged with colorectal cancer, with results consistent with those of a fluorescence microscope. The overall system performance is verified by detecting 140 μm features on a USAF resolution target with 32 ms exposure time and 389 ms of ultrasound backscattering.


Comparison of the current illumination scheme with implanted setup
The implanted setup, as shown in the conceptual diagram in Fig. 1(a), requires the laser diode to be assembled next to the sensor while illuminating the target via epi-illumination. Compared to trans-illumination in the current setup, shown in Fig. 1(b) and Fig. 17, epi-illumination lowers the background signal because the excitation light is reflected off the surface of the sample and is not directly incident on the surface of the imager. This can improve the signal-to-background ratio. However, there are additional effects on signal intensity that need to be addressed:

1. Spacer thickness: To deliver light via epi-illumination from the edge-emitter laser diode to the sample, a glass spacer between the sensor and the target is required. The thickness of the spacer increases the distance between the source and the target, lowering the light intensity absorbed by the fluorophores. This effect can be studied using the simplified illumination models shown below.

Supplementary Figure 1. Trans-illumination and epi-illumination setups with the chip, optical filter, μLD and glass spacer. The pixel array covers about 40% of the entire chip area. The spacer length (L) is chosen to be the same as the pixel array length (2.2 mm). The μLD is placed as close as possible to the spacer.
The intensity of the light incident on the same surface area of the sample in the two cases can be calculated as follows. The irradiance $E$ at a distance $d$ from a point source of light with an overall intensity $I$ is

$$E = \frac{I}{d^2}$$

For trans-illumination,

$$E_{\text{sample,trans}} = \frac{I_{\text{trans}}}{d_t^2}$$

where $E_{\text{sample,trans}}$ is the light intensity received by a surface area $A_{\text{sample}}$ being trans-illuminated from a distance $d_t$. For epi-illumination,

$$E_{\text{sample,epi}} = \frac{I_{\text{epi}}\cos\theta}{d_e^2}$$

where $E_{\text{sample,epi}}$ is the light intensity received by the surface area $A_{\text{sample}}$ from a distance $d_e$ at an incident angle $\theta$. The irradiance at the sensor surface from the fluorescent sample is proportional to

$$\frac{1}{(t_s + t_f)^2}$$

where $t_s$ is the thickness of the glass spacer and $t_f$ is the thickness of the optical filter.
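The inverse-square illumination model above can be sketched numerically as follows. This is a minimal illustration, not the authors' simulation code; the distances and incident angle used in the example are placeholder assumptions.

```python
import math

def irradiance_trans(intensity, d_t):
    """Irradiance at the sample under trans-illumination
    (point-source, inverse-square law)."""
    return intensity / d_t**2

def irradiance_epi(intensity, d_e, theta_rad):
    """Irradiance at the sample under epi-illumination at
    incident angle theta (cosine-projected inverse-square law)."""
    return intensity * math.cos(theta_rad) / d_e**2

# Illustrative values (assumptions, not from the paper): unit source
# intensity, d_t = 7 mm for trans-illumination, and an oblique epi path.
I = 1.0
E_trans = irradiance_trans(I, d_t=7.0)
E_epi = irradiance_epi(I, d_e=2.3, theta_rad=math.radians(60))
print(E_epi / E_trans)  # relative irradiance of epi vs. trans
```

Sweeping the spacer thickness (which sets $d_e$ and $\theta$) with such a model reproduces the kind of comparison plotted in Suppl. Fig. 2(b).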
2. Reflection due to oblique incidence: Compared to normal incidence, oblique illumination increases reflection at the interface between air and the spacer, reducing the light transmitted to the target.
The reflection of light at the interface of air and a second medium with a refractive index of $n$ can be calculated from the Fresnel equations:

$$r_p = \frac{\tan(\theta_i - \theta_t)}{\tan(\theta_i + \theta_t)}, \qquad r_s = -\frac{\sin(\theta_i - \theta_t)}{\sin(\theta_i + \theta_t)}$$

where $r_p$ and $r_s$ refer to the reflection coefficients of the TM and TE waves, respectively [1], $\theta_i$ is the angle of the incoming beam in air, and $\theta_t$ is the angle of the transmitted rays in the second medium. For normal incidence, the reflectance simplifies to

$$R = \left(\frac{n-1}{n+1}\right)^2$$

The fraction of light transmitted at the interface of air and the medium $n$ can then be calculated from

$$T = 1 - R$$

Assuming n = 1.45 for tissue, 96.6% of the light will reach the sample in Suppl. Fig. 1(a). For oblique incidence, the transmitted power from air to glass (n = 1.5) for a range of incoming angles is shown in Suppl. Fig. 2(a). The reflection coefficient is not calculated for the glass-tissue interface because of their similar refractive indices. Combining the effect of distance (part 1) and reflection (part 2), the relative irradiance of the emitted light for the same laser power is plotted as a function of the thickness of the spacer in Suppl. Fig. 2(b). The process is repeated for a range of μLD-sample distances in trans-illumination. The plots are generated considering a 500 μm thick optical filter, s-polarized light (which has the lower transmission, as a worst case), and L = 2.2 mm in Suppl. Fig. 1(b). The dashed lines are generated considering the effect of reflection for both trans-illumination and epi-illumination.
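The Fresnel transmission calculation above can be sketched as follows; this is a generic implementation of the standard equations, not the authors' code.

```python
import math

def fresnel_transmittance(theta_i_deg, n):
    """Power transmittance from air (n = 1) into a medium of index n for
    s- and p-polarized light, via Snell's law and the Fresnel equations."""
    ti = math.radians(theta_i_deg)
    tt = math.asin(math.sin(ti) / n)  # Snell's law: sin(ti) = n * sin(tt)
    if ti == 0.0:
        # Normal incidence: both polarizations reduce to R = ((n-1)/(n+1))^2
        R_s = R_p = ((n - 1) / (n + 1)) ** 2
    else:
        R_s = (math.sin(ti - tt) / math.sin(ti + tt)) ** 2
        R_p = (math.tan(ti - tt) / math.tan(ti + tt)) ** 2
    return 1 - R_s, 1 - R_p

# Normal incidence into tissue (n = 1.45): about 96.6% is transmitted,
# matching the figure quoted in the text.
T_s, T_p = fresnel_transmittance(0, n=1.45)
print(round(T_s, 3))  # 0.966
```

Evaluating `fresnel_transmittance` over a range of angles for glass (n = 1.5) yields the curves of Suppl. Fig. 2(a); the s-polarized result is the worst case used for the dashed lines in Suppl. Fig. 2(b).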

Supplementary Figure 2. (a) Transmission of light from air to glass for different angles of incidence. (b) Comparison of the irradiance of trans-illumination and epi-illumination as a function of spacer thickness.
As shown in Suppl. Fig. 2(b), compared to trans-illumination at a distance of $d_t$ = 7 mm (similar to the experimental setup), with an optimized spacer width of 400 μm the intensity is reduced by 45% for epi-illumination. The loss in signal intensity can be compensated by increasing the integration time for each frame. Another effect is the lower resolution due to the larger distance of the sample from the sensor caused by the spacer, which linearly diminishes resolution [2].
3. Nonuniform illumination: One solution to improve uniformity is using light-guide plates (LGPs) [3], [4] to deliver light from the laser to the tissue and improve the uniformity of the light profile within the sensor field of view. Without LGPs, nonuniform illumination can be characterized and corrected computationally [5]. In [6], the authors use convolutional neural networks to generate a uniform image from images taken under nonuniform illumination conditions. The work in [7] summarizes mathematical approaches used to correct for uneven illumination in digital images. Given these methods, the following steps are needed to correct images based on the nonuniform illumination profile by capturing an illumination map:

a. The variability of the pixel responsivity can be captured by taking an image of a fluorescent dye, spread evenly on a glass slide covering the imager array, illuminated by a wide collimated laser beam from the top.

b. Once the pixel-to-pixel variation map is determined, the laser diode in the implanted setup can be turned on to illuminate the uniformly distributed dye on the sensor. Using (a) and (b), an image of the illumination profile can be obtained.

c. This nonuniform illumination map can be used with machine learning-based methods or computational algorithms to correct future captured images for the effects of nonuniform illumination.
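The correction in step (c) can be sketched as a simple flat-field division; this is a minimal NumPy illustration of the general principle (the function name and synthetic data are assumptions, not the authors' pipeline or the learning-based methods of [6]).

```python
import numpy as np

def flat_field_correct(raw, illumination_map, eps=1e-9):
    """Correct a captured frame for nonuniform illumination.

    illumination_map: the illumination profile obtained in steps (a)-(b),
    normalized here so that its mean gain is 1.
    """
    gain = illumination_map / (illumination_map.mean() + eps)
    return raw / (gain + eps)

# Illustrative example: a flat scene viewed through a linear illumination
# gradient (36x40 array, matching the pixel count in the text) is recovered
# as an approximately flat image after correction.
profile = np.linspace(0.5, 1.5, 40)[None, :] * np.ones((36, 1))
scene = np.full((36, 40), 100.0)
raw = scene * profile
corrected = flat_field_correct(raw, illumination_map=profile)
print(corrected.std() < raw.std())  # True
```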

Illumination and optical power safety requirements
The laser diode used in this work is a class III medical laser (Pout < 5 mW). According to the American National Standard for Safe Use of Lasers (ANSI Z136.1-2014), the maximum permissible exposure is equal to 1.1·t^0.25 J/cm², where t refers to the total exposure time of the laser. For an integration time of Tint = 64 ms, the maximum radiant exposure allowed is 0.55 J/cm². With the current optical output power, the radiant exposure He is 50 mW/cm² × 64 ms = 0.0032 J/cm², which is more than 170× lower than the ANSI limit.
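The safety margin above can be checked with a few lines of arithmetic; this sketch simply evaluates the MPE formula and exposure numbers quoted in the text.

```python
def ansi_mpe_j_per_cm2(t_seconds):
    """Maximum permissible exposure used in the text (ANSI Z136.1-2014):
    MPE = 1.1 * t^0.25 J/cm^2, with t the total exposure time in seconds."""
    return 1.1 * t_seconds ** 0.25

t = 0.064                        # 64 ms integration time
mpe = ansi_mpe_j_per_cm2(t)      # ~0.55 J/cm^2
he = 50e-3 * t                   # 50 mW/cm^2 * 64 ms = 0.0032 J/cm^2
margin = mpe / he                # > 170x below the ANSI limit
print(round(mpe, 2), round(he, 4), round(margin))  # 0.55 0.0032 173
```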

Image outlier detection and correction
In wireless measurements, the bit error rate from backscattering can lead to outlier pixels in the reconstructed images. The outliers can be detected with an algorithm that compares the value of each pixel with the surrounding pixels. For each pixel, the mean and standard deviation of the 8 neighboring pixels are computed (except for the edge and corner pixels, which have 5 and 3 neighboring pixels, respectively). If the pixel value falls outside a certain range according to the statistics of the surrounding samples (μ ± 2σ in this case, where μ is the mean and σ is the standard deviation of the neighboring pixels, excluding the pixel of interest), its value is replaced by the average of the surrounding pixels. The algorithm is built on the function proposed here. The images before and after applying the outlier detection are shown below:
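The neighbor-statistics rule described above can be sketched as follows. This is a minimal NumPy implementation of the μ ± 2σ criterion, not the authors' exact function.

```python
import numpy as np

def correct_outliers(img, k=2.0):
    """Detect and replace outlier pixels using the mu +/- k*sigma rule over
    the surrounding pixels (8 neighbors in the interior, 5 on edges, 3 at
    corners). Flagged pixels are replaced by the neighbor average."""
    img = img.astype(float)
    out = img.copy()
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            # Clip the 3x3 window at the image borders.
            r0, r1 = max(r - 1, 0), min(r + 2, rows)
            c0, c1 = max(c - 1, 0), min(c + 2, cols)
            window = img[r0:r1, c0:c1]
            # Exclude the pixel of interest from the statistics.
            mask = np.ones(window.shape, dtype=bool)
            mask[r - r0, c - c0] = False
            neighbors = window[mask]
            mu, sigma = neighbors.mean(), neighbors.std()
            if abs(img[r, c] - mu) > k * sigma:
                out[r, c] = mu
    return out

# Illustrative example: a single bit-flip outlier in a flat frame is repaired.
frame = np.full((6, 6), 10.0)
frame[3, 3] = 255.0  # corrupted pixel
fixed = correct_outliers(frame)
print(fixed[3, 3])  # 10.0
```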