Abstract
We present a deep learning-based method for achieving super-resolution in fluorescence microscopy. This data-driven approach requires neither a numerical model of the imaging process nor an estimate of the point spread function; it is based solely on training a generative adversarial network, which statistically learns to transform low-resolution input images into super-resolved ones. Using this method, we super-resolve wide-field images acquired with low-numerical-aperture objective lenses, matching the resolution of images acquired with high-numerical-aperture objectives. We also demonstrate that diffraction-limited confocal microscopy images can be transformed by the same framework into super-resolved fluorescence images, matching the resolution of a stimulated emission depletion (STED) microscope. The deep network outputs these super-resolved images rapidly, without iterations or parameter search, and generalizes to sample types it was not trained on.
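To make the core idea concrete, the following is a minimal, purely illustrative sketch of conditional adversarial training on toy 1-D data. It is not the paper's architecture: the real method uses deep convolutional networks on image pairs, whereas here the "generator" is a single linear layer mapping a 2x average-pooled "low-resolution" signal back to its "high-resolution" target, trained with a content (L2) loss plus a small adversarial term from a logistic "discriminator". All names, sizes, and hyperparameters are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy dataset: each "high-res" target y is a smooth length-8 signal; the
# "low-res" input x is its 2x average-pooled version (a crude stand-in for
# the low-NA / high-NA image pairs described in the abstract).
t = np.linspace(0.0, 1.0, 8)
ys = np.array([np.sin(2 * np.pi * (rng.uniform() + rng.uniform(1, 2) * t))
               for _ in range(64)])
xs = ys.reshape(64, 4, 2).mean(axis=2)

# Linear generator G(x) = W x + b and logistic discriminator D(y) = sigmoid(v.y + c).
W = rng.normal(0.0, 0.1, size=(8, 4)); b = np.zeros(8)
v = rng.normal(0.0, 0.1, size=8);      c = 0.0

def content_mse():
    return float(np.mean((xs @ W.T + b - ys) ** 2))

lr, adv = 0.01, 0.01          # learning rate; weight of the adversarial term
mse_before = content_mse()

for step in range(3000):
    i = step % 64
    x, y = xs[i], ys[i]
    y_fake = W @ x + b

    # Discriminator update: raise D(y_real), lower D(G(x)).
    s_r = sigmoid(v @ y + c)
    s_f = sigmoid(v @ y_fake + c)
    v -= lr * (-(1.0 - s_r) * y + s_f * y_fake)
    c -= lr * (-(1.0 - s_r) + s_f)

    # Generator update: content (L2) loss plus a small adversarial push
    # toward outputs the discriminator scores as "real".
    s_f = sigmoid(v @ y_fake + c)
    g_out = 2.0 * (y_fake - y) - adv * (1.0 - s_f) * v
    W -= lr * np.outer(g_out, x)
    b -= lr * g_out

mse_after = content_mse()
print(f"content MSE before: {mse_before:.4f}, after: {mse_after:.4f}")
```

After training, the generator's reconstruction error on the toy data drops substantially, mirroring (in miniature) how the trained network maps diffraction-limited inputs toward higher-resolution targets; the real system replaces the linear maps with deep convolutional generator and discriminator networks.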