Speakers
The ISCS features six world-renowned keynote speakers spanning light imaging, electron imaging, biomedical imaging, radar and remote sensing, astronomical imaging, and signal processing.
Title: DiffuserCam: Multi-dimensional Lensless Imaging
We describe a computational camera that enables single-shot multi-dimensional imaging with simple hardware and scalable software for easy reproducibility. We demonstrate compact hardware and compressed-sensing reconstructions for 3D fluorescence measurements with high resolution across a large volume, hyperspectral imaging, and temporal super-resolution, recovering a video from a single-shot capture. Our inverse algorithms are based on large-scale nonlinear non-convex optimization combined with unrolled neural networks. Demonstrated applications include whole-organism bioimaging and in vivo neural activity tracking.
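To make the flavour of these reconstructions concrete, below is a minimal sketch (not the speaker's actual code) of a FISTA solver for a simplified convolutional lensless-camera model. It assumes the measurement b and a calibrated PSF psf are same-sized arrays, omits the sensor crop, and uses a plain l1 + non-negativity prior in place of the unrolled networks mentioned above; all names are illustrative.

```python
import numpy as np

def fista_deconvolve(b, psf, lam=1e-3, n_iter=100):
    """Toy FISTA reconstruction for a convolutional lensless model.

    Simplified forward model: b = psf * x (circular convolution, crop
    omitted), so the operator and its adjoint are diagonalised by the
    FFT. The l1 + non-negativity prox is a crude stand-in for the
    compressed-sensing priors and unrolled networks used in practice.
    """
    H = np.fft.rfft2(np.fft.ifftshift(psf))
    A = lambda v: np.fft.irfft2(H * np.fft.rfft2(v), s=psf.shape)
    At = lambda v: np.fft.irfft2(np.conj(H) * np.fft.rfft2(v), s=psf.shape)

    L = np.max(np.abs(H)) ** 2              # Lipschitz bound of the gradient
    x, z, t = np.zeros_like(b), np.zeros_like(b), 1.0
    for _ in range(n_iter):
        g = z - At(A(z) - b) / L            # gradient step on the data term
        x_new = np.maximum(g - lam / L, 0)  # prox of l1 + non-negativity
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x
```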
Title: The Advantages of Inpainting for High-resolution, In-situ and Ultrafast Electron Microscopy
For many imaging and microanalysis experiments using state-of-the-art aberration-corrected scanning transmission electron microscopy (STEM), the resolution and precision of the result are primarily determined by the tolerance of the sample to the applied electron-beam dose. In the case of in-situ experiments, where the goal is to image a chemical or structural process as it evolves, the effect of the beam dose can be harder to unravel, since a change in structure/chemistry is precisely what the experiment is designed to observe. Recent results at the University of Liverpool have shown that the optimal solution for dose control in any form of scanning/transmission electron microscopy is to form the image from discrete locations of a small electron beam separated as far as possible in space and time. Here I will discuss the methodology behind the use of inpainting, with reference to the speed and efficiency of the reconstruction method and the potential for real-time imaging. In addition, I will demonstrate the use of simulations to provide a starting point for image interpretation, and the use of deep-learning approaches that allow the microscope to adapt its own imaging conditions.
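The core idea of forming an image from sparse probe positions can be illustrated with a toy diffusion-based inpainting loop; the actual reconstruction pipelines are more sophisticated (e.g. dictionary-based), but the hypothetical sketch below shows how an image can be filled in from a small fraction of measured pixels.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def inpaint_scan(samples, mask, n_iter=500, sigma=1.0):
    """Toy inpainting of a sparsely scanned STEM image.

    samples : 2D array, measured values where mask is True, 0 elsewhere.
    mask    : boolean array marking the probe positions actually visited.
    Repeatedly smooths the estimate and re-imposes the measured pixels
    (simple diffusion inpainting, a stand-in for learned methods).
    """
    x = samples.copy()
    for _ in range(n_iter):
        x = gaussian_filter(x, sigma)   # propagate information outwards
        x[mask] = samples[mask]         # keep the dose-limited measurements
    return x

# Toy usage: reconstruct from 10% of pixels on a random scan pattern.
rng = np.random.default_rng(0)
truth = gaussian_filter(rng.random((128, 128)), 3)
mask = rng.random(truth.shape) < 0.1
recon = inpaint_scan(truth * mask, mask)
```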
Title: Plug-and-Play Models for Large-Scale Computational Imaging
Computational imaging is a rapidly growing area that seeks to enhance the capabilities of imaging instruments by viewing imaging as an inverse problem. Plug-and-Play Priors (PnP) is one of the most popular frameworks for solving computational imaging problems through the integration of physical and learned models. PnP leverages high-fidelity physical sensor models and powerful machine-learning methods to provide state-of-the-art imaging algorithms. PnP models alternate between minimizing a data-fidelity term to promote data consistency and imposing a learned image prior in the form of an “image denoising” deep neural network. This talk presents a principled discussion of PnP, its theoretical foundations, its implementations for large-scale imaging problems, and recent results on PnP for the recovery of continuously represented images. We present several applications of our theoretical and algorithmic insights in bio-microscopy, computerized tomography, and magnetic resonance imaging.
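The alternation described above is simple to write down. The sketch below is a toy plug-and-play forward-backward iteration, not the speakers' implementation: a mild Gaussian filter stands in for a trained denoising network, and the blur operators and names are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_fbs(y, A, At, step, denoise, n_iter=50):
    """Sketch of plug-and-play forward-backward splitting.

    Alternates a gradient step on the data-fidelity term ||A x - y||^2
    with a denoiser standing in for the proximal map of a learned prior.
    """
    x = At(y)
    for _ in range(n_iter):
        x = x - step * At(A(x) - y)   # enforce data consistency
        x = denoise(x)                # impose the (learned) image prior
    return x

# Toy deblurring example: A is a Gaussian blur (self-adjoint), and the
# "denoiser" is a mild Gaussian filter in place of a trained network.
blur = lambda v: gaussian_filter(v, 2.0)
rng = np.random.default_rng(0)
truth = rng.random((64, 64))
y = blur(truth) + 0.01 * rng.standard_normal(truth.shape)
xhat = pnp_fbs(y, blur, blur, step=1.0,
               denoise=lambda v: gaussian_filter(v, 0.5))
```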
Title: Phenomenological and Algorithmic Considerations in Radar Imaging
Advances in radar technology have led to a broad variety of sensor geometries and data-collection modes, and have enabled their use in diverse operating environments ranging from subsurface to indoors. Understanding the specificities of the radar geometry/modes and the phenomenology associated with the operating environment is essential to gaining effective situational awareness and creating reliable, actionable intelligence. In this talk, we examine phenomenological and algorithmic considerations to demarcate the design space and demonstrate how to exploit novel choices within the resulting constraints for developing principled and efficient techniques for radar imaging. We present several examples of recent work, with an emphasis on sparsity-driven and machine-learning-based methods.
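As a concrete toy instance of the sparsity-driven methods mentioned above (not drawn from the talk itself), the sketch below recovers a sparse complex reflectivity vector from generic linear measurements via ISTA; the random matrix A is an assumed stand-in for a real radar measurement operator.

```python
import numpy as np

def ista_radar(y, A, lam=0.05, n_iter=200):
    """Sparsity-driven imaging sketch: min ||y - A x||^2/2 + lam ||x||_1.

    ISTA with complex soft-thresholding, since radar scene
    reflectivities are complex-valued.
    """
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        x = x + A.conj().T @ (y - A @ x) / L     # gradient step
        mag = np.maximum(np.abs(x), 1e-12)
        x *= np.maximum(1 - lam / (L * mag), 0)  # complex soft-threshold
    return x

# Toy usage: a few point scatterers seen through random projections.
rng = np.random.default_rng(1)
m, n = 64, 256
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(m)
x_true = np.zeros(n, dtype=complex)
x_true[rng.choice(n, 5, replace=False)] = 1.0 + 1j
y = A @ x_true
x_hat = ista_radar(y, A)
```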
Title: Optimisation and Deep Learning for Large-scale High-dynamic Range Computational Imaging in Radio Astronomy
Endowing advanced imaging instruments such as telescopes and medical scanners with an acute vision that enables them to probe the Universe or the human body with precision is a complex mathematical endeavour. It requires solving challenging inverse problems for image formation from observed data. In this talk, we will dive into this field of computational imaging and its specific application in radio astronomy, where algorithms currently face a multi-faceted challenge: the robust reconstruction of images at extreme resolution and dynamic range, and from extreme data volumes. We will discuss advanced algorithms at the interface of optimisation and deep-learning theories, from SARA, an optimisation algorithm propelled by handcrafted priors, to AIRI, a plug-and-play algorithm relying on learned denoisers, and the newborn network-series approach R2D2. We will also discuss ongoing work on transferring such algorithms to medical imaging. Last but not least, we will take a few seconds to unveil hidden Star Wars facts and misconceptions.
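For a rough sense of the underlying inverse problem, here is a toy forward-backward iteration (not SARA, AIRI or R2D2 themselves): a masked unitary FFT stands in for the telescope's non-uniform Fourier sampling, and an image-domain l1 + positivity prox stands in for the handcrafted or learned priors discussed in the talk.

```python
import numpy as np

def fb_radio_imaging(vis, mask, shape, lam=1e-3, n_iter=300):
    """Forward-backward sketch for a toy interferometric inverse problem.

    Toy forward model: vis = F(x)[mask], with F the unitary 2D FFT (real
    telescopes measure non-uniform Fourier samples of the sky). Step
    size 1 is safe since the masked unitary FFT has operator norm <= 1.
    """
    x = np.zeros(shape)
    for _ in range(n_iter):
        r = np.zeros(shape, dtype=complex)
        r[mask] = np.fft.fft2(x, norm="ortho")[mask] - vis  # residual
        x = x - np.real(np.fft.ifft2(r, norm="ortho"))      # gradient step
        x = np.maximum(x - lam, 0)                          # l1 + positivity
    return x
```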
Title: Computational Sensing via Folding: Revisiting the Legacy of Shannon-Nyquist, Prony, Schoenberg, Pisarenko and Radon
Digital data capture is the backbone of all modern-day systems, and the “Digital Revolution” has aptly been termed the Third Industrial Revolution. Underpinning digital representation are the Shannon-Nyquist sampling theorem and newer developments such as compressive sensing. The fact that there is a physical limit to the amplitudes sensors can measure poses a fundamental bottleneck when it comes to leveraging the performance guaranteed by recovery algorithms. In practice, whenever a physical signal exceeds the maximum recordable range, the sensor saturates, resulting in permanent information loss. Examples include (a) dosimeter saturation during the Chernobyl reactor accident, which reported radiation levels far lower than the true values, and (b) the loss of visual cues in self-driving cars exiting a tunnel, due to sudden exposure to light. In recent decades, recovery strategies have become increasingly non-linear, but for the most part the acquisition has remained linear, limiting truly high-dynamic-range (HDR) sensing. To reconcile this gap between theory and practice, we introduce a computational sensing approach, the Unlimited Sensing Framework (USF), based on a co-design of hardware and algorithms. On the hardware front, our work is based on non-linear analog-to-digital converters that produce modulo or folded samples. On the algorithmic front, we develop new, mathematically guaranteed recovery strategies. In the first part of this talk, we prove a sampling theorem akin to the Shannon-Nyquist criterion: despite the non-linearity in the sensing pipeline, the sampling rate depends only on the signal’s bandwidth. Our theory is complemented by a stable recovery algorithm. Beyond the theoretical results, we also present a hardware demo that shows the modulo ADC in action. Building on the basic sampling theorem, we consider several variations on the theme, including different signal classes (e.g. smooth, sparse and parametric functions) as well as sampling architectures such as one-bit and event-triggered sampling. Moving further, we reinterpret the USF as a generalized linear model that motivates a new class of inverse problems. We conclude the talk with a research overview covering single-shot HDR imaging, sensor-array processing and HDR computed tomography based on the modulo Radon transform.
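The core recovery idea can be demonstrated in a few lines under a strong oversampling assumption (the full USF theory uses higher-order differences and requires only a constant-factor oversampling tied to the bandwidth): folding commutes with finite differences modulo 2λ, so re-folding the differences of the modulo samples recovers the true differences, and a cumulative sum rebuilds the signal up to a multiple of 2λ. A minimal sketch, with illustrative names fold/unfold:

```python
import numpy as np

def fold(x, lam):
    """Centered modulo non-linearity: fold x into [-lam, lam)."""
    return np.mod(x + lam, 2 * lam) - lam

def unfold(y, lam):
    """Toy modulo-sample recovery by first-order unwrapping.

    Assumes oversampling strong enough that consecutive differences of
    the original signal stay below lam; then re-folding the differences
    of the folded samples yields the true differences, and a cumulative
    sum recovers the signal up to an unknown multiple of 2*lam.
    """
    d = fold(np.diff(y), lam)           # true sample differences
    return np.concatenate(([y[0]], y[0] + np.cumsum(d)))

# Toy demo: a signal exceeding the (hypothetical) ADC threshold lam.
t = np.linspace(0, 1, 2000)
x = 3.0 * np.sin(2 * np.pi * 3 * t) + 1.5 * np.cos(2 * np.pi * 7 * t)
lam = 1.0
y = fold(x, lam)                        # what the modulo sensor records
x_hat = unfold(y, lam)                  # matches x up to a 2*lam offset
```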