Join us in London for a full day of display science courses—LIM 2022 Summer School—followed by two exciting days of technical talks and networking opportunities. In-person space is limited for both events. Online attendance is an option for the Technical Program, but not the Summer School. The Post-LIM2022 Networking Event + Demos in Cambridge are free (see the end of the program for details).
2-Minute Interactive Paper Previews Followed by the Interactive Paper Poster Session
15:10 – 16:30 London
Session Chair: Javier Vázquez Corral, Universitat Autònoma de Barcelona (Spain)
The Art and Science of Displaying Visual Space, Robert Pepperell and Alistair Burleigh, Fovotec and Cardiff Metropolitan University (UK)
Abstract: This paper considers the problem of how to display visual space naturalistically in image media. A long-standing solution is linear perspective projection, which is currently used in imaging technologies from cameras to 3D graphics renderers. Linear perspective has many strengths but also some significant weaknesses, and over the centuries alternative techniques have been developed for creating more naturalistic images. Here we discuss the problem, its scientific background, and some of the approaches taken by artists and computer graphics researchers to find solutions. We briefly introduce our own approach, which is a form of nonlinear 3D geometry modelled on the perceptual structure of visual space and designed to work on standard displays. We conclude that perceptually modelled nonlinear approaches can make 3D imaging technology more naturalistic than methods based on linear perspective.
Effect of Bit-depth in Stochastic Gradient Descent Performance for Phase-only Computer-generated Holography Displays, Andrew C. Kadis, Benjamin Wetherfield, Jinze Sha, Fan Yang, Youchao Wang, and Timothy Wilkinson, University of Cambridge (UK)
Abstract: SGD (stochastic gradient descent) is an emerging technique for achieving high-fidelity projected images in CGH (computer-generated holography) display systems. In real-world applications, the devices that display the corresponding holographic fringes have limited bit-depth, depending on the specific display technology employed. SGD performance is adversely affected by this limitation, and in this work we quantitatively compare algorithmic performance at different bit-depths using our own algorithm, Q-SGD (Quantised SGD). The choice of modulation device is a key decision in the design of a holographic display system, and our research goal is to better inform the selection and application of individual display technologies.
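As a rough illustration of the kind of pipeline the abstract describes (not the authors' Q-SGD implementation): a phase-only hologram can be optimised by SGD with the phase quantised to the device bit-depth on each forward pass. The single-FFT far-field propagation model and the straight-through gradient estimator below are both assumptions.

```python
# Sketch of quantisation-aware SGD for phase-only CGH; details are assumed,
# not taken from the paper.
import math
import torch

def quantise(phase, bit_depth):
    """Round phase to 2**bit_depth levels over [0, 2*pi); straight-through gradient."""
    step = 2 * math.pi / (2 ** bit_depth)
    q = torch.round(phase / step) * step
    return phase + (q - phase).detach()  # gradient flows as if unquantised

def reconstruct(phase):
    """Far-field intensity of a phase-only hologram (single-FFT propagation)."""
    field = torch.exp(1j * phase)
    return torch.fft.fftshift(torch.fft.fft2(field)).abs() ** 2

target = torch.rand(256, 256)                      # placeholder target image
phase = torch.zeros(256, 256, requires_grad=True)  # hologram phase pattern
opt = torch.optim.SGD([phase], lr=0.1)

for _ in range(200):
    opt.zero_grad()
    recon = reconstruct(quantise(phase, bit_depth=4))
    loss = torch.nn.functional.mse_loss(recon / recon.max(), target)
    loss.backward()
    opt.step()
```

Sweeping `bit_depth` in a loop like this yields the kind of bit-depth-versus-fidelity comparison the abstract describes.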
Work In Progress: An Exposure Invariant Neural Network for Colour Correction, Abdullah Kucuk and Graham Finlayson, University of East Anglia; Rafal Mantiuk, University of Cambridge; and Maliha Ashraf, University of Liverpool (UK)
Abstract: Colour correction is the process of converting camera dependent RGB values to a camera independent standard colour space such as CIE XYZ. Regression methods - linear, polynomial and root-polynomial least-squares - are traditionally used to solve for the colour correction transform. However, more recently neural net solutions for colour correction have been developed.
In this paper, we observe that the neural net solution - while delivering better colour correction accuracy than the simple (and widely deployed) 3x3 linear correction matrix approach - is not exposure invariant. That is to say, the network is tuned to mapping RGBs to XYZs for a fixed exposure level, and when this exposure level changes its performance degrades (it delivers less accurate colour correction than the 3x3 matrix approach, which is exposure invariant). We go on to investigate two remedies to the exposure variation problem. First, we augment the training data to include responses for many different exposures. Second, we redesign the network so that, by construction, it is exposure invariant.
Experiments demonstrate that we can make neural nets that deliver good colour correction across exposure changes. Moreover, the correction performance is found to be better compared with linear colour correction. However, the root-polynomial regression method - which is also exposure invariant - performs better than the derived neural net solution.
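A minimal sketch of the second remedy described above, making the network exposure invariant by construction; the idea of normalising out intensity follows the abstract, but the architecture and the particular normalisation are assumptions:

```python
# Sketch of an exposure-invariant colour correction net (assumed design).
import torch
import torch.nn as nn

class ExposureInvariantCC(nn.Module):
    """Maps camera RGB to XYZ; exposure invariant by construction."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, rgb):
        # Divide out exposure so the net only ever sees unit-intensity RGBs;
        # reapplying the scale makes the map homogeneous of degree 1.
        scale = rgb.sum(dim=-1, keepdim=True).clamp(min=1e-8)
        return self.net(rgb / scale) * scale

model = ExposureInvariantCC()
rgb = torch.rand(8, 3)
# Doubling the exposure doubles the predicted XYZ, up to float error.
assert torch.allclose(model(2 * rgb), 2 * model(rgb), atol=1e-5)
```

Because the input is normalised to unit intensity and the scale is reapplied at the output, scaling the exposure by k scales the prediction by exactly k, which is the invariance property the abstract asks for.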
Work In Progress: Weibull Tone Mapping (WTM) for the Enhancement of Underwater Imagery, Chloe Game¹, Michael Thompson², and Graham Finlayson¹; ¹University of East Anglia and ²Mott Macdonald Ltd. (UK)
Abstract: Underwater imaging is a preferred survey tool for marine environments; however, underwater illumination effects often create datasets of extremely varied and inconsistent quality. Consequently, images must be pre-processed to improve feature visibility and standardize their appearance. This can be achieved simply and effectively by manipulating their brightness distributions with tone mapping. Given that the data collected from underwater imagery is highly diverse, it is crucial to consider the purpose the imagery serves when evaluating its enhancement.
In previous work [1] we described tonal enhancements by domain experts (biologists) to aid annotation of underwater seabed habitats. Tone maps were created using a typical interactive curve-manipulation GUI with a set of control points that can be dragged to alter brightness and contrast. Such tools offer bespoke and targeted image enhancements that are preferred over more general automatic tools, but are too time-consuming to apply to large datasets.
We found that a smoother and simpler approximation of these tonal manipulations could be derived using our Weibull Tone Mapping (WTM) algorithm. This involves fitting a Weibull distribution (WD) to the brightness histograms of the input and user-adjusted output images, then solving for the tone map that maps the underlying distributions to each other. This tone mapping operation (TMO) was preferred by the experts to their own bespoke adjustments for identifying benthic habitats from imagery. WTM therefore provides the necessary building blocks for a targeted enhancement algorithm that can quickly create smooth tonal manipulations.
In this work we explore how widely applicable the WTM algorithm is to underwater images by focusing on a larger dataset. Specifically, we introduce WTM as a parameterized enhancement tool in which analysts specify a desirable target WD, to which an image is rendered, by modifying the distribution's two parameters. Under experimental conditions, 10 observers used WTM to enhance images to aid seabed habitat identification. When a suitable WTM adjustment could not be found, observers could manipulate the WTM tone map using an interactive curve tool with 6 moveable control points until satisfied. We use this opportunity to further explore desirable TMOs and to investigate the capability of WTM to simplify control-point tone-mapping tools.
We demonstrate that, given the choice, experts typically find a WTM enhancement sufficient for their analyses (81% of images) rather than reaching for an advanced adjustment from the interactive tool. Interestingly, in the latter cases we find that the majority (91%) of TMOs could be approximated by our WTM algorithm, using mean CIE ΔE < 5 as our threshold for success. Intra- and inter-observer variability was low, and image content did not appear to influence observer tool choice.
These results further illustrate that the WD is a good model and target distribution for underwater image histograms. WTM's usage thus extends beyond simplifying and smoothing complex, time-consuming tonal manipulations: it is a successful and preferred enhancement tool in its own right. These data provide the necessary groundwork to investigate whether a suitable WTM can be derived automatically from images.
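A minimal sketch of the CDF-matching construction behind WTM as described above: fit a Weibull distribution to the input brightness histogram and solve for the tone map that carries it onto a target Weibull. Using SciPy's weibull_min fit is an implementation assumption, not the authors' code.

```python
# Sketch of Weibull Tone Mapping via CDF matching (assumed implementation).
import numpy as np
from scipy.stats import weibull_min

def weibull_tone_map(brightness, target_shape, target_scale):
    """Map brightness so its distribution matches the target Weibull:
    T(x) = F_target^{-1}(F_in(x)), i.e. CDF matching of the two fits."""
    k_in, _, lam_in = weibull_min.fit(brightness.ravel(), floc=0)
    u = weibull_min.cdf(brightness, k_in, scale=lam_in)
    u = np.clip(u, 0.0, 1.0 - 1e-9)  # keep the inverse CDF finite
    return weibull_min.ppf(u, target_shape, scale=target_scale)

# Toy usage: pull a dim, low-contrast image toward a brighter target.
img = np.random.weibull(1.2, (64, 64)) * 0.2
out = weibull_tone_map(img, target_shape=2.0, target_scale=0.5)
```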
Work In Progress: Effects of Size on Perception of Display Flicker: Comparison with Flicker Indices, Hyosun Kim, Eunjung Lee, Hyungsuk Hwang, Youra Kim, and Dong-Yeol Yeom, Samsung Display (South Korea)
Abstract: Using images simulated at 30 Hz, we observed the effect of size on the perception of display flicker and compared the results with various indices representing the degree of flickering. Participants perceived flicker more strongly as the size of the stimuli increased. However, none of the flicker indices, such as JEITA, Flicker Visibility, and Flicker Modulation Amplitude, reflected this tendency. Since display makers generally use these indices to represent the amount of flicker, the indices need to be supplemented to include the effects of size.
Work In Progress: A Quantum-relativistic Chromatic Adaptation Transform, Nicoletta Prencipe, Université de Bordeaux (France)
Abstract: In the context of my PhD thesis I am taking part in a mathematical approach to colour perception that descends from the axiomatization developed by Schrödinger, Resnikoff, and Berthier.
The axioms can be summarized in the following statement: the space of perceived colours is the cone of positive elements of a formally real Jordan algebra of dimension 3.
There are only two possible choices for such an algebra, i.e., there are only two types of models. Existing colour spaces fall into the first category, while the second type of model has an intrinsic hyperbolic nature and adapts mathematical concepts from quantum mechanics and special relativity.
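For concreteness, the two dimension-3 formally real Jordan algebras behind the statement above, per the Jordan-von Neumann-Wigner classification (stated here as background, not taken from the abstract itself):

```latex
% A_1 yields the classical trichromatic cone (the positive octant);
% A_2, the symmetric 2x2 real matrices, has as its positive cone the
% positive-definite matrices, which carries the hyperbolic structure
% mentioned above.
\[
  \mathcal{A}_1 = \mathbb{R} \oplus \mathbb{R} \oplus \mathbb{R},
  \qquad
  \mathcal{A}_2 = \mathcal{H}(2,\mathbb{R})
  = \left\{ \begin{pmatrix} a & b \\ b & c \end{pmatrix}
      : a, b, c \in \mathbb{R} \right\}
\]
```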
At an intuitive level it is not hard to explain why it makes sense to invoke modern physics in the colour context. Colour perception is a process based on the duality between the measurement context and the observing apparatus (which might be the human visual system or a digital camera). This recalls the duality in quantum mechanics: it makes no sense to talk about a perceived colour without specifying the conditions under which it was measured. Perceived colours are not absolute, but relative to the viewing conditions.
This new model is also of interest because it naturally gives rise to new proposals for colour metrics and transforms, which may be useful in colour image processing.
A consequence of the relativistic nature of the model is a set of transformations well known from special relativity: Lorentz boosts.
We propose to use a normalized Lorentz boost as a chromatic adaptation transform (CAT) for automatic white balance (AWB). I will describe two different implementations of the boost CAT: one in the HCV colour space and another in a modified HCV obtained by adding Hering's opponency to H. I will discuss both visual and quantitative comparisons of the performance of this new method with respect to the classic von Kries diagonal CAT.
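For reference, the textbook Lorentz boost that the proposal builds on (shown with c = 1; the paper's specific normalisation for use as a CAT is not given in the abstract):

```latex
% Boost with velocity v, ||v|| < 1, acting on a vector (x_0, x);
% gamma is the usual Lorentz factor.
\[
  \gamma = \frac{1}{\sqrt{1 - \lVert v \rVert^{2}}}, \qquad
  L_v\!\begin{pmatrix} x_0 \\ \mathbf{x} \end{pmatrix}
  = \begin{pmatrix}
      \gamma\,\bigl(x_0 + \langle v, \mathbf{x} \rangle\bigr) \\
      \mathbf{x} + \left( \frac{\gamma - 1}{\lVert v \rVert^{2}}\,
        \langle v, \mathbf{x} \rangle + \gamma\, x_0 \right) v
    \end{pmatrix}
\]
```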
Work In Progress: LightSim: A Testbed for Evaluating Color Calibration Kits, Wei-Chung Cheng, Food & Drug Administration (US)
Abstract: Since remote work became common due to the pandemic, professionals relying on high-quality color displays, such as pathologists, have lost regular access to professionally calibrated devices. A potential mitigation is the use of consumer-grade display calibration sensors to conduct routine quality assurance or quality control tasks. However, the accuracy of such sensors is yet to be determined and understood.
A testbed was developed to spectrally reproduce display stimuli for testing color calibration kits. The testbed, LightSim, consists of a tunable light source (TLS), an integrating sphere, and a spectroradiometer.
The testbed was characterized as a 1-pixel, 1,024-primary, 40,000-level display, in contrast to regular n-pixel, 3-primary, 256-level displays. Primary spectra of three displays were measured to emulate different color gamuts and lighting methods: an OLED-based virtual reality device (Oculus Rift), a CCFL-backlit professional-grade display (NEC PA271), and an LED-backlit consumer-grade display (HP Z24X), representing the DCI-P3, AdobeRGB, and sRGB color spaces, respectively.
In the experiment, a color calibration sensor (DataColor Spyder X Elite) was tested with the 24 patches of the ColorChecker. The sensor allows the user to select one of four backlighting modes ("White LED", "Standard LED", "General", and "GB LED"). The results show adequate linearity of luminance responses in the mid-range. Most color differences were less than 2.5 ΔE00, except for the darkest patch, #24, indicating limited capability for measuring dark shades. None of the four backlighting modes outperformed the others, and two blue patches, #8 and #13, generated the most diverse results. This exercise demonstrates the utility of LightSim for emulating arbitrary spectra without employing actual displays based on different backlighting methods.
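A sketch of the kind of patch-wise check described above, comparing sensor readings against spectroradiometer references with CIEDE2000; the patch data here is a random placeholder, not the paper's measurements.

```python
# Sketch: per-patch dE00 between reference and sensor XYZ (placeholder data).
import numpy as np
from skimage.color import xyz2lab, deltaE_ciede2000

rng = np.random.default_rng(0)
reference_xyz = rng.random((24, 3))  # spectroradiometer ground truth (placeholder)
sensor_xyz = np.clip(reference_xyz + 0.01 * rng.standard_normal((24, 3)), 0, None)

dE = deltaE_ciede2000(xyz2lab(reference_xyz), xyz2lab(sensor_xyz))
print(f"max dE00 = {dE.max():.2f}; patches over 2.5: {(dE > 2.5).sum()}")
```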