Monday 17 January 2022
IS&T Welcome & PLENARY: Quanta Image Sensors: Counting Photons Is the New Game in Town
07:00 – 08:10
The Quanta Image Sensor (QIS) was conceived as a different image sensor—one that counts photoelectrons one at a time using millions or billions of specialized pixels read out at high frame rate, with computational imaging used to create grayscale images. QIS devices have been implemented in a baseline room-temperature CMOS image sensor (CIS) technology without using avalanche multiplication, and also with SPAD arrays. This plenary details the QIS concept, how it has been implemented in CIS and in SPADs, and what the major differences are. Applications that can be disrupted or enabled by this technology are also discussed, including smartphones, where CIS-QIS technology could be employed in just a few years.
Eric R. Fossum, Dartmouth College (United States)
Eric R. Fossum is best known for the invention of the CMOS image sensor “camera-on-a-chip” used in billions of cameras. He is a solid-state image sensor device physicist and engineer, and his career has included academic and government research, and entrepreneurial leadership. At Dartmouth he is a professor of engineering and vice provost for entrepreneurship and technology transfer. Fossum received the 2017 Queen Elizabeth Prize from HRH Prince Charles, considered by many as the Nobel Prize of Engineering “for the creation of digital imaging sensors,” along with three others. He was inducted into the National Inventors Hall of Fame, and elected to the National Academy of Engineering among other honors including a recent Emmy Award. He has published more than 300 technical papers and holds more than 175 US patents. He co-founded several startups and co-founded the International Image Sensor Society (IISS), serving as its first president. He is a Fellow of IEEE and OSA.
08:10 – 08:40 EI 2022 Welcome Reception
Tuesday 18 January 2022
Color Management I
Session Chairs:
Reiner Eschbach, Norwegian University of Science and Technology (Norway) and Monroe Community College (United States) and Gabriel Marcu, Apple Inc (United States)
07:00 – 08:20
Yellow Room
07:00 COLOR-139
Estimating spectral reflectances using mobile phone cameras, Shoji Tominaga1,2, Shogo Nishi3, and Ryo Ohtera4; 1Norwegian University of Science and Technology (Norway), 2Nagano University (Japan), 3Osaka Electro-Communication University (Japan), and 4Kobe Institute of Computing (Japan)
We discuss a method for estimating the surface-spectral reflectances of objects using mobile phone cameras. First, recently developed methods for measuring and estimating the spectral sensitivity functions of mobile phone cameras are briefly described. Second, we describe the Wiener filter technique for spectral reflectance estimation. It should be noted that in the traditional Wiener filter the sensor outputs are normalized to a constant, because the actual camera outputs are not equal to the calculated sensor outputs obtained from the numerical sum of products of three spectral functions. In the present study, we generalize the observation model by introducing an additional gain parameter so that the calculated camera responses equal the real camera outputs. This Wiener filter algorithm therefore has two unknown parameters: the noise variance and the gain coefficient. We propose methods for estimating these parameters using a standard sample with known spectral reflectance. The generalized reflectance estimation method is validated in experiments using multiple mobile phone cameras and LED lamps with different spectral power distributions.
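The generalized observation model described in this abstract can be sketched as follows. All dimensions and quantities here (31 spectral bands, a 3-channel camera, an exponential smoothness prior, the simulated sensitivities) are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_channels = 31, 3          # e.g. 400-700 nm at 10 nm; an RGB camera

# S: system matrix = camera spectral sensitivities x illuminant (assumed known).
S = np.abs(rng.normal(size=(n_channels, n_bands)))

# Prior covariance of surface reflectances: a common exponential
# smoothness model, K[i, j] = rho^|i - j|.
idx = np.arange(n_bands)
K = 0.9 ** np.abs(idx[:, None] - idx[None, :])

def wiener_estimate(c, S, K, noise_var, gain):
    """Estimate reflectance r from camera output c under the generalized
    model c = gain * S @ r + noise. The two unknowns (noise_var, gain)
    are assumed already estimated from a standard sample."""
    A = gain * S
    W = K @ A.T @ np.linalg.inv(A @ K @ A.T + noise_var * np.eye(len(c)))
    return W @ c

# Simulate a smooth reflectance and the corresponding camera output.
r_true = 0.5 + 0.3 * np.sin(np.linspace(0.0, np.pi, n_bands))
gain, noise_var = 1.7, 1e-4
c = gain * S @ r_true + rng.normal(scale=noise_var ** 0.5, size=n_channels)

r_hat = wiener_estimate(c, S, K, noise_var, gain)
```

With only three channels the recovery is coarse, but the gain parameter lets the same filter operate directly on raw camera outputs instead of normalized responses.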
07:20 COLOR-140
Non-standard colorimetry in ICC colour management, Peter Nussbaum, Milan Kresović, and Phil Green, Norwegian University of Science and Technology (Norway)
In ICC v4 colour management, data is exchanged between different colour encodings via a fixed Profile Connection Space (PCS), in which colorimetry is based on a D50 illuminant, the CIE 1931 standard observer, and a 0°:45° measurement geometry. Colorimetry based on a different illuminant, observer, or measurement geometry should in principle be transformed into the fixed PCS; however, while a chromatic adaptation method is specified for when illuminants differ, no method is specified for differences in observer or measurement geometry. The Waypoint method has been proposed as a means of transforming between different colorimetric data encodings. In this study, a Waypoint-based method recommended by the ICC was evaluated as a mechanism for transforming into the ICC PCS, applied to a use case in digital textile printing in which the source colorimetry is based on the D65 illuminant and the CIE 1964 observer. It was compared with an alternative approach in which a non-ICC PCS was used within a conventional ICC colour management framework. The results show that when both source and destination colorimetry are based on D65/10°, both methods perform equally well. However, when the source and destination colorimetry do not match, the ICC approach of transforming via the standard PCS yields better results.
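The chromatic adaptation step that ICC does specify for illuminant differences is the linearized Bradford transform. A sketch of adapting XYZ colorimetry from D65 to the D50 PCS illuminant, using the standard published Bradford matrix and white points, might look like this (illustrative, not the paper's Waypoint method):

```python
import numpy as np

# Bradford matrix: maps XYZ to a sharpened cone-like response space.
M_BFD = np.array([[ 0.8951,  0.2664, -0.1614],
                  [-0.7502,  1.7135,  0.0367],
                  [ 0.0389, -0.0685,  1.0296]])

# Illuminant white points (X, Y, Z), Y normalized to 1.
WHITE_D65 = np.array([0.9504, 1.0000, 1.0888])
WHITE_D50 = np.array([0.9642, 1.0000, 0.8249])

def bradford_adapt(xyz, src_white=WHITE_D65, dst_white=WHITE_D50):
    """Linearized Bradford chromatic adaptation of an XYZ triple:
    scale in the Bradford space by the ratio of the adapted whites."""
    rho_src = M_BFD @ src_white
    rho_dst = M_BFD @ dst_white
    cat = np.linalg.inv(M_BFD) @ np.diag(rho_dst / rho_src) @ M_BFD
    return cat @ np.asarray(xyz)
```

By construction the source white maps exactly to the destination white; what the specification does not provide, and what the Waypoint method addresses, is an analogous transform for a change of observer or measurement geometry.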
07:40 COLOR-141
Camera response function assessment in multispectral HDR imaging, Majid Ansari-Asl, Jean-Baptiste Thomas, and Jon Yngve Hardeberg, Norwegian University of Science and Technology (Norway)
Recently, spatially varying Bidirectional Reflectance Distribution Functions (svBRDF) have been widely used to characterize the appearance of materials whose visual properties vary over the surface. One of the challenges in image-based svBRDF capture systems arises for surfaces with high specularity and sparkle, which require a dynamic range higher than that of cameras. High Dynamic Range Imaging (HDRI) for svBRDF systems with a multispectral camera has not been addressed properly in the literature. In HDRI, the Camera Response Function (CRF) plays a crucial role in the precision of the results, especially when measuring metrological data such as spectral svBRDF. In this work, we investigate the effect of CRF assessment on measurement precision. We conducted two experiments: measuring the absolute CRF using the reflective chart method, and estimating the relative CRF by Debevec and Malik's method, for a filter-wheel multispectral camera to be used in an svBRDF setup. Results are evaluated at two levels, radiance map construction and reflectance calculation, by comparison with telespectroradiometer measurements as ground truth. Results showed that although HDRI with the measured absolute CRF outputs radiance measurements in the same physical units and on the same scale as the ground truth, HDRI with the estimated relative CRF performed better in terms of reflectance measurement precision.
08:00 COLOR-142
Problems in image target-based color correction, Gabriele Simone1, Marco Gaiani2, Andrea Bellabeni2, and Alessandro Rizzi1; 1Università degli Studi di Milano and 2University of Bologna (Italy)
This paper presents some problems that anyone could experience in the process of image target-based color correction (CC). We acquired a set of images using a color checker; here we present measurements of these images before and after color correction and compare them with the actual values of the color checker. Comments about the results and their departures from the scene are reported, together with the changes after color correction. It is shown that real scene acquisitions are subject to many issues that make the color correction process very far from idealized colorimetric assumptions.
AI Applications for Color
Session Chair:
Jan Allebach, Purdue University (United States)
08:50 – 10:10
Yellow Room
08:50 COLOR-156
Effect of hue shift towards robustness of convolutional neural networks, Kanjar De1,2 and Marius Pedersen2; 1Lulea University of Technology (Sweden) and 2Norwegian University of Science and Technology (Norway)
Computer vision systems are being deployed in diverse real-time systems, so robustness is a major concern. A vast majority of AI-enabled systems are based on convolutional neural network models that use 3-channel RGB images as input. It has been shown that the performance of AI systems, such as those used in classification, is impacted by distortions in the images. To date, most work has addressed distortions such as noise, blur, and compression; however, color-related changes to images can also impact performance. The goal of this paper is therefore to study the robustness of these models under different hue shifts.
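As a concrete illustration of this kind of perturbation, a hue shift can be generated by rotating the hue channel in HSV space while leaving saturation and value untouched. This is a generic sketch of the distortion, not the authors' exact evaluation protocol:

```python
import colorsys

def shift_hue(rgb, delta):
    """Rotate the hue of one RGB pixel (components in [0, 1]) by
    `delta`, expressed as a fraction of a full turn around the hue circle."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb((h + delta) % 1.0, s, v)

def shift_image_hue(img, delta):
    """Apply the same hue rotation to every pixel of a nested-list image."""
    return [[shift_hue(px, delta) for px in row] for row in img]
```

Rotating pure red by one third of a turn yields (approximately, up to floating-point error) pure green; sweeping `delta` over a grid produces the family of hue-shifted test sets whose effect on classifier accuracy is studied here.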
09:10 COLOR-157
Deep learning approach for classifying contamination levels with limited samples, Min Zhao1, Susana Diaz-Amaya1,2, Amanda J. Deering1, Lia Stanciu1, George T.-C. Chiu1, and Jan Allebach1; 1Purdue University and 2Bayer at Convergence - Bayer Crop Science (United States)
Prior to the conference, the sponsor of this work may want to submit a patent application protecting IP associated with it. For that reason, we cannot reveal further details about the work at this time.
09:30 COLOR-158
Mimicking DBS halftoning via a deep learning approach, Baekdu Choi and Jan P. Allebach, Purdue University (United States)
Prior to the conference, the sponsor of this work may want to submit a patent application protecting IP associated with it. For that reason, we cannot reveal further details about the work at this time.
09:50 COLOR-159
Improvements to color image and machine learning based thin-film nitrate sensor performance prediction: New texture features, repeated cross-validation, and auto-tuning of hyperparameters, Xihui Wang, Jan Allebach, George T.-C. Chiu, Ali Shakouri, and Ye Mi, Purdue University (United States)
Prior to the conference, the sponsor of this work may want to submit a patent application protecting IP associated with it. For that reason, we cannot reveal further details about the work at this time.
Color Management II
Session Chairs:
Phil Green, Norwegian University of Science and Technology (NTNU) (Norway) and Shoji Tominaga, Norwegian University of Science and Technology (Norway) and Nagano University (Japan)
16:15 – 16:55
Yellow Room
16:15 COLOR-180
An exploration of color reproduction for inkjet FDM color 3D printing, Piyarat Silapasuphakornwong1, Chulapong Panichkriangkrai2, Parinya Punpongsanon3, Masahiro Suzuki4, and Kazutake Uehira1; 1Kanagawa Institute of Technology (Japan), 2Chulalongkorn University (Thailand), 3Osaka University (Japan), and 4Seisen University (Japan)
Advances in consumer-grade full-color 3D printing make it possible to create dedicated, aesthetically appealing objects. However, faithful fabrication of target colors is still limited by mismatches in 3D color management systems. While current FDM 3D printing inherits its color management from standard 2D printing technologies, 3D-specific aspects such as the characteristics of inks and substrates, the viewing conditions, and the base materials make the process different from 2D printing. To the best of our knowledge, no suitable method has been established to support color reproduction for inkjet FDM color 3D printing. In this paper, we first analyze the color profile of inkjet FDM color 3D printing and investigate a color model that could bridge the gap between the digital design and the actual 3D-printed result. We then build the color model by manually reproducing each color, mapping every possible color pair to the closest color based on the smallest color difference value, which can be rendered in lieu of the original printing colors. We verify the proposed color mapping by 3D printing the mapped colors and measuring them against the target colors. The experimental results show that the mapped colors can represent the user's intent, with 80% of the mapped colors matched under the controlled condition.
16:35 COLOR-181
Considering chromatic adaptation in camera white balance, Minchen Wei1, Yiqian Li1, and Xiandou Zhang2; 1The Hong Kong Polytechnic University (Hong Kong) and 2Huawei Tech (China)
Chromatic adaptation is an important mechanism in the human visual system. It helps to keep the color appearance of illuminated objects relatively constant by automatically removing the color cast of the illumination. White balance, an important step in the camera ISP pipeline, is designed to simulate the chromatic adaptation mechanism by automatically or manually specifying the white point of a captured scene. Conventional white balance algorithms simply adjust the color appearance of the captured scene to how it would appear under daylight, regardless of the illumination of the scene. Recent studies, however, clearly suggest that incomplete chromatic adaptation occurs under some illumination conditions, which should be considered in white balance. In this study, we systematically varied the chromaticities of the illumination in a single viewing booth and also in a pair of viewing booths. The observers viewed the booth(s) first and then adjusted the color appearance of the image of the booth(s) shown on a smartphone display by adjusting the image white point. The results clearly suggest the necessity of considering the degree of chromatic adaptation in camera white balance and provide guidance on how white balance should be performed.
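The idea of incomplete adaptation can be sketched as a von Kries-style white balance whose per-channel gains are blended with a degree-of-adaptation factor D, in the spirit of the D factor in CIECAM02. The sensor-RGB simplification below is an illustrative assumption, not the study's model:

```python
import numpy as np

def wb_gains(src_white, dst_white, D=1.0):
    """Per-channel white-balance gains with degree of adaptation D.
    D = 1 maps the scene white fully to the target white (complete
    adaptation); D = 0 leaves the image untouched (no adaptation)."""
    full = np.asarray(dst_white, float) / np.asarray(src_white, float)
    return D * full + (1.0 - D)

def white_balance(img, src_white, dst_white, D=1.0):
    """Apply the blended gains to an H x W x 3 image array."""
    return np.asarray(img, float) * wb_gains(src_white, dst_white, D)
```

Intermediate D values deliberately leave part of the illuminant's color cast in the rendered image, matching the partially adapted appearance the observers report.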
Wednesday 19 January 2022
IS&T Awards & PLENARY: In situ Mobility for Planetary Exploration: Progress and Challenges
07:00 – 08:15
This year saw exciting milestones in planetary exploration with the successful landing of the Perseverance Mars rover, followed by its operation and the successful technology demonstration of the Ingenuity helicopter, the first heavier-than-air aircraft ever to fly on another planetary body. This plenary highlights new technologies used in this mission, including precision landing for Perseverance, a vision coprocessor, new algorithms for faster rover traverse, and the ingredients of the helicopter. It concludes with a survey of challenges for future planetary mobility systems, particularly for Mars, Earth’s moon, and Saturn’s moon, Titan.
Larry Matthies, Jet Propulsion Laboratory (United States)
Larry Matthies received his PhD in computer science from Carnegie Mellon University (1989), before joining JPL, where he has supervised the Computer Vision Group for 21 years, the past two coordinating internal technology investments in the Mars office. His research interests include 3-D perception, state estimation, terrain classification, and dynamic scene analysis for autonomous navigation of unmanned vehicles on Earth and in space. He has been a principal investigator in many programs involving robot vision and has initiated new technology developments that impacted every US Mars surface mission since 1997, including visual navigation algorithms for rovers, map matching algorithms for precision landers, and autonomous navigation hardware and software architectures for rotorcraft. He is a Fellow of the IEEE and was a joint winner in 2008 of the IEEE’s Robotics and Automation Award for his contributions to robotic space exploration.
EI 2022 Interactive Poster Session
08:20 – 09:20
EI Symposium
Interactive poster session for authors and attendees of all conferences.
Material Appearance I
Session Chairs:
Mathieu Hebert, Université Jean Monnet de Saint Etienne (France) and Lionel Simonot, Université de Poitiers (France)
09:40 – 10:20
Yellow Room
09:40 COLOR-221
Light scattering in translucent layers: Angular distribution and internal reflections at flat interfaces, Arthur Gautheron1,2, Raphael Clerc3,4, Vincent Duveiller3,4, Lionel Simonot5, Bruno Montcel1,2, and Mathieu Hebert3,4; 1CREATIS, 2Université Claude Bernard Lyon 1, 3Université Jean Monnet de Saint Etienne, 4Institut d'Optique Graduate School, and 5Université de Poitiers (France)
Optical characterization and appearance prediction of translucent materials are required in various fields, such as dental restorations and 3D printing technologies [1,2]. However, flux transfer models like the Kubelka-Munk (2-flux) model fail to predict the color variations of translucent objects as their thickness varies. Indeed, they rely on the assumption that the angular distribution of light is Lambertian at any depth within the material, i.e., also at the bordering interfaces of the object; the internal front and back reflectances are therefore typically computed assuming a Lambertian angular distribution. In this paper, the Radiative Transfer Equation, which allows a more accurate description of the scattering of light across a layer of translucent material, is used to investigate this point. It turns out that the angular distributions of light are far from Lambertian, due to the combined effect of light scattering and Fresnel reflection. Consequently, the internal reflectances may vary significantly according to the layer's thickness, refractive index, and scattering and absorption coefficients. This work enables a better understanding of the scattering of light inside a translucent layer and invites a revisiting of the well-known Saunderson correction usually used in 2-flux or 4-flux models.
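For context, the 2-flux Kubelka-Munk prediction that this paper scrutinizes has a closed form. The hyperbolic formulation below uses standard textbook notation (K absorption, S scattering, X thickness, Rg background reflectance) and is a generic sketch, not the authors' radiative-transfer computation:

```python
import numpy as np

def km_reflectance(K, S, X, Rg):
    """Kubelka-Munk reflectance of a layer of thickness X with absorption
    coefficient K and scattering coefficient S over a background of
    reflectance Rg (all quantities per unit thickness, dimensionless Rg)."""
    a = 1.0 + K / S
    b = np.sqrt(a * a - 1.0)
    coth = 1.0 / np.tanh(b * S * X)
    return (1.0 - Rg * (a - b * coth)) / (a - Rg + b * coth)
```

As X grows the prediction converges to the infinite-layer reflectance R_inf = a - b, independent of the background, while a vanishingly thin layer returns Rg. The paper's point is that the Lambertian-distribution assumption underlying this 2-flux picture breaks down at the interfaces, so the internal reflectances feeding such models are not constants.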
10:00 COLOR-222
Exploring the role of caustics in translucency perception — An eye tracking approach, Davit Gigilashvili, Aditya Sole, Shaikat Deb Nath, and Marius Pedersen, Norwegian University of Science and Technology (Norway)
Translucency is an important appearance attribute. The caustic patterns that are cast by translucent objects onto another surface encapsulate information about subsurface light transport properties of a material. A previous study (Gigilashvili et al., 2020) demonstrated that objects placed on a white surface are judged more translucent by human observers than identical objects placed on a black surface. The authors propose the lack of caustics as a potential explanation for these differences — since a perfectly black surface, unlike its white counterpart, does not permit observing the caustics. We hypothesize that caustics are salient image cues to perceived translucency, and they attract the visual attention of the human observers when assessing translucency of an object. To test this hypothesis, we replicated the experiment reported in the previous study, but in addition to collecting the observer responses, we also conducted eye tracking during the experiment. The data collection is currently underway, and we anticipate that the gaze information, such as gaze paths and gaze maps obtained with an eye tracker will provide deeper insight not only into the role of caustics but also into other potential image cues to translucency in general.
Imaging I
Session Chairs:
Vien Cheung, University of Leeds (United Kingdom) and Alessandro Rizzi, Università degli Studi di Milano (Italy)
10:55 – 11:35
Yellow Room
10:55 COLOR-233
Smartphones' skin colour reproduction analysis for neonatal jaundice detection (JIST-first), Mekides A. Abebe1, Jon Yngve Hardeberg1, and Gunnar Vartdal2; 1Norwegian University of Science and Technology and 2Picterus AS (Norway)
In recent years, smartphone-based color imaging systems have increasingly been applied to neonatal jaundice detection. These systems mostly estimate bilirubin concentration levels from the correlation of newborns' skin color images with total serum bilirubin (TSB) and transcutaneous bilirubinometry (TcB) measurements. However, the color reproduction of smartphone cameras is known to be influenced by various factors arising from technological and acquisition-process variability. For accurate bilirubin estimation regardless of the smartphone and illumination conditions used to capture the newborns' skin images, a complete model or dataset representing all possible real-world acquisition scenarios would have to be used. Because of the challenges in generating such a model or dataset, some solutions opt for a reduced dataset (designed for reference conditions and devices only) together with color correction systems (to transform images from other smartphones into the reference space). Such approaches make the bilirubin estimation methods highly dependent on the accuracy of the employed color correction systems in reducing device-to-device color reproduction variability. State-of-the-art methods with similar methodologies, however, have usually been evaluated and validated on a single smartphone camera, whereas the vulnerability of these systems to incorrect jaundice diagnosis can only be shown through a thorough investigation of color reproduction variability over a larger number of smartphones and illumination conditions. Accordingly, this work presents and discusses the results of such a broad investigation, covering seven smartphone cameras, ten light sources, and three different color correction approaches. The overall results show statistically significant color differences among devices, even after color correction, and indicate that more control and caution are required when applying smartphone devices to skin-colour-based jaundice diagnosis.
11:15 COLOR-234
Motion detection in a color video sequence with an application to monitoring a baby, Yang Yan, Purdue University (United States)
Prior to the conference, the sponsor of this work may want to submit a patent application protecting IP associated with it. For that reason, we cannot reveal further details about the work at this time.
Appearance and Perception I
Session Chairs:
John McCann, McCann Imaging (United States) and Hyeon-Jeong Suk, Korea Advanced Institute of Science and Technology (KAIST) (Republic of Korea)
15:20 – 16:00
Yellow Room
15:20 COLOR-245
A measurement of the overall vividness of a color image based on RGB color model, Tieling Chen, University of South Carolina (United States)
The vividness of a color is an important visual feature, but it is not directly reflected in the commonly used color models. A color image triggers a subjective impression of its overall vividness in the visual system; this results from all the colors in the image rather than from any single color. This paper establishes a metric for the overall vividness of a color image. It defines the vividness of a single color in the RGB color model as the distance from the grey diagonal of the color cube. For the overall vividness of a color image, the vividness of each pixel is collected to obtain a distribution, and an appropriate monotonically increasing function is then used as a weight to integrate this distribution function. The result of the integration defines the overall vividness of the color image. In practice, the integration is performed in a discrete sense. Through the conversion relationships between the RGB color model and other color models, the overall vividness measurement can be extended to commonly used user-oriented color models, including HSV and HSL.
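The construction in this abstract can be sketched directly. The identity weight function below is a placeholder assumption, since the abstract leaves the choice of monotonically increasing weight open:

```python
import numpy as np

def pixel_vividness(img):
    """Per-pixel vividness: Euclidean distance from the grey diagonal
    (r = g = b) of the RGB cube. `img` is an H x W x 3 array in [0, 255].
    The distance from the diagonal equals the distance from the pixel's
    own channel mean replicated across channels."""
    img = np.asarray(img, float)
    mean = img.mean(axis=-1, keepdims=True)
    return np.sqrt(((img - mean) ** 2).sum(axis=-1))

def overall_vividness(img, weight=lambda t: t):
    """Discrete weighted integral of the sorted vividness distribution.
    `weight` must be monotonically increasing on [0, 1]; the identity
    used by default is an illustrative choice."""
    v = np.sort(pixel_vividness(img).ravel())
    t = np.linspace(0.0, 1.0, v.size)   # position within the distribution
    w = weight(t)
    return float((w * v).sum() / w.sum())
```

A neutral grey image scores exactly zero, while a saturated red image scores the single-color vividness of pure red, so the metric orders images by colorfulness as intended.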
15:40 COLOR-246
Experimental methods to investigate time-course of chromatic adaptation, Seonyoung Yoon1, Youngshin Kwak1, and Hyosun Kim2; 1Ulsan National Institute of Science and Technology and 2Samsung Display Co., Ltd. (Republic of Korea)
Two different experimental methods, the method of adjustment and yellow/blue forced choice, were tested to investigate the time-course of chromatic adaptation. Inside a lighting booth, a 2 cm × 2 cm square color stimulus was displayed on an LCD display, and the surface of the display was covered with gray paper except for the stimulus area. The lighting of the booth was controlled to 3,000 K or 6,500 K with 800 lux at the bottom of the booth. In the adjustment-method experiment, the observers adjusted the stimulus to preserve an achromatic appearance. In the forced-choice experiment, observers were asked to identify whether the stimuli appeared yellow or blue. In all experiments, evaluations were performed once every 5 seconds to track color appearance over time. The CCT of the booth lighting was changed from 6,500 K to 3,000 K, or from 3,000 K to 6,500 K, every two minutes. The results showed that the observers had difficulty tracking the neutral colors with the adjustment method, while the forced-choice experiment gave more consistent results.
Material Appearance II
Session Chairs:
Mathieu Hebert, Université Jean Monnet de Saint Etienne (France) and Ingeborg Tastl, HP-Labs (United States)
16:15 – 17:15
Yellow Room
16:15 COLOR-252
Modeling the 3D shape of a fingernail and pre-distorting an image to be printed on the fingernail to yield the correct appearance, Marshia A. Seto1, Rain Guo2, White He2, Davi He2, George T.-C. Chiu1, and Jan P. Allebach1; 1Purdue University (United States) and 2SunValley (China)
Prior to the conference, the sponsor of this work may want to submit a patent application protecting IP associated with it. For that reason, we cannot reveal further details about the work at this time.
16:35 COLOR-253
Glossy appearance editing for heterogeneous material objects (JIST-first), Yusuke Manabe, Midori Tanaka, and Takahiko Horiuchi, Chiba University (Japan)
With the proliferation of smartphones and social networking services, the opportunities for individuals to take photographs have increased exponentially. A previous study found that the perceived gloss of an object is reduced when it is represented as a digital image compared with the real object. It is also known that image editing, such as lossy image compression, can reduce the glossiness of an image. The glossiness of real objects may therefore easily be changed in digital images, and a method for appropriately editing gloss in digital images is required for post-processing. In this study, we propose a gloss appearance editing method for objects of various materials in a single digital image. The proposed method consists of three steps: color space conversion, gloss detection, and gloss editing. The relationship between the proposed method and the respective reflection models of inhomogeneous, metallic, and translucent objects was analyzed. We determined that the gloss editing of the proposed method is equivalent to editing the specular reflection component of an inhomogeneous object, the grazing reflection component of a metallic object, and the specular reflection component of a translucent object. We applied the proposed method to test images containing objects of various materials and confirmed its effectiveness through a subjective evaluation by visual inspection and an objective evaluation using image statistics.
Thursday 20 January 2022
Imaging II
Session Chairs:
Alessandro Rizzi, Università degli Studi di Milano (Italy) and Sophie Triantaphillidou, University of Westminster (United Kingdom)
07:00 – 08:00
Yellow Room
07:00 COLOR-259
Measuring colorant fading within raster regions of printed scanned customer content using a novel unsupervised clustering method, Runzhe Zhang1, Yousun Bang2, Minki Cho2, Mark Shaw3, and Jan P. Allebach1; 1Purdue University (United States), 2HP Printing Korea Co., Ltd. (Republic of Korea), and 3HP Inc. (United States)
Prior to the conference, the sponsor of this work may want to submit a patent application protecting IP associated with it. For that reason, we cannot reveal further details about the work at this time.
07:20 COLOR-260
Colorization of monochrome night vision videos for a baby monitor based on a reference daylight image of the same scene [PRESENTATION-ONLY], Yang Yan, Purdue University (United States)
Prior to the conference, the sponsor of this work may want to submit a patent application protecting IP associated with it. For that reason, we cannot reveal further details about the work at this time.
07:40 COLOR-261
Image segmentation based on content-color-dependent screening (CCDS) using U-net, Altyngul Jumabayeva and Adnan Yazici, Nazarbayev University (Kazakhstan)
In this work, we propose to use deep learning to segment an image based on its color and content. We start from the content-color-dependent screening (CCDS) method developed previously in [1]. The goal of CCDS is to apply different color assignments to the two or more regular or irregular halftones within the image, depending on the local color and content of the image. If the image content contains high local variance in color and texture, the artifacts due to halftoning are not as visible as those in smooth areas of the image [1]. The goal of CCDS is therefore to detect smooth areas of the image and apply the best color assignments to those areas. To detect the smooth areas, an image segmentation algorithm involving the retrieval of a cluster map and a segmented edge map was proposed [1]. The main drawback of that approach is that, for a given image, the result depends heavily on initial parameters such as the number of clusters, the low and high thresholds for edge detection, and the bilateral filter parameters. In this work, we propose to use the well-known U-net architecture to detect the smooth areas of the image. U-net is a type of convolutional neural network (CNN) designed for fast, accurate image segmentation; it predicts a label for every single pixel [2]. The U-net architecture is suitable for this work because it consists of a contracting path to capture context and a symmetric expanding path that enables precise localization [2]. We believe that using U-net to detect smooth areas of the image will greatly improve the current approach and provide better results.
Print I
Session Chairs:
Reiner Eschbach, Norwegian University of Science and Technology (Norway) and Monroe Community College (United States) and Gabriel Marcu, Apple Inc (United States)
08:35 – 09:35
Yellow Room
08:35 COLOR-277
Printer spectral color characterization adjustment for change in substrates, Anastasiia Gudzenchuk1, Phil Green2, and Hans Don3; 1Norwegian University of Science and Technology (Norway), 2London College of Communication (United Kingdom), and 3Wageningen University & Research (the Netherlands)
This work proposes two methods for adjusting printer spectral color characterization data for different substrates without additional printing and measurement. In the first method, the Spectral Correction Technique is applied to spectral reflectance data obtained from different printers to predict the spectral color characterization for an additional substrate. In the second method, a reference printer and reference substrate are used to predict the spectral color characterization empirically. The results of both methods are evaluated, and accurate prediction is achieved with machine learning.
08:55 COLOR-278
Structure-aware halftoning using the iterative method controlling the dot placement (JIST-first), Fereshteh Abedini1, Sasan Gooran1, Vlado Kitanovski2, and Daniel Nyström1; 1Linköping University (Sweden) and 2Norwegian University of Science and Technology (Norway)
Many image reproduction devices, such as printers, are limited to a small number of printing inks. Halftoning, the process of converting a continuous-tone image into a binary one, is therefore an essential part of printing. An iterative halftoning method, called the Iterative Method Controlling the Dot Placement (IMCDP), which has already been introduced in the literature, generally produces halftones of good quality. In this paper, we propose a structure-based alternative to this algorithm that improves halftone image quality in terms of sharpness, structural similarity, and tone preservation. By employing appropriate symmetric and non-symmetric Gaussian filters inside the proposed halftoning method, the degree of sharpening can be adapted in different parts of the continuous-tone image. This is done by identifying a dominant line in the neighborhood of each pixel of the original image using the Hough transform, and aligning the dots along that dominant line. Objective and subjective quality assessments verify that the proposed structure-based method not only produces sharper halftones with a more three-dimensional impression, but also improves structural similarity and tone preservation. The adaptive nature of the proposed method makes it a suitable basis for a future 3D halftoning method that could adapt to different parts of a 3D object by exploiting both the structure of the mapped images and the 3D geometry of the underlying printed surface.
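The flavor of iterative dot-placement halftoning can be sketched as follows: dots are added one at a time at the location where the Gaussian-filtered error between the continuous-tone image and the current halftone is largest. This is a minimal illustrative reduction under assumed filter parameters, not the published IMCDP algorithm or its structure-aware extension:

```python
import numpy as np

def gaussian_1d(sigma, radius=3):
    """Normalized 1-D Gaussian kernel."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x * x / (2.0 * sigma * sigma))
    return g / g.sum()

def blur(img, sigma=1.0):
    """Separable Gaussian low-pass filter (a crude model of the HVS)."""
    g = gaussian_1d(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, tmp)

def iterative_halftone(img, sigma=1.0):
    """Place round(sum(img)) dots one by one, each at the pixel where the
    filtered error (blurred original minus blurred halftone) is largest,
    so the halftone's low-pass appearance tracks the original's."""
    h = np.zeros_like(img, dtype=float)
    target = blur(img, sigma)
    for _ in range(int(round(float(img.sum())))):
        err = target - blur(h, sigma)
        err[h > 0] = -np.inf          # each pixel holds at most one dot
        y, x = np.unravel_index(np.argmax(err), err.shape)
        h[y, x] = 1.0
    return h
```

The structure-aware variant described above would replace the single isotropic Gaussian with symmetric and non-symmetric filters oriented along the dominant line found by the Hough transform.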
09:15 COLOR-279
Improving an inkjet printer: Removing stray-dots by constraining error diffusion in highlight regions, Sige Hu1, George T.-C. Chiu1, Davi He2, Rain Guo2, White He2, and Jan P. Allebach1; 1Purdue University (United States) and 2SunValley Tek (China) [view abstract]
Prior to the conference, the sponsor of this work may want to submit a patent application protecting IP associated with it. For that reason, we cannot reveal further details about the work at this time.
Print II
Session Chairs:
Mathieu Hebert, Université Jean Monnet de Saint Etienne (France) and Gabriel Marcu, Apple Inc (United States)
10:00 – 11:00
Yellow Room
10:00 COLOR-284
Developing a gamut mapping method for a novel inkjet printer, Baekdu Choi1, Sige Hu1, Rain Guo2, White He2, Davi He2, George T.-C. Chiu1, and Jan P. Allebach1; 1Purdue University (United States) and 2Sunvalley Tek (China) [view abstract]
Prior to the conference, the sponsor of this work may want to submit a patent application protecting IP associated with it. For that reason, we cannot reveal further details about the work at this time.
10:20 COLOR-285
Measuring margin and skew errors in scanned printed customer content, Runzhe Zhang1, Ki-Youn Lee2, Yousun Bang2, Mark Shaw3, and Jan P. Allebach1; 1Purdue University (United States), 2HP Printing Korea Co., Ltd. (Republic of Korea), and 3HP Inc. (United States) [view abstract]
Prior to the conference, the sponsor of this work may want to submit a patent application protecting IP associated with it. For that reason, we cannot reveal further details about the work at this time.
10:40 COLOR-286
Measuring CMYK color plane misregistration from scanned printed customer content image, Yi Yang1, Ki-Youn Lee2, Yousun Bang2, Mark Shaw3, and Jan P. Allebach1; 1Purdue University (United States), 2HP Printing Korea Co., Ltd. (Republic of Korea), and 3HP Inc. (United States) [view abstract]
Prior to the conference, the sponsor of this work may want to submit a patent application protecting IP associated with it. For that reason, we cannot reveal further details about the work at this time.
Tuesday 25 January 2022
IS&T Awards & PLENARY: Physics-based Image Systems Simulation
07:00 – 08:00
Three quarters of a century ago, visionaries in academia and industry saw the need for a new field called photographic engineering and formed what would become the Society for Imaging Science and Technology (IS&T). Thirty-five years ago, IS&T recognized the massive transition from analog to digital imaging and created the Symposium on Electronic Imaging (EI). IS&T and EI continue to evolve by cross-pollinating electronic imaging in the fields of computer graphics, computer vision, machine learning, and visual perception, among others. This talk describes open-source software and applications that build on this vision. The software combines quantitative computer graphics with models of optics and image sensors to generate physically accurate synthetic image data for devices that are being prototyped. These simulations can be a powerful tool in the design and evaluation of novel imaging systems, as well as for the production of synthetic data for machine learning applications.
Joyce Farrell, Stanford Center for Image Systems Engineering, Stanford University, CEO and Co-founder, ImagEval Consulting (United States)
Joyce Farrell is a senior research associate and lecturer in the Stanford School of Engineering and the executive director of the Stanford Center for Image Systems Engineering (SCIEN). Joyce received her BS from the University of California at San Diego and her PhD from Stanford University. She was a postdoctoral fellow at NASA Ames Research Center, New York University, and Xerox PARC, before joining the research staff at Hewlett Packard in 1985. In 2000 Joyce joined Shutterfly, a startup company specializing in online digital photofinishing, and in 2001 she formed ImagEval Consulting, LLC, a company specializing in the development of software and design tools for image systems simulation. In 2003, Joyce returned to Stanford University to develop the SCIEN Industry Affiliates Program.
PANEL: The Brave New World of Virtual Reality
08:00 – 09:00
Advances in electronic imaging, computer graphics, and machine learning have made it possible to create photorealistic images and videos. In the future, one can imagine a virtual reality that is indistinguishable from real-world experience. This panel showcases state-of-the-art synthetic imagery, examines how this progress benefits society, and discusses how we can mitigate the risks this brave new world of virtual reality also poses. After brief demos of the state of the art, the panelists will discuss creating photorealistic avatars, Project Shoah, and digital forensics.
Panel Moderator: Joyce Farrell, Stanford Center for Image Systems Engineering, Stanford University, CEO and Co-founder, ImagEval Consulting (United States)
Panelist: Matthias Nießner, Technical University of Munich (Germany)
Panelist: Paul Debevec, Netflix, Inc. (United States)
Panelist: Hany Farid, University of California, Berkeley (United States)
Invited: Postmondrianism
Session Chairs:
Reiner Eschbach, Norwegian University of Science and Technology (Norway) and Monroe Community College (United States) and Gabriel Marcu, Apple Inc (United States)
09:50 – 10:30
Yellow Room
COLOR-353
Postmondrianism (Invited), Scott Daly, Dolby Laboratories, Inc. (United States) [view abstract]
While most color scientists hearing the word “Mondrian” will think of the ubiquitous color test targets of flat color regions separated by step edges, most display engineers will think of pixel grid geometries and shadow masks upon seeing a Mondrian painting. Now is a good time to celebrate the centennial of Piet Mondrian’s well-known simple geometric paintings, dating from 1921 and developing through their peak in 1922. The art world often describes this work as the culmination of the abstraction movement starting with cubism and ending with his neo-plasticism and suprematism, while simultaneously opening the gate toward minimalism and the precisionists. Human vision and computer vision scientists can look at his artistic pathway through a different lens and find that different jargon is more descriptive. Mondrian was a theosophist who spent over a decade examining imagery and trying to find its most essential elements, its eigenfunctions. On this path, using the tools of direct observation through his personal visual sensitivities, he touched upon numerous phenomena eventually discovered by vision scientists, such as opponent processes, isoluminant color, non-uniform sampling, T- and X-junctions, and primary colors.
Appearance and Perception II
Session Chairs:
Phil Green, Norwegian University of Science and Technology (NTNU) (Norway) and John McCann, McCann Imaging (United States)
10:55 – 11:35
Yellow Room
10:55 COLOR-363
Initial findings on changing the background in pseudo-isochromatic charts, Reiner Eschbach1,2 and Peter Nussbaum1; 1Norwegian University of Science and Technology (Norway) and 2Monroe Community College (United States) [view abstract]
Color vision deficiency is a common affliction, affecting about 8% of males. Its effect – colloquially, color blindness – is an inability or limited ability to distinguish colors. A typical problem is that a color-changing LED, switching between red and green to indicate two states, cannot be differentiated. There are multiple ways to detect and characterize color-deficient observers, ranging from genetic testing, to simple pseudo-isochromatic charts, as exemplified by the best-known Ishihara charts, to elaborate color matching and color sorting tests, as exemplified by the Farnsworth-Munsell 100 Hue test. In this talk we describe initial experimental data on changing the background color – the color “in between the dots” – of a pseudo-isochromatic chart color deficiency test. For the experiment, we replaced the white background with four different neutral gray levels and measured the performance of known color-deficient observers on these charts as a function of the new background. Though preliminary, the data show a significant difference in the performance of color-deficient observers, despite the main pseudo-isochromatic colors staying the same.
11:15 COLOR-364
Similarity between two color areas, Tieling Chen, University of South Carolina (United States) [view abstract]
The paper introduces a method to compare two areas of color, each presenting a single perceptually dominant color, such as a piece of cloth or a piece of tile. Each area contains a cloud of colors of similar hue that forms a scattered distribution in the RGB color cube, so existing color difference formulas, which compare two single colors, do not work well in this case. The new method comprises two parts: a new color model that better describes color distributions, and a technique for comparing two areas of color by their distributions under that model. The model uses cylindrical coordinates to describe the color features, and an area of colors shows a distribution pattern on each feature. For two areas of color that are perceptually similar, the distributions on each component of the model are compared for similarity, and a combination of these sub-similarities gives an overall similarity between the two areas. The proposed method can be applied to industrial processes where color similarity comparison is a main concern, such as color fastness testing of fabrics and tile classification by color.
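The paper's exact color model is not given in the abstract; the following hedged sketch only illustrates the general scheme: map each RGB cloud to cylindrical-style components (an angle around the gray axis, a radius from it, and a height along it), compare the per-component histograms, and average the sub-similarities. The coordinate choice and combination rule here are assumptions:

```python
import numpy as np

def cylindrical(rgb):
    """Map RGB points (n, 3) in [0, 1] to cylindrical-style components:
    angle around the gray axis, radial distance from it, and height along it.
    (An illustrative coordinate choice, not the paper's exact model.)
    """
    gray = rgb.mean(axis=1)                   # height along the R=G=B axis
    dev = rgb - gray[:, None]                 # deviation from the gray axis
    radius = np.linalg.norm(dev, axis=1)
    angle = np.arctan2(dev[:, 1], dev[:, 0])  # crude hue-like angle
    return angle, radius, gray

def component_similarity(a, b, bins, value_range):
    """Histogram-intersection similarity of two 1-D distributions."""
    ha, _ = np.histogram(a, bins=bins, range=value_range)
    hb, _ = np.histogram(b, bins=bins, range=value_range)
    ha = ha / max(ha.sum(), 1)
    hb = hb / max(hb.sum(), 1)
    return np.minimum(ha, hb).sum()

def area_similarity(rgb1, rgb2, bins=16):
    comps1, comps2 = cylindrical(rgb1), cylindrical(rgb2)
    ranges = [(-np.pi, np.pi), (0.0, 1.0), (0.0, 1.0)]
    sims = [component_similarity(c1, c2, bins, r)
            for c1, c2, r in zip(comps1, comps2, ranges)]
    return float(np.mean(sims))               # combine the sub-similarities

rng = np.random.default_rng(1)
reddish_a = np.clip(rng.normal([0.7, 0.3, 0.3], 0.03, (500, 3)), 0, 1)
reddish_b = np.clip(rng.normal([0.7, 0.3, 0.3], 0.03, (500, 3)), 0, 1)
bluish    = np.clip(rng.normal([0.3, 0.3, 0.7], 0.03, (500, 3)), 0, 1)
```

Two reddish clouds should score higher than a reddish cloud against a bluish one, since only the angle component separates the latter pair.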
Beauty
Session Chairs:
Jan Allebach, Purdue University (United States) and Scott Daly, Dolby Laboratories, Inc. (United States)
15:20 – 16:00
Yellow Room
15:20 COLOR-373
A color image analysis tool to help users choose a makeup foundation color, Yafei Mao1, Christopher Merkle2, and Jan P. Allebach1; 1Purdue University and 2MIME Inc. (United States) [view abstract]
We submitted this paper last year, but then withdrew it, according to our sponsor's instructions, since he was concerned about publicizing our method at that time. One year later, a patent application has been submitted; and we have our sponsor's written permission to present this paper at EI-2022. However, we did promise him that the work would not be put in the public domain until the start of the conference -- 16 January 2022. So we again cannot provide any details about the paper.
15:40 COLOR-374
New image processing algorithm towards more realistic expression on hair coloring, Boram Kim and Hyeon-Jeong Suk, Korea Advanced Institute of Science and Technology (KAIST) (Republic of Korea) [view abstract]
The hair dye market provides both dye products and hair coloring AR services to consumers. A hair coloring service aims to provide a realistic user experience by letting consumers check in advance whether a color will suit them before actually dyeing their hair. However, existing rendering often fails to reproduce the final color as it would appear in the real world. We therefore propose a new color rendering method. The algorithm determines the brightness of the hair region in the input image, retrieves a color value adjusted from a lookup table according to the determined hair brightness, and reflects both the brightness value and the adjusted color value in the final rendering. Since the lookup table was built from actual hair dye result data, this new image processing yields a hair color rendering that is closer to the real hair dye result.
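As a hedged sketch of the described pipeline (the LUT values and blending model below are invented placeholders, not the authors' measured data), one can bucket the hair region's mean brightness, look up an adjusted dye color, and modulate it by per-pixel luma so the hair's shading survives the recoloring:

```python
import numpy as np

# Hypothetical lookup table: hair-brightness bucket -> adjusted dye RGB.
# (Illustrative values; the paper builds its LUT from real dyeing results.)
DYE_LUT = {
    "dark":   np.array([0.45, 0.10, 0.15]),
    "medium": np.array([0.60, 0.20, 0.25]),
    "light":  np.array([0.75, 0.35, 0.40]),
}

LUMA_WEIGHTS = np.array([0.299, 0.587, 0.114])  # Rec. 601 luma

def brightness_bucket(hair_pixels):
    """Classify the mean luma of the hair region into a LUT bucket."""
    mean = float((hair_pixels @ LUMA_WEIGHTS).mean())
    if mean < 0.33:
        return "dark", mean
    if mean < 0.66:
        return "medium", mean
    return "light", mean

def render_dye(hair_pixels):
    """Blend the LUT color with each pixel's own luma (a simple
    multiplicative model) so shading detail is preserved."""
    bucket, _ = brightness_bucket(hair_pixels)
    dye = DYE_LUT[bucket]
    luma = (hair_pixels @ LUMA_WEIGHTS)[:, None]
    return np.clip(dye * (0.5 + luma), 0.0, 1.0)

dark_hair = np.full((100, 3), 0.15)   # toy flat hair region
recolored = render_dye(dark_hair)
```

A production AR pipeline would additionally need hair segmentation and per-frame illumination handling, which the sketch omits.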
Applications
Session Chairs:
Scott Daly, Dolby Laboratories, Inc. (United States) and Reiner Eschbach, Norwegian University of Science and Technology (Norway) and Monroe Community College (United States)
16:15 – 17:15
Yellow Room
16:15 COLOR-375
Printed paper-based devices for detection of food-borne contaminants: New device design and new colorimetric image analysis methods, Qiyue Liang, Min Zhao, Ana M. Ulloa Gomez, George T.-C. Chiu, Lia Stanciu, Amanda J. Deering, and Jan P. Allebach, Purdue University (United States) [view abstract]
Prior to the conference, the sponsor of this work may want to submit a patent application protecting IP associated with it. For that reason, we cannot reveal further details about the work at this time.
16:35 COLOR-376
Prototyping of low-cost color enhancement lighting using multicolor LEDs, Camille Kabore1, Masaru Tsuchida2, Ikunori Suzuki1, Satoshi Sugaya1, Akisato Kimura2, and Noboru Harada2; 1Institute of Technologists and 2NTT Communication Science Laboratories (Japan) [view abstract]
We prototyped a lighting system for color enhancement that maintains a white appearance using low-cost multicolor LEDs. We purchased LEDs of several colors from local electronic parts shops and evaluated their spectral power distributions for synthesizing the target lights. Five LEDs were chosen for natural color observation of the object, and three of them were used for color enhancement. Experiments were conducted using the assembled LED lighting: colors on a color chart and on reddish and bluish patches were enhanced while the white appearance was maintained chromatically.
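The abstract does not describe how the LED drive weights were chosen; a common approach, sketched here with toy Gaussian SPDs standing in for the measured ones, is to solve a least-squares mixing problem against a target spectrum and clip the weights nonnegative:

```python
import numpy as np

def gaussian_spd(center, width, wl):
    """Toy single-peak LED spectral power distribution."""
    return np.exp(-((wl - center) / width) ** 2)

wl = np.linspace(400, 700, 61)
# Hypothetical LED set (peak wavelengths in nm); real SPDs would be measured.
leds = np.stack([gaussian_spd(c, 25, wl)
                 for c in (450, 500, 550, 600, 650)], axis=1)

target = np.ones_like(wl)   # flat "white" target spectrum

# Least-squares mixing weights; negatives are clipped since LED drive
# currents cannot be negative (a crude stand-in for true nonnegative LS).
w, *_ = np.linalg.lstsq(leds, target, rcond=None)
w = np.clip(w, 0.0, None)
mix = leds @ w

rms_err = float(np.sqrt(np.mean((mix - target) ** 2)))
```

For color enhancement rather than plain white, the target spectrum would be reshaped to boost the wavelength bands of the object colors while keeping the mixture's chromaticity near white.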
16:55 COLOR-377
Pokemon color adjustments for augmented reality contents, Taesu Kim, Donggun Lee, and Hyeon-Jeong Suk, Korea Advanced Institute of Science and Technology (KAIST) (Republic of Korea) [view abstract]
Methods for rendering realistic content in augmented reality (AR) are already well advanced, but how image color should be adjusted in the context of actual use remains to be investigated. This study investigates image color adjustment for AR content in in-the-wild situations. In the experiment, seven designers were recruited to adjust the hue, saturation, and lightness of three Pokemon characters in 40 different environments using Photoshop. Saturation and lightness adjustments were found to be the most dominant, and designers with more interest in Pokemon tended to make the characters more saturated. The study thus suggests that illuminant-aware color tuning is needed to render natural-looking AR content.