IMPORTANT DATES

Author Deadlines
Call for Paper Submissions
» Journal-first (JIST or JPI) 15 Jan
» Conference 31 March
Acceptance Notification
» Journal-first (JIST or JPI) by 21 April
» Conference by 28 April
Final Manuscripts Due
» Journal-first (JIST or JPI) 10 May
» Conference 14 May

Program Deadlines
Registration Opens mid-April
Early Registration Ends 26 May
In-person Registration Ends 21 June
Summer School 28 June
Technical Sessions 29-30 June

   

LIM 2023 Program

Join us in London for a full day of material appearance courses—LIM 2023 Summer School—followed by two exciting days of technical talks and networking opportunities. Online attendance is an option for the Technical Program, but not the Summer School. MANER Conference London 2023, the second Material Appearance Network for Education and Research event, is integrated into the LIM 2023 program and is a sponsor of one of the keynote talks.

AT-A-GLANCE

28 June: LIM 2023 Summer School
29-30 June: LIM Technical Program
 

LIM Technical Program

Thursday 29 June 2023
Opening Keynote
10:00 – 11:00 London
Session Chairs: Graham Finlayson, Marina Bloj, and Lionel Simonot

10:00
Lighting up appearances, Sylvia Pont, professor of Perceptual Intelligence, Faculty of Industrial Design Engineering, TUDelft (the Netherlands)

Abstract: The appearance of materials in the wild is not stable, though we usually experience it as such. One important factor influencing material appearance is light, which varies spatially, directionally, spectrally, and temporally. Here “light” or the “light field” is defined as the actual light in a space, whether outdoors or indoors, resulting from both the lighting (sources) and the space’s characteristics (geometry, materials). Light-material-shape-space interactions can be weak or strong, optically and perceptually, and the optical and perceptual effects do not necessarily agree. In a multidisciplinary approach combining ecological physics, computational science, perception research, and design, we investigated several phenomena of this kind, applied the resulting insights in designs, and developed our lighting design framework and education approach. I present an overview of this work, illustrating the idea that appearance can be tuned to boost certain features of materials and designed in a user-centric, contextualized, scientifically informed manner. Finally, I present our first study on the exciting challenge of multisensory, cross-modal, dynamic interactions using pseudo-cues to boost material experiences.

11:00 – 11:30
BREAK
Appearance and gloss measurement
11:30 – 12:40 London
Session Chair: Gael Obein, CNAM (France)
11:30
 Focal Talk: Physics and measurement of properties linked to appearance, Gael Obein, director of LNE-CNAM (France)

Abstract: The measurement of the appearance of objects as perceived by individuals is necessary to meet industrial needs (quality control at the end of the production line, realistic reproduction of a 3D object, generation of new visual effects) and societal needs (development of virtual reality, creation of digital twins of cultural heritage objects). This need for measurement, initially addressed by colorimetry, has become more complex over the past 20 years with the arrival of new effects such as "sparkle" in the automotive industry, iridescence in cosmetics, and new demands such as measuring translucency for 3D printing or satin finish for natural-looking objects. To characterize these new effects, traditional measurement techniques have naturally evolved toward bidirectional quantities such as BRDF, BTDF, SVBRDF, or BSSRDF! Metrologists have developed instruments capable of measuring these new quantities. Today, there are solutions for acquiring them all, using rotation platforms, robotic arms, HDR imaging sensors, and very bright LED sources.

12:00
A handheld image-based gloss meter for complete gloss characterization, Stijn Beuckels1, Jan Audenaert1, Pierre Morandi2, and Frédéric B. Leloup1; 1KU Leuven (Belgium) and 2Rhopoint Instruments Ltd. (UK)     

Abstract: Nowadays, industrial gloss evaluation is mostly limited to the specular gloss meter, focusing on a single attribute of surface gloss. The correlation of such meters with the human gloss appraisal is thus rather weak. Although more advanced image-based gloss meters have become available, their application is typically restricted to niche industries due to the high cost and complexity. This paper extends a previous design of a comprehensive and affordable image-based gloss meter (iGM) for the determination of each of the five main attributes of surface gloss (specular gloss, DOI, haze, contrast and surface-uniformity gloss). Together with an extensive introduction on surface gloss and its evaluation, the iGM design is described and some of its capabilities and opportunities are illustrated.

12:20
The measurement of specular gloss using a conoscopic goniospectrophotometer, Lou Gevaux, Alice Dupiau, Kévin Morvant, and Gael Obein, LNE-CNAM (France) 

Abstract: The measurement of specular gloss using a glossmeter is normalized in the ISO 2813 Standard, which is widely used for many industrial applications. In practice, the principle of the measurement relies on a primary standard that approximates a perfectly polished black glass surface and an optical design in which rectangular diaphragms are used for the source and detection apertures. Any deviation in the refractive index or the polishing level of the standard artefact, or in the machining of the rectangular diaphragms, results in measurement uncertainties. To tackle these issues, we propose to calculate the specular gloss from the bidirectional reflectance distribution function (BRDF) measured using a goniospectrophotometer equipped with a conoscopic detection. With such an instrument, no calibration sample is needed anymore, and the geometry of measurement given in the standard can be applied with good accuracy. The method has been implemented and tested on samples of various gloss values.

Day 1 Interactive Paper Previews
12:40 – 13:00 London
Session Chair: Aditya Sole, NTNU (Norway)
Naturalness perception of 2.5D prints: elevation and size of prints relation, Altynay Kadyrova and Marius Pedersen, NTNU (Norway); Stephen Westland, University of Leeds (UK); Clemens Weijkamp, Canon Production Printing Netherlands B.V. (the Netherlands); and Takahiko Horiuchi, Chiba University (Japan)

Abstract: Naturalness is a complex concept and a number of parameters might impact naturalness perception. In this work, we addressed how the combination of different elevation levels and size of prints impacts naturalness perception of 2.5D prints. The results of a subjective ranking experiment showed that observers perceived 2.5D prints as more natural at higher elevation with larger size of print. Moreover, we observed that elevation seems to be a more dominant parameter than size of the print for observers when evaluating naturalness.

Quantifying visual differences between color visualizations on different displays, Eric Kirchner and Lan Njo, AkzoNobel Paints and Coatings (the Netherlands), and Esther Perales and Aurora Larrosa Navarro, University of Alicante (Spain)

Abstract: We introduce a quantitative metric to analyze the visually perceived differences between colors that are visualized on different displays, or that are visualized on the same display but using different visualization methods. This metric is validated by analyzing perceived visual differences as scored by observers using an iPad Air 2 display under different ambient light conditions. Our results show that the metric calculations are well aligned with the visual data from this experiment.
      We use the new metric to investigate the reproducibility of spectroradiometer data from three different displays (iPad Air 2, and iPad models from 2017 and 2018). Our results show that color visualizations based on these datasets are virtually identical for the iPad Air 2 and iPad 2017. For the iPad 2018, in circa 10% of the colors a visually noticeable difference occurs between visualizations based on the older and on the new dataset. We use the same metric to also compare color visualizations on these three displays. Our results show that color visualizations between iPad 2017 and iPad Air 2 are often visually different, with color differences larger than CIEDE2000 = 4.0 for 50% of the colors. But comparing iPad 2017 with iPad 2018 the color visualizations are often visually identical. For 90% of the colors, color differences CIEDE2000 < 2.4.

Measuring method for gloss unevenness with three directional lights, Shinichi Inoue1, Yoshinori Igarashi2, Takeyuki Hoshi2, and Toshifumi Satoh1; 1Tokyo Polytechnic University and 2Chuo Precision Industrial Co. Ltd. (Japan)

Abstract: In this paper, we introduce an analysis of gloss unevenness using a newly developed optical system with multiple directional incident lights. Gloss unevenness is strongly related to the recognition of material texture. However, it looks different depending on the viewing angle, so it has been difficult to measure quantitatively. Gloss unevenness caused by surface roughness can be analyzed from the distribution of normals on the surface. We have developed an optical system that simultaneously illuminates the sample with light from three different directions and captures images of gloss unevenness in one shot. We confirmed that the normal distribution of the surface can be estimated by analyzing the images. The proposed method can not only measure the gloss-unevenness image but also estimate the shape of the surface. As an application of this gloss-unevenness observation technology, it is also possible to detect scratches and coating unevenness in industrial quality control.

Multi-layer halftoning for poly-jet 3D printing, Fereshteh Abedini1, Raed Hlayhel2, Sasan Gooran1, Daniel Nyström1, and Aditya Suneel Sole2; 1Linköping University (Sweden) and 2NTNU (Norway)

Abstract: Accurate color reproduction is an essential parameter in many 3D printing applications.  Although current technologies in full-color 3D printing have enabled the reproduction of thousands of colors, reproducing the precise target color is still challenging and requires tuning. In this paper, we integrate halftoning with a multi-layer printing approach, where ink is deposited at variable depths, to improve the reproduction of tones and fine details in poly-jet 3D printing. The proposed approach is implemented using a manually controlled ink placement add-on for a commercial 3D printer and is compared to the default software of that printer. Results demonstrate that the proposed multi-layer halftoning performs more accurately in reproducing the tones and details of the target appearance.

Performance considerations for ray tracing in gradient-index optics with symplectic numerical methods, Ben McKeon and Alexander V. Goncharov, University of Galway (Ireland)

Abstract: The primary objective of this paper is to demonstrate the utility of symplectic numerical techniques for ray tracing within gradient-index media. The relevant mathematics are explained in brief, deriving the optical Hamiltonian independently of the Lagrangian optical formalism before constructing a symplectic ray tracing algorithm. Numerical experiments with the Lüneburg and Maxwell fish-eye lenses compare the effectiveness of symplectic methods with standard numerical integration techniques, challenging the idea that the increased accuracy of higher-order numerical methods justifies their elevated computational cost. Further uses for symplectic ray tracing are also discussed.

Detection and correction of errors in psychophysical color difference Munsell re-renotation dataset, Dmitry Nikolaev1,2, Olga Basova1, Galim Usaev1,3, Mikhail Tchobanou4, and Valentina Bozhkova1; 1Institute for Information Transmission Problems of the Russian Academy of Sciences, 2LLC Smart Engines Service, 3Moscow Institute of Physics and Technology, and 4Huawei Technologies (Russia)

Abstract: The Munsell dataset holds a prominent position in the field of color science. This dataset describes large color differences covering a wide color gamut, making it highly valuable for the development of color models. Currently, the widely used version is the Munsell Renotation, which is the second version of the dataset. In this paper, we analyze the third version, known as the Munsell Re-renotation, identify significant errors within it, and provide corrections for obvious typos. We propose a novel method for detecting nonuniformities, utilizing the L1-STRESS measure and the proLab uniform color space (UCS). Our findings demonstrate that the revised version of the Munsell Re-renotation dataset achieves significantly better consistency with established UCSs compared to the original Munsell Re-renotation data. Additionally, we discuss modifications of the STRESS measure for data with unknown scales. Unlike previous modifications, the proposed measure, STRESSgroup, is identical to the classic STRESS measure when the scales are the same.

Mondrian representation of real world scene statistics, D. Andrew Rowlands and Graham D. Finlayson, University of East Anglia (UK)

Abstract: In material appearance we are interested in objectively measuring a physical aspect of a material, such as reflectance, and also in understanding how we see that material. In perceptual experiments we typically display simple stimuli to an observer, record their response, and then try to build a theory of why the observer responded in a given way. Often the latter model is implemented as a computer algorithm, and many of these are, for example, now implemented in camera pipelines for smartphones. However, the stimuli that are shown to observers are necessarily either very simple, such as rectangular patches of colour, or small in number. This raises the question as to whether the recorded responses to simple stimuli actually shed light on how we perceive scenes in the real world.
     In this paper, we look at a specific example of perceptual stimuli: Mondrian images, and investigate the extent to which, in the sense of their autocorrelation matrix, they represent the real world. We show that by modelling paths of pixels through an image using a statistical model that captures the statistics of real Mondrians, the autocorrelation matrix of Mondrians is Toeplitz, and moreover this Toeplitz structure is also found in real images. Although Mondrian images do not contain typical visual cues, our path model can be tuned to replicate the statistics of real images in the autocorrelation sense. The practical utility of this method is that paths through images and their autocorrelation statistics are a key tool for developing algorithms to predict the perceptual response to complex scenes. For example, this approach is at the foundation of retinex image processing. Experiments validate our method.
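The sketch below is a small illustration of the Toeplitz observation described above; it is not the authors' code. It uses a simplified 1D path model with placeholder statistics: piecewise-constant runs stand in for a path crossing Mondrian patches, and the estimated autocorrelation matrix is checked for near-constant diagonals.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mondrian_path(n_steps=64, mean_patch_len=8):
    """1D 'path' of pixel values: piecewise-constant runs of random length,
    mimicking a straight path crossing the patches of a Mondrian image."""
    values = []
    while len(values) < n_steps:
        run = rng.geometric(1.0 / mean_patch_len)      # length of the current patch
        values.extend([rng.uniform(0.0, 1.0)] * run)   # one reflectance per patch
    return np.array(values[:n_steps])

# Estimate the autocorrelation matrix over many paths.
paths = np.stack([random_mondrian_path() for _ in range(5000)])
paths -= paths.mean(axis=0)
autocorr = paths.T @ paths / paths.shape[0]            # n_steps x n_steps matrix

# Toeplitz check: entries should depend mainly on the lag |i - j|,
# so the values along each diagonal should be nearly constant.
for lag in (0, 1, 2, 4, 8):
    d = np.diag(autocorr, lag)
    print(lag, round(d.mean(), 4), round(d.std(), 4))
```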

13:00 – 14:00
LUNCH BREAK
Color and gloss perception
14:00 – 15:10 London
Session Chair: Hannah Smithson, University of Oxford (UK)
14:00
Focal Talk: Gloss perception: Large-scale measurement and deep learning, Hannah Smithson, professor, Experimental Psychology, University of Oxford (UK); authors Takuma Morimoto, Arash Akbarinia, Katherine Storrs, Jacob Cheeseman, Roland Fleming, Karl Gegenfurtner, and Hannah Smithson

Abstract: Visual perception of material properties is useful for remote determination of the physical characteristics of objects. Here we ask what information in the proximal image is used by human observers to make gloss judgements. There are two novel features of our study. The first is the use of online psychophysical testing to collect a large dataset of human judgements (297 participants) of many computer-generated images (3888 images). The second is the use of machine learning to uncover image features that consistently drive human judgements. Importantly, in this approach, the networks are trained to reproduce human judgements rather than the physical ground truth. The data-driven analysis revealed a simple, biologically plausible filter model that explains the majority of successes and failures in human gloss constancy, as well as reproducing known effects outside the training set. We discuss the value of large-scale online testing and deep learning in perception research.

14:30
A simple and cost effective colorimeter for characterising observer variability in colour matching experiments, Luvin Munish Ragoo and Ivar Farup, NTNU (Norway)

Abstract: In colour science, colour matching functions (CMFs) are essential for measuring how sensitive the human eye is to various light wavelengths and for determining the colour of stimuli in various viewing situations. It has traditionally taken a lot of time and effort to conduct colour-matching studies to describe an observer’s perception of colour. This article presents a simple and compact 3D-printed colorimeter designed to conduct colour-matching experiments. A pilot study was conducted using the colorimeter, with four observers participating in a maximum-saturation-type colour matching experiment in which they matched spectral lights in the 400-720 nm range to three narrow-band LED primaries. The study aimed to assess the accuracy and performance of the system in measuring individual observer CMFs. The CMFs of the four observers exhibited the normal characteristics of colour-normal observers; however, the limited number of measurements per observer may have contributed to the lack of smoothness in the CMFs. The CMFs of one observer were compared with the Stiles and Burch 1955 RGB CMFs, after normalising to the same primaries. We noted that the red and green functions fell within the expected range, while the blue function showed some unusual characteristics. The limitations of the colorimeter and of the overall pilot study are also discussed. In conclusion, the colorimeter showed promising results in measuring CMFs, but its limitations need to be addressed to improve matching accuracy. Additionally, further measurements are required to better characterise intra-observer and inter-observer variabilities.

14:50
A mathematical model for gloss prediction of 2D prints, Donatela Saric1,2, Andreas Kraushaar1, and Aditya Suneel Sole2; 1Fogra Research Institute for Media Technologies (Germany) and 2NTNU (Norway)

Abstract: Predicting the final appearance of a print is crucial in the graphic industry. The aim of this work is to build a mathematical model to predict the visual gloss of 2D printed samples. We conducted a psychophysical experiment in which observers judged the gloss of samples with different colours and different gloss values; for this experiment, a new reference scale was built. Using the results of the psychophysical experiment, a mathematical model for predicting the visual assessment of gloss was developed. Using Principal Component Analysis to explain and predict perceived gloss, the predictors were reduced to three dimensions: specular gloss measured at 60°, lightness (L*), and distinctness of image (DOI).

15:10 – 16:10
Posters and Coffee
Interdisciplinary 1
16:10 – 17:20 London
Session Chair: Clotilde Boust, Center for Research and Conservation for French Museum (France)
16:10
Focal Talk: Color and gloss measurements in cultural heritage conservation science: recent advances in France, Clotilde Boust, head, Imaging Group, Center for Research and Conservation for French Museum, Paris (France)

Abstract: The Ministry of Culture in France has its own center dedicated to the conservation of works of art from its 1200 national museums, the Center for Research and Restoration for French Museums. It works on all types of artworks: jewelry, paintings, furniture, bronze statues, and more. The research department has several instruments, optical or chemical, dedicated to analysing an artwork’s materiality before restoration or for art-history purposes. Apart from color, which has been studied since the laboratory opened in 1931, other appearance attributes were not included in the analysis workflow. Recent advances in color and gloss measurements and their application to cultural heritage analysis and restoration are presented in this talk.

16:40
The role of background blur and contrast in perceived translucency of see-through filters, Asma Alizadeh Mivehforoushi, Davit Gigilashvili, and Jon Yngve Hardeberg, NTNU (Norway)

Abstract: Translucency is an appearance attribute that primarily results from subsurface scattering of light. The visual perception of translucency has gained attention in the past two decades. However, studies have mostly addressed thick and complex 3D objects that completely occlude the background. On the other hand, the perception of transparency of flat and thin see-through filters has been studied more extensively. Despite this, the perception of translucency in see-through filters that do not completely occlude the background remains understudied. In this work, we manipulated the sharpness and contrast of black-and-white checkerboard patterns to simulate the impression of see-through filters. Afterward, we conducted paired-comparison psychophysical experiments to measure how the amount of background blur and contrast relates to perceived translucency. We found that while both blur and contrast affect translucency, the relationship is neither monotonic nor straightforward.

17:00
JIST-first Dot off dot screen printing with RGBW reflective inks, Alina Pranovich1, Sergiy Valyukh1, Sasan Gooran1, Jeppe Revall Frisvad2, and Daniel Nyström1; 1Linköping University (Sweden) and 2Technical University of Denmark (Denmark)

Abstract: Recent advances in pigment production have made it possible to print with RGBW primaries instead of CMYK and to perform additive color mixing in printing. The RGBW pigments studied in this work have the properties of structural colors, as the primary colors result from interference in a thin-film coating of mica pigments. In this work, we investigate the angle-dependent gamut of RGBW primaries. We have elucidated the optimal angles of illumination and observation for each primary ink and found the optimal angle of observation under diffuse illumination. We investigated dot-off-dot halftoned screen printing with RGBW inks on black paper in terms of angle-dependent dot gain. Based on our observations, the optimal viewing condition for the given RGBW inks is in a direction of around 30° to the surface normal. Here, the appearance of the resulting halftoned prints can be estimated well by the Neugebauer formula (weighted averaging of the individual reflected spectra). Despite the negative physical dot gain in dot-off-dot printing, we observe an angularly dependent positive optical dot gain for halftoned prints. The application of interference RGBW pigments in 2.5D and 3D printing is not fully explored due to technological limitations. In this work, we provide colorimetric data for efficient use of the angle-dependent properties of such pigments in practical applications.
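For readers unfamiliar with the Neugebauer formula mentioned above, the following is a minimal illustrative sketch, not code or data from the paper: the halftone reflectance is estimated as the area-weighted average of the primaries' reflectance spectra. The spectra and coverages below are placeholder values; in practice the primaries are measured.

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)  # nm, placeholder sampling

# Placeholder reflectance spectra of the Neugebauer primaries (bare black paper
# and the R, G, B, W inks at full coverage).
primaries = {
    "paper": np.full(wavelengths.shape, 0.05),
    "R": np.where(wavelengths >= 600, 0.60, 0.10),
    "G": np.where((wavelengths >= 500) & (wavelengths < 600), 0.60, 0.10),
    "B": np.where(wavelengths < 500, 0.60, 0.10),
    "W": np.full(wavelengths.shape, 0.70),
}

def neugebauer_reflectance(coverages):
    """Spectral Neugebauer prediction: area-weighted average of the primaries'
    reflectance spectra. `coverages` maps primary name -> fractional area."""
    assert abs(sum(coverages.values()) - 1.0) < 1e-6
    return sum(a * primaries[name] for name, a in coverages.items())

# Example halftone: 30% R, 20% G, 10% B dots, the rest bare paper.
R_print = neugebauer_reflectance({"paper": 0.4, "R": 0.3, "G": 0.2, "B": 0.1})
print(np.round(R_print, 3))
```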

17:20 – 18:30
Drinks Reception
Friday 30 June 2023
Invited talk
09:15 – 10:30 London
Session Chairs: Marina Bloj and Lionel Simonot
09:15
CONFERENCE WELCOME & AWARDS
09:30
Invited Talk: Keeping up appearances, Susanne Klein, EPSRC Manufacturing Fellow, Centre for Fine Print Research, UWE Bristol (UK)

Abstract: Appearance, definition from the New Shorter English Dictionary: 1. The action of coming into view or becoming visible. 2. The action of appearing formally at any proceedings. 3. The action or state of seeming or appearing to be. 4. State or form as perceived. 5. Outward show or aspect. 6. A phenomenon, an apparition. 7. A gathering of people. 8. The action or an instance of coming before the world.
     From the definition, it can be understood that appearance is always a show with an audience. Without the viewer, appearance does not exist. What does this mean for ‘Material Appearance’? In this lecture I explore how, in portraits, as examples of an outward show coming before the world, the recording and reproduction of appearance rely on shared knowledge. Is appearance in the eye of the beholder? What shortcuts can be taken? What misinterpretations will happen when the cultural background of the audience differs from that of the people who recorded and recreated the appearance? The case studies come from different continents and different eras.

10:30 – 11:00
Coffee Break
Appearance in 3D
11:00 – 12:10 London
Session Chair: Davide Deganello, Swansea University (UK)
11:00
Focal Talk: Exploring the interplay of flexography and rheology: Implications for printed patterns, Davide Deganello, professor, Mechanical Engineering, Swansea University (UK)

Abstract: Flexography, a leading high-speed roll-to-roll printing process, is extensively employed in the production of fine-patterned materials, such as packaging and labels. Recent advances have led to growing interest in adopting the technology for functional printing applications, such as high-volume printed electronics and biosensors.
Crucial to its efficacy is the relationship with the rheological properties of the inks utilised, which ultimately affect the quality and uniformity of the printed patterns. Following an introduction to the process, the talk focuses on the analysis of rheology for flexographic inks and the interplay of phenomena such as viscous fingering on the uniformity of deposited patterns. Evidence from research on the formulation and flexographic printing of novel printable Boger fluids is used to illustrate how ink elasticity significantly influences print uniformity and produced patterns. Following this, the impact of rheology on the morphological and electrical properties of conductive polymer printed layers is discussed, drawing attention to how variations in ink elasticity and print velocity result in different print characteristics. The talk concludes by underlining the potential of rheological manipulation in tailoring flexographic print outcomes for applications in the field of printed electronics and security.

11:30
JIST-first Digital pre-distorted one-step phase retrieval algorithm for real-time hologram generation for holographic displays, Jinze Sha, Adam Goldney, Andrew Kadis, Jana Skirnewskaja, and Timothy D. Wilkinson, University of Cambridge (UK)

Abstract: In a computer-generated holographic projection system, the image is reconstructed via the diffraction of light from a spatial light modulator. In this process, several factors could contribute to non-linearities between the reconstruction and the target image. This paper evaluates the non-linearity of the overall holographic projection system experimentally, using binary phase holograms computed using the one-step phase retrieval (OSPR) algorithm, and then applies a digital pre-distortion (DPD) method to correct for the non-linearity. Both a notable increase in reconstruction quality and a significant reduction in mean squared error were observed, proving the effectiveness of the proposed DPD-OSPR algorithm.

11:50
Investigation on color characterization methods for 3D Printer, Ruili He, Kaida Xiao, Michael Pointer, University of Leeds (UK); Yoav Bressler, Stratasys Ltd. (Israel); and Qiang Liu, Wuhan University (China)

Abstract: In this study, third-order polynomial regression (PR) and deep neural networks (DNN) were used to perform color characterization from CMYK to CIELAB color space, based on a dataset of 2016 color samples produced using a Stratasys J750 3D color printer. Five output variables, including CIE XYZ, the logarithm of CIE XYZ, CIELAB, spectral reflectance, and the principal components of the spectra, were compared for printer color characterization performance. Ten-fold cross-validation was used to evaluate the accuracy of the models developed using the different approaches, and CIELAB color differences were calculated under the D65 illuminant. In addition, the effect of different training data sizes on predictive accuracy was investigated. The results showed that the DNN method produced much smaller color differences than the PR method, but was highly dependent on the amount of training data. In addition, the logarithm of CIE XYZ as the output provided higher accuracy than CIE XYZ.
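As a rough illustration of the polynomial-regression baseline described above, the sketch below is not the authors' pipeline: the CMYK and CIELAB arrays are random placeholders, and plain Euclidean Lab distance stands in for CIEDE2000.

```python
# Third-order polynomial regression CMYK -> CIELAB with 10-fold cross-validation.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
cmyk = rng.uniform(0, 1, size=(2016, 4))       # placeholder CMYK coordinates
lab = rng.uniform(0, 100, size=(2016, 3))      # placeholder measured CIELAB values

model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())

errors = []
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(cmyk):
    model.fit(cmyk[train], lab[train])
    pred = model.predict(cmyk[test])
    # Euclidean Lab distance as a simple stand-in for CIEDE2000.
    errors.append(np.mean(np.linalg.norm(pred - lab[test], axis=1)))

print(f"mean cross-validated Lab error: {np.mean(errors):.2f}")
```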

Day 2 Interactive Paper Previews
12:10 – 12:30 London
Session Chair: Aditya Sole, NTNU (Norway)
Facial redness perception based on realistic skin models, Yan Lu, Kaida Xiao, and Cheng Li, University of Leeds (UK)

Abstract: Facial redness is an important perceptual attribute that receives much attention in application fields such as dermatology and cosmetics. Existing studies have commonly used the average CIELAB a* value of the facial skin area to represent overall facial redness. Yet, the perception of facial redness has never been precisely examined. This research was designed to quantify the perception of facial redness and to investigate the perceptual difference between faces and uniform patches. Eighty images of real human faces and uniform skin colour patches were scaled in terms of their perceived redness by a panel of observers. The results showed that CIELAB a* was not a good predictor of facial redness, since the perceived redness was also affected by the L* and b* values. A new index, RIS, was developed to accurately quantify the perception of facial skin redness, achieving a much higher accuracy (R² = 0.874) than the a* value (R² = 0.461). The perceptual difference between facial redness and patch redness is also discussed.

Does motion increase perceived magnitude of translucency?, Davit Gigilashvili, David Norman Díaz Estrada, and Lakshay Jain, NTNU (Norway)

Abstract: The visual mechanisms behind our ability to distinguish translucent and opaque materials are not fully understood. Disentangling the contributions of surface reflectance and subsurface light transport to the structure of a still image is an ill-posed problem. While the overwhelming majority of works addressing translucency perception use static stimuli, behavioral studies show that human observers tend to move objects to assess their translucency. Therefore, we hypothesize that translucent objects appear more translucent and less opaque when observed in motion than when shown as still images. In this manuscript, we report two psychophysical experiments that we conducted using static and dynamic visual stimuli to investigate how motion affects perceived translucency.

Color change of printed surfaces due to a clear coating with matte finishing, Fanny Dailliez1,2, Mathieu Hébert2, Lionel Simonot3, Lionel Chagas1, Anne Blayo1, and Thierry Fournel2; 1Université Grenoble Alpes, 2Université Jean Monnet Saint-Etienne, and 3Université de Poitiers, Institut Pprime (France)

Abstract: When a clear layer is coated on a diffusing background, light is reflected multiple times within the transparent layer between the background and the air-layer interface. If the background is lit at one point, the angular distribution of the scattered light and the Fresnel angular reflectance of the interface induce a specific irradiance pattern on the diffuser: a ring-like halo. When the background is not homogeneously colored, e.g. a halftone print, the multiple-reflection process induces multiple convolutions between the ring-like halo and the halftone pattern, which increases the probability for light to meet differently colored areas of the background and thus induces a color change of the print. This phenomenon, recently studied in the case of a smooth layer surface (glossy finishing), is extended here to a rough surface layer (matte finishing) in order to see the impact of the surface roughness on the ring-like halo, and thereby on the print color change. A microfacet-based bidirectional reflectance distribution function (BRDF) model is used to predict the irradiance pattern on the background, and physical experiments have been carried out for verification. They show that the irradiance pattern in the case of a rough surface is still a ring-like halo, and that the print color change is similar to that observed with a smooth interface, once the in-surface reflections, which can induce an additional color change, are discarded.
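For context, the Fresnel angular reflectance mentioned above can be computed as in the following minimal sketch. These are generic textbook Fresnel equations, not the paper's microfacet model, and the refractive index value is a placeholder.

```python
import numpy as np

def fresnel_reflectance(theta_i, n1=1.0, n2=1.5):
    """Unpolarized Fresnel reflectance for light hitting the n1/n2 interface
    at incidence angle theta_i (radians)."""
    sin_t = n1 * np.sin(theta_i) / n2
    if abs(sin_t) >= 1.0:                 # total internal reflection (n1 > n2 case)
        return 1.0
    theta_t = np.arcsin(sin_t)
    rs = (n1 * np.cos(theta_i) - n2 * np.cos(theta_t)) / (n1 * np.cos(theta_i) + n2 * np.cos(theta_t))
    rp = (n2 * np.cos(theta_i) - n1 * np.cos(theta_t)) / (n2 * np.cos(theta_i) + n1 * np.cos(theta_t))
    return 0.5 * (rs**2 + rp**2)

# Seen from inside a clear coating (n1=1.5) toward air (n2=1.0), the reflectance
# rises sharply toward the critical angle, which concentrates internally
# reflected light into a ring-like halo on the background.
for deg in (0, 20, 40, 41, 45):
    print(deg, round(fresnel_reflectance(np.radians(deg), n1=1.5, n2=1.0), 3))
```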

LEDSimulator technology: A research tool for colour and texture, Jinyi Lin and Ming Ronnier Luo, Zhejiang University (China); Keith Hower, Black Swan Textiles, LLC (US); and Timwei Huang, Thousand Light Lighting (Changzhou) Ltd. (China)

Abstract: This paper describes LEDSimulator, a system that exhibits the impact of texture on colour appearance and serves as a colour communication tool for supply chain management. LEDSimulator is capable of accurately displaying coloured textures, achieving successful colour reproduction between media, and expediting the production cycle. The key technologies that accomplish this are introduced here, including: 1) visual colour matching on textures, 2) projector characterization modeling using the conventional and an advanced reduced LUT approach, and 3) a model to achieve metameric cross-media reproduction.

Wide-field gloss scanner designed to assess appearance and condition of modern paintings, Mathieu Hébert1, Pauline Hélou-De la Grandière2, Yann Pozzi1, Mathieu Thoury3, and Lionel Simonot4; 1Université Jean Monnet Saint-Etienne, 2Cy-Paris Université, 3Université Paris-Saclay, and 4Université de Poitiers, Institut Pprime (France)

Abstract: When one seeks to characterize the appearance of art paintings, color is the visual attribute that usually receives most attention: not only does color predominate in the reading of the pictorial work, but it is also the attribute that we best know how to evaluate scientifically, thanks to spectrophotometers or imaging systems that have become portable and affordable, and thanks to the CIE color appearance models that allow us to convert the measured physical data into quantified visual values. However, for some modern paintings, the expression of the painter relies at least as much on gloss as on color; Pierre Soulages (1919-2022) is an exemplary case. This considerably complicates the characterization of the appearance of the paintings, because the scientific definition of gloss, its link with measurable light quantities, and the measurement of these light quantities over a whole painting are much less established than for color. This paper reports on the knowledge, challenges, and difficulties of characterizing the gloss of painted works, by outlining an imaging-system approach to achieve this.

Influence of the hue of absorption pigments on the perception of graininess, Esther Perales1, Alejandro Ferrero2, Julián Espinosa1, Jorge Pérez1, Mercedes Gutiérrez1, Marjetka Miloseveic3, Juan Carlos Fernández-Becáres3, and Joaquín Campos2; 1Universidad de Alicante (Spain), 2Instituto de Optica, Consejo Superior de Investigaciones Científicas (Spain), and 3AkzoNobel Technology Group Color (the Netherlands)

Abstract: Valid and traceable instrumental measurements of all the visual attributes that characterize the appearance of a material (color, gloss, texture and translucency) are necessary to ensure good product quality control. The objective of this work is to evaluate the visual attribute of texture associated with special effect pigments in order to be able to establish a measurement scale. In particular, this study evaluates the influence of the hue of absorption pigments on the perception of graininess. For this purpose, nine samples with a systematic variation of hue angle were used. A visual experiment based on the comparison of triplets was designed, and a multidimensional scaling (MDS) analysis was applied to obtain relative values of perceived graininess. The results confirm that the hue angle of the absorption pigments does not influence the perception of graininess.

Advancing material appearance measurement: A cost-effective multispectral imaging system for capturing SVBRDF and BTF, Majid Ansari-Asl1, Markus Barbieri2, Gael Obein3, and Jon Yngve Hardeberg1; 1NTNU (Norway), 2Barbieri Electronic SNC (Italy), and 3CNAM (France)

Abstract: This paper introduces a novel system for measuring the appearance of materials by capturing their reflectance, represented by the Spatially Varying Bidirectional Reflectance Distribution Function (SVBRDF) and the Bidirectional Texture Function (BTF). Inspired by goniospectrophotometers, our system uses a fully aligned, motorized turntable that rotates the sample around three axes to scan the entire hemispherical range of incident-reflection directions. The camera remains fixed, while the light source can be rotated around one axis, providing the fourth degree of freedom. To ensure high-precision color measurement and spectral reproduction for reliable relighting, we use a high-resolution multispectral camera and a broadband LED light source. We provide an overview of our instrument in this paper and discuss its limitations to be addressed in future work.

Depth perception assessment for 3D display using real time controllable random dot stereogram, Young-sang Ha, Rang-kyun Mok, and Beom-shik Kim, Samsung Display (Republic of Korea)

Abstract: This paper proposes a method for subjectively evaluating the actual depth range of a 3D display. It presents the positive and negative depth of the 3D display using visual stimulation images such as random dot stereograms (RDS). We developed a system that allows subjects to control the depth range of the RDS images in real time to increase evaluation accuracy. With this system, subjects evaluate the clarity of the image form and the permissible level of recognition of the stereoscopic image within the depth range. We can then determine the depth range of the 3D display from the acquired cognitive evaluation results. Finally, the depth enhancement under different light field display (LFD) experimental conditions is quantified using a t-test. This experimental method can be a successful approach to developing a 3D stereoscopic evaluation system and producing 3D content that accounts for perceptual factors.

12:40 – 13:40
LUNCH BREAK
Interdisciplinary 2
13:40 – 14:50 London
Session Chair: Belen Masia, Universidad de Zaragoza (Spain)
13:40
Focal Talk: Towards high-level, intuitive descriptors of material appearance, Belen Masia, associate professor, Computer Science Department, Universidad de Zaragoza (Spain)

Abstract: Material appearance perception depends both on the physical interaction between light and material, and on how our visual system processes the information reaching our eyes. Currently, there is a disconnect between the physical properties underlying material appearance models used in simulations, and perceptual properties that humans rely on when interpreting visual depictions of material appearance. Our goal is to bridge this gap, creating high-level, intuitive descriptors of visual appearance that are linked to the underlying physical properties of the material. This can in turn benefit final applications such as material appearance editing, acquisition, gamut mapping, or compression. Here, we review two approaches proposing representations of appearance that are better aligned with human perception: one based on the use of intuitive attributes, and another exploring the use of natural language descriptions of material appearance. All our data and models are publicly available.

14:10
Optimizing Gabor texture features for materials recognition by convolutional neural networks, Francesco Bianconi, Università degli Studi di Perugia; Claudio Cusano, University of Pavia; and Paolo Napoletano and Raimondo Schettini, University of Milano - Bicocca (Italy)

Abstract: In this paper, we present a novel technique that allows for customized Gabor texture features by leveraging deep learning neural networks. Our method involves using a Convolutional Neural Network to refactor traditional, hand-designed filters on specific datasets. The refactored filters can be used in an off-the-shelf manner with the same computational cost but significantly improved accuracy for material recognition. We demonstrate the effectiveness of our approach by reporting a gain in discrimination accuracy on different material datasets. Our technique is particularly appealing in situations where the use of the entire CNN would be inadequate, such as analyzing non-square images or performing segmentation tasks. Overall, our approach provides a powerful tool for improving the accuracy of material recognition tasks while retaining the advantages of handcrafted filters.

14:30
Color appearance of iridescent objects, Katja Doerschner1, Robert Ennis1, Philipp Börner1, Frank J. Maile2, and Karl R. Gegenfurtner1; 1Giessen University and 2SCHLENK Metallic Pigments GmbH (Germany)

Abstract: Iridescent objects and animals are quite mesmerizing to look at, since they feature multiple intense colors whose distribution can vary quite dramatically as a function of viewing angle. These properties make them a particularly interesting and unique stimulus for experimentally investigating the factors that contribute to single color impressions of multi-colored objects. Our stimuli were 3D-printed shapes of varying complexity coated with three different types of iridescent paint. For each shape-color combination, participants performed single- and multi-color matches for different views of the stationary object, as well as single color matches for a corresponding rotating stimulus. In the multi-color matching task, participants subsequently rated the size of the surface area on the object covered by the matched color. Results show that the single-color appearance of iridescent objects varied with shape complexity, view, and object motion. Moreover, hue similarity of the color settings in the multi-color match task best predicted single-color appearance; however, this predictor was weaker for single color matches in the motion condition. Taken together, our findings suggest that the single-color appearance of iridescent objects may be modulated by chromatic factors, spatial relations, and the characteristic dynamics of color change that are typical for this type of material.

14:50 – 16:00
Posters and Coffee
Closing Keynote
16:00 – 17:00 London
Session Chairs: Marina Bloj and Lionel Simonot
16:00
Computational imaging for realistic appearance capture, Abhijeet Ghosh, professor of Graphics and Imaging, Department of Computing, Imperial College London (UK)

Abstract: This talk provides an overview of the research we have been conducting in the Realistic Graphics and Imaging group at Imperial College London and at Lumirithmic (Imperial spin-out) on measurement based appearance modeling for realistic computer graphics. The talk spans practical techniques for both material and facial appearance capture and techniques for diffuse-specular separation of reflectance. The first part of the talk covers our work on acquiring shape and reflectance of planar material samples. This includes free-form hand-held capture using a mobile device, as well as exploiting polarization imaging, and also resolving materials exhibiting iridescence due to surface diffraction. The second part focuses on computational illumination for high-quality facial appearance capture, and here I cover some previous work on using specialized Light Stages and its impact in film VFX (at USC-ICT), as well as a novel desktop-based high-quality facial capture system developed at Lumirithmic.

17:00
Wrap up and best paper award. Announcement of LIM 2024