13 – 17 January 2019 • Burlingame, California, USA

Image Processing: Algorithms and Systems XVII

Conference Keywords: Filtering and Denoising, Fusion Algorithms, Video Processing, Tools and Systems, Color


Monday January 14, 2019

Image Restoration I

Session Chairs: Karen Egiazarian, Tampere University of Technology (Finland) and Atanas Gotchev, Tampere University (Finland)
8:50 – 10:20 AM
Regency C

8:50 IPAS-250
Additive spatially correlated noise suppression by robust block matching and adaptive 3D filtering (JIST-first), Oleksii Rubel1, Vladimir Lukin1, and Karen Egiazarian2; 1National Aerospace University (Ukraine) and 2Tampere University of Technology (Finland)

9:10 IPAS-251
A snowfall noise elimination using moving object compositing method adaptable to natural boundary, Yoshihiro Sato, Koya Kokubo, and Yue Bao, Tokyo City University (Japan)

9:30 IPAS-252
Patch-based image despeckling using low-rank Hankel matrix approach with speckle level estimation, Hansol Kim, Paul Oh, Sangyoon Lee, and Moon Gi Kang, Yonsei University (Republic of Korea)

9:50 IPAS-253
Leveraging training data in computational image reconstruction (Invited), Davis Gilton1, Greg Ongie2, and Rebecca Willett2; 1University of Wisconsin, Madison and 2University of Chicago (United States)



10:20 – 10:40 AM Coffee Break

Image Restoration II

Session Chairs: Sos Agaian, CUNY/The College of Staten Island (United States) and Atanas Gotchev, Tampere University (Finland)
10:40 AM – 12:10 PM
Regency C

10:40 IPAS-254
General Adaptive Neighborhood Image Processing (GANIP) (Invited), Johan Debayle, Ecole Nationale Supérieure des Mines (France)

11:10 IPAS-255
Gradient management and algebraic reconstruction for single image super resolution, Leandro Delfin1, Raul Pinto Elias1, and Humberto de Jesus Ochoa-Dominguez2; 1CENIDET and 2Universidad Autónoma de Ciudad Juarez (Mexico)

11:30 IPAS-256
Image stitching by creating a virtual depth, Ahmed Eid, Brian Cooper, and Tomasz Cholewo, Lexmark (United States)

11:50 IPAS-257
Enhanced guided image filter using trilateral kernel for disparity error correction, Yong-Jun Chang and Yo-Sung Ho, Gwangju Institute of Science and Technology (Republic of Korea)



Phase Imaging

Session Chairs: Sos Agaian, CUNY/The College of Staten Island (United States) and Karen Egiazarian, Tampere University of Technology (Finland)
12:10 – 12:50 PM
Regency C

12:10 IPAS-258
Phase masks optimization for broadband diffractive imaging, Nikolay Ponomarenko, Vladimir Katkovnik, and Karen Egiazarian, Tampere University of Technology (Finland)

12:30 IPAS-259
Phase extraction from interferogram using machine learning, Daichi Kando and Satoshi Tomioka, Hokkaido University (Japan)



12:50 – 2:00 PM Lunch

Monday Plenary

2:00 – 3:00 PM
Grand Peninsula Ballroom D

Autonomous Driving Technology and the OrCam MyEye, Amnon Shashua, President & CEO, Mobileye, an Intel Company, and Senior Vice President of Intel Corporation (United States)

The field of transportation is undergoing a seismic change with the coming introduction of autonomous driving. The technologies required to enable computer-driven cars involve the latest cutting-edge artificial intelligence algorithms along three major thrusts: Sensing, Planning, and Mapping. Prof. Shashua will describe the challenges and the kinds of computer vision and machine learning algorithms involved, through the perspective of Mobileye's activity in this domain. He will then describe how OrCam leverages computer vision, situation awareness, and language processing to enable the blind and visually impaired to interact with the world through a miniature wearable device.

Prof. Amnon Shashua holds the Sachs Chair in Computer Science at the Hebrew University of Jerusalem. His field of expertise is computer vision and machine learning. For his academic achievements he received the Marr Prize Honorable Mention in 2001, the Kaye Innovation Award in 2004, and the Landau Award in Exact Sciences in 2005.

In 1999 Prof. Shashua co-founded Mobileye, an Israeli company developing a system-on-chip and computer vision algorithms for driving assistance systems, providing a full range of active safety features using a single camera. Today, approximately 24 million cars rely on Mobileye technology to make their vehicles safer to drive. In August 2014, Mobileye claimed the title of largest Israeli IPO ever, raising $1B at a market cap of $5.3B. In addition, Mobileye is developing autonomous driving technology with more than a dozen car manufacturers. The introduction of autonomous driving capabilities is of a transformative nature and has the potential to change the way cars are built, driven, and owned in the future. In August 2017, Mobileye became an Intel company in the largest Israeli acquisition deal ever, at $15.3B. Today, Prof. Shashua is the President and CEO of Mobileye and a Senior Vice President of Intel Corporation, leading Intel's Autonomous Driving Group.

In 2010 Prof. Shashua co-founded OrCam which harnesses computer vision and artificial intelligence to assist people who are visually impaired or blind. The OrCam MyEye device is unique in its ability to provide visual aid to hundreds of millions of people, through a discreet wearable platform. Within its wide-ranging scope of capabilities, OrCam's device can read most texts (both indoors and outdoors) and learn to recognize thousands of new items and faces.


3:00 – 3:30 PM Coffee Break

Panel: Sensing and Perceiving for Autonomous Driving

Panelists: Boyd Fowler, OmniVision Technologies (United States); Jun Pei, Cepton Technologies Inc. (United States); Christoph Schroeder, Mercedes-Benz Research & Development North America, Inc. (United States); and Amnon Shashua, Mobileye, an Intel Company (Israel)
Panel Moderator: Wende Zhang, General Motors (United States)
3:30 – 5:30 PM
Grand Peninsula Ballroom D

This session is jointly sponsored by the EI Steering Committee.

Driver assistance and autonomous driving rely on perceptual systems that combine data from many different sensors, including cameras, ultrasound, radar, and lidar. This panel will discuss the strengths and limitations of different types of sensors and how the data from these sensors can be effectively combined to enable autonomous driving.

Moderator: Dr. Wende Zhang, Technical Fellow at General Motors

Panelist: Dr. Boyd Fowler, CTO, OmniVision Technologies

Panelist: Dr. Jun Pei, CEO and Co-Founder, Cepton Technologies Inc.

Panelist: Dr. Amnon Shashua, Professor of Computer Science at Hebrew University; President and CEO, Mobileye, an Intel Company; and Senior Vice President, Intel Corporation

Panelist: Dr. Christoph Schroeder, Head of Autonomous Driving N.A., Mercedes-Benz Research & Development North America, Inc.


5:00 – 6:00 PM All-Conference Welcome Reception

Tuesday January 15, 2019

7:30 – 8:45 AM Women in Electronic Imaging Breakfast

Image Quality

Session Chairs: Marco Carli, Università degli Studi Roma TRE (Italy) and Karen Egiazarian, Tampere University of Technology (Finland)
8:50 – 10:10 AM
Regency C

8:50 IPAS-260
Combined no-reference IQA metric and its performance analysis (Invited), Oleg Ieremeiev1, Vladimir Lukin1, Nikolay Ponomarenko1,2, and Karen Egiazarian2; 1National Aerospace University (Ukraine) and 2Tampere University of Technology (Finland)

9:10 IPAS-261
Evaluating the effectiveness of image quality metrics in a light field scenario, Giuliano Arru, Marco Carli, and Federica Battisti, Università degli Studi Roma TRE (Italy)

9:30 IPAS-262
Parameter optimization in H.265 rate-distortion by single frame semantic scene analysis, Ahmed Hamza1, Abdelrahman Abdelazim2, and Djamel Ait-Boudaoud1; 1University of Portsmouth and 2Blackpool and the Fylde College (United Kingdom)

9:50 IPAS-263
Additional lossless compression of JPEG images based on BPG, Nikolay Ponomarenko1, Oleksandr Miroshnichenko2, Vladimir Lukin2, and Karen Egiazarian1; 1Tampere University of Technology (Finland) and 2National Aerospace University (Ukraine)



10:00 AM – 7:30 PM Industry Exhibition

10:10 – 10:50 AM Coffee Break

Object Recognition

Session Chairs: Sos Agaian, CUNY/The College of Staten Island (United States) and Atanas Gotchev, Tampere University (Finland)
10:50 AM – 12:30 PM
Regency C

10:50 IPAS-264
Uncertainty quantification for semi-supervised multilabel classification in image processing and ego motion analysis from body worn cameras, Yiling Qiao1, Chang Shi1, Chenjian Wang1, Hao Li1, Matthew Haberland1,2, Andrew Stuart3, and Andrea Bertozzi1; 1UCLA, 2Cal Poly San Luis Obispo, and 3California Institute of Technology (United States)

11:10 IPAS-265
On-street parked vehicle detection via view-normalized classifier, Wencheng Wu, University of Rochester (United States)

11:30 IPAS-266
Multi-class detection and orientation recognition of vessels in maritime surveillance, Amir Ghahremani, Yitian Kong, Egor Bondarev, and Peter De With, Eindhoven University of Technology (the Netherlands)

11:50 IPAS-267
Construction of facial emotion database through subjective experiments and its application to deep learning-based facial image processing, Tomoyuki Takanashi, Keita Hirai, and Takahiko Horiuchi, Chiba University (Japan)

12:10 IPAS-268
Improving person re-identification performance by customized dataset and better person detection, Herman Groot, Egor Bondarev, and Peter De With, Eindhoven University of Technology (the Netherlands)



12:30 – 2:00 PM Lunch

Tuesday Plenary

2:00 – 3:00 PM
Grand Peninsula Ballroom D

The Quest for Vision Comfort: Head-Mounted Light Field Displays for Virtual and Augmented Reality, Hong Hua, Professor of Optical Sciences, University of Arizona (United States)

Hong Hua will discuss the high promise of, and the tremendous progress made recently toward, the development of head-mounted displays (HMDs) for both virtual and augmented reality. Developing HMDs that offer uncompromised optical pathways to both digital and physical worlds without encumbrance and discomfort confronts many grand challenges, both from technological perspectives and from human factors. She will focus in particular on the recent progress, challenges, and opportunities in developing head-mounted light field displays (LF-HMDs), which are capable of rendering true 3D synthetic scenes with proper focus cues to stimulate natural eye accommodation responses and address the well-known vergence-accommodation conflict of conventional stereoscopic displays.

Dr. Hong Hua is a Professor of Optical Sciences at the University of Arizona. With over 25 years of experience, Dr. Hua is widely recognized throughout academia and industry as an expert in wearable display technologies and in optical imaging and engineering in general. Dr. Hua's current research focuses on optical technologies enabling advanced 3D displays, especially head-mounted display technologies for virtual reality and augmented reality applications, and microscopic and endoscopic imaging systems for medicine. Dr. Hua has published over 200 technical papers, filed a total of 23 patent applications in her specialty fields, and delivered numerous keynote addresses and invited talks at major conferences and events worldwide. She is an SPIE Fellow and OSA senior member. She was a recipient of the NSF CAREER Award in 2006 and was honored as UA Researchers @ Lead Edge in 2010. Dr. Hua and her students have shared a total of eight "Best Paper" awards at various IEEE, SPIE, and SID conferences. Dr. Hua received her PhD in Optical Engineering from the Beijing Institute of Technology, China, in 1999. Prior to joining the UA faculty in 2003, Dr. Hua was an Assistant Professor at the University of Hawaii at Manoa in 2003, a Beckman Research Fellow at the Beckman Institute of the University of Illinois at Urbana-Champaign between 1999 and 2002, and a postdoc at the University of Central Florida in 1999.


3:00 – 3:30 PM Coffee Break

Computational Models for Human Optics

Session Chair: Jennifer Gille, Oculus VR (United States)
3:30 – 5:30 PM
Grand Peninsula Ballroom D

This session is jointly sponsored by the EI Steering Committee.


3:30 EISS-704
Eye model implementation: Tools for modeling human visual optics (Invited), Andrew Watson, Apple Inc. (United States)

Dr. Andrew Watson is the Chief Vision Scientist at Apple Inc. in Cupertino, California. In that position he leads the application of vision science to a broad range of Apple technologies, applications, devices, and displays. Dr. Watson was an undergraduate at Columbia University and received a PhD in psychology from the University of Pennsylvania. He subsequently held postdoctoral positions at the University of Cambridge in England and at Stanford University in California. From 1982 to 2016 he was the Senior Scientist for Vision Research at NASA Ames Research Center in California. He is the author of over 100 scientific papers and seven patents in areas such as acuity measurement, image compression, video quality, and measurement of display artifacts. He has 17,748 citations and an h-index of 58. In 1990 he received NASA's H. Julian Allen Award, and in 1993 he was appointed Ames Associate Fellow for exceptional scientific achievement. In 2011 he received the Presidential Rank Award from the President of the United States.

3:50 EISS-700
Wide field-of-view optical model of the human eye (Invited), James Polans, Verily Life Sciences (United States)

Dr. James Polans is an engineer who works on surgical robotics at Verily Life Sciences in South San Francisco. Dr. Polans received his Ph.D. in biomedical engineering from Duke University under the mentorship of Joseph Izatt. His doctoral work explored the design and development of wide field-of-view optical coherence tomography systems for retinal imaging. He also has a M.S. in electrical engineering from the University of Illinois at Urbana-Champaign.

4:10 EISS-702
Evolution of the Arizona Eye Model (Invited), Jim Schwiegerling, University of Arizona (United States)

Prof. Jim Schwiegerling is a Professor in the College of Optical Sciences at the University of Arizona. His research interests include the design of ophthalmic systems such as corneal topographers, ocular wavefront sensors and retinal imaging systems. In addition to these systems, Dr. Schwiegerling has designed a variety of multifocal intraocular and contact lenses and has expertise in diffractive and extended depth of focus systems.

4:30 EISS-705
Berkeley Eye Model (Invited), Brian Barsky, University of California, Berkeley (United States)

Prof. Brian Barsky is Professor of Computer Science and Affiliate Professor of Optometry and Vision Science at UC Berkeley. He attended McGill University, Montréal, where he received a DCS in engineering and a BSc in mathematics and computer science. He studied computer graphics and computer science at Cornell University, Ithaca, where he earned an MS degree. His PhD is in computer science from the University of Utah, Salt Lake City. He is a Fellow of the American Academy of Optometry. His research interests include computer-aided geometric design and modeling, interactive three-dimensional computer graphics, visualization in scientific computing, computer-aided cornea modeling and visualization, medical imaging, and virtual environments for surgical simulation.

4:50 EISS-701
Modeling retinal image formation for light field displays (Invited), Hekun Huang, Mohan Xu, and Hong Hua, University of Arizona (United States)

Prof. Hong Hua is a Professor of Optical Sciences at the University of Arizona. With over 25 years of experience, Dr. Hua is widely recognized throughout academia and industry as an expert in wearable display technologies and in optical imaging and engineering in general. Dr. Hua's current research focuses on optical technologies enabling advanced 3D displays, especially head-mounted display technologies for virtual reality and augmented reality applications, and microscopic and endoscopic imaging systems for medicine. Dr. Hua has published over 200 technical papers, filed a total of 23 patent applications in her specialty fields, and delivered numerous keynote addresses and invited talks at major conferences and events worldwide. She is an SPIE Fellow and OSA senior member. She was a recipient of the NSF CAREER Award in 2006 and was honored as UA Researchers @ Lead Edge in 2010. Dr. Hua and her students have shared a total of eight "Best Paper" awards at various IEEE, SPIE, and SID conferences. Dr. Hua received her PhD in Optical Engineering from the Beijing Institute of Technology, China, in 1999. Prior to joining the UA faculty in 2003, Dr. Hua was an Assistant Professor at the University of Hawaii at Manoa in 2003, a Beckman Research Fellow at the Beckman Institute of the University of Illinois at Urbana-Champaign between 1999 and 2002, and a postdoc at the University of Central Florida in 1999.

5:10 EISS-703
Ray-tracing 3D spectral scenes through human optics (Invited), Trisha Lian, Kevin MacKenzie, and Brian Wandell, Stanford University (United States)

Trisha Lian is an Electrical Engineering PhD student at Stanford University. Before Stanford, she received her bachelor’s in Biomedical Engineering from Duke University. She is currently advised by Professor Brian Wandell and works on interdisciplinary topics that involve image systems simulations. These range from novel camera designs to simulations of the human visual system.


5:30 – 7:30 PM Symposium Demonstration Session

Wednesday January 16, 2019

10:00 AM – 3:30 PM Industry Exhibition

10:10 – 11:00 AM Coffee Break

12:30 – 2:00 PM Lunch

Wednesday Plenary

2:00 – 3:00 PM
Grand Peninsula Ballroom D

Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, Paul Debevec, Senior Scientist, Google (United States)

Paul Debevec will discuss the technology and production processes behind "Welcome to Light Fields", the first downloadable virtual reality experience based on light field capture techniques, which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR, enabling full six-degree-of-freedom (6DOF) head movement. The light field technique differs from conventional approaches such as 3D modelling and photogrammetry; Debevec will discuss the theory and application of the technique. Debevec will also discuss the Light Stage computational illumination and facial scanning systems, which use geodesic spheres of inward-pointing LED lights and have been used to create digital actor effects in movies such as Avatar, Benjamin Button, and Gravity, and more recently to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities to enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. They have also recently used their full-body light stage in conjunction with natural language processing and automultiscopic video projection to record and project interactive conversations with survivors of the World War II Holocaust.

Paul Debevec is a Senior Scientist at Google VR, a member of Google VR's Daydream team, and Adjunct Research Professor of Computer Science in the Viterbi School of Engineering at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies. Debevec's computer graphics research has been recognized with ACM SIGGRAPH's first Significant New Researcher Award in 2001 for "Creative and Innovative Work in the Field of Image-Based Modeling and Rendering"; a Scientific and Engineering Academy Award in 2010, shared with Tim Hawkins, John Monos, and Mark Sagar, for "the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures"; and the SMPTE Progress Medal in 2017 in recognition of his achievements and ongoing work in pioneering techniques for illuminating computer-generated objects based on measurement of real-world illumination and their effective commercial application in numerous Hollywood films. In 2014 he was profiled in The New Yorker magazine's article "Pixel Perfect: The Scientist Behind the Digital Cloning of Actors" by Margaret Talbot.


3:00 – 3:30 PM Coffee Break

Light Field Imaging and Display

Session Chair: Gordon Wetzstein, Stanford University (United States)
3:30 – 5:30 PM
Grand Peninsula Ballroom D

This session is jointly sponsored by the EI Steering Committee.


3:30 EISS-706
Light fields - From shape recovery to sparse reconstruction (Invited), Ravi Ramamoorthi, University of California, San Diego (United States)

Prof. Ravi Ramamoorthi is the Ronald L. Graham Professor of Computer Science, and Director of the Center for Visual Computing, at the University of California, San Diego. Ramamoorthi received his PhD in computer science from Stanford University in 2002. Prior to joining UC San Diego, Ramamoorthi was associate professor of EECS at the University of California, Berkeley, where he developed the complete graphics curricula. His research centers on the theoretical foundations, mathematical representations, and computational algorithms for understanding and rendering the visual appearance of objects, exploring topics in frequency analysis and sparse sampling and reconstruction of visual appearance datasets; a digital data-driven visual appearance pipeline; light-field cameras and 3D photography; and physics-based computer vision. Ramamoorthi is an ACM Fellow for contributions to computer graphics rendering and physics-based computer vision (awarded December 2017), and an IEEE Fellow for contributions to foundations of computer graphics and computer vision (awarded January 2017).

4:10 EISS-707
The beauty of light fields (Invited), David Fattal, LEIA Inc. (United States)

Dr. David Fattal is co-founder and CEO of LEIA Inc., where he is in charge of bringing their mobile holographic display technology to market. Fattal received his PhD in physics from Stanford University in 2005. Prior to founding LEIA Inc., Fattal was a research scientist with HP Labs, HP Inc. At LEIA Inc., the focus is on immersive mobile, with screens that come alive in richer, deeper, more beautiful ways. Flipping seamlessly between 2D and light fields, mobile experiences become truly immersive: no glasses, no tracking, no fuss. Alongside the new display technology, LEIA Inc. is developing Leia Loft™, a whole new canvas.

4:30 EISS-708
Light field insights from my time at Lytro (Invited), Kurt Akeley, Google Inc. (United States)

Dr. Kurt Akeley is a Distinguished Engineer at Google Inc. Akeley received his PhD in stereoscopic display technology from Stanford University in 2004, where he implemented and evaluated a stereoscopic display that passively (e.g., without eye tracking) produces nearly correct focus cues. After Stanford, Dr. Akeley worked on OpenGL at NVIDIA Incorporated, was a principal researcher at Microsoft Corporation, and was a consulting professor at Stanford University. In 2010 he joined Lytro Inc. as CTO. During his seven-year tenure as Lytro's CTO, he guided and directly contributed to the development of two consumer light-field cameras and their related display systems, and also to a cinematic capture and processing service that supported immersive, six-degree-of-freedom virtual reality playback.

4:50 EISS-709
Quest for immersion (Invited), Kari Pulli, Stealth Startup (United States)

Dr. Kari Pulli has spent two decades in computer imaging and AR at companies such as Intel, NVIDIA, and Nokia. Before joining a stealth startup, he was the CTO of Meta, an augmented reality company in San Mateo, heading up computer vision, software, displays, and hardware, as well as the overall architecture of the system. Before joining Meta, he worked as the CTO of the Imaging and Camera Technologies Group at Intel, influencing the architecture of future IPUs in hardware and software. Prior to that, he was vice president of computational imaging at Light, where he developed algorithms for combining images from a heterogeneous camera array into a single high-quality image. He previously led research teams as a senior director at NVIDIA Research and as a Nokia Fellow at Nokia Research, where he focused on computational photography, computer vision, and AR. Kari holds computer science degrees from the University of Minnesota (BSc), University of Oulu (MSc, Lic. Tech), and University of Washington (PhD), as well as an MBA from the University of Oulu. He has taught and worked as a researcher at Stanford University, the University of Oulu, and MIT.

5:10 EISS-710
Industrial scale light field printing (Invited), Matthew Hirsch, Lumii Inc. (United States)

Dr. Matthew Hirsch is a co-founder and Chief Technical Officer of Lumii. He worked with Henry Holtzman's Information Ecology Group and Ramesh Raskar's Camera Culture Group at the MIT Media Lab, making the next generation of interactive and glasses-free 3D displays. Matthew received his bachelor's degree in Computer Engineering from Tufts University, and his Masters and Doctorate from the MIT Media Lab. Between degrees, he worked at Analogic Corp. as an Imaging Engineer, where he advanced algorithms for image reconstruction and understanding in volumetric x-ray scanners. His work has been funded by the NSF and the Media Lab consortia, and has appeared in SIGGRAPH, CHI, and ICCP. Matthew has also taught courses at SIGGRAPH on a range of subjects in computational imaging and display, with a focus on DIY.


Image Processing: Algorithms and Systems XVII Interactive Posters Session

Session Chairs: Federica Battisti, Università degli Studi Roma TRE (Italy) and Viacheslav Voronin, Don State Technical University (Russian Federation)
5:30 – 7:00 PM
The Grove

The following works will be presented at the EI 2019 Symposium Interactive Papers Session.


IPAS-269
Background subtraction using Multi-Channel Fused Lasso, Xin Liu and Guoying Zhao, University of Oulu (Finland)

IPAS-270
Depth from stacked light field images using generative adversarial network, Ji-Hun Mun and Yo-Sung Ho, Gwangju Institute of Science and Technology (GIST) (Republic of Korea)

IPAS-271
Depth-based saliency estimation for omnidirectional images, Federica Battisti and Marco Carli, Università degli Studi Roma TRE (Italy)

IPAS-272
Driver drowsiness detection in facial images, Fadi Dornaika, Jorge Reta, Ignacio Arganda-Carreras, and Abdelmalik Moujahid, University of the Basque Country (Spain)

IPAS-273
Illumination invariant NIR face recognition using directional visibility, Srijith Rajeev1, Shreyas Kamath1, Qianwen Wan1, Karen Panetta1, and Sos Agaian2; 1Tufts University and 2CUNY/The College of Staten Island (United States)

IPAS-274
Microscope image matching in scope of multi-resolution observation system, Evan Eka Putranto1, Usuki Shin2, and Kenjiro Miura1; 1Shizuoka University and 2Research Institute of Electronics (Japan)

IPAS-275
Multi-frame super-resolution utilizing spatially adaptive regularization for ToF camera, Haegeun Lee, Jonghyun Kim, Jaeduk Han, and Moon Gi Kang, Yonsei University (Republic of Korea)

IPAS-276
Pixelwise JPEG compression detection and quality factor estimation based on convolutional neural network, Kazutaka Uchida1, Masayuki Tanaka2,1, and Masatoshi Okutomi1; 1Tokyo Institute of Technology and 2National Institute of Advanced Industrial Science and Technology (Japan)

IPAS-277
The quaternion-based anisotropic gradient for the color images, Viacheslav Voronin1, Vladimir Frants2, and Sos Agaian3; 1Don State Technical University, 2Moscow State University of Technology “STANKIN” (Russian Federation), and 3CUNY/The College of Staten Island (United States)



Thursday January 17, 2019

Imaging Systems

Session Chairs: Atanas Gotchev, Tampere University (Finland) and Michael Kriss, MAK Consultants (United States)
8:50 – 10:30 AM
Regency B

This session is jointly sponsored by: Image Processing: Algorithms and Systems XVII, and Photography, Mobile, and Immersive Imaging 2019.


8:50 PMII-278
EDICT: Embedded and distributed intelligent capture technology (Invited), Scott Campbell, Timothy Macmillan, and Katsuri Rangam, Area4 Professional Design Services (United States)

9:10 IPAS-279
Modeling lens optics and rendering virtual views from fisheye imagery, Filipe Gama, Mihail Georgiev, and Atanas Gotchev, Tampere University of Technology (Finland)

9:30 PMII-280
Digital distortion correction to measure spatial resolution from cameras with wide-angle lenses, Brian Rodricks1 and Yi Zhang2; 1SensorSpace, LLC and 2Facebook Inc. (United States)

9:50 IPAS-281
LiDAR assisted large-scale privacy protection in street view cycloramas, Clint Sebastian1, Bas Boom2, Egor Bondarev1, and Peter De With1; 1Eindhoven University of Technology and 2CycloMedia Technology B.V. (the Netherlands)

10:10 IPAS-282
Phase imaging of 3D specimen through dark-field FiMic, Gabriele Scrofani, Jorge Sola-Pikabea, Emilio Sánchez-Ortiga, Juan Carlos Barreiro, Manuel Martínez-Corral, and Genaro Saavedra, University of Valencia (Spain)



10:30 – 10:50 AM Coffee Break

Medical Imaging - Perception II

Session Chairs: Sos Agaian, CUNY/The College of Staten Island (United States) and Mark McCourt, North Dakota State University (United States)
10:50 AM – 12:10 PM
Grand Peninsula Ballroom A

This medical imaging session is jointly sponsored by: Human Vision and Electronic Imaging 2019, and Image Processing: Algorithms and Systems XVII.


10:50 IPAS-222
Specular reflection detection algorithm for endoscopic images, Viacheslav Voronin1, Evgeny Semenishchev1, and Sos Agaian2; 1Don State Technical University (Russian Federation) and 2CUNY/The College of Staten Island (United States)

11:10 IPAS-223
Feedback alfa-rooting algorithm for medical image enhancement, Viacheslav Voronin1, Evgeny Semenishchev1, and Sos Agaian2; 1Don State Technical University (Russian Federation) and 2CUNY/The College of Staten Island (United States)

11:30 HVEI-224
Observer classification images and efficiency in 2D and 3D search tasks (Invited), Craig Abbey, Miguel Lago, and Miguel Eckstein, University of California, Santa Barbara (United States)

11:50 HVEI-226
Image recognition depends largely on variety (Invited), Tamara Haygood1, Christina Thomas2, Tara Sagebiel2, Diana Palacio2, Myrna Godoy2, and Karla Evans1; 1University of York (United Kingdom) and 2UT M.D. Anderson Cancer Center (United States)




Important Dates
Call for Papers Announced: 1 Mar 2018
Journal-first Submissions Due: 30 Jun 2018
Abstract Submission Site Opens: 1 May 2018
Review Abstracts Due (refer to For Authors page):
 · Early Decision Ends: 30 Jun 2018
 · Regular Submission Ends: 8 Sept 2018
 · Extended Submission Ends: 25 Sept 2018
Final Manuscript Deadlines:
 · Fast Track Manuscripts Due: 14 Nov 2018
 · Final Manuscripts Due: 1 Feb 2019
Registration Opens: 23 Oct 2018
Early Registration Ends: 18 Dec 2018
Hotel Reservation Deadline: 3 Jan 2019
Conference Begins: 13 Jan 2019


 

Conference Chairs
Sos Agaian, College of Staten Island, CUNY (United States); Karen Egiazarian, Tampere University of Technology (Finland); Atanas Gotchev, Tampere University of Technology (Finland)

Program Committee
Gözde Akar, Middle East Technical University (Turkey); Junior Barrera, Universidade de São Paulo (Brazil); Jenny Benois-Pineau, Bordeaux University (France); Giacomo Boracchi, Politecnico di Milano (Italy); Reiner Creutzburg, Technische Hochschule Brandenburg (Germany); Alessandro Foi, Tampere University of Technology (Finland); Paul Gader, University of Florida (United States); John Handley, University of Rochester (United States); Vladimir Lukin, National Aerospace University (Ukraine); Vladimir Marchuk, Don State Technical University (Russian Federation); Alessandro Neri, Radiolabs (Italy); Marek Ogiela, AGH University of Science and Technology (Poland); Ljiljana Platisa, Universiteit Gent (Belgium); Françoise Prêteux, Ecole des Ponts ParisTech (France); Giovanni Ramponi, Università degli Studi di Trieste (Italy); Ivan Selesnick, Polytechnic Institute of New York University (United States); Damir Sersic, University of Zagreb (Croatia)