13 – 17 January 2019 • Burlingame, California, USA

Stereoscopic Displays and Applications XXX

Conference Keywords: Stereoscopic, VR, and True 3D Displays, Applications of Stereoscopic Displays, Stereoscopic Cinema and TV, Stereoscopic Content Production, Stereoscopic Human Factors and Design

Monday January 14, 2019

30th SD&A Special Session

Session Chair: Takashi Kawai, Waseda University (Japan)
8:50 – 10:20 AM
Grand Peninsula Ballroom BC

8:50 SD&A-625
3D image processing - From capture to display (Invited), Toshiaki Fujii, Nagoya University (Japan)

9:10 SD&A-626
3D TV based on spatial imaging (Invited), Masahiro Kawakita, Hisayuki Sasaki, Naoto Okaichi, Masanori Kano, Hayato Watanabe, Takuya Oomura, and Tomoyuki Mishina, NHK Science and Technology Research Laboratories (Japan)

9:30 SD&A-627
Stereoscopic capture and viewing parameters: Geometry and perception (Invited), Robert Allison and Laurie Wilcox, York University (Canada)

9:50
30 Years of SD&A - Milestones and statistics, Andrew Woods, Curtin University (Australia)

10:10
Conference Opening Remarks



10:20 – 10:50 AM Coffee Break

Autostereoscopic Displays I

Session Chair: Gregg Favalora, Draper (United States)
10:50 AM – 12:30 PM
Grand Peninsula Ballroom BC

10:50 SD&A-628
A Full-HD super-multiview display with a deep viewing zone, Hideki Kakeya and Yuta Watanabe, University of Tsukuba (Japan)

11:10 SD&A-629
A 360-degrees holographic true 3D display unit using a Fresnel phase plate, Levent Onural, Bilkent University (Turkey)

11:30 SD&A-630
Electro-holographic light field projector modules: progress in SAW AOMs, illumination, and packaging, Gregg Favalora, Michael Moebius, Valerie Bloomfield, John LeBlanc, and Sean O'Connor, Draper (United States)

11:50 SD&A-631
Thin form-factor super multiview head-up display system, Ugur Akpinar, Erdem Sahin, Olli Suominen, and Atanas Gotchev, Tampere University of Technology (Finland)

12:10 SD&A-632
Dynamic multi-view autostereoscopy, Yuzhong Jiao, Man Chi Chan, and Mark P. C. Mok, ASTRI (Hong Kong)



12:30 – 2:00 PM Lunch

Monday Plenary

2:00 – 3:00 PM
Grand Peninsula Ballroom D

Autonomous Driving Technology and the OrCam MyEye, Amnon Shashua, President & CEO, Mobileye, an Intel Company, and Senior Vice President of Intel Corporation (United States)

The field of transportation is undergoing a seismic change with the coming introduction of autonomous driving. The technologies required to enable computer-driven cars involve the latest cutting-edge artificial intelligence algorithms along three major thrusts: sensing, planning, and mapping. Prof. Shashua will describe the challenges and the kinds of computer vision and machine learning algorithms involved, through the perspective of Mobileye's activity in this domain. He will then describe how OrCam leverages computer vision, situational awareness, and language processing to enable blind and visually impaired people to interact with the world through a miniature wearable device.

Prof. Amnon Shashua holds the Sachs chair in computer science at the Hebrew University of Jerusalem. His field of expertise is computer vision and machine learning. For his academic achievements he received the Marr Prize Honorable Mention in 2001, the Kaye Innovation Award in 2004, and the Landau Award in exact sciences in 2005.

In 1999 Prof. Shashua co-founded Mobileye, an Israeli company developing a system-on-chip and computer vision algorithms for a driving assistance system, providing a full range of active safety features using a single camera. Today, approximately 24 million cars rely on Mobileye technology to make their vehicles safer to drive. In August 2014, Mobileye claimed the title of largest Israeli IPO ever, raising $1B at a market cap of $5.3B. In addition, Mobileye is developing autonomous driving technology with more than a dozen car manufacturers. The introduction of autonomous driving capabilities is of a transformative nature and has the potential to change the way cars are built, driven, and owned in the future. In August 2017, Mobileye became an Intel company in the largest Israeli acquisition deal ever, at $15.3B. Today, Prof. Shashua is the President and CEO of Mobileye and a Senior Vice President of Intel Corporation, leading Intel's Autonomous Driving Group.

In 2010 Prof. Shashua co-founded OrCam which harnesses computer vision and artificial intelligence to assist people who are visually impaired or blind. The OrCam MyEye device is unique in its ability to provide visual aid to hundreds of millions of people, through a discreet wearable platform. Within its wide-ranging scope of capabilities, OrCam's device can read most texts (both indoors and outdoors) and learn to recognize thousands of new items and faces.


3:00 – 3:30 PM Coffee Break

Autostereoscopic Displays II

Session Chair: John Merritt, The Merritt Group (United States)
3:30 – 3:50 PM
Grand Peninsula Ballroom BC

SD&A-633
Spirolactam rhodamines for multiple color volumetric 3D digital light photoactivatable dye displays, Maha Aljowni, Uroob Haris, Bo Li, Cecilia O'Brien, and Alexander Lippert, Southern Methodist University (United States)



SD&A Keynote 1

Session Chair: Andrew Woods, Curtin University (Australia)
3:50 – 4:50 PM
Grand Peninsula Ballroom BC

SD&A-658
KEYNOTE: From set to theater: Reporting on the 3D cinema business and technology roadmaps, Tony Davis, RealD Inc. (United States)

Tony Davis is the VP of Technology at RealD, where he works with an outstanding team to perfect the cinema experience from set to screen. He holds a Master's in Electrical Engineering from Texas Tech University, specializing in advanced signal acquisition and processing. After several years working as a Technical Staff Member for Los Alamos National Laboratory, Mr. Davis was Director of Engineering for a highly successful line of medical and industrial X-ray computed tomography systems at 3M. Later, he was the founder of Tessive, a company dedicated to improvement of temporal representation in motion picture cameras.




5:00 – 6:00 PM All-Conference Welcome Reception

SD&A Conference 3D Theatre

Session Chairs: John Stern, Intuitive Surgical, Inc. (United States) and Andrew Woods, Curtin University (Australia)
6:00 – 7:30 PM
Grand Peninsula Ballroom BC

This ever-popular session of each year's Stereoscopic Displays and Applications Conference showcases the wide variety of 3D content that is being produced and exhibited around the world. All 3D footage screened in the 3D Theater Session is shown in high-quality polarized 3D on a large screen. The final program will be announced at the conference and 3D glasses will be provided.




SD&A Conference Annual Dinner

7:50 – 10:00 PM
Offsite - details provided with registration

The annual informal dinner for SD&A attendees. An opportunity to meet with colleagues and discuss the latest advances. There is no host for the dinner. Information on venue and cost will be provided on the day at the conference.



Tuesday January 15, 2019

7:30 – 8:45 AM Women in Electronic Imaging Breakfast

Light Field Imaging and Displays

Session Chair: Hideki Kakeya, University of Tsukuba (Japan)
8:50 – 10:10 AM
Grand Peninsula Ballroom BC

8:50
SD&A 3D Theater winners announcement, John Stern, SD&A committee

9:10 SD&A-635
Understanding ability of 3D integral displays to provide accurate out-of-focus retinal blur with experiments and diffraction simulations, Ginni Grover, Oscar Nestares, and Ronald Azuma, Intel Corporation (United States)

9:30 SD&A-636
EPIModules on a geodesic: Toward 360-degree light-field imaging, Harlyn Baker¹, Gregorij Kurillo¹, Allan Miller¹, Alessandro Temil¹, Tom DeFanti², and Dan Sandin³; ¹EPIImaging, LLC, ²University of California, San Diego, and ³University of Illinois (United States)

9:50 SD&A-637
A photographing method of Integral Photography with high angle reproducibility of light rays, Shotaro Mori, Yue Bao, and Norigi Oishi, Tokyo City University Graduate School (Japan)



10:00 AM – 7:30 PM Industry Exhibition

10:10 – 10:50 AM Coffee Break

Stereoscopic Vision Testing

Session Chair: John Stern, Intuitive Surgical, Inc. (United States)
10:50 – 11:30 AM
Grand Peninsula Ballroom BC

10:50 SD&A-638
Operational based vision assessment: Stereo acuity testing research and development, Marc Winterbottom¹, Eleanor O'Keefe², Maria Gavrilescu³, Mackenzie Glaholt⁴, Asao Kobayashi⁵, Yukiko Tsujimoto⁵, Amanda Douglass⁶, Elizabeth Shoda², Peter Gibbs³, Charles Lloyd⁷, James Gaska¹, and Steven Hadley¹; ¹U.S. Air Force School of Aerospace Medicine, ²KBRwyle, ³Defence Science & Technology (Australia), ⁴Defence Research and Development Canada (Canada), ⁵Aeromedical Laboratory, Japan Air Self Defense Force (Japan), ⁶Deakin University (Australia), and ⁷Visual Performance, LLC (United States)

11:10 SD&A-639
Operational based vision assessment: Evaluating the effect of stereoscopic display crosstalk on simulated remote vision system depth discrimination, Eleanor O'Keefe¹, Charles Lloyd², Tommy Bullock³, Alexander Van Atta¹, and Marc Winterbottom³; ¹KBRwyle, ²Visual Performance, and ³U.S. Air Force School of Aerospace Medicine (United States)



SD&A Keynote 2

Session Chair: Nicolas Holliman, Newcastle University (United Kingdom)
11:30 AM – 12:30 PM
Grand Peninsula Ballroom BC

SD&A-640
KEYNOTE: What good is imperfect 3D?, Miriam Ross, Victoria University of Wellington (New Zealand)

Dr. Miriam Ross is Senior Lecturer in the Film Programme at Victoria University of Wellington. She works with new technologies to combine creative methodologies and traditional academic analysis. She is the author of South American Cinematic Culture: Policy, Production, Distribution and Exhibition (2010) and 3D Cinema: Optical Illusions and Tactile Experiences (2015) as well as publications and creative works relating to film industries, mobile media, virtual reality, stereoscopic media, and film festivals.




12:30 – 2:00 PM Lunch

Tuesday Plenary

2:00 – 3:00 PM
Grand Peninsula Ballroom D

The Quest for Vision Comfort: Head-Mounted Light Field Displays for Virtual and Augmented Reality, Hong Hua, Professor of Optical Sciences, University of Arizona (United States)

Hong Hua will discuss the high promise of, and the tremendous recent progress toward, head-mounted displays (HMDs) for both virtual and augmented reality. Developing HMDs that offer uncompromised optical pathways to both digital and physical worlds, without encumbrance or discomfort, confronts many grand challenges from both technological and human-factors perspectives. She will particularly focus on the recent progress, challenges, and opportunities in developing head-mounted light field displays (LF-HMDs), which are capable of rendering true 3D synthetic scenes with proper focus cues to stimulate natural eye accommodation responses and address the well-known vergence-accommodation conflict in conventional stereoscopic displays.

Dr. Hong Hua is a Professor of Optical Sciences at the University of Arizona. With over 25 years of experience, Dr. Hua is widely recognized throughout academia and industry as an expert in wearable display technologies and in optical imaging and engineering in general. Dr. Hua’s current research focuses on optical technologies enabling advanced 3D displays, especially head-mounted display technologies for virtual reality and augmented reality applications, and microscopic and endoscopic imaging systems for medicine. Dr. Hua has published over 200 technical papers, filed a total of 23 patent applications in her specialty fields, and delivered numerous keynote addresses and invited talks at major conferences and events worldwide. She is an SPIE Fellow and OSA senior member. She was a recipient of the NSF CAREER Award in 2006 and was honored as UA Researchers @ Lead Edge in 2010. Dr. Hua and her students have shared a total of 8 “Best Paper” awards at various IEEE, SPIE, and SID conferences. Dr. Hua received her Ph.D. degree in Optical Engineering from the Beijing Institute of Technology in China in 1999. Prior to joining the UA faculty in 2003, Dr. Hua was an Assistant Professor with the University of Hawaii at Manoa, was a Beckman Research Fellow at the Beckman Institute of the University of Illinois at Urbana-Champaign between 1999 and 2002, and was a post-doc at the University of Central Florida in 1999.


3:00 – 3:30 PM Coffee Break

Visualization Facilities

Session Chairs: Margaret Dolinsky, Indiana University (United States) and Björn Sommer, University of Konstanz (Germany)
3:30 – 5:30 PM
Grand Peninsula Ballroom BC

This session is jointly sponsored by The Engineering Reality of Virtual Reality 2019 and Stereoscopic Displays and Applications XXX.


3:30 SD&A-641
Tiled stereoscopic 3D display wall – Concept, applications and evaluation, Björn Sommer, Alexandra Diehl, Karsten Klein, Philipp Meschenmoser, David Weber, Michael Aichem, Daniel Keim, and Falk Schreiber, University of Konstanz (Germany)

3:50 SD&A-642
The quality of stereo disparity in the polar regions of a stereo panorama, Daniel Sandin¹,², Haoyu Wang³, Alexander Guo¹, Ahmad Atra¹, Dick Ainsworth⁴, Maxine Brown³, and Tom DeFanti²; ¹Electronic Visualization Lab (EVL), University of Illinois at Chicago, ²California Institute for Telecommunications and Information Technology (Calit2), University of California San Diego, ³The University of Illinois at Chicago, and ⁴Ainsworth & Partners, Inc. (United States)

4:10 SD&A-644
Opening a 3-D museum - A case study of 3-D SPACE, Eric Kurland, 3-D SPACE (United States)

4:30 SD&A-645
State of the art of multi-user virtual reality display systems, Juan Munoz Arango, Dirk Reiners, and Carolina Cruz-Neira, University of Arkansas at Little Rock (United States)

4:50 SD&A-646
StarCAM - A 16K stereo panoramic video camera with a novel parallel interleaved arrangement of sensors, Dominique Meyer¹, Daniel Sandin², Christopher Mc Farland¹, Eric Lo¹, Gregory Dawe¹, Haoyu Wang², Ji Dai¹, Maxine Brown², Truong Nguyen¹, Harlyn Baker³, Falko Kuester¹, and Tom DeFanti¹; ¹University of California, San Diego, ²The University of Illinois at Chicago, and ³EPIImaging, LLC (United States)

5:10 SD&A-660
Development of a camera based projection mapping system for non-flat surfaces, Daniel Adams, Steven Tri Tai Pham, Kale Watts, Subhash Ramakrishnan, Emily Ackland, Ham Tran Ly, Joshua Hollick, and Andrew Woods, Curtin University (Australia)



5:30 – 7:30 PM Symposium Demonstration Session

Wednesday January 16, 2019

360, 3D, and VR

Session Chairs: Neil Dodgson, Victoria University of Wellington (New Zealand) and Ian McDowall, Intuitive Surgical / Fakespace Labs (United States)
8:50 – 10:10 AM
Grand Peninsula Ballroom BC

This session is jointly sponsored by The Engineering Reality of Virtual Reality 2019 and Stereoscopic Displays and Applications XXX.


8:50 SD&A-647
Enhanced head-mounted eye tracking data analysis using super-resolution, Qianwen Wan¹, Aleksandra Kaszowska¹, Karen Panetta¹, Holly Taylor¹, and Sos Agaian²; ¹Tufts University and ²CUNY/The College of Staten Island (United States)

9:10 SD&A-648
Effects of binocular parallax in 360-degree VR images on viewing behavior, Yoshihiro Banchi, Keisuke Yoshikawa, and Takashi Kawai, Waseda University (Japan)

9:30 SD&A-659
Subjective comparison of monocular and stereoscopic vision in teleoperation of a robot arm manipulator, Yuta Miyanishi, Erdem Sahin, Jani Makinen, Ugur Akpinar, Olli Suominen, and Atanas Gotchev, Tampere University (Finland)

9:50 SD&A-650
Time course of sickness symptoms with HMD viewing of 360-degree videos (JIST-first), Jukka Häkkinen¹, Fumiya Ohta², and Takashi Kawai²; ¹University of Helsinki (Finland) and ²Waseda University (Japan)



10:00 AM – 3:30 PM Industry Exhibition

10:10 – 10:50 AM Coffee Break

Autostereoscopic Displays III

Session Chair: Chris Ward, Lightspeed Design, Inc. (United States)
10:50 – 11:30 AM
Grand Peninsula Ballroom BC

10:50 SD&A-655
A study on 3D projector with four parallaxes, Shohei Yamaguchi and Yue Bao, Tokyo City University (Japan)

11:10 SD&A-652
The looking glass: A new type of superstereoscopic display, Shawn Frayne, Looking Glass Factory, Inc. (United States)



SD&A Keynote 3

Session Chair: Andrew Woods, Curtin University (Australia)
11:30 AM – 12:40 PM
Grand Peninsula Ballroom BC

This session is jointly sponsored by The Engineering Reality of Virtual Reality 2019 and Stereoscopic Displays and Applications XXX.


SD&A-653
KEYNOTE: Beads of reality drip from pinpricks in space, Mark Bolas, Microsoft Corporation (United States)

Mark Bolas loves perceiving and creating synthesized experiences: to feel, hear, and touch experiences impossible in reality, yet grounded as designs that bring pleasure, meaning, and a state of flow. His work with Ian McDowall, Eric Lorimer, and David Eggleston at Fakespace Labs; Scott Fisher and Perry Hoberman at USC's School of Cinematic Arts; the team at USC's Institute for Creative Technologies; Niko Bolas at SonicBox; and Frank Wyatt, Dick Moore, and Marc Dolson at UCSD informed results that led to his receipt of both the IEEE Virtual Reality Technical Achievement and Career Awards. See more at https://en.wikipedia.org/wiki/Mark_Bolas


Conference Closing Remarks


12:40 – 2:00 PM Lunch

Wednesday Plenary

2:00 – 3:00 PM
Grand Peninsula Ballroom D

Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, Paul Debevec, Senior Scientist, Google (United States)

Paul Debevec will discuss the technology and production processes behind "Welcome to Light Fields", the first downloadable virtual reality experience based on light field capture techniques, which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR, enabling full 6DOF head movement. The light field technique differs from conventional approaches such as 3D modelling and photogrammetry; Debevec will discuss the theory and application of the technique. He will also discuss the Light Stage computational illumination and facial scanning systems, which use geodesic spheres of inward-pointing LED lights. These have been used to create digital actor effects in movies such as Avatar, Benjamin Button, and Gravity, and have recently been used to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities to enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. The team has also recently used its full-body light stage in conjunction with natural language processing and automultiscopic video projection to record and project interactive conversations with survivors of the World War II Holocaust.

Paul Debevec is a Senior Scientist at Google VR, a member of GoogleVR's Daydream team, and Adjunct Research Professor of Computer Science in the Viterbi School of Engineering at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies. Debevec's computer graphics research has been recognized with ACM SIGGRAPH's first Significant New Researcher Award in 2001 for "Creative and Innovative Work in the Field of Image-Based Modeling and Rendering", a Scientific and Engineering Academy Award in 2010 for "the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures" with Tim Hawkins, John Monos, and Mark Sagar, and the SMPTE Progress Medal in 2017 in recognition of his achievements and ongoing work in pioneering techniques for illuminating computer-generated objects based on measurement of real-world illumination and their effective commercial application in numerous Hollywood films. In 2014, he was profiled in The New Yorker magazine's "Pixel Perfect: The Scientist Behind the Digital Cloning of Actors" article by Margaret Talbot.


3:00 – 3:30 PM Coffee Break

Light Field Imaging and Display

Session Chair: Gordon Wetzstein, Stanford University (United States)
3:30 – 5:30 PM
Grand Peninsula Ballroom D

This session is jointly sponsored by the EI Steering Committee.


3:30 EISS-706
Light fields - From shape recovery to sparse reconstruction (Invited), Ravi Ramamoorthi, University of California, San Diego (United States)

Prof. Ravi Ramamoorthi is the Ronald L. Graham Professor of Computer Science, and Director of the Center for Visual Computing, at the University of California, San Diego. Ramamoorthi received his PhD in computer science in 2002 from Stanford University. Prior to joining UC San Diego, Ramamoorthi was associate professor of EECS at the University of California, Berkeley, where he developed the complete graphics curricula. His research centers on the theoretical foundations, mathematical representations, and computational algorithms for understanding and rendering the visual appearance of objects, exploring topics in frequency analysis and sparse sampling and reconstruction of visual appearance datasets; a digital data-driven visual appearance pipeline; light-field cameras and 3D photography; and physics-based computer vision. Ramamoorthi is an ACM Fellow for contributions to computer graphics rendering and physics-based computer vision (awarded December 2017), and an IEEE Fellow for contributions to foundations of computer graphics and computer vision (awarded January 2017).

4:10 EISS-707
The beauty of light fields (Invited), David Fattal, LEIA Inc. (United States)

Dr. David Fattal is co-founder and CEO at LEIA Inc., where he is in charge of bringing their mobile holographic display technology to market. Fattal received his PhD in physics from Stanford University in 2005. Prior to founding LEIA Inc., Fattal was a research scientist with HP Labs, HP Inc. At LEIA Inc., the focus is on immersive mobile, with screens that come alive in richer, deeper, more beautiful ways. Flipping seamlessly between 2D and light fields, mobile experiences become truly immersive: no glasses, no tracking, no fuss. Alongside the new display technology, LEIA Inc. is developing Leia Loft™ — a whole new canvas.

4:30 EISS-708
Light field insights from my time at Lytro (Invited), Kurt Akeley, Google Inc. (United States)

Dr. Kurt Akeley is a Distinguished Engineer at Google Inc. Akeley received his PhD in stereoscopic display technology from Stanford University in 2004, where he implemented and evaluated a stereoscopic display that passively (e.g., without eye tracking) produces nearly correct focus cues. After Stanford, Dr. Akeley worked with OpenGL at NVIDIA Incorporated, was a principal researcher at Microsoft Corporation, and a consulting professor at Stanford University. In 2010, he joined Lytro Inc. as CTO. During his seven-year tenure as Lytro's CTO, he guided and directly contributed to the development of two consumer light-field cameras and their related display systems, and also to a cinematic capture and processing service that supported immersive, six-degree-of-freedom virtual reality playback.

4:50 EISS-709
Quest for immersion (Invited), Kari Pulli, Stealth Startup (United States)

Dr. Kari Pulli has spent two decades in computer imaging and AR at companies such as Intel, NVIDIA, and Nokia. Before joining a stealth startup, he was the CTO of Meta, an augmented reality company in San Mateo, heading up computer vision, software, displays, and hardware, as well as the overall architecture of the system. Before joining Meta, he worked as the CTO of the Imaging and Camera Technologies Group at Intel, influencing the architecture of future IPUs in hardware and software. Prior, he was vice president of computational imaging at Light, where he developed algorithms for combining images from a heterogeneous camera array into a single high-quality image. He previously led research teams as a senior director at NVIDIA Research and as a Nokia Fellow at Nokia Research, where he focused on computational photography, computer vision, and AR. Kari holds computer science degrees from the University of Minnesota (BSc), University of Oulu (MSc, Lic. Tech), and University of Washington (PhD), as well as an MBA from the University of Oulu. He has taught and worked as a researcher at Stanford, University of Oulu, and MIT.

5:10 EISS-710
Industrial scale light field printing (Invited), Matthew Hirsch, Lumii Inc. (United States)

Dr. Matthew Hirsch is a co-founder and Chief Technical Officer of Lumii. He worked with Henry Holtzman's Information Ecology Group and Ramesh Raskar's Camera Culture Group at the MIT Media Lab, making the next generation of interactive and glasses-free 3D displays. Matthew received his bachelor's degree from Tufts University in Computer Engineering, and his master's and doctorate from the MIT Media Lab. Between degrees, he worked at Analogic Corp. as an Imaging Engineer, where he advanced algorithms for image reconstruction and understanding in volumetric x-ray scanners. His work has been funded by the NSF and the Media Lab consortia, and has appeared in SIGGRAPH, CHI, and ICCP. Matthew has also taught courses at SIGGRAPH on a range of subjects in computational imaging and display, with a focus on DIY.


Stereoscopic Displays and Applications XXX Interactive Posters Session

5:30 – 7:00 PM
The Grove

The following works will be presented at the EI 2019 Symposium Interactive Papers Session.


SD&A-654
A comprehensive head-mounted eye tracking review: Software solutions, applications, and challenges, Qianwen Wan¹, Aleksandra Kaszowska¹, Karen Panetta¹, Holly Taylor¹, and Sos Agaian²; ¹Tufts University and ²CUNY/The College of Staten Island (United States)

SD&A-656
Saliency map based multi-view rendering for autostereoscopic displays, Yuzhong Jiao, Man Chi Chan, and Mark P. C. Mok, ASTRI (Hong Kong)

SD&A-657
Semi-automatic post-processing of multi-view 2D-plus-depth video, Braulio Sespede¹, Florian Seitner², and Margrit Gelautz¹; ¹TU Wien and ²Emotion3D (Austria)




Important Dates
Call for Papers Announced 1 Mar 2018
Journal-first Submissions Due 30 Jun 2018
Abstract Submission Site Opens 1 May 2018
Review Abstracts Due (refer to For Authors page):
· Early Decision Ends 30 Jun 2018
· Regular Submission Ends 8 Sept 2018
· Extended Submission Ends 25 Sept 2018
Final Manuscript Deadlines:
· Fast Track Manuscripts Due 14 Nov 2018
· Final Manuscripts Due 1 Feb 2019
Registration Opens 23 Oct 2018
Early Registration Ends 18 Dec 2018
Hotel Reservation Deadline 3 Jan 2019
Conference Begins 13 Jan 2019


 
2019 SD&A Presentation Videos
View 2019 Proceedings
2018 SD&A Presentation Videos
View 2018 Proceedings
2017 SD&A Presentation Videos
View 2017 Proceedings
2016 SD&A Presentation Videos
View 2016 Proceedings

Conference Chairs
Gregg Favalora, Draper (United States); Nicolas Holliman, Newcastle University (United Kingdom); Takashi Kawai, Waseda University (Japan); Andrew Woods, Curtin University (Australia)

Program Committee
Neil Dodgson, Victoria University of Wellington (New Zealand); Davide Gadia, Università degli Studi di Milano (Italy); Hideki Kakeya, University of Tsukuba (Japan); Stephan Keith, SRK Graphics Research (United States); Michael Klug, Magic Leap, Inc. (United States); Björn Sommer, University of Konstanz (Germany); John Stern, Intuitive Surgical, Inc. (retired) (United States); Chris Ward, Lightspeed Design, Inc. (United States)

Founding Chair
John O. Merritt, The Merritt Group (United States)