Monday 17 January 2022
IS&T Welcome & PLENARY: Quanta Image Sensors: Counting Photons Is the New Game in Town
07:00 – 08:10
The Quanta Image Sensor (QIS) was conceived as a different kind of image sensor—one that counts photoelectrons one at a time using millions or billions of specialized pixels read out at high frame rate, with computational imaging used to create gray-scale images. QIS devices have been implemented in a CMOS image sensor (CIS) baseline room-temperature technology without using avalanche multiplication, and also with SPAD arrays. This plenary details the QIS concept, how it has been implemented in CIS and in SPADs, and what the major differences are. Applications that can be disrupted or enabled by this technology are also discussed, including smartphones, where CIS-QIS technology could be employed within just a few years.
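The photon-counting principle behind the QIS is easy to illustrate numerically. The sketch below is a toy model, not any specific device: each binary "jot" reports whether at least one photoelectron arrived during a short frame, and summing many single-bit frames reconstructs a gray-scale value. All parameter values are assumptions.

```python
import numpy as np

def qis_grayscale(expected_photons, n_frames=200, rng=None):
    """Toy QIS model: each binary 'jot' reports 1 if at least one
    photoelectron arrives during a frame; summing many single-bit frames
    yields a gray-scale estimate. The bit sum saturates toward
    n_frames * (1 - exp(-lambda / n_frames)) for exposure mean lambda."""
    rng = rng or np.random.default_rng(1)
    per_frame = expected_photons / n_frames   # mean arrivals per jot per frame
    arrivals = rng.poisson(per_frame, size=(n_frames,) + expected_photons.shape)
    return (arrivals >= 1).sum(axis=0)        # bit sum over all frames

# Hypothetical 2x2 scene: expected photons per jot over the full exposure.
scene = np.array([[2.0, 20.0], [80.0, 200.0]])
gray = qis_grayscale(scene)   # brighter patches yield larger bit sums
```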
Eric R. Fossum, Dartmouth College (United States)
Eric R. Fossum is best known for the invention of the CMOS image sensor “camera-on-a-chip” used in billions of cameras. He is a solid-state image sensor device physicist and engineer, and his career has included academic and government research, and entrepreneurial leadership. At Dartmouth he is a professor of engineering and vice provost for entrepreneurship and technology transfer. Fossum received the 2017 Queen Elizabeth Prize from HRH Prince Charles, considered by many as the Nobel Prize of Engineering “for the creation of digital imaging sensors,” along with three others. He was inducted into the National Inventors Hall of Fame, and elected to the National Academy of Engineering among other honors including a recent Emmy Award. He has published more than 300 technical papers and holds more than 175 US patents. He co-founded several startups and co-founded the International Image Sensor Society (IISS), serving as its first president. He is a Fellow of IEEE and OSA.
08:10 – 08:40 EI 2022 Welcome Reception
Wednesday 19 January 2022
IS&T Awards & PLENARY: In situ Mobility for Planetary Exploration: Progress and Challenges
07:00 – 08:15
This year saw exciting milestones in planetary exploration with the successful landing of the Perseverance Mars rover, followed by its operation and the successful technology demonstration of the Ingenuity helicopter, the first heavier-than-air aircraft ever to fly on another planetary body. This plenary highlights new technologies used in this mission, including precision landing for Perseverance, a vision coprocessor, new algorithms for faster rover traverse, and the ingredients of the helicopter. It concludes with a survey of challenges for future planetary mobility systems, particularly for Mars, Earth’s moon, and Saturn’s moon, Titan.
Larry Matthies, Jet Propulsion Laboratory (United States)
Larry Matthies received his PhD in computer science from Carnegie Mellon University (1989) before joining JPL, where he has supervised the Computer Vision Group for 21 years, spending the past two coordinating internal technology investments in the Mars office. His research interests include 3-D perception, state estimation, terrain classification, and dynamic scene analysis for autonomous navigation of unmanned vehicles on Earth and in space. He has been a principal investigator in many programs involving robot vision and has initiated new technology developments that impacted every US Mars surface mission since 1997, including visual navigation algorithms for rovers, map matching algorithms for precision landers, and autonomous navigation hardware and software architectures for rotorcraft. He is a Fellow of the IEEE and was a joint winner in 2008 of the IEEE’s Robotics and Automation Award for his contributions to robotic space exploration.
Mobile Devices and Multimedia: Enabling Technologies, Algorithms, and Applications 2022 Posters
08:20 – 09:20
EI Symposium
Poster interactive session for all conference authors and attendees.
MOBMU-205
P-20: Chatbot integrated with machine learning deployed in the cloud and performance evaluation, Ganesh Reddy Gunnam, Rahul Mundlamuri, Devasena Inupakutika, Sahak Kaghyan, and David Akopian, The University of Texas at San Antonio (United States) [view abstract]
Recently, human-machine digital assistants have gained popularity and are commonly used in question-and-answer applications and similar consumer-support domains. A class of more sophisticated digital assistants employing longer dialogs follows this trend, and several commercial platforms support their prototyping, such as Google Dialogflow, ManyChat, Chatfuel, and Amazon Lex. This paper explores cloud deployment of chatbot systems and their performance-assessment methodologies. The performance measures include system response delays and natural language processing capabilities. A case study platform supporting so-called deep-logic chatbots with long cycling capabilities is implemented and used for the assessment. To enable human-like conversations with a chatbot, large training datasets and complex natural language understanding models are required, and these need to be adjusted and trained continuously. We explore implementation formats supporting auto-scaling and uninterrupted availability. In particular, we employ an architecture consisting of separate dialog management, authentication, and Natural Language Understanding (NLU) services. Finally, we present a performance evaluation of such a loosely coupled chatbot system. Keywords: Cloud Deployment, Natural language understanding, Chatbot, Performance assessment
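Response-delay assessment of the kind described can be sketched with a small timing harness. The code below is illustrative only: `stub_chatbot` and the statistic names are hypothetical stand-ins for a real cloud endpoint, where the request would traverse the dialog-management, authentication, and NLU services.

```python
import time
import statistics

def measure_latency(send_message, messages):
    """Time each round trip to a chatbot endpoint and summarize the delays."""
    delays = []
    for msg in messages:
        start = time.perf_counter()
        send_message(msg)                       # blocking request/response
        delays.append(time.perf_counter() - start)
    delays.sort()
    return {
        "mean_s": statistics.mean(delays),
        "p95_s": delays[min(len(delays) - 1, int(0.95 * len(delays)))],
    }

def stub_chatbot(message):
    """Stand-in for a deployed chatbot service."""
    time.sleep(0.001)                           # simulate NLU + dialog processing
    return "ok"

stats = measure_latency(stub_chatbot, ["hello"] * 20)
```

In a real assessment, `send_message` would issue an HTTP request to the deployed service, so the measured delay includes network and auto-scaling effects as well as NLU processing time.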
MOBMU-206
P-21: Chatbot integration with Google Dialogflow environment for conversational intervention, Rahul Mundlamuri, Devasena Inupakutika, David Akopian, Ganesh Reddy Gunnam, and Sahak Kaghyan, The University of Texas at San Antonio (United States) [view abstract]
Chatbots are computer programs that execute protocols for supporting human-machine conversations and perform various functions such as searching the web, ordering food, making appointments, and many more. To facilitate timely responses and actions, and to enable interactive human-like conversations, chatbots require Natural Language Processing (NLP) to understand users' messages and respond appropriately. NLP is an area of computer science and artificial intelligence concerned with the interactions between computers and human languages. Google Dialogflow is a natural language understanding platform that makes it easy to design and integrate a conversational user interface into a mobile app, web application, device, bot, interactive voice response system, and so on. Dash Messaging is a smart chatbot platform that enables the creation of chatbots based on a provided protocol for long-term conversations. In this paper, we discuss how to integrate the Google Dialogflow NLP service into a case study chatbot launched with our Dash Messaging platform.
MOBMU-207
P-22: Interactive books - Status report, Harvey R. Levenson, Cal Poly (United States) [view abstract]
A study of user responses to printed interactive books using Ricoh’s Clickable Paper application. It describes how Ricoh’s Clickable Paper has taken a major step forward with the adoption of a first book, Introduction to Graphic Communication; a second book, The Sound of Bamboo; and a third book in process on Black history and African American journeys to success. The diversity of these topics shows how Ricoh’s Clickable Paper is applicable across interdisciplinary fields. A survey of user reactions to Clickable Paper is presented, with emphasis on Introduction to Graphic Communication, now in its third year of use by twenty-four schools. The Sound of Bamboo, focusing on music, is in its first year of adoption. Reviews are included. The third book is scheduled for release in February 2022, in time for Black History Month. Content analysis is the research method used to quantify qualitative responses. Results are positive from students, schools, faculty members, and other users. The scanning abilities of Clickable Paper received the greatest number of positive responses, along with the value of videos and how the interactive books served an important need during the pandemic. Recommendations for improvements and remedies are provided.
Tuesday 25 January 2022
IS&T Awards & PLENARY: Physics-based Image Systems Simulation
07:00 – 08:00
Three quarters of a century ago, visionaries in academia and industry saw the need for a new field called photographic engineering and formed what would become the Society for Imaging Science and Technology (IS&T). Thirty-five years ago, IS&T recognized the massive transition from analog to digital imaging and created the Symposium on Electronic Imaging (EI). IS&T and EI continue to evolve by cross-pollinating electronic imaging in the fields of computer graphics, computer vision, machine learning, and visual perception, among others. This talk describes open-source software and applications that build on this vision. The software combines quantitative computer graphics with models of optics and image sensors to generate physically accurate synthetic image data for devices that are being prototyped. These simulations can be a powerful tool in the design and evaluation of novel imaging systems, as well as for the production of synthetic data for machine learning applications.
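The physics-based simulation pipeline the talk describes—quantitative scene data passed through models of optics and sensors—can be illustrated with a toy example. This is not the talk's open-source software, just a minimal sketch with assumed parameter values: scene photons are converted to photoelectrons, shot and read noise are applied, and the signal is quantized to digital numbers.

```python
import numpy as np

def simulate_sensor(photon_mean, qe=0.6, read_noise_e=2.0,
                    gain_dn_per_e=0.5, bits=10, rng=None):
    """Toy physics-based sensor model: Poisson shot noise on detected
    photoelectrons, Gaussian read noise, linear conversion gain, and
    quantization to digital numbers (DN). All parameters are illustrative."""
    rng = rng or np.random.default_rng(0)
    electrons = rng.poisson(qe * photon_mean).astype(float)      # shot noise
    electrons += rng.normal(0.0, read_noise_e, electrons.shape)  # read noise
    return np.clip(np.round(electrons * gain_dn_per_e), 0, 2**bits - 1)

flat_scene = np.full((64, 64), 500.0)   # mean photons per pixel
image = simulate_sensor(flat_scene)     # mean DN is about 500 * 0.6 * 0.5 = 150
```

Because every stage is a physical model with explicit parameters, the same sketch shows why such simulations are useful for prototyping: a design change (quantum efficiency, read noise, bit depth) becomes a one-argument change, and the synthetic images can feed machine learning pipelines.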
Joyce Farrell, Stanford Center for Image Systems Engineering, Stanford University, CEO and Co-founder, ImagEval Consulting (United States)
Joyce Farrell is a senior research associate and lecturer in the Stanford School of Engineering and the executive director of the Stanford Center for Image Systems Engineering (SCIEN). Joyce received her BS from the University of California at San Diego and her PhD from Stanford University. She was a postdoctoral fellow at NASA Ames Research Center, New York University, and Xerox PARC, before joining the research staff at Hewlett Packard in 1985. In 2000 Joyce joined Shutterfly, a startup company specializing in online digital photofinishing, and in 2001 she formed ImagEval Consulting, LLC, a company specializing in the development of software and design tools for image systems simulation. In 2003, Joyce returned to Stanford University to develop the SCIEN Industry Affiliates Program.
PANEL: The Brave New World of Virtual Reality
08:00 – 09:00
Advances in electronic imaging, computer graphics, and machine learning have made it possible to create photorealistic images and videos. In the future, one can imagine a virtual reality that is indistinguishable from real-world experiences. The goal of this panel is to showcase state-of-the-art synthetic imagery, learn how this progress benefits society, and discuss how we can mitigate the risks that the technology also poses. After brief demos of the state of the art, the panelists will discuss creating photorealistic avatars, Project Shoah, and digital forensics.
Panel Moderator: Joyce Farrell, Stanford Center for Image Systems Engineering, Stanford University, CEO and Co-founder, ImagEval Consulting (United States)
Panelist: Matthias Nießner, Technical University of Munich (Germany)
Panelist: Paul Debevec, Netflix, Inc. (United States)
Panelist: Hany Farid, University of California, Berkeley (United States)
Cybersecurity and Forensics I
Session Chairs:
David Akopian, The University of Texas at San Antonio (United States) and Reiner Creutzburg, Technische Hochschule Brandenburg (Germany)
09:15 – 10:20
Blue Room
09:15
Conference Introduction
09:20 MOBMU-350
Evaluation and test of various tools for OSINT-based Instagram investigation, Deepak Jamwal1, Klaus Schwarz1, and Reiner Creutzburg1,2; 1SRH Berlin University of Applied Sciences and 2Technische Hochschule Brandenburg (Germany) [view abstract]
This article aims to understand and evaluate the different online tools available that can help carry out an Instagram-based open-source investigation. As part of this study, the various tools were evaluated based on their features, and comparison analyses and use cases were conducted. The objective was to provide an in-depth understanding of all the openly available tools and to assess which are among the best and can help in an OSINT-based investigation on Instagram. The analysis focuses on the different aspects and features of the tools, examining a total of eighty-three tools, from which a list of the top ten was compiled using a qualitative research methodology and the features those tools offer. Furthermore, various well-established Instagram accounts and hashtags were analyzed in this research to assess the output generated by the tools. Finally, a use-case-based investigation was carried out to find an effective marketing tool for Instagram and, at the same time, the best tool for carrying out a social media-based investigation whenever an incident occurs. Based on the findings and information gathered, this paper suggests a set of good Instagram-based OSINT tools that can analyze content on Instagram pages effectively and analytically, with features such as sentiment analysis, location-based analysis, and audience analysis. These provide excellent insights to the user, be it a company or an investigator, and help them make critical strategic decisions.
09:40 MOBMU-351
Evaluation and test of various tools for OSINT-based Twitter investigation, Arun Khajuria1, Klaus Schwarz1, and Reiner Creutzburg1,2; 1SRH Berlin University of Applied Sciences and 2Technische Hochschule Brandenburg (Germany) [view abstract]
This article aims to understand and evaluate the different online tools available that can help carry out a Twitter-based open-source investigation. As part of this study, the various tools were evaluated based on their features, and comparison analyses and use cases were conducted. The objective was to provide an in-depth understanding of all the openly available tools and to assess which are among the best and can help in an OSINT-based investigation on Twitter. The analysis focuses on the different aspects and features of the tools, examining a total of eighty-three tools, from which a list of the top ten was compiled using a qualitative research methodology and the features those tools offer. Furthermore, various well-established Twitter accounts and hashtags were analyzed in this research to assess the output generated by the tools. Finally, a use-case-based investigation was carried out to find an effective marketing tool for Twitter and, at the same time, the best tool for carrying out a social media-based investigation whenever an incident occurs. Based on the findings and information gathered, this paper suggests a set of good Twitter-based OSINT tools that can analyze content on Twitter pages effectively and analytically, with features such as sentiment analysis, location-based analysis, and audience analysis. These provide excellent insights to the user, be it a company or an investigator, and help them make critical strategic decisions.
10:00 MOBMU-352
Evaluation and test of various tools for OSINT-based Facebook investigation, Chinmay Bhosale1, Klaus Schwarz1, and Reiner Creutzburg1,2; 1SRH Berlin University of Applied Sciences and 2Technische Hochschule Brandenburg (Germany) [view abstract]
This article aims to understand and evaluate the different online tools available that can help carry out a Facebook-based open-source investigation. As part of this study, the various tools were evaluated based on their features, and comparison analyses and use cases were conducted. The objective was to provide an in-depth understanding of all the openly available tools and to assess which are among the best and can help in an OSINT-based investigation on Facebook. The analysis focuses on the different aspects and features of the tools, examining a total of eighty-three tools, from which a list of the top ten was compiled using a qualitative research methodology and the features those tools offer. Furthermore, various well-established Facebook accounts were analyzed in this research to assess the output generated by the tools. Finally, a use-case-based investigation was carried out to find an effective marketing tool for Facebook and, at the same time, the best tool for carrying out a social media-based investigation whenever an incident occurs. Based on the findings and information gathered, this paper suggests a set of good Facebook-based OSINT tools that can analyze content on Facebook pages effectively and analytically, with features such as sentiment analysis, location-based analysis, and audience analysis. These provide excellent insights to the user, be it a company or an investigator, and help them make critical strategic decisions.
Cybersecurity and Forensics II
Session Chairs:
David Akopian, The University of Texas at San Antonio (United States) and Reiner Creutzburg, Technische Hochschule Brandenburg (Germany)
10:45 – 11:45
Blue Room
10:45 MOBMU-360
Evaluation and test of various tools for OSINT-based Telegram investigation, Chinonso Ashimole1, Shubham Saroha1, Klaus Schwarz1, and Reiner Creutzburg1,2; 1SRH Berlin University of Applied Sciences and 2Technische Hochschule Brandenburg (Germany) [view abstract]
This article aims to understand and evaluate the different online tools available that can help carry out a Telegram-based open-source investigation. As part of this study, the various tools were evaluated based on their features, and comparison analyses and use cases were conducted. The objective was to provide an in-depth understanding of all the openly available tools and to assess which are among the best and can help in an OSINT-based investigation on Telegram. The analysis focuses on the different aspects and features of the tools, examining a total of eighty-three tools, from which a list of the top ten was compiled using a qualitative research methodology and the features those tools offer. Furthermore, various well-established Telegram accounts were analyzed in this research to assess the output generated by the tools. Finally, a use-case-based investigation was carried out to find an effective marketing tool for Telegram and, at the same time, the best tool for carrying out a social media-based investigation whenever an incident occurs. Based on the findings and information gathered, this paper suggests a set of good Telegram-based OSINT tools that can analyze content on Telegram pages effectively and analytically, with features such as sentiment analysis, location-based analysis, and audience analysis. These provide excellent insights to the user, be it a company or an investigator, and help them make critical strategic decisions.
11:05 MOBMU-361
Improving detection of manipulated passport photos - Training course for border control inspectors to detect morphed facial passport photos - Part II: Training course materials, Franziska Schwarz1, Klaus Schwarz2, and Reiner Creutzburg1,2; 1Technische Hochschule Brandenburg and 2SRH Berlin University of Applied Sciences (Germany) [view abstract]
Morphing is a well-researched topic in computer graphics and image processing. Unlike cross-fading, morphing transforms a source image into a target image using distortions and the adjustment of predefined features (control points). The typical morphing process consists of warping essential image elements (e.g., facial features such as eyes, mouth, and facial contours) in the source and target image with the help of selected control points, so that these areas can be brought into alignment with each other. For results as close to reality as possible, the source and target images must not differ too much. The training course developed for this work is intended to introduce the dangers posed by the malicious manipulation of facial photographs, the so-called morphing attack. An algorithm matches the images of two different people so that the resulting facial image combines the identification features of both. Research has shown that these images are difficult for the human eye to distinguish from real, unaltered photographs. The face morphing attack exploits this weakness in the identification application. If a morphed passport photo remains undiscovered, a genuine identity document with a manipulated photo is issued that may allow two different persons to cross a border without authorization. This targeted training course consists of 10 modules for a one-week course and is designed to help inspectors identify morphed facial images and reduce their acceptance at ID and passport control.
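The distinction the abstract draws between cross-fading and morphing can be made concrete. The sketch below interpolates the control points and blends the pixels; a full face morph would additionally warp each image toward the interpolated landmarks (e.g., with a triangulated piecewise-affine warp), which is omitted here. All arrays are hypothetical.

```python
import numpy as np

def morph_step(src_img, dst_img, src_pts, dst_pts, alpha):
    """One simplified morphing step at blend factor alpha in [0, 1]:
    interpolate the control points, then cross-dissolve the images
    (which a full implementation would first warp toward mid_pts)."""
    mid_pts = (1.0 - alpha) * src_pts + alpha * dst_pts   # landmark interpolation
    mid_img = (1.0 - alpha) * src_img + alpha * dst_img   # pixel blend
    return mid_img, mid_pts

src, dst = np.zeros((4, 4)), np.ones((4, 4))
src_pts = np.array([[1.0, 1.0]])      # e.g., an eye corner in the source
dst_pts = np.array([[3.0, 3.0]])      # the same feature in the target
mid_img, mid_pts = morph_step(src, dst, src_pts, dst_pts, 0.5)
```

The missing warp is precisely what distinguishes a morph from a cross-fade: without it, misaligned features produce ghosting, whereas the warp brings both faces' features onto the interpolated landmarks before blending.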
11:25 MOBMU-362
Recognition of objects from looted excavations by smartphone app and deep learning, Waldemar Berchtold, Huajian Liu, Martin Steinebach, Simon Bugert, and York Yannikos, Fraunhofer Institute for Secure Information Technology (Germany) [view abstract]
In this paper, we present a development for recognizing objects from looted excavations. Experts with an archaeological background are not always available where an object needs to be assessed for tradability. For this purpose, we developed a smartphone app that can provide on-site assistance in the initial assessment of archaeological objects. The app sends captured images to a server for recognition and receives results with similar objects and their metadata, along with an associated probability. A user can thus use this information to infer the provenance of the photographed object. To this end, a classifier was trained using a transfer learning procedure, and the features of the trained network were used for an image matching procedure. The developed application will be tested by law enforcement agencies with a total of 15 smartphones for six months starting in early October.
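The image-matching step described above—comparing features from a trained network against a reference database—can be sketched as a cosine-similarity ranking. The feature vectors below are hypothetical stand-ins for CNN embeddings; the paper's actual network and matching procedure may differ.

```python
import numpy as np

def rank_matches(query_feat, db_feats, top_k=3):
    """Rank database objects by cosine similarity of their feature vectors,
    returning indices and scores (scores could back the reported match
    probability after calibration)."""
    q = query_feat / np.linalg.norm(query_feat)
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    sims = db @ q                         # cosine similarity to each entry
    order = np.argsort(-sims)[:top_k]     # best matches first
    return order, sims[order]

# Hypothetical embeddings for three catalogued artifacts.
db = np.array([[1.0, 0.0, 0.0],
               [0.8, 0.6, 0.0],
               [0.0, 1.0, 0.0]])
idx, scores = rank_matches(np.array([1.0, 0.1, 0.0]), db)
```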
Autonomy and Mobility
Session Chairs:
David Akopian, The University of Texas at San Antonio (United States) and Reiner Creutzburg, Technische Hochschule Brandenburg (Germany)
15:00 – 16:00
Blue Room
15:00 MOBMU-371
Autonomous self-driving vehicles - Design of professional laboratory exercises in the field of automotive mechatronics, Franziska Schwarz1, Klaus Schwarz2, and Reiner Creutzburg1,2; 1Technische Hochschule Brandenburg and 2SRH Berlin University of Applied Sciences (Germany) [view abstract]
Self-driving cars are gradually making their way into road traffic and represent the main component of the new form of mobility. Major companies such as Tesla, Google, and Uber are researching the continuous improvement of self-driving vehicles and their reliability. It is therefore of great interest for trained professionals to deal with and understand the principles and requirements of autonomous driving. This paper describes the new concept of a Bachelor/Master-level university course for automotive technology students addressing new mobility and self-driving cars. For the practice-oriented course, hardware in a low budget range (US $80) was used, which nevertheless has all the necessary sensors and capabilities for a comprehensive practical introduction to self-driving automotive technology. The modular structure of the course contains lectures and exercises on the following topics: the first exercise deals with the construction and modification of the car kit, followed by the setup of the Raspberry Pi used. Once the car kit and Raspberry Pi are ready to use, the third exercise steers the car remotely. Lectures and exercises on autonomous lane navigation follow, covering color spaces and masking, Canny edge detection, the Hough transform, steering, and stabilization.
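Of the lane-navigation techniques listed, the Hough transform is the least self-explanatory. The sketch below is a minimal NumPy implementation of its voting scheme on a synthetic edge mask (as Canny edge detection might produce); it is illustrative only, not the course's actual exercise code.

```python
import numpy as np

def hough_dominant_line(edge_mask, n_theta=180):
    """Minimal Hough transform: each edge pixel votes for every (rho, theta)
    line through it; the accumulator maximum is the dominant line, in the
    normal form x*cos(theta) + y*sin(theta) = rho."""
    h, w = edge_mask.shape
    diag = int(np.ceil(np.hypot(h, w)))              # max possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))          # angles 0..179 degrees
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(edge_mask)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1    # one vote per angle
    r, t = np.unravel_index(np.argmax(acc), acc.shape)
    return r - diag, float(np.rad2deg(thetas[t]))

# Synthetic edge mask with a vertical line at x = 5; the recovered (rho,
# theta) describes that line (possibly as rho = -5, theta near 179 degrees,
# since the two parameterizations are equivalent).
edges = np.zeros((20, 20), dtype=bool)
edges[:, 5] = True
rho, theta_deg = hough_dominant_line(edges)
```

In the course setting, the accumulator peaks corresponding to the left and right lane markings would then be converted back to image-space lines to derive a steering angle.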
15:20 MOBMU-372
A robust indoor localization approach exploiting multipath, Rahul Mundlamuri, Devasena Inupakutika, and David Akopian, The University of Texas at San Antonio (United States) [view abstract]
In recent years, localization systems have gained significance in indoor environments because they are used at airports, in high-rise buildings, and in parking garages. The performance of traditional localization technologies such as the global navigation satellite system (GNSS) degrades indoors because of the strong presence of multipath components, low received signal strength, and strong signal attenuation. Additionally, indoor localization techniques like trilateration and triangulation have limitations because they require a direct line-of-sight environment and multiple access points (APs). Thus, WLAN fingerprinting-based indoor localization has gained popularity due to its stable performance and wide availability. The general fingerprinting approach is received signal strength-based, which has performance limitations due to signal fluctuations and multipath components. Channel state information (CSI) based fingerprinting has proven more stable for indoor localization. In this paper, we present a performance study of CSI-based fingerprinting in three different scenarios: no, fixed, and moving multipath components. We utilize an artificial neural network (ANN) and compare the localization error of each scenario with received signal strength-based fingerprinting.
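Fingerprinting-based localization as described above can be sketched in a few lines. The sketch uses nearest-neighbor matching, the classic fingerprinting baseline (the paper itself trains an ANN on the fingerprints); all signal vectors and positions are hypothetical.

```python
import numpy as np

def locate(fingerprint_db, positions, measurement):
    """Fingerprinting localization: match a live signal vector (RSS or CSI
    features) against a survey database and return the position of the
    closest stored fingerprint in Euclidean distance."""
    dists = np.linalg.norm(fingerprint_db - measurement, axis=1)
    return positions[np.argmin(dists)]

# Hypothetical 3-point survey: each row is the signal vector (e.g., dBm per
# AP) recorded at a known position during the offline phase.
db = np.array([[-40.0, -70.0, -60.0],
               [-55.0, -45.0, -65.0],
               [-70.0, -60.0, -42.0]])
pos = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
est = locate(db, pos, np.array([-54.0, -46.0, -66.0]))
```

The ANN approach replaces the explicit distance search with a learned mapping from fingerprint vectors to positions, which is what gives CSI-based fingerprinting its robustness when multipath components fluctuate.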
Wednesday 26 January 2022
Infrastructure Solutions I
Session Chairs:
David Akopian, The University of Texas at San Antonio (United States) and Reiner Creutzburg, Technische Hochschule Brandenburg (Germany)
07:00 – 08:00
Blue Room
07:00 MOBMU-387
Evaluation of AI-based use cases for enhancing the cyber security defense of small and medium-sized companies (SMEs), Daniel Kant1, Andreas Johannsen1, and Reiner Creutzburg1,2; 1Technische Hochschule Brandenburg and 2SRH Berlin University of Applied Sciences (Germany) [view abstract]
Small and medium-sized enterprises (SMEs) are increasingly facing the challenges of an emerging cyber threat landscape shaped by offensive AI. The introduction, integration, adoption, and usage of AI-based cyber security solutions can be quite challenging for SMEs, which have fewer personnel and fewer financial and technological resources than large companies. For micro-enterprises in particular, an IT security incident can threaten the very existence of the company. With the increasing threat of offensive AI, which can make cyberattacks more efficient, there is a need to deploy more effective cyber security solutions; in particular, the adoption of resilient defenses must be driven forward. SMEs especially need ready-to-use, state-of-the-art security solutions in order to be more resilient to cyberattacks. There are already AI-based solutions on the market (both for office IT and operational technologies (OT)), but they are often not well suited to SMEs. This paper aims to give a global analysis of trends concerning the prevalence, prerequisites, and need for adopting and using AI-based cybersecurity solutions in small and medium-sized enterprises.
07:20 MOBMU-388
The importance of the digital twin for the smart factory, Reiner Creutzburg, Sören Hirsch, Robert Flassig, Sven Thamm, and Andreas Johannsen, Technische Hochschule Brandenburg (Germany) [view abstract]
A digital twin is a digital replica of a living or non-living physical entity. Digital twin refers to a digital replica of potential and actual physical assets (physical twin), processes, people, places, systems, and devices that can be used for various purposes. The digital representation provides both the elements and the dynamics of how an Internet of things (IoT) device operates and lives throughout its life cycle. Definitions of digital twin technology used in prior research emphasize two important characteristics. Firstly, each definition emphasizes the connection between the physical model and the corresponding virtual model or virtual counterpart. Secondly, this connection is established by generating real-time data using sensors. The concept of the digital twin can be compared to other concepts such as cross-reality environments or co-spaces and mirror models, which aim to, by and large, synchronize part of the physical world (e.g., an object or place) with its cyber representation (which can be an abstraction of some aspects of the physical world). Digital twins integrate IoT, artificial intelligence, machine learning, and software analytics with spatial network graphs to create living digital simulation models that update and change as their physical counterparts change. A digital twin continuously learns and updates itself from multiple sources to represent its near real-time status, working condition, or position. This learning system learns from itself, using sensor data that convey various aspects of its operating condition; from human experts, such as engineers with deep and relevant industry domain knowledge; from other similar machines; from other similar fleets of machines; and from the larger systems and environment of which it may be a part. A digital twin also integrates historical data from past machine usage to factor into its digital model. 
In this paper, we highlight the importance of digital twins and give an overview of recent developments and applications.
07:40 MOBMU-389
The role and importance of key enabling technologies as building blocks for smart factories, Reiner Creutzburg, Sören Hirsch, Robert Flassig, Steffen Doerner, Sven Thamm, and Andreas Johannsen, Technische Hochschule Brandenburg (Germany) [view abstract]
In this paper, we review the key enabling technologies for the future smart factory. In particular, we describe the following technologies: Big Data Technology and Advanced Data Analytics, Next-Generation Sensors, IoT and IIoT - (Industrial) Internet of Things, Intelligent Internet of Things, RFID/RTLS - Radio Frequency Identification / Real-Time Locating Systems, AGV - Automated Guided Vehicles, HMI - Human-Machine Interfaces, SCADA - Supervisory Control and Data Acquisition, MES - Manufacturing Execution System, CMMS - Computerized Maintenance Management System, Additive Manufacturing (3D Printing), Augmented/Virtual Reality, Efficient Energy Technologies, Collaborative Robots & Exoskeletons, AI - Artificial Intelligence & Machine Learning, Cloud Computing, Cybersecurity, Collaborative Platforms, PLM - Product Lifecycle Management, Digital Twin (CPE - Cyber-Physical Equivalence), and CPS - Cyber-Physical Systems. Finally, the significance and interaction of these new technologies in smart factories and Industry 4.0 are discussed.
Infrastructure Solutions II
Session Chairs:
David Akopian, The University of Texas at San Antonio (United States) and Reiner Creutzburg, Technische Hochschule Brandenburg (Germany)
08:30 – 09:30
Blue Room
08:30 MOBMU-397
The hybridization of renewable energy resources, Saiful Islam1, Michael Hartmann1, and Reiner Creutzburg1,2; 1SRH Berlin University of Applied Sciences and 2Technische Hochschule Brandenburg (Germany) [view abstract]
The hybridization of renewable energy resources is a known topic in sustainable technology. Many projects are going on based on the topic. The use of Photovoltaic, wind energy, and other renewable resources can be helpful to optimize the load in the utility grid. Countries like Europe and other western countries have electricity storage, whether the developing countries are still struggling to make sure the stable utility grid connection to the distribution network system. In this research, we would like to discuss the different energy production processes sustainably. As we know, the energy sources are volatile and cannot always assure stable production to keep the requirements or demand properly. We want to use the combination of the sources in a way so that we can make the balance between the demand and the supply system. This research will be an overview in terms of technical and financial sites. Also, by using the different combinations of the Internet of things and data analysis method, we will see the correlation between the different sources and their production. Based on the production data, we can determine the financial feasibility and the outcome of the system. The main problem of renewable energy sources is uncertainty. In terms of wind energy, the velocity is also not stable according to the location. We want to show a predictive model by using the intelligent formula by which we can maintain the hybrid system. The production data from different sources will tell us their contribution to the system. This contribution will help us monitor the system and control which sources have more contribution on the demand side. The predictive model will have consisted of renewable sources such as photovoltaic, wind, utility grid, and inverter systems. In the research, the tool such as Artificial Intelligence can be implemented by sustainable management. The arrangement information is prepared to extricate data and based on resources. 
Renewable source data vary with location, which affects energy production. Data acquisition and analysis can support current technologies such as smart grids, microgrids, and their control systems. This exploration aims to introduce a predictive foundation for managing the enormous volumes of data produced by sensors in order to help coordinate renewable power. The main difference between the conventional electricity system and the renewable energy system is the variability of the sources: conventional sources include the utility grid and diesel generators, while renewable sources include photovoltaics (PV), wind, etc.
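The balancing idea sketched in this abstract can be illustrated with a toy dispatch rule (not from the paper; the function, priority order, and figures below are purely hypothetical): serve demand from the volatile renewable sources first and let the utility grid cover any shortfall.

```python
# Illustrative sketch of hybrid-source load balancing. The dispatch
# rule and all numbers are hypothetical simplifications, not the
# authors' model.

def dispatch(demand_kw, pv_kw, wind_kw):
    """Serve demand from PV and wind first, then the utility grid.

    Returns (grid_kw, surplus_kw): power drawn from the grid and
    renewable surplus left over after demand is met.
    """
    renewable = pv_kw + wind_kw
    if renewable >= demand_kw:
        return 0.0, renewable - demand_kw
    return demand_kw - renewable, 0.0

# Hourly forecast example: demand vs. predicted PV/wind output.
hours = [
    # (demand, pv, wind)
    (50.0, 30.0, 10.0),   # midday: PV strong, small grid draw
    (60.0, 0.0, 25.0),    # evening: no PV, grid covers the gap
    (40.0, 35.0, 20.0),   # windy noon: renewable surplus
]

for demand, pv, wind in hours:
    grid, surplus = dispatch(demand, pv, wind)
    print(f"demand={demand:5.1f}  grid={grid:5.1f}  surplus={surplus:5.1f}")
```

A predictive model in the abstract's sense would replace the fixed forecast list with per-source production forecasts, from which each source's contribution to meeting demand can be monitored.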
08:50 MOBMU-398
Community research partnership: A case study of San Antonio Research Partnership Portal, Mohammad Nadim and David Akopian, The University of Texas at San Antonio (United States)
Smart, growing cities like San Antonio require extensive research collaboration to solve community problems, yet finding research professionals and partners remains a significant difficulty for many city government departments. Information about research opportunities is either hosted on the respective city department's website, or researchers are contacted through personal relationships with city department staff. A researcher interested in collaborating with a city department therefore needs either to navigate several websites or to build a personal relationship with department staff. In this paper, we demonstrate the development of the Partnership Portal, a collaborative platform for researchers and city government departments in San Antonio. The portal helps researchers collaborate with city departments and use up-to-date administrative data to produce effective solutions to challenges faced by the community.
Imaging and Human-Machine Interfaces
Session Chairs:
David Akopian, The University of Texas at San Antonio (United States) and Reiner Creutzburg, Technische Hochschule Brandenburg (Germany)
10:00 – 11:00
Blue Room
10:00 MOBMU-400
Combination of RAW images and videos for 30K panoramic projection using ACES workflow, Eberhard Hasche1, Reiner Creutzburg1,2, and Oliver Karaschewski1; 1Technische Hochschule Brandenburg and 2SRH Berlin University of Applied Sciences (Germany)
According to our recent paper [1], the concept of creating a still-image panorama that additionally includes video footage at up to 30K resolution has proven successful in various application examples. However, certain aspects of the production pipeline need optimization, especially the color workflow and the spatial placement of the video content. This paper compares two workflows to overcome these problems. In particular, the following two methods are described in detail: 1) improving the current workflow with the Canon EOS 5D Mark IV camera as the central device; 2) establishing a new workflow using the new possibilities of the Apple iPhone 12 Pro Max. The following aspects are the subject of our investigation: a) The fundamental idea is to use ACES as the central color management system. We investigate whether the direct import from RAW to ACEScg via dcraw and rawtoaces shows advantages. In addition, the conversion from Dolby Vision to ACES for the video processing is investigated and the result evaluated. Furthermore, the influence of stitching programs (e.g., PTGui) on the color workflow is observed and optimized. b) The second part of the paper deals with the spatial integration of the videos into the still panoramas. Due to the different crop factors, specific focal lengths must be applied when using the Canon EOS 5D Mark IV; this distorts the image and video materials differently and makes it difficult to place the video footage in the panorama. We investigate whether a lens-distortion-removal algorithm improves the results. The performance and capabilities of the Apple iPhone 12 Pro Max are also evaluated in this respect. Finally, the recorded resolution of detailed vegetation and foliage in the video footage is compared. The paper summarizes the results of the newly proposed workflow and indicates necessary further investigation.
[1] Hasche, Eberhard; Benning, Dominik; Karaschewski, Oliver; Carstens, Florian; Creutzburg, Reiner: Creating high-resolution 360-degree single-line 25K video content for modern conference rooms using film compositing techniques. In: Electronic Imaging, Mobile Devices and Multimedia: Technologies, Algorithms & Applications 2020, pp. 206-1–206-14, https://doi.org/10.2352/ISSN.2470-1173.2020.3.MOBMU-206
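At its core, the RAW-to-ACEScg import step mentioned in the abstract reduces to applying a 3×3 matrix that moves camera-linear RGB into the AP1 working space. The sketch below is a minimal, hypothetical illustration: the matrix values are placeholders, since in a real pipeline tools such as rawtoaces derive the camera-specific matrix from spectral sensitivity data.

```python
# Sketch of the linear-matrix step that converts camera RGB to the
# ACEScg (AP1) working space. CAMERA_TO_ACESCG below is a placeholder;
# real coefficients are camera-specific and come from the ACES
# transforms (e.g., via rawtoaces).

CAMERA_TO_ACESCG = [
    [0.90, 0.10, 0.00],
    [0.05, 0.90, 0.05],
    [0.00, 0.10, 0.90],
]

def to_acescg(rgb, matrix=CAMERA_TO_ACESCG):
    """Apply a 3x3 color matrix to one linear RGB triple."""
    r, g, b = rgb
    return tuple(row[0] * r + row[1] * g + row[2] * b for row in matrix)

# A pure red camera pixel lands mostly, but not entirely, on the
# ACEScg red axis, because the primaries of the two spaces differ.
print(to_acescg((1.0, 0.0, 0.0)))
```

Note that each placeholder row sums to 1.0, so neutral (gray) pixels are preserved; real camera matrices are usually normalized the same way.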
10:20 MOBMU-401
Application scenarios and usability for modern 360 degree video projection rooms in the MICE industry, Reiner Creutzburg1, Eberhard Hasche1,2, and Dirk Hagen2; 1Technische Hochschule Brandenburg and 2SRH Berlin University of Applied Sciences (Germany)
360-degree image and movie content has gained popularity across the media and the MICE (Meetings, Incentives, Conventions, and Exhibitions) industry in the last few years. There are three main reasons for this development. First, this media form has an immersive character. Second, recording and presentation technology has made significant progress in resolution and quality. Third, after a decade of dynamic growth, the MICE industry is focusing on a disruptive change toward more digitally based solutions. 360-degree panoramas are particularly widespread in VR and AR technology. However, despite their high immersive potential, these forms of presentation have the disadvantage that users are isolated and have no social contact during the performance. Therefore, efforts have been made to project 360-degree content in specially equipped rooms or planetariums to enable a shared experience for the audience. One application area for 360-degree panoramas and films is conference rooms in hotels, conference centers, and other venues that want to create an immersive environment for their clients to stimulate creativity. This work gives an overview of the various application scenarios and usability possibilities for such conference rooms. In particular, we consider applications in construction, control, tourism, medicine, art exhibitions, architecture, music performance, education, parties, the organization and running of events, and video conferencing. These applications and use scenarios were successfully tested, implemented, and evaluated in the 360-degree conference room “Dortmund” at the Hotel Park Soltau in Soltau, Germany. Finally, the advantages, challenges, and limitations of the proposed approach are described.
10:40 MOBMU-402
Brain computer interface (BCI) – UX-design for visual and non-visual interaction by mental commands in the context of technical possibilities, Julia Schnitzer, Technische Hochschule Brandenburg (Germany)
How will we interact with the digital world in the future using only our thoughts? Today we interact with the digital world through gestures (most commonly swipe gestures), voice, facial expressions, and mental commands. The latter have gained importance through recent press releases about scientific achievements, for example from Neuralink [1], which used invasive technology to control a computer game by thought alone [2]. Another study shows an application that can display text on a screen when the user merely thinks of a word [3]. Currently, BCIs are mainly developed for neurotechnology and healthcare, but it is only a matter of time before these services become available in commercial products for everyday life. (Until then, manufacturers have to minimize health risks and face privacy and safety regulations.) The thesis is that in the future you will be able to interact with your digital devices just by using your thoughts, and you will do so much faster. This makes sense in many situations where you do not have a free hand (e.g., manufacturing, sports) or cannot speak (e.g., underwater). For such services, however, you cannot rely on conventional visual design guidelines, interaction patterns, frameworks, or libraries. While navigating a digital service by mental command has many advantages, the human brain naturally comes with specific features that force us to find a radically new way to interact with digital interfaces. The following study first introduces the working conditions of mental commands, comparing invasive and non-invasive technologies, and then compares several possible interaction patterns for a BCI in everyday use. [1] Neuralink is a US neurotechnology company founded in July 2016 by Elon Musk and eight others. Neuralink's goal is to develop a device for communication between the human brain and computers, a so-called brain-computer interface (BCI). [2] The Guardian, Friday 9 April 2021, “Elon Musk startup shows monkey with brain chip implants playing video game”, https://www.theguardian.com/technology/2021/apr/09/elon-musk-neuralink-monkey-video-game. [3] Scientific American, 15 July 2021, Neuroscience, Emily Willingham: “New Brain Implant Transmits Full Words from Neural Signals”.