QoMEX 2020 Live Stream | Day 3
Schedule
QoMEX 2020 Live - How does it work?
This year, QoMEX 2020 will be held online - we're embracing several streaming and virtual platforms to deliver the conference!
First, we have YouTube: both keynotes and oral sessions will be live streamed on YouTube. We've embedded Slido alongside the stream to take questions from the audience, so we can deliver an interactive experience between presenters and attendees!
Second, we have virtual rooms! The virtual rooms are listed below - click the VR icons to find the links!
VR Icon:
- VR Icons will bring you to a Virtual Space in your browser, where you can have one-on-one discussions with the presenters and other attendees.
Camera Icons:
There are two different camera icons - one blue and one red.
- Blue Camera Icons let you join short breakout sessions on Zoom with other members of the multimedia community.
- Red Camera Icons will bring you to a YouTube recording of the presentation.
Thursday 28th May
Conference Schedule (all times are UTC)

Virtual Poster Sessions
11:00 | Poster Session 2 | Chair(s): Sebastian Egger-Lampl
- Variation in QoE of passive gaming video streaming for different Packet Loss Ratios. Abdul Wahab, Nafi Ahmad, and John Schormans
- Development and evaluation of a test setup to investigate distance differences in immersive virtual environments. Stephan Fremerey, Muhammad Sami Suleman, Abdul Haq Azeem Paracha, and Alexander Raake
- Influence of Hand Tracking as a way of Interaction in Virtual Reality on User Experience. Jan-Niklas Voigt-Antons, Tanja Kojic, Danish Ali, and Sebastian Möller
- Can visual scanpath reveal personal image memorability? Investigation of HMM tools for gaze patterns analysis. Waqas Ellahi, Toinon Vigier, and Patrick Le Callet
- Influence of video delay on quality, presence, and sickness in viewport adaptive immersive streaming. Carlos Cortés, Pablo Pérez, Jesús Gutiérrez, and Narciso García
- Transformation of Mean Opinion Scores to Avoid Misleading of Ranked based Statistical Techniques. Babak Naderi, and Sebastian Möller
- Towards Analysing the Interaction between Quality and Storytelling for Event Video Recording. Eckhard Stoll, Stephan Breide, and Alexander Raake
- goDASH - GO accelerated HAS framework for rapid prototyping. Darijo Raca, Maëlle Manifacier, and Jason J Quinlan
- A QoE Evaluation of an Augmented Reality Procedure Assistance Application. Eoghan Hynes, Ronan Flynn, Brian Lee, and Niall Murray
- Matched Quality Evaluation of Temporally Downsampled Videos with Non-Integer Factors. Christian Herglotz, Geetha Ramasubbu, and Andre Kaup
- Towards the Impact of Gamers Strategy and User Inputs on the Delay Sensitivity of the Cloud Games. Saeed Shafiee Sabet, Steven Schmidt, Saman Zadtootaghaj, Carsten Griwodz, and Sebastian Möller
11:45 | Organisational
12:00 | Keynote 3: Stephen Brewster
13:00 | Break

Oral Sessions
13:30 | Session 5: Video Quality | Chair(s): Peter Schelkens and Lea Skorin-Kapov
- Bitstream-based Model Standard for 4K/UHD: ITU-T P.1204.3 - Model Details, Evaluation, Analysis and Open Source Implementation. Rakesh Rao Ramachandra Rao, Steve Göring, Peter List, Werner Robitza, Bernhard Feiten, Ulf Wuestenhagen, and Alexander Raake
- Inclusion of End User Playback-Related Interactions in YouTube Video Data Collection and ML-Based Performance Model Training. Ivan Bartolec, Irena Orsolic, and Lea Skorin-Kapov
- Classification of Viewing Abandonment Reasons for Adaptive Bitrate Streaming. Shoko Takahashi, Kazuhisa Yamagishi, and Jun Okamoto
5 Minute Break

Session 5: Novel | Chair(s): Peter Schelkens and Lea Skorin-Kapov
- You Drive Me Crazy! Interactive QoE Assessment for Telepresence Robot Control. Hamed Z. Jahromi, Ivan Bartolec, Edwin Gamboa, Andrew Hines, and Raimund Schatz
- Assessing Interactive Gaming Quality of Experience Using a Crowdsourcing Approach. Steven Schmidt, Babak Naderi, Saeed Shafiee Sabet, Saman Zadtootaghaj, and Sebastian Möller
14:45 | Break

Hubs Sessions
15:00 - 16:30 | Session 6: QoMEX Virtual Lobby 1
- Influence of Emotions on Eye Behavior in Omnidirectional Content. Wei Tang, Shiyi Wu, Toinon Vigier, and Matthieu Perreira Da Silva
- Development and Validation of Pictographic Scales for Rapid Assessment of Affective States in Virtual Reality. Christian Krüger, Tanja Kojic, Luis Meier, Sebastian Möller, and Jan-Niklas Voigt-Antons
- Quality Enhancement of Gaming Content using Generative Adversarial Networks. Nasim Jamshidi Avanaki, Saman Zadtootaghaj, Nabajeet Barman, Steven Schmidt, Maria G. Martini, and Sebastian Möller
- Prenc - Predict Number Of Video Encoding Passes With Machine Learning. Steve Göring, Rakesh Rao Ramachandra Rao, and Alexander Raake
- Evaluating the User in a Sound Localization Task in a Virtual Reality Application. Adrielle Nazar Moraes, Ronan Flynn, Andrew Hines, and Niall Murray
- Comparing emotional states induced by 360 videos via head-mounted display and computer screen. Jan-Niklas Voigt-Antons, Eero Lehtonen, Andres Pinilla Palacios, Danish Ali, Tanja Kojic, and Sebastian Möller
15:00 - 16:30 | Session 6: QoMEX Virtual Lobby 2
- Towards a Perceived Audiovisual Quality Model for Immersive Content. Randy F Fela, Nick Zacharov, and Soren Forchhammer
- Dataset Cleaning - A Cross Validation Methodology for Large Facial Datasets using Face Recognition. Viktor Varkarakis, and Peter Corcoran
- Affects of Perceived-actions within Virtual Environments on User Behavior on the Outside. Asim Hameed, and Andrew Perkis
- PointXR: A toolbox for visualization and subjective evaluation of point clouds in virtual reality. Evangelos Alexiou, Nanyang Yang, and Touradj Ebrahimi
- Foveated Video Coding for Real-Time Streaming Applications. Oliver Wiedemann, Vlad Hosu, Hanhe Lin, and Dietmar Saupe
- Fusion of Digital Fingerprint Quality Assessment Metrics. Christophe Rosenberger, and Christophe Charrier
- Blind Image Quality Assessment with Visual Sensitivity Enhanced Dual-Channel Deep Convolutional Neural Network. Min Zhang, Wenjing Hou, Lei Zhang, and Jun Feng
16:30 | Awards & Closing