QoMEX 2020

  • Home
  • QoMEX Live
    • Day 1
    • Day 2
    • Day 3
  • Authors
    • Call for Papers
    • Paper Submission Guidelines
    • Camera-Ready Paper Submission
    • Author Kit
    • Presentation Guidelines
    • Awards
    • Submit your Paper
    • Special Sessions
    • Inclusion and Diversity
  • Program
    • Overall Program
    • Social Events
    • Qualinet Meeting
    • Keynotes
    • Accepted Papers
  • Attending
    • Getting Here
    • Location
    • Accommodation
    • Visa Application
    • Registration Policies and Fees
    • Travel Grants
    • COVID-19 & QoMEX 2020
  • Committees
  • Sponsorship
  • Past QoMEX


Keynotes

Professor Stephen Brewster

 

Professor Stephen Brewster is Professor of Human-Computer Interaction in the School of Computing Science at the University of Glasgow. He leads the Glasgow Interactive Systems Group (GIST) and, within it, the Multimodal Interaction Group. His research focuses on multimodal HCI: using multiple sensory modalities and control mechanisms (particularly audio, haptics and gesture) to create rich, natural interaction between human and computer. His research has a strong experimental focus, applying perceptual research to practical situations, with a long-term interest in the accessibility of interfaces for people with disabilities using audio and haptics. His research also considers interfaces for mobile devices and in-car interaction. He is a Fellow of the Royal Society of Edinburgh, a member of the ACM SIGCHI Academy and an ACM Distinguished Speaker.

Overview of talk:

Title: How to use sound for new multimedia experiences

In this talk I will discuss how sound can be used in new and different ways to create novel user experiences. The first topic will be Audio Augmented Reality and how we can create ‘personal audio spaces’ that augment the world around us, enhancing the media we consume in a simple, low-cost way. The second topic will focus on ultrasound and how it can be used to build highly novel user interactions. Focused ultrasound can create mid-air haptic displays beamed to a user’s hands. It can also be used to levitate ‘physical pixels’, creating rich, physical displays that float in front of the user for a whole new way of experiencing information.

Dr. Hayley Hung

 

Dr. Hayley Hung is an Associate Professor at Delft University of Technology, where she leads the Socially Perceptive Computing Lab. Her research focuses on devising domain-specific machine learning techniques to automatically interpret human-human social and affective behaviour from multimodal data streams. Her group strives to solve these problems in unconstrained, ecologically valid settings where the captured behaviour and human experience align most closely with real life. In 2016, she was awarded a Dutch Personal Grant (Vidi) for the project "MINGLE: Modelling Social Group Dynamics and Interaction Quality in Complex Scenes using Multi-Sensor Analysis of Non-Verbal Behaviour". Her research contributions have also been recognised by an invited talk at the ACM Multimedia 2016 Rising Star Session.

Overview of talk:

Title: Towards Enhancing Human Social Experience in the Wild

Humans interact with one another on a daily basis. Social bonding is a key component of human collaboration, and with it comes the possibility to achieve more as a group than as an individual. In today's society, social bonding is important in relationships with a romantic partner, friends and family, or with professional colleagues. Studying how social interactions unfold, and how they can affect or enhance social relationships, taps into humans' instinctive perception of the experience of social interaction. While the text above may sound like the start of a social science presentation, in this talk I argue that in order to enhance the quality of human social experience where it could have the greatest benefit, we need an inherently interdisciplinary approach combining social science and computer science. The drive for an interdisciplinary approach stems largely from the idea that the computational tools with the most impact for enhancing social experience must necessarily be embedded in people's everyday lives. Fortunately, with the rising popularity of wearable technologies, there is an opportunity to digitize momentary social experiences as they unfold in the real world.

However, when we step away from more restrictive social settings to cases where people are free to move around as they wish, most research (stemming from the ubiquitous and pervasive computing community) has tended to apply proxies such as co-location as a measure of social interaction. This approach strips away the possibility of measuring interaction quality, pushing the research focus towards larger-scale sociological studies that try to find generalisable patterns of human behaviour and its relation to affective experience. In this talk, I argue that human experience has an inherently personal component that should be explored if we want to close the loop on enhancing the quality of human social experience. This starts by reconsidering traditional approaches to measuring human affective experience. Through examples from my prior work, I demonstrate that this provides intriguing new opportunities for unconventional approaches to multimodal data processing, opening up a new field that re-explores phenomena from a machine perspective beyond the commonly understood modalities of sight and hearing.

I conclude the talk by discussing open opportunities for new research on social experience monitoring and enhancement with respect to topics such as privacy, data labelling, personalisation, and multimodal experience enhancement.

Professor Mel Slater

 

Mel Slater is a Distinguished Investigator at the University of Barcelona, Spain, where he leads the Event Lab (Experimental Virtual Environments for Neuroscience and Technology). He was previously Professor of Virtual Environments in the Department of Computer Science at University College London. He has been involved in virtual reality research since the early 1990s and has been the primary supervisor of 40 PhDs in graphics and virtual reality since 1989. He held a European Research Council Advanced Grant, TRAVERSE (2009-2015), and is now working on a second Advanced Grant, MoTIVE (2018-2022). He is Field Editor of Frontiers in Virtual Reality and Chief Editor of its Human Behaviour in Virtual Reality section.

Overview of talk:

Title: Practical Illusions of Virtual Reality

I will argue that VR uniquely offers three illusions: Place Illusion (a strong perceptual illusion of being in the virtual place); Plausibility Illusion (a cognitive illusion that perceived events in which the participant might be engaging are really happening); and Body Ownership Illusion (that the life-sized virtual body visually coincident with the participant’s real body can be felt as the real body, including possible vicarious agency over independent actions of the virtual body). These are illusions in the sense that participants know for sure that they are not actually occurring in the physical world, yet they are practical illusions since people tend to act as if they were real. In this talk I will introduce these concepts and give examples of their measurement and far-reaching consequences.

 

 

Important Dates

  • Special Session Proposals:
    November 19th – 2019
    November 21st – 2019
  • Full Paper Registration:
    January 22nd – 2020
  • Full Paper Submission:
    January 29th – 2020
    This is a Firm Date.
  • Full Paper - Notification:
    March 9th – 2020
  • Full Paper - Camera Ready Deadline:
    March 25th – 2020
  • Short Paper / Demo / Industry Submission:
    March 29th – 2020
    This is a Firm Date.
  • Short Paper / Demo / Industry Notification:
    April 20th – 2020
  • Short Paper / Demo Camera Ready:
    April 27th – 2020
  • Conference:
    May 26th – 28th 2020

Download Call For Short Papers

Download Call For Papers (CLOSED)

Download Call For Special Sessions (CLOSED)

Download Call For Participation

Diamond Supporters

Silver Supporters

Other Supporters

Technical Sponsors

© 2023 QoMEX 2020