Hi there!

I'm a second-year PhD student in Computer Science at the University of St. Gallen, Switzerland, in the lab for Interactions- and Communication-based Systems.

I study how ubiquitous personalization systems can make people's interactions with their environment more efficient, safer, and more inclusive, and how such systems can be built in a responsible and societally beneficial way. To this end, I combine the following research areas:

  • Mixed Reality
  • Ubiquitous Computing
  • Personalization
  • Privacy
  • Algorithms and Society
  • Computer Vision
  • Technology Acceptance

Besides my main PhD topic, Personalized Reality, I work with colleagues on related topics, serve as a teaching assistant for multiple lectures (see Teaching), and co-supervise Bachelor's and Master's theses.

I have been reviewing for multiple conferences and journals; for more details, see Community Service.

For updates on what I'm doing, have a look at my colleagues' and my Publications, follow me on the Fediverse: https://hci.social/@jannis, or contact me via email: jannisrene.strecker@unisg.ch. 😀

📑 Recent Publications

Open your Eyes: Blink-induced Change Blindness while Reading

In: Companion of the 2025 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp Companion '25)
Type: Conference
Date: October 12, 2025
Authors: Kai Schultz, Kenan Bektaş, Jannis Strecker-Bischoff, and Simon Mayer

Abstract

Reading assistants provide users with additional information through pop-ups or other interactive events that might interrupt the flow of reading. We propose that unnoticeable changes can be made in a given text during blinks, while vision is obscured for a short period of time. Reading assistants could make use of such change blindness to adapt text in real time and without infringing on the reading experience. We developed a system to study blink-induced change blindness. In two preliminary experiments, we asked five participants to read six short texts each. Once per text and during a blink, our system changed a predetermined part of each text. In each trial, the intensity and distance of the change were systematically varied. Our results show that text changes, although obvious to bystanders, were difficult for participants to detect. Concretely, while changes that affected the appearance of large text parts were detected in 80% of the occurrences, no line-contained changes were detected.
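
As a rough illustration of the mechanism described above (and not the authors' implementation), a blink-gated text swap might look like the following Python sketch; `eye_openness`, the blink threshold, and the polling rate are assumptions standing in for a real eye-tracker SDK:

```python
import time

BLINK_THRESHOLD = 0.2  # eye openness below this counts as a blink (assumed value)

def eye_openness() -> float:
    """Placeholder for an eye-tracker reading in [0, 1]; 0.0 = fully closed.
    A real system would poll a headset or screen-mounted tracker SDK."""
    raise NotImplementedError

def render(text: str) -> None:
    """Placeholder for redrawing the on-screen text."""
    print(text)

def blink_gated_swap(text: str, target: str, replacement: str) -> str:
    """Wait for a blink, then swap `target` for `replacement` while the eyes
    are closed, so the change falls inside the change-blindness window."""
    render(text)
    while eye_openness() > BLINK_THRESHOLD:
        time.sleep(0.005)  # poll at ~200 Hz until a blink starts
    changed = text.replace(target, replacement, 1)
    render(changed)  # re-render before the eyes reopen (~100-300 ms budget)
    return changed
```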

Text Reference

Kai Schultz, Kenan Bektaş, Jannis Strecker-Bischoff, and Simon Mayer. 2025. Open your Eyes: Blink-induced Change Blindness while Reading. In Companion of the 2025 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp Companion '25), October 12–16, 2025, Espoo, Finland. ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/3714394.3754398

Link to Published Paper · Download Paper

Ad-hoc Action Adaptation through Spontaneous Context

In: Companion of the 2025 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp Companion '25)
Type: Conference
Date: October 12, 2025
Authors: Raffael Rot, Simon Mayer, and Jannis Strecker-Bischoff

Abstract

Typical everyday physical interactors, such as switches, perform a specific static action upon actuation by a user. For such simple components, this action is independent of the immediate user situation; consideration of this situation typically involves the augmentation of the interactor with specific added interface features (e.g., long-press of a button for dimming). We introduce the "spontaneous context" interaction pattern for everyday interactors where the concrete action is spontaneously adapted based on information about the user situation that the interactors gather and interpret ad hoc. In our approach, the interactor and user hence share no prior relationship and no user data is stored, yet the interactor adapts the action at interaction time. To demonstrate the spontaneous context pattern, we implemented a "plot door": this is an automatic door that differs from classical infrared motion sensor-activated doors by opening only when it is likely that an individual wants to enter. Our plot door uses an infrared sensor that is augmented with our proposed interaction pattern and thereby spontaneously gathers and interprets accelerometer and gyroscope data from the individual to determine whether it should open or not.
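
To make the pattern concrete, here is a minimal Python sketch of how such a spontaneous-context decision could look; the field names, thresholds, and the heuristic itself are illustrative assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class MotionSample:
    """One ad-hoc reading gathered from a nearby individual's device."""
    accel_forward: float  # m/s^2 along the person's walking direction
    yaw_to_door: float    # degrees between heading and the door's position

def wants_to_enter(samples: list[MotionSample],
                   max_yaw: float = 20.0,
                   min_accel: float = 0.3) -> bool:
    """Open only if the person is both heading toward the door and actually
    moving, judged from data gathered and interpreted at interaction time."""
    heading_ok = mean(abs(s.yaw_to_door) for s in samples) < max_yaw
    moving = mean(s.accel_forward for s in samples) > min_accel
    return heading_ok and moving

# Example: steady forward motion, roughly aimed at the door -> open.
samples = [MotionSample(0.6, 5.0), MotionSample(0.5, 12.0)]
print(wants_to_enter(samples))  # True; the samples are discarded afterwards
```

Discarding the samples after the decision mirrors the pattern's property that no user data is stored and no prior relationship between interactor and user is required.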

Text Reference

Raffael Rot, Simon Mayer, and Jannis Strecker-Bischoff. 2025. Ad-hoc Action Adaptation through Spontaneous Context. In Companion of the 2025 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp Companion '25), October 12–16, 2025, Espoo, Finland. ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/3714394.3755878

Link to Published Paper · Download Paper

JUIC-IoT: Just-In-Time User Interfaces for Interacting with IoT Devices in Mixed Reality

In: Companion of the 2025 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp Companion '25)
Type: Conference
Date: October 12, 2025
Authors: Lucien Ledermann, Jannis Strecker-Bischoff, Kimberly Garcia, and Simon Mayer

Abstract

The number of deployed Internet of Things (IoT) devices is continuously increasing. While Mixed Reality (MR) allows hands-free interaction, creating MR User Interfaces (UIs) for each IoT device is challenging, as a separate interface often has to be designed for each individual device. Additionally, approaches for automatic MR UI generation often still require manual developer intervention. To address these issues, we propose the JUIC-IoT system, which automatically assembles just-in-time MR UIs for IoT devices based on the machine-understandable W3C Web of Things Thing Description (TD) format. JUIC-IoT detects an IoT device with object recognition, uses its TD to prompt an LLM to automatically select appropriate UI components, and then assembles a UI for interacting with the device. Our evaluation of JUIC-IoT shows that the choice of LLM and the TD of a device are more crucial than the formulation of the input prompts for obtaining a usable UI. JUIC-IoT represents a step towards dynamic UI generation, thereby enabling intuitive interactions with IoT devices.
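
As a hedged sketch of the assembly step (object recognition and the actual MR rendering are omitted), the following Python walks a simplified W3C Thing Description and asks an LLM to pick a widget per interaction affordance; the component palette, the prompt wording, and `llm_select_component` are assumptions, not the system's real interface:

```python
import json

# A trimmed W3C Web of Things Thing Description (TD) for a hypothetical lamp.
TD = json.loads("""{
  "title": "MyLamp",
  "properties": {"brightness": {"type": "integer", "minimum": 0, "maximum": 100}},
  "actions": {"toggle": {"description": "Switch the lamp on or off"}}
}""")

UI_COMPONENTS = ["button", "slider", "toggle-switch", "text-field"]  # assumed palette

def llm_select_component(prompt: str) -> str:
    """Placeholder for an LLM call; a real system would query a model and
    constrain the answer to the available component names."""
    raise NotImplementedError

def assemble_ui(td: dict) -> list[dict]:
    """Walk the TD's interaction affordances and let the LLM pick a widget
    for each, yielding a just-in-time UI description for the MR renderer."""
    ui = []
    for kind, label in (("properties", "property"), ("actions", "action")):
        for name, affordance in td.get(kind, {}).items():
            prompt = (f"Pick one of {UI_COMPONENTS} to control the {label} "
                      f"'{name}' described by: {json.dumps(affordance)}")
            ui.append({"affordance": name, "component": llm_select_component(prompt)})
    return ui
```

Driving the selection from the TD rather than from hand-written prompts per device is what makes the UI assembly work for previously unseen devices.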

Text Reference

Lucien Ledermann, Jannis Strecker-Bischoff, Kimberly Garcia, and Simon Mayer. 2025. JUIC-IoT: Just-In-Time User Interfaces for Interacting with IoT Devices in Mixed Reality. In Companion of the 2025 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp Companion '25), October 12–16, 2025, Espoo, Finland. ACM, New York, NY, USA, 7 pages. https://doi.org/10.1145/3714394.3754371

Link to Published Paper · Download Paper

See all publications