Last updated on June 25, 2020
The paper was presented last week at the ACM International Conference on Interactive Media Experiences (IMX), Barcelona, 2020. You can find the definitive version on my ACM DL profile here. The video presentation is now archived on the ACM SIGCHI YouTube channel:
For the past couple of decades, researchers and broadcasters have been building technology that merges Augmented Reality (AR) with TV broadcasting. Some have used it in the production of TV content, as a way of adding computer-rendered graphics into a scene, while others have created prototypes and systems that distribute content for consumption on AR displays. In the latter case, the goal has typically been to enhance or even transform the conventional TV viewing experience, whether through interactive content or by immersing viewers in the story world.
We have attempted to create an overarching understanding of these technologies, and of the design choices and guidelines they imply. By systematically identifying and gathering relevant works, and qualitatively analyzing the systems and prototypes they propose, we arrived at a set of dimensions for exploring the underlying design space. Furthermore, by combining the options these dimensions provide, designers and broadcasters can construct distinct patterns for merging TV with AR to produce novel forms of media presentation.
An initial exploration of these dimensions was published at the ACM International Conference on Interactive Media Experiences, Barcelona, 2020. Because of restrictions imposed by COVID-19, the conference went virtual, and all authors were asked to provide a video talk for their accepted paper. This gave us the chance to record a presentation and share it with the general public (below).
You can download the pre-print of the paper here.