Video Contest
AR/MR Unbound: Call for Exploratory Videos
It is conventional wisdom that Augmented and Mixed Reality have the potential to reshape how we interact with the world in profound ways. But how? The design of current AR and MR applications is limited by a technological framework still in its infancy, and speculation tends to be constrained to developments in the foreseeable future as well.
This call for videos solicits explorations of, speculations about, and meditations on the experience of AR and MR in a future unbound from the fetters of our current technologies.
Rules for Submission
This contest is open to everyone: artists, media professionals, technologists, AR enthusiasts, and students from all disciplines.
- Submissions of all types are encouraged. Don't let the fact that your only video hardware is an iPhone stop you.
- Please don't harm any humans or animals in the making of your video.
- Although there is no explicit limit on length, submissions are encouraged to be concise.
- Your video must have been made in 2011.
- Any format is acceptable as long as it can be viewed within a Web browser using conventional codecs.
Submission Process
Send an email to the contest address containing a short abstract and a link to your online video. If a user ID and password are required to view the video, please include those as well.
Submissions will be accepted until 21 October 2011 (23:59 US Pacific Time).
Semifinalists will be notified by 24 October and asked to upload their videos to a shared site, where they can be viewed and commented on by the public.
Finalists will be chosen by a panel of judges from ISMAR and the V2_ Institute for Unstable Media, and subsequently featured at the ISMAR 2011 conference and the Shift Festival of Electronic Arts in Basel, Switzerland between 26 October and 31 October 2011.
Prize
The first-place video will receive 500€ (~680 USD).
Second place will receive 250€ (~340 USD).
This video contest is proudly sponsored by the Institute for Computer Graphics and Vision at Graz University of Technology.