Tagged: HMD

Tutorial 27052016

I had my tutorial with Jonathan on Friday. We spoke about the logistics for the final show and the best way to present my final piece. Although I’ve had some breakthroughs in producing VR content in Unity and Blender, I’m still not entirely sure which would be best for the show. On one hand, if I can get my hands on an Oculus or Vive, a PC-based Unity build would make it possible for the viewer to explore the environment as they wish. However, it’s proving difficult to secure one for the final show, so I’ll have to consider a rendered equirectangular film shown on a Google Cardboard. (I could explore simplifying the Unity build so it could run as an app, but it wouldn’t have the same impact as the PC version.) It’ll most likely have to be a rendered film. There are benefits to both options, but conceptually and aesthetically it would be better for the viewer to have full control of their movement.

Thinking through the presentation of VR in the gallery, there’s the important question of user experience. I’m apprehensive about the funfair/arcade-style queueing that I’ve seen at a number of exhibitions, but there’s nothing I can do about it. The novelty of the technology makes it very attractive to try, no matter the context, which adds pressure to the outcome. Realistically, during the private view, I’ll have to be organised, and if I were able to show the open-world version of the work I’d have to restrict the length of each viewer’s experience depending on the level of interest. This could be done a number of ways, though I’ve been considering a script that disables all colliders after a certain period of time, forcing the FirstPersonController to fall through the structures and ending the simulation (sketched below). However, again, this is for the PC version of the work.
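As a rough sketch of how that timer might work (assuming Unity’s standard FirstPersonController tagged ‘Player’; the class name and the time limit are placeholders, not final values):

```csharp
using UnityEngine;

// Hypothetical sketch: attached to an empty GameObject in the scene.
// After 'timeLimit' seconds, every collider except the player's own is
// disabled, so the FirstPersonController falls through the structures
// and the experience ends.
public class ExperienceTimeout : MonoBehaviour
{
    public float timeLimit = 300f; // seconds per viewer (placeholder value)

    private float elapsed;
    private bool triggered;

    void Update()
    {
        elapsed += Time.deltaTime;

        if (!triggered && elapsed >= timeLimit)
        {
            triggered = true;

            // Switch off every collider in the scene apart from the player's,
            // so the architecture simply stops holding the viewer up.
            foreach (Collider col in FindObjectsOfType<Collider>())
            {
                if (!col.CompareTag("Player"))
                {
                    col.enabled = false;
                }
            }
        }
    }
}
```

The time limit could then be tuned on the night depending on how busy the queue is.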

The other option is a five- or six-minute fixed animation on a loop. The biggest pro for this option is the quality of the final render, and the fact that the headset will be portable (potentially multiple headsets). The biggest problem would be the battery life of the phones.

As for the physical door, I’m going to have to go at it with a jigsaw and re-arrange it at Wilson Road. My other option was to try to borrow a horse box….

I’m currently working on my Symposium 2. The Research Paper settled my conceptual interests in the freedoms and restrictions of religion/spirituality and the internet; within this question I considered addiction, identity, disembodiment and propaganda. My interest in the relationship between user and device was inspired by Nam June Paik, and it has evolved rapidly since studying his work in more detail for the paper. This time last year we began to form our research questions, and at the time I didn’t expect the process to have such an impact on my overall practice. It has allowed me to consider the concepts in a purely academic context. Marshall McLuhan’s ‘The Medium is the Message’ and Virilio’s ‘The Information Bomb’ have been important texts during this process.

In many ways, my concept hasn’t changed for the last six or seven months: the idea of a Gateway, a device as access to an extension of physical space and identity. My interim exhibit ‘Congregation’ was also an attempt at exploring this idea. Aesthetically, I’ve tried to develop ideas from early in the MA, such as representing the multiple identities one holds online and the physical, three-dimensional make-up of everyday information.

As I’m approaching the final weeks of the MA, I’m happy to be in a position where there are aesthetic and conceptual choices that can be made rather than rushing to finish. Though I’m aware that it’s looking less likely I’ll be able to secure a headset for the final show, which makes the Unity experience I’ve been developing somewhat frustrating / partially obsolete. Depending on what happens with the headset, there may still be an 11th-hour panic!! Mainly around organising and rendering the film.

I’m still making changes to the work, and having been through a number of versions in recent days, I can see significant changes happening before the final exhibition version.


SmartGlass and Selective Transparency

2015 SVVR – The Future of Wearable Displays and Inputs

This is David Holz, co-founder of Leap Motion. Watch this lecture, or even just skim through it; there are some very interesting ideas about the future of consumer technology. He gets quite detailed about the evolution of silicon chips and image sensors as we approach the 2020s, even mentioning the potential of smart dust….

He starts by discussing the impact of the HMD (Head Mounted Display) in the coming years. There are some important leaps that must be made for this tech to truly act in the way we wish. As with all technology at the moment, the real crux of the issue is the power supply! Our advances in capability have far outpaced the evolution of energy generation and storage. Detaching the headset from the computer will be a crucial step; for the coming years, this could mean relying on the smartphone’s battery and its advances.

As with any predictions, Holz’s are based on current trends and advances, and will obviously change, BUT what he’s saying is very easy to believe.

In the lecture, the most interesting thing for me and my research was this idea of selective transparency. I recently enquired about buying an A1 sheet of electrochromic glass (SmartGlass). I knew this would be expensive and probably impossible, but I thought it would be an interesting addition to the painting I hope to make. If the sensors in the gallery tripped both the fade of a backlight and the fade from frosted to transparent glass, it could be very interesting. Unfortunately, an A1 sheet of this glass was going to be just over £1,000… so it’s on the back burner for the time being, but I’m very interested post-MA. On this subject, I’ve found an interesting hack to mimic this effect (in a way) with Sellotape and frosted Perspex: clear Sellotape on either side of the Perspex takes away the frost and creates transparency.

Anyway… the selective transparency Holz is talking about is in the HMD itself. He explains that between 2017 and 2022 we could be using headsets that switch from virtual to mixed/augmented reality at the touch of a button (or the 2020s equivalent of a button, maybe a holo-button?).

Very exciting stuff.

Back in the present day, Leap Motion have also made some advances, as shown in the video (though I’ve added some GIFs below). As the Leap Motion uses two infrared cameras, they have begun taking the camera feed of your hands and using it in place of 3D-modelled hands. Being able to recognise your own hands in a digital landscape will give a strange sense of realism to the experience. Holz explains that this pushes beyond the idea of the “uncanny valley”.

 

[GIF: interaction-engine2.gif]

[GIF: interaction-engine1.gif]

Here are a few more GIFs showing the possible interaction tech we could have with our computer software. This is very exciting: finally the mouse and keyboard won’t be the natural interface to a computer, and instead we’ll have… our hands…

[GIF: Hovercast-Slider.gif]

This is clearly brilliant for art creation.

[GIF: arm-hud-widget2.gif]

Here’s a load more to celebrate all this excitement.

[GIF: giphy-2.gif]

[GIF: giphy-3.gif]

[GIF: giphy-4.gif]

[GIF: giphy-5.gif]

[GIF: giphy]