After I didn’t secure an Oculus Rift for the final show, I turned to my phone. Luckily, Blender has an option within the panorama camera function for the output to be equirectangular (which is such a beautiful word). Its process is similar to turning this:
Although similar, it’s not entirely the same; it’s been altered for consistency and to make the map easier to read.
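As a side note on how the projection works: it’s essentially a latitude/longitude unwrap of the sphere onto a 2:1 rectangle. Here’s a toy Python sketch of the idea (the axis convention is mine, for illustration, and not necessarily the one Blender uses internally):

```python
import math

# Toy latitude/longitude ("equirectangular") unwrap: maps a 3D view
# direction to (u, v) image coordinates in [0, 1] x [0, 1].
# Illustrative convention: +z is up, -y is "straight ahead".
def dir_to_equirect(x, y, z):
    lon = math.atan2(x, -y)                              # -pi .. pi around the vertical
    lat = math.asin(z / math.sqrt(x*x + y*y + z*z))      # -pi/2 .. pi/2
    u = lon / (2 * math.pi) + 0.5                        # left edge .. right edge
    v = lat / math.pi + 0.5                              # bottom .. top
    return u, v

# Looking straight ahead lands in the centre of the frame:
print(dir_to_equirect(0, -1, 0))  # (0.5, 0.5)
```

Every pixel in the rendered frame corresponds to one such direction, which is why the poles get stretched across the whole top and bottom rows of the image.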
Below are a few examples of rendered frames that have developed along the process of creating the VR side of my work. These images are raw renders (no noise reduction, editing, colour correction, grading… NOTHING!).
There have been a number of alterations along the way with this. It’s been a lengthy, experimental yet very fulfilling and mind-altering experience… one that, if I’m honest, I cannot wait to repeat.
I’ve loved it, and feel I’ve found a real home in 360 animation. It’s flexible, manageable and with enough persistence and dedication, anything can be created.
If I’m honest about the final VR piece, I began to get frustrated with it, but Ed reminded me this week that when you spend too long looking at or listening to the same thing, you first become its worst critic, and second, after multiple setbacks, you start thinking differently about its outcome. It affects you emotionally, and if it angers you, you naturally begin to like it less.
Luckily, I think I enjoy this frustration… as long as the end result succeeds… but then again, who doesn’t? It’s the possibility of failure that excites me, alongside a belief that with enough effort and focus, anything (within reason) is achievable. (You also need good resources and, in many cases, a brilliant team.) But if it gets done, it gets done.
My VR piece is now done. So that’s positive.
Thought I’d do a quick post on the rendering process going into my VR piece. I’ve confirmed today that I’ll be displaying the piece on a Samsung Gear VR headset, which is considerably better than my iPhone 5 (bigger screen, better resolution).
The rendering has been a nightmare. I’ve learnt more about the process of rendering in the past month than I’d ever thought I’d know. I’ve gone through render layers, passes, batch edits, shell scripts, render farms, network rendering… all in the space of a month. I’ve had to find quick and creative solutions to ensure that my piece will be viewable.
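To give a flavour of the batch side of it, here’s a small Python sketch of the sort of thing I mean: splitting a frame range into chunks, one per machine, and printing a matching headless Blender call for each (the .blend name, frame numbers and machine count are just placeholders):

```python
# Split an animation's frame range into near-equal chunks, one per
# machine, and print a Blender command-line render call for each.
def frame_chunks(start, end, machines):
    total = end - start + 1
    size, extra = divmod(total, machines)
    chunks, frame = [], start
    for i in range(machines):
        # the first `extra` chunks take one extra frame each
        last = frame + size - 1 + (1 if i < extra else 0)
        chunks.append((frame, last))
        frame = last + 1
    return chunks

for s, e in frame_chunks(1, 250, 4):
    # -b: run headless, -s/-e: start/end frame, -a: render the animation
    print(f"blender -b piece.blend -s {s} -e {e} -a")
```

Each machine then just runs its one line. One gotcha worth knowing: Blender processes its arguments in order, so `-s` and `-e` have to come before `-a` or they’re ignored.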
Re-creating my VR piece has been the biggest struggle over the last 3 months. I’d originally made the environment as a Unity game for an Oculus Rift or an HTC Vive, however, due to not being able to get one for the show, I had to re-make the piece.
Early last week I managed to set up a number of farms to get the animation going:
From tomorrow onwards it will be a similar story. I’m basically just running around with a memory stick looking for computers.
I’ll do a further post on the content of the VR piece and how I’ve found ways to get around the issues of equirectangular content creation.
How things can change! I was made aware by a few people who had tried the early tests of my VR piece that the environment wasn’t necessarily a Church to them. The comparison given was the U.S. Capitol building.
I spent considerable time focusing on a religious setting, after the influence my research paper had had on my practice. From the beginning I focused on the Internet as a physical space and the architecture within it. Hearing users’ experiences made me feel that the environment is comparable to both a political and a religious setting, which, given the nature of the internet, I didn’t dislike. Though, admittedly, it wasn’t intended.
I’ve kept what was originally designed as a church/chapel, going against what I’d decided a while ago.
I’ve reflected on the Internet as an institution comparable to religion, and still believe that its use and capability create a similar social structure based on live information. For example, stories of discrimination, abuse, or any injustice are shared and commented on. These are seen and read by tens of thousands, creating a collective social view of that injustice. The main issue with this idea is that internet communities are more secluded than I’d originally thought. The stories of injustice I see on my Facebook timeline are personalised to the community I’m a part of. If I lived elsewhere, in another community, it would be a different story. (Of course this is a simple and obvious reflection that can be seen in any community, online or offline.)
This couldn’t have been made more obvious than by Brexit. Since Friday, Facebook (and most other social media) has been set alight with emotion, both positive and negative. Some of the posts have been horrendous, some very enlightening; however, the best one I read was from a friend from Nottingham uni. He wrote:
“……and proof that my newsfeed is a terrible measure of public opinion”
I don’t want to digress to Brexit, but this comment gives the perfect insight into internet communities.
Back to the religious comparison. This idea has made me realise that although the spread of information allows for quick solidarity and reaction to significant events, it is also tailored to you as an individual and therefore doesn’t give you insight into all points on the spectrum. I’ve always felt that the best way to understand a story or an idea is to read from both ends of the argument. This is in some way lost through the monetisation of the net and its focus on your individual experience (this is mostly true of social media).
I’ve gone slightly off topic but the idea of Gateway is to highlight the importance of the viewer’s input.
I will post some more recent examples of the VR experience I’m building in due course.
Below is an extract from Lev Manovich’s ‘The Language of New Media’, Chapter 1, p. 58.
‘In the 1980s, VR pioneer Jaron Lanier saw VR technology as capable of completely objectifying – better yet, transparently merging with – mental processes. His descriptions of its capabilities did not distinguish between internal mental functions, events, and processes and externally presented images. This is how, according to Lanier, VR can take over human memory: “You can play back your memory through time and classify your memories in various ways. You’d be able to run back through the experiential places you’ve been in order to be able to find people, tools.” Lanier also claimed that VR will lead to the age of “post-symbolic communication,” communication without language or any other symbols. Indeed, why should there be any need for linguistic symbols if everyone rather than being locked into a ‘prison-house of language’ (Frederic Jameson), will happily live in the ultimate nightmare of democracy – the single mental space that is shared by everyone, and where every communicative act is always ideal (Jurgen Habermas). This is Lanier’s example of how post-symbolic communication will function: “You can make a cup that someone else can pick when there wasn’t a cup before, without having to use a picture of the word ‘cup’. Here, as with the earlier technology of film, the fantasy of objectifying and augmenting consciousness, extending the powers of reason, goes hand in hand with the desire to see in technology a return to the primitive happy age of pre-language, pre-misunderstanding. Locked in virtual reality caves, with language taken away, we will communicate through gestures, body movements, and grimaces, like our primitive ancestors….”
An Extract from Life Online: Researching Real Experience in Virtual Space
by Annette N. Markham.
“Although cyberspace is nothing more or less than a network of computer systems passing digitised strings of information back and forth through copper or fibre-optic cables, people who connect to this network often feel a sense of presence when they are online. Even in purely text-based online contexts, people establish and maintain intimate friendships, romantic relationships, and stable communities. This sense of presence can be quite visceral:
‘So much for leaving our bodies out of this… This gathering is not restricted to the Net, and therefore to the text of the Net, but extends to the flesh, the physical body. In this rare case, uncannily, even though online, we feel we meet in the flesh…. Everywhere we rub shoulders with each other. Everywhere users present themselves to each other, freely saying and doing what they choose. (Argyle, 1996, passim).’
Online communication does seem quite extraordinary. By logging onto my computer, I (or a part of me) can seem to (or perhaps actually) exist separately from my body in ‘places’ formed by the exchange of messages, the technical basis of which I am only beginning to understand. I can engage in activities with people of like interests around the globe using nothing but my computer, my imagination, written text, and the capacity of digital code to process and mediate aspects of my life online.
Telepresence, as this is called, is not unique to computer technologies. Indeed, a good novel, a familiar scent from the past, or a long-lost journal can transport a person to another time and place. For many of us, however, the feeling of being somewhere other than in the body with some other non-embodied yet presumably living being – particularly to the extent Argyle describes above – is a new and unfamiliar experience.”
Gazelli Art House, for me, is really trying to push the inclusion of digitally based art in its exhibition schedule. Every time I go back there’s a new show with a digital twist. This time it was phenomenal, but at the same time bizarre and worrying.
I’m a huge advocate for Virtual Reality in fine art… of course… but there was a practical issue that many on the course had noticed at Jon Rafman’s exhibition.
The gallery had become an arcade or funfair, where the audience queue to experience the work. At Jon Rafman’s show, the anticipation was created by the scale of the maze and the works in the other rooms, which meant you didn’t mind queueing (it was also my first experience with a headset). At Gazelli, however, there were three extraordinary VR experiences in two small rooms. Each had a long queue, and it really did put you off. The redeeming feature was the reaction of those coming out from under the headsets. You can see in the way they move and react to others around them that they feel as if they’ve truly been somewhere else.
The piece I most enjoyed was an experience that played on the idea of crossing between physical and digital space. The artist stood next to a plinth with a small cardboard house. You stand in front of the model, put on the headset, and all of a sudden (as is the wonder of VR) you’re in the same room but alone, without a body, and the little house is glowing. I haven’t had the opportunity to get the Oculus Rift head tracker working with a Mac yet. Unfortunately it’s a known issue, so I’ll have to wait for a powerful PC. This experience used the head tracking beautifully. As you peer into the windows of the house, you’re suddenly transported inside, where there are paintings and sculptures to see. Another technical aspect worth mentioning was the perfect alignment of a lever on the plinth with the lever in the experience. You don’t have any hands in the experience, so reaching out for a lever should be difficult to coordinate… but it was exactly where it was in reality. Very well mapped.
The experiences themselves are in many ways gimmicky. It’s such an exciting medium at such an early stage that the content created is going to be simple and crude. (This may be a mad comment,) but it reminds me of the Impressionists with tubed oil paints. Look what they ended up creating!
Definitely worth seeing this show, though I’m not sure about the title. It lends itself too much to medium over concept, and plays on the famous phrase “Exit Through the Gift Shop”.
I’ve been making some big changes to the VR side of my final piece. I’ve (just about) got over the fact that incorporating live news media won’t be possible, so I’m having to go down other routes. If I’m honest, it’s a real blow. I wonder whether, if I’d continued more ruthlessly with three.js at the end of last year rather than getting interested in Unity, the outcome would have been different, but time is running out and I can’t afford to keep experimenting without a certain outcome. At least I know that with funding I could do it, whether that funding went towards hiring a programmer to create a Unity script using CEFGlue to scrape browser data, or just towards buying Coherent UI.
Anyway, I’m over it, and going with Plan B.
I’m going to create video collages of browser screen grabs and news recordings. This does allow more flexibility with the narrative around misinformation, religion, and technology and media usage.
I’ve been trying to design the chapel for the piece, and have decided to give the Cycles render engine a rest for the design process. Blender Render gives a simple, model-like finish. These are the native models without any texturing. I want to complete the basic infrastructure of the level before focusing too much on detail. These details will include altars, figures, pews, satellites and monitors. I hope to begin the texturing at the beginning of May.
I should mention that I’m unsure about the presentation when in VR. Part of me wants to put all the focus on the interior. I’ll decide when it’s more complete. Conceptually it makes more sense to stay inside the chapel, but aesthetically it’s nice to be outside… A lot to think about.
The model is far from done, but these are the Blender Renders as of lunchtime today:
Actually… I have made a change since then… I added a window.
I’ve placed the new model in Unity, and here are a few screenshots of it in action (lots of work to do). As explained before, these are process shots, so there are currently no textures or lighting other than the smokescreen.
2015 SVVR – The Future of Wearable Displays and Inputs
This is David Holz, co-founder of Leap Motion. Watch the lecture, or even just skim through it. There are some very interesting ideas about the future of consumer technology. He gets quite detailed about the evolution of silicon chips and image sensors as we approach the 2020s, even mentioning the potential of smart dust…
First, he discusses the impact of the HMD (head-mounted display) in the coming years. There are some important leaps that must be made for this tech to truly act in the way we wish. As with all technology at the moment, the real crux of the issue is the power supply! Our advances in capability have far outstripped the evolution of energy development and storage. Detaching the headset from the computer will be a crucial step. For the coming years, this could mean it’s reliant on the smartphone’s battery and its advances.
As with any predictions, Holz’s are based on current trends and advances, and will obviously change BUT what he’s saying is very easy to believe.
In the lecture, the most interesting thing for me and my research was this idea of selective transparency. I recently enquired about buying an A1 sheet of electrochromic glass (Smartglass). I knew this would be expensive and probably impossible, but I thought it would be an interesting addition to the painting I hope to make. If the sensors in the gallery tripped both the fade of a backlight and the fade from frosted to transparent glass, it could be very interesting. Unfortunately an A1 sheet of this glass was going to be just over £1,000… so it’s on the back burner for the time being, but I’m very interested post-MA. On this subject, I’ve found an interesting hack to mimic this effect (in a way) with sellotape and frosted perspex: see-through sellotape on either side of the perspex takes away the frost and creates transparency.
Anyway… the selective transparency Holz is talking about is in the HMD. He explains that between 2017 and 2022 we could be using headsets that switch from virtual to mixed/augmented reality at the touch of a button (or the 2020s equivalent of a button, maybe a holo-button?).
Very exciting stuff.
Back in the present day, Leap Motion have also made some advances, as shown in the video (though I’ve added some GIFs below). As the Leap Motion uses two infrared cameras, they have begun taking the camera feed of your hands and using it instead of 3D-modelled hands. Being able to recognise your own hands in a digital landscape will give a strange sense of realism to the experience. Holz explains that this pushes beyond the idea of the “uncanny valley”.
Here are a few more GIFs showing the possible interaction tech we could have with our computer software. This is very exciting: finally the mouse and keyboard won’t be the natural interface to a computer, and instead we’ll have… our hands…
This is clearly brilliant for art creation.
Here’s a load more to celebrate all this excitement.