I began the course looking at translating 2D pencil drawings into three-dimensional objects in software such as Blender. I take a lot of photos in my free time, and realised that there were assets amongst my photos that I wasn’t utilising. Using the same process, I made simplistic line drawings of photographs. These photographs are of passers-by in train stations, on beaches, in the street, in galleries, pubs, anywhere really.
In the animation there are a number of three-dimensional figures made from these photographs. It goes back to one of the first things I did on the course, at our laser cutter induction in the old wood workshop (may it rest in peace).
Here are some images that give more of an insight into the process behind the figures in my VR animation.
Below are the UV maps of the pillars. These include Specularity, Occlusion, Displacement, Normal and Diffuse (Colour) maps. Each of these has a different effect on the overall texture of the mesh.
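To give a rough sense of how those maps interact, here’s a toy Python sketch of how a renderer might combine their values at a single pixel. This is a deliberate simplification of my own, not Cycles’ actual shading model, and all the names and numbers are illustrative.

```python
# Toy per-pixel shading: a simplified combination of the five map types.
# Not Blender's real shading code -- just an illustration of each map's role.

def shade_pixel(diffuse, specular, occlusion, normal_dot_light):
    """Combine map samples for one pixel.

    diffuse:          base colour (r, g, b), each 0..1 (Diffuse/Colour map)
    specular:         0..1 highlight strength (Specularity map)
    occlusion:        0..1 ambient occlusion, 1 = fully exposed (Occlusion map)
    normal_dot_light: 0..1 cosine between the normal-mapped surface normal
                      and the light direction (the Normal map's effect)

    The Displacement map moves geometry rather than tinting it, so it has
    no direct term here.
    """
    highlight = specular * normal_dot_light ** 32   # crude specular lobe
    return tuple(
        min(1.0, (c * normal_dot_light + highlight) * occlusion)
        for c in diffuse
    )

# A fully occluded pixel goes black regardless of its diffuse colour:
print(shade_pixel((0.8, 0.2, 0.2), specular=0.5, occlusion=0.0,
                  normal_dot_light=1.0))  # (0.0, 0.0, 0.0)
```

The point of the sketch is only that each map scales or perturbs a different stage of the calculation, which is why swapping one out changes the look of the whole mesh.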
When I first started with Blender in Dec 2014 / Jan 2015 I couldn’t go anywhere near these. UV mapping was utterly alien, but after a long period of forced learning I began to see the benefits. As soon as a new skill becomes necessary to achieve what you want to achieve, I find it’s easy to learn (as long as you have an outcome you wish to produce). Learning something from scratch without a reason to use it is lengthy and broad. Of course, you can utterly master the skill, but with computer technology the software moves at an astounding rate. I find it more interesting to keep flowing through software.
Thought I’d do a quick post on the rendering process going into my VR piece. I’ve confirmed today that I’ll be displaying the piece on a Samsung Gear headset, which is considerably better than my iPhone 5. (Bigger screen, better resolution.)
The rendering has been a nightmare. I’ve learnt more about the process of rendering in the past month than I ever thought I would. I’ve gone through render layers, passes, batch edits, shell scripts, render farms, network rendering… all in the space of a month. I’ve had to find quick and creative solutions to ensure that my piece will be viewable.
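The “running around with a memory stick” workflow really boils down to splitting the animation’s frame range across whatever machines are free. A minimal sketch of that split, as I understand it: the frame-chunking logic is mine, and while the printed `blender` flags are its real command-line options (`-b` background, `-s` start frame, `-e` end frame, `-a` render animation), `scene.blend` is just a placeholder filename.

```python
# Naive frame-range splitter for farming an animation render out across
# several machines by hand. "scene.blend" is a placeholder.

def split_frames(start, end, machines):
    """Divide frames start..end (inclusive) into one chunk per machine."""
    total = end - start + 1
    base, extra = divmod(total, machines)
    chunks, frame = [], start
    for i in range(machines):
        size = base + (1 if i < extra else 0)  # spread the remainder evenly
        chunks.append((frame, frame + size - 1))
        frame += size
    return chunks

# One background-render command per machine:
for s, e in split_frames(1, 250, 4):
    print(f"blender -b scene.blend -s {s} -e {e} -a")
```

Each machine then renders its chunk to numbered image files, which get collected (by memory stick, in my case) and assembled afterwards.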
Re-creating my VR piece has been the biggest struggle of the last three months. I’d originally made the environment as a Unity game for an Oculus Rift or an HTC Vive; however, since I couldn’t get hold of either for the show, I had to re-make the piece.
Early last week I managed to set up a number of farms to get the animation going:
From tomorrow onwards will be a similar story. I’m basically just running around with a memory stick looking for computers.
I’ll do a further post on the content of the VR piece and how I’ve found ways to get around the issues of equirectangular content creation.
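For a bit of context on why equirectangular content is fiddly: every 3D viewing direction from the camera has to be flattened onto a 2:1 rectangle, which is where the stretching and seam problems come from. Here’s a sketch of that projection in Python; the axis convention (−Z forward, +Y up, +X right) is an assumption for the example, not a fixed standard.

```python
import math

# Map a 3D view direction onto (u, v) coordinates of an equirectangular
# image, with u and v in 0..1. Axis convention (an assumption here):
# -Z is forward, +Y is up, +X is right.

def direction_to_equirect(x, y, z):
    length = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / length, y / length, z / length
    u = 0.5 + math.atan2(x, -z) / (2 * math.pi)  # longitude -> horizontal
    v = 0.5 - math.asin(y) / math.pi             # latitude  -> vertical
    return u, v

# Looking straight ahead lands in the centre of the image:
print(direction_to_equirect(0.0, 0.0, -1.0))  # (0.5, 0.5)
```

Because the whole top and bottom rows of the image collapse to single points (the poles), detail there is badly distorted, which is one of the issues any equirectangular rendering pipeline has to work around.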
Over the weekend I held my first open studios at Coldharbour Studios. The community do two a year. I attended the one at Christmas but didn’t have much to show, so wasn’t involved. This time I felt more confident showing some work as I’d had a little more time to prepare.
It was an amazing experience. I had some very interesting conversations with a range of different people. The feedback was invaluable and I felt very comfortable discussing and listening to interpretations.
The best part of the weekend, by far, was the reaction of those who hadn’t tried VR before. It is a gimmick, a HUGE gimmick, but it’s still fun. I showed a small 20-second demo of my final piece on a loop. I still haven’t managed to get my hands on a proper headset (and won’t be able to for the final show), so I’ve resorted to a slightly better version of a Google Cardboard and an iPhone. The response was still promising and I’m excited to show the finished version in a few weeks’ time! I’m hoping to get a Samsung Gear for this, as the resolution and screen size are significantly better.
Here are some photos from the weekend:
I’ve been making some big changes to the VR side of my final piece. I’ve (just about) got over the fact that incorporating live news media won’t be possible, so I’m having to go down other routes. If I’m honest, it’s a real blow. I wonder whether the outcome would have been different if I’d continued more ruthlessly with three.js at the end of last year rather than getting interested in Unity, but time is running out and I can’t afford to keep experimenting without a certain outcome. At least I know that with funding I could do it, whether that funding went towards hiring a programmer to create a Unity script using CEFGlue to scrape browser data, or just towards buying Coherent UI.
Anyway, I’m over it, and going with Plan B.
I’m going to create video collages of browser screen grabs and news recordings. This does allow more flexibility with the narrative on misinformation and religious technology and media usage.
I’ve been trying to design the chapel for the piece, and have decided to give the Cycles renderer a rest for the design process. Blender Render gives a simple, model-like finish. These are the native models without any texturing. I want to complete the basic infrastructure of the level before focussing too much on detail. These details will include altars, figures, pews, satellites and monitors. I hope to begin the texturing at the beginning of May.
I should mention that I’m unsure about the presentation when in VR. Part of me wants to put all the focus on the interior. I’ll decide when it’s more complete. Conceptually it makes more sense to stay inside the chapel, but aesthetically it’s nice to be outside… A lot to think about.
The model is far from done, but these are the Blender Renders as of lunchtime today:
Actually… I have made a change since then… I added a window.
I’ve placed the new model in Unity, and here are a few screenshots of it in action (lots of work to do). As explained before, these are process shots, so there are currently no textures or lighting other than the smokescreen.
2015 SVVR – The Future of Wearable Displays and Inputs
This is David Holz, co-founder of Leap Motion. Watch this lecture, or even just browse over it. There are some very interesting ideas about the future of consumer technology. He gets quite detailed about the evolution of silicon chips and image sensors as we approach the 2020s, even mentioning the potential of smart dust…
At first he discusses the impact of the HMD (Head-Mounted Display) in the coming years. There are some important leaps that must be made for this tech to truly act in the way we wish. As with all technology at the moment, the real crux of the issue is the power supply! Our advances in capability have far outstripped the evolution of energy development and storage. Detaching the headset from the computer will be a crucial step. For the coming years, this could mean the headset is reliant on the smartphone’s battery and its advances.
As with any predictions, Holz’s are based on current trends and advances, and will obviously change, BUT what he’s saying is very easy to believe.
In the lecture, the most interesting thing for me and my research was this idea of selective transparency. I recently enquired about buying an A1 sheet of electrochromic glass (smart glass). I knew this would be expensive and probably impossible, but I thought it would be an interesting addition to the painting I hope to make. If the sensors in the gallery tripped both the fade of a backlight and the fade from frosted to transparent glass, it could be very interesting. Unfortunately, an A1 sheet of this glass was going to be just over £1000… so it’s on the back burner for the time being… but I’m very interested post-MA. On this subject, I’ve found an interesting hack to mimic this effect (in a way) with sellotape and frosted perspex: see-through sellotape on either side of the perspex takes away the frost and creates transparency.
Anyway… the selective transparency Holz is talking about is in the HMD. He explains that between 2017 and 2022 we could be using headsets that switch from virtual to mixed / augmented reality at the touch of a button (or the 2020s equivalent of a button, maybe a holo-button?).
Very exciting stuff.
Back to the present day: Leap Motion have also made some advances, as shown in the video (and in the GIFs I’ve added below). As the Leap Motion uses two infrared cameras, they have begun taking the camera feed of your hands and using it in place of 3D-modelled hands. Being able to recognise your own hands in a digital landscape gives a strange sense of realism to the experience. Holz explains that this pushes beyond the idea of the “uncanny valley”.
Here are a few more GIFs showing the possible interaction tech we could have with our computer software. This is very exciting: finally the mouse and keyboard won’t be the natural interface to a computer, and instead we’ll have… our hands…
This is clearly brilliant for art creation.
Here’s a load more to celebrate all this excitement.
Well… it appears that the Facebook-owned company has gone another step further in their virtual reality dream, and created a device that allows the user to touch things within the virtual environment…
Oculus Touch has two controllers, one for each hand, and vibrations move through the hand, simulating this virtual sense of touch. What are the possibilities of this for virtual artwork? Unlike galleries today, where the work is cordoned off with motion sensors and the beady eyes of invigilators, could this bring a new element to art and the audience’s experience?
To be honest, it looks like a PS4 controller that has been sliced in half and given a halo, but it’s probably one of the more revolutionary designs for console controllers since the Nintendo Wii, or, for lack of a controller, the Xbox Kinect. Then again, these things should be judged on their application and ability, not their looks alone.
We’ll have to wait and see…. but in the meantime, the dream is exciting.
It’s pretty clear that Oculus Rift is getting closer and closer to being a viable commercial product with a lot of possibility, then again, so are many of its competitors.
I still haven’t tried one! Which gets more frustrating each day, as I’d love to get my hands on the developer kit. My nerdy, techy side needs to be indulged.