Tagged: Blender

VR Texturing

Texturing:

Below are the UV maps of the pillars. These include Specularity, Occlusion, Displacement, Normal and Diffuse (Colour) maps. Each of these has a different effect on the overall texture of the mesh.
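
For anyone curious how the five maps combine, here is a minimal sketch of the sort of Cycles material they feed into, using Blender's (2.7x-era) Python node API. The function name and file paths are placeholders, not my actual project files:

```python
# A minimal sketch: wiring Diffuse, Occlusion, Specularity, Normal and
# Displacement maps into one Cycles material via Blender's 2.7x Python API.
import bpy

def build_pillar_material(name, diffuse_path, normal_path, spec_path, ao_path, disp_path):
    mat = bpy.data.materials.new(name)
    mat.use_nodes = True
    nodes, links = mat.node_tree.nodes, mat.node_tree.links
    nodes.clear()

    out = nodes.new('ShaderNodeOutputMaterial')
    diffuse = nodes.new('ShaderNodeBsdfDiffuse')
    glossy = nodes.new('ShaderNodeBsdfGlossy')
    mix = nodes.new('ShaderNodeMixShader')

    def image_node(path, non_color=False):
        node = nodes.new('ShaderNodeTexImage')
        node.image = bpy.data.images.load(path)
        if non_color:
            node.color_space = 'NONE'   # normal/spec/AO/disp maps are data, not colour
        return node

    # Diffuse (Colour) multiplied by the Occlusion map darkens the crevices
    mult = nodes.new('ShaderNodeMixRGB')
    mult.blend_type = 'MULTIPLY'
    mult.inputs['Fac'].default_value = 1.0
    links.new(image_node(diffuse_path).outputs['Color'], mult.inputs['Color1'])
    links.new(image_node(ao_path, True).outputs['Color'], mult.inputs['Color2'])
    links.new(mult.outputs['Color'], diffuse.inputs['Color'])

    # Normal map perturbs the shading; Displacement feeds the output socket
    normal_map = nodes.new('ShaderNodeNormalMap')
    links.new(image_node(normal_path, True).outputs['Color'], normal_map.inputs['Color'])
    links.new(normal_map.outputs['Normal'], diffuse.inputs['Normal'])
    links.new(normal_map.outputs['Normal'], glossy.inputs['Normal'])
    links.new(image_node(disp_path, True).outputs['Color'], out.inputs['Displacement'])

    # Specularity map decides how much glossy sits over the diffuse base
    links.new(image_node(spec_path, True).outputs['Color'], mix.inputs['Fac'])
    links.new(diffuse.outputs['BSDF'], mix.inputs[1])
    links.new(glossy.outputs['BSDF'], mix.inputs[2])
    links.new(mix.outputs['Shader'], out.inputs['Surface'])
    return mat
```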


Vaults UV:

Vaults

When I first started with Blender in Dec 2014 / Jan 2015 I couldn’t go anywhere near these. UV mapping was utterly alien, but after a long period of forced learning I began to see the benefits. As soon as a new skill becomes necessary to achieve what you want to achieve, I find it’s easy to learn (as long as you have an outcome you wish to perform). Learning something from scratch without a reason to use it is lengthy and broad. Of course, you can utterly master a skill that way, but with computer technology the software moves at an astounding rate. I find it more interesting to keep flowing through software.

Tutorial 27052016

I had my tutorial with Jonathan on Friday. We spoke about the logistics for the final show and the best way to present my final piece. Although I’ve had some breakthroughs in producing VR content in Unity and Blender, I’m still not entirely sure which would be best for the show. On one hand, if I can get my hands on an Oculus or Vive, a PC-based Unity build would make it possible for the viewer to explore the environment as they wish. However, it’s proving difficult to get hold of one for the final show, so I’ll have to consider a rendered equirectangular film shown on a Google Cardboard. (I could explore simplifying the Unity build to allow it to be app-based, but it wouldn’t have the same impact as the PC version.) It’ll most likely have to be a rendered film. There are benefits to both options, but conceptually and aesthetically it’d be better for the viewer to have full control of their movement.

Thinking through the presentation of VR in the gallery, there’s the important question of user experience. I’m apprehensive about the funfair/arcade-style queueing that I’ve seen at a number of exhibitions, but there’s nothing I can do about it. The newness of the technology makes it very attractive to try no matter the context, which adds pressure to the outcome. Realistically, during the private view, I’ll have to be organised, and if I were able to use the open-world version of the work I’d have to restrict the length of each viewer’s experience depending on interest. This can be done a number of ways, though I’ve been considering a script that cancels all colliders after a certain period of time, forcing the FirstPerson to fall through the structures and ending the simulation. However, again, this is for the PC version of the work.

The other option is a 5 to 6 minute fixed animation on a loop. The biggest pro for this option is the quality of the final render, and the fact that the headset will be portable (potentially multiple headsets). The biggest problem would be the battery life of the phones.

As for the physical door, I’m going to have to go at it with a jigsaw and re-arrange it at Wilson Road. My other option was to try to borrow a horsebox…

I’m currently working on my Symposium 2. The Research Paper settled my conceptual interests in the freedoms and restrictions of religion/spirituality and the internet. Within this question I considered addiction, identity, disembodiment and propaganda. My interest in the relationship between user and device was inspired by Nam June Paik, and it has evolved rapidly since studying his work in more detail for the paper. This time last year, we began to form our research questions, and at the time I didn’t expect it to have such an impact on my overall practice. It has allowed me to consider the concepts in a purely academic context. Marshall McLuhan’s ‘The Medium is the Message’ and Virilio’s ‘The Information Bomb’ have been important texts during this process.

In many ways, my concept hasn’t changed for the last 6 or 7 months: the idea of a gateway, a device as access to an extension of physical space and identity. My interim exhibit ‘Congregation’ was also an attempt at exploring this idea. Aesthetically, I’ve tried to develop ideas from early in the MA, such as representing the multiple identities one holds online and the physical, 3-dimensional make-up of everyday information.

As I’m approaching the final weeks of the MA, I’m happy to be in a position where there are aesthetic and conceptual choices to be made rather than a rush to finish. Though I’m aware that it’s looking less likely I’ll be able to secure a headset for the final show, which makes the Unity experience I’ve been developing somewhat frustrating / partially obsolete. Depending on what happens with the headset, there may still be an 11th-hour panic!! Mainly the issue of organising and rendering the film.

I’m still making changes to the work, and having been through a number of versions in recent days, I can see significant changes happening before the final exhibition version.

Updates – New Chapel Designs, Media Plan B, Unity Screenshots 20042016

I’ve been making some big changes to the VR side of my final piece. I’ve (just about) got over the fact that incorporating live news media won’t be possible, so I’m having to go down other routes. If I’m honest, it’s a real blow. I wonder whether the outcome would have been different if I’d continued more ruthlessly with three.js at the end of last year rather than getting interested in Unity, but time is running out and I can’t afford to keep experimenting without a certain outcome. At least I know that with funding I could do it, whether that funding went towards hiring a programmer to create a Unity script using CEFGlue to scrape browser data, or towards simply buying Coherent UI.

Anyway, I’m over it, and going with Plan B.

I’m going to create video collages of browser screen grabs and news recordings. This allows more flexibility with the narrative on misinformation, religious technology and media usage.

I’ve been trying to design the chapel for the piece, and have decided to give the Cycles render engine a rest for the design process. Blender Render gives a simple, model-like finish. These are the native models without any texturing. I want to complete the basic infrastructure of the level before focussing too much on detail. These details will include altars, figures, pews, satellites and monitors. I hope to begin the texturing at the beginning of May.
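
For anyone following along in Blender, the engine switch itself is trivial. A rough sketch (2.7x-era API; the output path is a placeholder):

```python
# Swap to Blender Internal for quick, model-like design renders,
# then back to Cycles for the final textured pass.
import bpy

scene = bpy.context.scene
scene.render.engine = 'BLENDER_RENDER'       # fast, clay-like finish
scene.render.filepath = '/tmp/chapel_draft'  # placeholder output path
bpy.ops.render.render(write_still=True)

# Later, once texturing starts:
scene.render.engine = 'CYCLES'
```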

I should mention that I’m unsure about the presentation when in VR. Part of me wants to put all the focus on the interior. I’ll decide when it’s more complete. Conceptually it makes more sense to stay inside the Chapel, but aesthetically it’s nice to be outside… A lot to think about.

The model is far from done, but these are the Blender Renders as of lunchtime today:

Chapel20042165.png

Chapel20042166.png

Chapel20042169.png

Chapel20042162.png

Actually… I have made a change since then… I added a window.

Chapel2004201612.png

I’ve placed the new model in Unity, and here are a few screenshots of it in action. (Lots of work to do.) As explained before, these are process shots, so there are currently no textures or lighting other than the smokescreen.

Screen Shot 2016-04-20 at 17.06.09

Screen Shot 2016-04-20 at 17.06.43

Screen Shot 2016-04-20 at 17.07.19

Screen Shot 2016-04-20 at 17.07.35

Screen Shot 2016-04-20 at 17.07.49

Screen Shot 2016-04-20 at 17.08.49

What once was a phony Fireplace grew to be a Door

IMG_6263-2

IMG_6269

Having looked at the prospect of the internet as a physical environment, I’ve come to think: what better way of showing that than to create an element of classical architecture? It’s perhaps all a little too literal and like something out of Stargate, but it does the job.

I magically came across the mantelpiece of a fireplace. It was sitting by the bins outside Coldharbour Studios. It seemed unloved, and I knew instantly that it was something I’d want to play with. Having hauled the stash into the studio, it remained there for some time with no purpose. My initial thought was to place a screen in it, but having reviewed and reflected on some of my previous work, I saw an opportunity to realise my hopes for the Ultrasonic circuit.

The majority of my research has considered the importance of the device and how it acts as our access to new information. My interim piece last year ‘Congregation’ considered it as the gateway to the internet. Here, I hope to take that concept further, and more literally. (As mentioned before in the Stargate reference.)

Back to how this came about… After staring at this pointless, hollow fireplace taking up half of my space, I realised it was perfect as the frame of a classical doorway such as the one used here:

IMG_1423.jpg

stagedoor.jpg

Or this:

IMG_1422.jpg

So I got to work transforming a fireplace into a door….

Here’s how I’ve got on so far.


It’s very primitive at the moment, but it gives an impression of what will be done. Alongside this, I’ve been working on the digital side of the doorway:


The main chapel is intact. There is a lot of detailing and texturing to do before getting it into Unity. Once in Unity, I’ll be focussing on the different strands of media presented in the main hall of the Chapel.

Best to say this now: I’ve never noticed more of a difference between the digital and physical work processes than since approaching this piece. Whether they come together well is yet to be seen, but I’m positive that something will come out of it.

The only large bump in the road so far is displaying live news in Unity. I was almost successful with a plugin from Coherent Labs, BUT they only gave me a 30-day free trial, and it costs £3,000… So unless I can convince them to offer me another 30-day free trial during the exhibition, I’m going to have to re-think embedding live news channels in the chapel. Another option is to stream from webcams facing live screens. This would require multiple news channels to be on multiple screens at some other location for the duration of the final show… Doesn’t seem feasible. I’ve been playing around with CEFGlue but again, no luck. The only other way I can think of that may be feasible is screen casting onto a plane in Unity. Surely if it’s possible to display a webcam’s output, it’s possible to display the screen.

This is my current snag, hopefully it’ll be solved, but I’m beginning to think of a contingency, mainly by using video clips rather than live news (which is tragic!!)


sad-emoji.jpg


Monitor 05012016

I’m considering my options with objects that can be both physical and digital. I like the idea of being able to touch something in VR and, when you take the HMD off, find it still there in front of you. Today’s attempts at texturising the light paths with media have been terrible… An example of how terrible is below. (It’s an extremely close-up image of the American flag, on top of these lines.)

VSRender305012016.png

The plan is to make the projectors much smaller, to accommodate the images at that scale. Once the first projector makes a stable and identifiable image, it will be a matter of copying and pasting, and changing the images, as sketched below.
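
A hedged sketch of that copy-and-swap step in Blender Python (2.7x API). The object name, node layout and file path are assumptions about my own scene, not fixed names:

```python
# Duplicate the working projector lamp and point its image texture at a new
# file. 'Projector' and the offset are placeholders for my own scene.
import bpy

def clone_projector(source_name, new_image_path, offset=(2.0, 0.0, 0.0)):
    src = bpy.data.objects[source_name]
    dup = src.copy()
    dup.data = src.data.copy()               # own lamp datablock, own node tree
    bpy.context.scene.objects.link(dup)      # 2.7x-style scene linking
    dup.location = [a + b for a, b in zip(src.location, offset)]

    # Swap the image on the duplicated lamp's texture node
    for node in dup.data.node_tree.nodes:
        if node.type == 'TEX_IMAGE':
            node.image = bpy.data.images.load(new_image_path)
    return dup

clone_projector('Projector', '/path/to/next_news_grab.png')
```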

The focus of these images is on news channels, “talking heads” and the miscommunication between their messages. I’m not sure if anyone noticed yesterday’s fallacy of news reporting on Sky News. A recent ISIS propaganda film has swept its way through our news, even though we don’t want their propaganda to affect our daily lives. During the Sky News broadcast on the subject, they began a live interview with an expert in counter-terrorism. His comments were that we should be aware that ISIS are a credible threat, but that we shouldn’t give in to their propaganda… The interview went on for a good 5 to 10 minutes. At the end, the presenter re-iterated what the expert had said, that they are a credible threat and we shouldn’t give in to their propaganda, with a banner above him displaying the man from the video and the title “ISIS release new film”… The report continued for a further length of time, showing images of the British spies kneeling before the jihadists.

This story has continued to be a featured part of the news since then, with comments from David Cameron all over the newspapers yesterday and this morning.

Its irony is mesmerising, and it happens again and again. This is an example of the misinformation and hypocritical values in the news that I was interested in representing last April with ‘Monitor’. Their persistence in calling them ISIS (a Westernised baddie) goes against the will of our own parliament, which now recognises them as Daesh (a term they despise).

Current live news reporting represents a terrifying mistreatment of the freedom of information. Paranoia is created throughout the country, and beyond, through the broadcasting of these propagandist films (exactly what Daesh had hoped for). A friend of mine said recently that they could imagine the sense of unease and tension in London at the moment being comparable to the Blitz… other than the fact that there hasn’t been a similar attack, whereas the Blitz saw consistent air raids on a daily or weekly basis that caused significant damage and hordes of casualties.

And the source of this paranoia? The representation of these stories in our own media. There’s no doubt that we in Britain are shaken by what has happened around the world, but our public information channels shouldn’t amplify the message until we feel uncomfortable in our own homes (the sole purpose of ISIS’s propaganda messages). Surely they should inform, guide in the event of an attack, and reassure.

Ex-Security Minister Baroness Neville-Jones said on BBC Radio 4 that she was alarmed at the British public’s lack of awareness of their everyday surroundings… (Two people have bumped into me today because they were looking at their phones.)

This story trended in the media. It makes the public aware of the threat, whilst also guiding them to be safer in the event of an attack. Though… you could also argue that this is exactly the kind of thing that stirs paranoia. Either way, I’d argue that it’s better than giving ISIS videos the air-time they want on our public news channels.

http://www.huffingtonpost.co.uk/2016/01/02/baroness-neville-jones-spy-chief-terror-level_n_8904572.html

It’s interesting that the obsession (or addiction) we have to our online content and personal devices could, in the event of a terrorist attack, be a threat to our own safety. In fact, it could be a threat during strong winds (falling branches etc.). How can we interact with our personal computers whilst still remaining alert and aware of our physical surroundings? The only answer that seems feasible would be the inclusion of heads-up displays and augmented vision.

Below is a video with a demonstration of what can happen at a busy junction:

The shoes below were first thought of in 2012, but there’s an interesting idea / joke in here. The shoes use GPS to guide you home if you’re lost… If there were vibrations in each shoe telling you to stop, go left or go right, fed live data from a smartphone’s GPS positioning, perhaps we could all walk around looking at our phones without having to look up to stop ourselves from bumping into one another… This, or something similar, seems to be one of the only possibilities for countering this awareness issue without heads-up displays.

gps-no-place-like-home-shoes-1.jpg

Just a thought.

Sam.jpg

Anyway, the low-res images below are a more stripped-down version of Monitor (still with no textures).

Monitor305012016-2.jpg

Monitor105012016-2

Monitor105012016

Further Blender Projection Tests 18122015

I’ve continued playing with projectors on the surfaces of architectural designs (in Blender). As I’m trying to texturise 3D models with online media (most likely screen-captured videos of news channels), I’ve been testing the technique’s potential. These are the second round of tests.

12122015System-3.jpg

12122015System-4.jpg

121220152System.jpg


30112015 – 01122015 Virtual Projection Tests

I’ve been looking for a way to do this for some time and FINALLY have made some headway. I’ve wanted to re-create the effects of a projector within Blender. These, like many of my tests, are very simple and have no conceptual framework. They are proof of process, and essential for looking back at how my skills in this area have progressed.

Looking at the work I’ve done throughout the MA, the impact of the screen and device on the user is an essential element. The interaction between you (the physical) and what the device presents (the digital) has been a perfect example of the imbalance between these polar opposites. I felt that I best achieved this through my work at Digital Meze, especially as it seemed to sum up a lot of images I had created beforehand that focused on the user and the television or computer monitor.

‘Monitor’ projected a live stream of BBC News into an altered CRT television to highlight the complexity of misinformation in the news. Using live news feeds in my work has been an interest of mine for some time, and although this worked conceptually, the way it was presented through ‘Monitor’ wasn’t quite what I’d hoped.

In these tests, I’ve textured spot lights with images and increased the emission value to create the effect. I’ve tested this with screen-captured videos of live news feeds, but the render time is slow, so it will take a while to produce those examples. Whether it’s possible to stream a live news feed within this is not yet known, but it feels like a crucial step for further work.
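
For the record, a minimal sketch of the effect as I understand it, assuming a 2.7x-era Cycles lamp node tree. The lamp name, image path and strength value are placeholders:

```python
# Give the spot lamp a node tree, feed an image into its Emission colour, and
# raise the Strength. Driving the image's vector from the texture coordinates'
# Normal output spreads it across the spot cone rather than tinting the light.
import bpy

lamp = bpy.data.lamps['Spot']            # placeholder lamp name
lamp.use_nodes = True
nodes, links = lamp.node_tree.nodes, lamp.node_tree.links

emission = nodes['Emission']             # present in the default lamp tree
tex = nodes.new('ShaderNodeTexImage')
tex.image = bpy.data.images.load('/path/to/news_still.png')   # placeholder

coord = nodes.new('ShaderNodeTexCoord')
links.new(coord.outputs['Normal'], tex.inputs['Vector'])
links.new(tex.outputs['Color'], emission.inputs['Color'])
emission.inputs['Strength'].default_value = 250.0   # increased emission value
```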

BLENDER’s CYCLES, UNITY + THREE.JS:

Cycles materials are notoriously difficult to reproduce in three.js, and similarly in Unity. Baking the textures into UV maps is really the only way forward. It is very possible to produce systems and works this way with still images of media, but including videos or live streams is a different ball game. It seems possible that if a light source can be texturised in a Python-scripted engine such as Blender, a light source in JavaScript could be used similarly. I’m yet to establish this, though it doesn’t seem impossible.
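
A rough sketch of the baking step (2.7x-era bpy, which added `bpy.ops.object.bake` for Cycles). The object and image names are placeholders, and it assumes the mesh is UV-unwrapped with an image texture node active as the bake target:

```python
# Bake the Cycles result into the UV-mapped image so Unity or three.js can
# treat it as an ordinary texture.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.objects.active = bpy.data.objects['Chapel']   # placeholder object name

# Combined bake writes lighting plus colour into the active image texture
bpy.ops.object.bake(type='COMBINED', use_clear=True)

baked = bpy.data.images['BakeTarget']               # placeholder image name
baked.filepath_raw = '/tmp/chapel_baked.png'
baked.file_format = 'PNG'
baked.save()
```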

TEST IMAGES:

ProjectionTest3.jpg

ProjectionTest1

ProjectionTest2