How things can change! I was made aware by a few people who had tried the early tests of my VR piece that the environment wasn’t necessarily a Church to them. The comparison given was the U.S. Capitol building.
I spent considerable time focussing on a religious setting after the influence my research paper had on my practice. From the beginning I focused on the Internet as a physical space and the architecture within it. Hearing the users' experiences made me feel that the environment is comparable to both a political and a religious setting, which, given the nature of the internet, I didn't dislike, though admittedly it wasn't intended.
I've kept what was originally designed as a church / chapel, going against what I'd decided a while ago.
I've reflected on the Internet as an institution comparable to religion and still believe that its use and ability create a similar social structure based on live information. For example, stories of discrimination, abuse, or any injustice are shared and commented on. These are seen and read by tens of thousands, creating a collective social view on that injustice. The main issue with this idea is that internet communities are more secluded than I'd originally thought. The stories of injustice I see on my Facebook timeline are personalised to the community I'm a part of. If I lived elsewhere, in another community, it would be a different story. (Of course this is a simple and obvious reflection that can be seen in any community, online or offline.)
Brexit couldn't have made this more obvious. Since Friday, Facebook (and most other social media) has been set alight with emotion, both positive and negative. Some of the posts have been horrendous, some very enlightening; however, the best one I read was from a friend from Nottingham uni. He wrote:
“……and proof that my newsfeed is a terrible measure of public opinion”
I don’t want to digress to Brexit, but this comment gives the perfect insight into internet communities.
Back to the religious comparison. This idea has made me realise that although the spread of information allows for quick solidarity and reaction to significant events, it is also tailored to you as an individual and therefore doesn't give you insight into all points on the spectrum. I've always felt that the best way to understand a story or an idea is to read from both ends of the argument. This is in some ways lost through the monetisation of the net and its focus on your individual experience (this is mostly true of social media).
I've gone slightly off topic, but the idea of Gateway is to highlight the importance of the viewer's input.
I will post some more recent examples of the VR experience I’m building in due course.
I've been making some big changes to the VR side of my final piece. I've (just about) got over the fact that incorporating live news media won't be possible, so I'm having to go down other routes. If I'm honest, it's a real blow. I wonder whether the outcome would have been different if I'd continued more ruthlessly with three.js at the end of last year rather than getting interested in Unity, but time is running out and I can't afford to keep experimenting without a certain outcome. At least I know that with funding I could do it, whether that funding went towards hiring a programmer to create a Unity script using CEFGlue to scrape browser data, or simply towards buying Coherent UI.
Anyway, I’m over it, and going with Plan B.
I’m going to create video collages of browser screen grabs and news recordings. This does allow more flexibility with the narrative on misinformation and religious technology and media usage.
I’ve been trying to design the chapel for the piece, and have decided to give Cycles render a rest for the design process. Blender Render gives a simple model-like finish. These are the native models without any texturing. I want to complete the basic infrastructure of the level before focussing too much on detail. These details will include altars, figures, pews, satellites and monitors. I hope to begin the texturing at the beginning of May.
I should mention that I'm unsure about the presentation when in VR. Part of me wants to put all the focus on the interior. I'll decide when it's more complete. Conceptually it makes more sense to stay inside the Chapel, but aesthetically, it's nice to be outside…. A lot to think about.
The model is far from done, but these are the Blender Renders as of lunchtime today:
Actually…. I have made a change since then…. I added a window.
I've placed the new model in Unity and here are a few screenshots of it in action. (Lots of work to do.) As explained before, these are process photos, so there are currently no textures or lighting other than the smokescreen.
I've been thinking about how my linear work can better represent structured personal space. Taking inspiration from physical architecture seemed like the best way to go about this. Above is a structure I've put together that is more architectural and habitable than those I've worked on before. This mix of physical architecture and digital abstraction, I feel, establishes the balance I'm looking for.
I'm pleased with how these pieces have gone. I've taken on board the feedback from my Unit 1, which was to steer my focus towards creating more abstract work. I'm open to doing this, and feel there has been success in these areas, though I'm still very set on having elements of representation embedded within these compositions.
I've continued playing with projectors on the surfaces of architectural designs (in Blender). As I'm trying to texture 3D models with online media (most likely screen-captured videos of news channels), I've been testing its potential. These are the second round.
2015 SVVR – The Future of Wearable Displays and Inputs
This is David Holz, co-founder of Leap Motion. Watch this lecture, or even just browse over it. There are some very interesting ideas about the future of consumer technology. He gets quite detailed about the evolution of silicon chips and image sensors as we approach the 2020s, even mentioning the potential of smart dust….
First he discusses the impact of the HMD (Head Mounted Display) in the coming years. There are some important leaps that must be made for this tech to truly act in the way we wish. As with all technology at the moment, the real crux of the issue is the power supply! Our advances in capability have far outpaced the evolution of energy generation and storage. Detaching the headset from the computer will be a crucial step. For the coming years, this could mean that it's reliant on the smartphone's battery and its advances.
As with any predictions, Holz’s are based on current trends and advances, and will obviously change BUT what he’s saying is very easy to believe.
In the lecture, the most interesting thing for me, and my research, was this idea of selective transparency. I recently enquired about buying an A1 sheet of electrochromic glass (Smartglass). I knew this would be expensive and probably impossible, but I thought it would be an interesting addition to the painting I hope to make. If the sensors in the gallery tripped both the fade of a backlight and the fade from frosted to transparent glass, it could be very interesting. Unfortunately an A1 sheet of this glass was going to be just over £1000….. so it's on the back burner for the time being… but I'm very interested post MA. On this subject, I've found an interesting hack to mimic this effect (in a way) with sellotape and frosted perspex. See-through sellotape on either side of the perspex takes away the frost and creates transparency.
Anyway…. the Selective Transparency Holz is talking about is in the HMD. He explains that between 2017 – 2022 we could be using headsets that could switch from virtual to mixed / augmented reality at the touch of a button (or 2020s equivalent of a button, maybe a holo-button?)
Very exciting stuff.
Back to the present day: Leap Motion have also made some advances, as shown in the video (though I've added some GIFs below). As the Leap Motion uses two infrared cameras, they have begun taking the camera feed of your hands and using it instead of 3D-modelled hands. Being able to recognise your own hands in a digital landscape will give a strange sense of realism to the experience. Holz explains that this pushes beyond the idea of the “uncanny valley”.
Here are a few more GIFs showing the possible interaction tech we could have with our computer software. This is very exciting: finally the mouse and keyboard won't be the natural interface to a computer, and instead we have……. our hands…….
This is clearly brilliant for art creation.
Here’s a load more to celebrate all this excitement.
I've finally started a physical version of the digital drawings I worked on earlier in the year. I'm loosely basing it on a few of my previous pieces, but I'm trying to allow the process to dictate the composition. Previously, I thought that having a detailed and structured plan was important, but the outcome continued to vary, so with this piece I'm giving in to my instincts and seeing how it plays out.
Having recently framed a few of the earlier Signals works (everything changes in a frame), I was pleased and excited by their outcome. Conceptually, there is an element of architecture to these voids, and their erratic nature gives an impression of the digital landscape, but at the same time they lack a narrative. This is of course contradictory to my interest in revealing the narrative through the viewer's presence, but you could argue that each viewer would have their own narrative of the compositions (like any art). Then again, with no revealed narrative and instead simply abstract contortions of lines, it becomes a classic example of justifying contradiction and curbing narrative to fit the practice.
Nonetheless, for this piece I'm letting the process create the work. The narrative and concept have been set throughout the MA.
Below I’ve attached 4 newly framed prints from earlier this year.
I'm interested in distorting the lines with a sheet of frosted Perspex over the top of the canvas, to give a more screen-like impression. (I'd love to add lighting to this piece, but I'm open to change.)
I've managed to create and test a few Oculus Rift environments. They are crude, but they worked well. It hasn't proved too difficult to implement the head tracking, which is positive for future projects.
It was a great opportunity to experiment with the kit. Thank you Alejandro!!
They are impressive. The only downside in my mind is the screen resolution, but given certain companies' recent successes with 4K (and higher) screens, hopefully it'll be better in the commercial release.
Below are two videos of the experiences. No. 2 was significantly more successful than No. 1.