
15012016 ‘Signals’ 3rd Session

I’ve continued transferring ‘Signals’ to canvas. It’s helping me understand how I’d transfer digitally made compositions to physical works. Though the design itself is completely spontaneous, the way in which I place the structures has been transformed through the digital manipulation of the original drawings. The malleability of a digital process allows for this manipulation, and reflects a lot of what we discussed in the seminar yesterday.

This idea can be related back to the way we use personal devices and internet services. Based on our inputs, they can work out a vast array of possibilities for moving forward with any given inquiry. An incredibly simple example: if I were to open a map app and work out how to get from my home to Camberwell, I would be presented with a number of options, including walking, driving, cycling and various means of public transport, based on live data. My next step would then be to pick one and run the course the app provides. This is a physical issue, with a digital inquiry feeding all the possibilities for a physical output, i.e. me getting to Camberwell — just like a calculator, and really any computer process.

This is a very simple example, but when applied to a more complex issue, such as the impact of weather systems or the potential impact of a demolition, these simulations can be life-changing. Software is made to act out these sorts of scenarios and give the user each of the possible outcomes during potential disasters. Simulation software has been used in architecture and military design for some time, but can simulation be taken a step further? Online advertising already nudges us towards products or services we may want, based on search engine and social media entries. Using predictive software, we could create simulations that act out whether two people will get along, or whether someone would work well in a particular job. This isn’t that far-fetched, and is probably well on its way, but is it ethical?

Bringing it back down to the everyday… digital processes have allowed us to solve complex problems individually, without moving. Whether this is good or bad isn’t the point; it’s the fact that it’s possible. I’ve found that digital process has aided me in finding the drawing style I’d hoped for, and I’ll never know whether I would have found it without a computer, or how long it would have taken to secure it. (Not that it’s in any way secure.) In essence, I’ve created simulations of physical paintings in the same way I used to use a sketchbook.

Below are the two current working examples of my movement to physical outputs.

The smaller piece has had a very thick layer of varnish applied over it, which gives it a little more depth and gloss. I wish it could be deeper still; perhaps I could layer it with an inch of clear or frosted resin. I’ve also heard that certain permanent markers can turn purple and eventually disappear over time, so hopefully varnish or resin will preserve the pen a little longer.




I felt this print went well. Thought I’d share.


Just to be clear, none of these works are part of my final piece; they’re working towards it. I hope to produce detailed blueprints of my piece by the end of January, along with a rundown of the technical aspects of the work. (I’m consulting Ed “Ninja Jamm” Kelly for this, as I haven’t made as much headway with my Arduino project as I had hoped, though I think the main issue is the Uno board’s number of outputs… Charlieplexing is a nightmare.)
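For anyone wondering why charlieplexing is such a headache: with n tri-state pins you can address n × (n − 1) LEDs, because each ordered pair of pins (one sourcing current, one sinking it) drives a different LED. A rough sketch of the pin arithmetic — written in JavaScript rather than Arduino C, with made-up function names, purely to illustrate the idea:

```javascript
// Charlieplexing: n tri-state pins can address n * (n - 1) LEDs,
// one LED per ordered (source, sink) pair of pins.
function charlieplexCapacity(pins) {
  return pins * (pins - 1);
}

// For a given LED index, work out which pin sources current (HIGH),
// which pin sinks it (LOW), and which are left floating (INPUT,
// i.e. high-impedance) so they don't light anything else.
function pinStates(pins, led) {
  const pairs = [];
  for (let source = 0; source < pins; source++) {
    for (let sink = 0; sink < pins; sink++) {
      if (source !== sink) pairs.push([source, sink]);
    }
  }
  const [source, sink] = pairs[led];
  return Array.from({ length: pins }, (_, p) =>
    p === source ? "HIGH" : p === sink ? "LOW" : "INPUT"
  );
}

console.log(charlieplexCapacity(14)); // the Uno's 14 digital pins -> 182 LEDs
console.log(pinStates(4, 0));         // ["HIGH", "LOW", "INPUT", "INPUT"]
```

The catch, of course, is that only one pin-pair can be driven at any instant, so every LED has to be multiplexed in quick succession — hence the nightmare.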






I’ve been thinking about how my linear work can better represent structured personal space, and taking inspiration from physical architecture seemed like the best way to go about this. Above is a structure I’ve put together to show a more architectural, habitable form than those I’ve worked on before. This mix of physical architecture and digital abstraction, I feel, establishes the balance I’m looking for.

I’m pleased with how these pieces have gone. I’ve taken on board the feedback from my Unit 1, which was to steer my focus towards more abstract work. I’m open to doing this, and feel there has been success in these areas, though I’m still very set on having elements of representation embedded within these compositions.





Further Blender Projection Tests 18122015

I’ve continued playing with projectors on the surfaces of architectural designs in Blender. As I’m trying to texture 3D models with online media (most likely screen-captured videos of news channels), I’ve been testing the technique’s potential. These are the second round of tests.




14/15122015 Signals (on canvas) 

I’ve finally started a physical version of the digital drawings I worked on earlier in the year. I’m loosely basing it on a few of my previous pieces, but I’m trying to allow the process to dictate the composition. Previously, I’ve thought that having a detailed and structured plan was important, but the outcomes continued to vary, so with this piece I’m giving in to my instincts and seeing how it plays out.
Having recently framed a few of the earlier Signals works (everything changes in a frame), I was pleased and excited by the outcome. Conceptually, there is an element of architecture to these voids, and their erratic nature gives an impression of the digital landscape, but at the same time they lack a narrative. This is of course contradictory to my interest in revealing the narrative through the viewer’s presence, though you could argue that each viewer would have their own narrative of the compositions (like any art). Then again, with no revealed narrative, and instead simply abstract contortions of lines, it becomes a classic example of justifying contradiction and curbing narrative to fit the practice.

Nonetheless, for this piece I’m letting the process create the work. The narrative and concept have been set throughout the MA.

Below I’ve attached four newly framed prints from earlier this year.

Below are photos from my first two sessions on this new piece.


I’m interested in distorting the lines by placing frosted Perspex over the top of the canvas, to give a more screen-like impression. (I’d love to add lighting to this piece, but I’m open to change.)


Oculus Rift Tests 20112015

I’ve managed to create and test a few Oculus Rift environments. They’re crude, but they worked well. It hasn’t proved too difficult to implement the head tracking, which is positive for future projects.
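For my own notes, what the head tracking boils down to conceptually: the headset reports an orientation quaternion every frame, and the renderer rotates the camera’s view direction by it. Here’s a small sketch of just that maths in JavaScript — no Oculus SDK calls, and not the code I actually ran, just the underlying idea:

```javascript
// Rotate a vector v by a unit quaternion q = {x, y, z, w}.
// Uses the standard expansion v' = v + w*t + cross(q.xyz, t),
// where t = 2 * cross(q.xyz, v) — equivalent to v' = q * v * q^-1.
function rotateByQuaternion(v, q) {
  const { x, y, z, w } = q;
  const [vx, vy, vz] = v;
  const tx = 2 * (y * vz - z * vy);
  const ty = 2 * (z * vx - x * vz);
  const tz = 2 * (x * vy - y * vx);
  return [
    vx + w * tx + (y * tz - z * ty),
    vy + w * ty + (z * tx - x * tz),
    vz + w * tz + (x * ty - y * tx),
  ];
}

// Turning the head 90° (a yaw about the Y axis) swings the forward
// vector (0, 0, -1) round to point along the X axis.
const s = Math.SQRT1_2; // sin(45°) = cos(45°), half-angle of a 90° turn
console.log(rotateByQuaternion([0, 0, -1], { x: 0, y: s, z: 0, w: s })); // ≈ [-1, 0, 0]
```

The head-mounted sensors do the hard part; the renderer just applies this rotation to the camera every frame.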

It was a great opportunity to experiment with the kit. Thank you Alejandro!!

They are impressive. The only downside in my mind is the screen resolution, but given certain companies’ recent successes with 4K (and higher) screens, hopefully it’ll be better in the commercial release.

Below are two videos of the experiences. No. 2 was significantly more successful than No. 1.

No. 1

No. 2

Signals – Computer-based Environment

These stills are from the most recent, and most successful, version of ‘Signals’: a browser-based version with mouse interaction, made in JavaScript using Three.js. I will incorporate the interactive version into the website I hope to release before the end of the year.
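The mouse interaction itself is nothing exotic. The usual first step — and roughly what I’m doing — is converting the mouse’s pixel position into normalized device coordinates, the −1…1 range Three.js works in for things like raycasting. A minimal sketch (`toNDC` is a hypothetical helper name, not part of Three.js):

```javascript
// Convert a mouse position in pixels to normalized device coordinates:
// x runs -1 (left edge) to 1 (right edge); y runs -1 (bottom) to 1 (top),
// so screen-space y (which grows downwards) has to be flipped.
function toNDC(clientX, clientY, width, height) {
  return {
    x: (clientX / width) * 2 - 1,
    y: -((clientY / height) * 2 - 1),
  };
}

console.log(toNDC(600, 150, 800, 600)); // { x: 0.5, y: 0.5 }
```

From there the NDC values can be scaled into camera rotation offsets, or fed to a raycaster to pick objects in the scene.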


Mixed Signals 29092015


This is clearly very similar to some of the work I’ve been doing this year, visualising online connections. It also highly contradicts what I wrote at the beginning of the term, but I couldn’t help myself. I’m looking into using Twitter’s public APIs to integrate live feed information based on conflict. It’s an ongoing struggle, especially as I come to terms with learning to code…
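To make the idea concrete, the filtering step I have in mind might look something like this — the keyword list, function names and tweet objects are all placeholders of my own, not real feed data or Twitter’s actual API:

```javascript
// Keep only statuses whose text mentions conflict-related terms.
// Placeholder keywords; the real list would need far more care.
const CONFLICT_KEYWORDS = ["conflict", "ceasefire", "airstrike", "protest"];

function mentionsConflict(text) {
  const lower = text.toLowerCase();
  return CONFLICT_KEYWORDS.some((keyword) => lower.includes(keyword));
}

function filterFeed(statuses) {
  return statuses.filter((status) => mentionsConflict(status.text));
}

// Stand-in tweets, just to show the shape of the data.
const sample = [
  { text: "Ceasefire talks resume today" },
  { text: "Lovely morning in Camberwell" },
];
console.log(filterFeed(sample).length); // 1
```

Each status that survives the filter could then drive a node or connection in the visualisation.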

However, these are stills from an interactive, JavaScript model of the work. The next step will be to use the Leap Motion to interact with the programme. I’m looking to exhibit some prints in an exhibition this month, and I’m thinking I may use the format from the first two, as there is almost a sense of conflict between the two sides. Ultimately, though, I’d like to project it and give the viewer some sort of interaction, whether via a mouse, a Leap Motion, or something else. I still haven’t successfully integrated my figurative and architectural work with these systems, but I thought I’d display this version. Again, this isn’t what I’d hoped for — I want to move more towards meaningful displays of information, and that is proving harder than I thought. Nonetheless, I still feel a sense of progress from previous versions of the work.