FAILURE IN CHROME 
Curators: Tom Milnes and Emile Zile 
Artists: Ian Keaveny, Sarah Levinsky & Adam Russell, Kiah Reading, Tom Smith, Joshua Byron, Marc Blazel, Samuel Fouracre, Michael O’Connell, Naomi Morris, lyve_forms, Mette Sterre and Amble Skuse. 
 
1st Nov 2017 - 31st Jan 2018. 
 
FAILURE IN CHROME showcases talent from around the world working with digital performativity. The artists’ work explores interactions between digital or online spaces and their physical materiality within performance, with an approach that creates discourse around error and failure within these manifestations. Each artist will exhibit for one week on DAR’s online residency space, with a live-stream performance by the resident artist at the end of each week. 
 
 
 
 
15th Nov - 21st Nov 
Kiah Reading 
Pure reason & bass 
When first talking about this project I had recently listened to Keller Easterling quote someone as saying: truth is at a huge disadvantage because it’s only got the one story to tell; stupidity is at a huge advantage because it can take on all the guises of truth and change its story constantly. 
 
Pure reason & bass is audio and text fragmented and reduced to individual words, lines and ideas, then pieced back together in a live, random, non-linear and never-ending soliloquy: an infinite slippage based on the ambiguous side of language (as excess). 
 
These websites, which can be performed by all, become exercises in a reopening of the indefinite, the act of exceeding established meanings and providing a moment for philosophical verses and soundFX to enter a zone where they lose their extrinsic references and coordinates and re-emerge confounded with new confusions. 
 
Questions responded to by the wrong answer, new versions of old ideas, understanding more or less. 
Below is an example of these websites performed. Open these three links (I, II, III) in your own browser to give it a try. 
Browser history quickly becomes a kind of tablature: notations of performed sites and patterns.  
Over the week I will continue to experiment with ways to code experiences that may mirror our online gestures but generate very different results.  
Above and below are experiments in performative browsing where scroll events trigger electronic samples. 
Test it out here or below. /\/\/\ increase volume /\/\/\ 
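For readers curious how a page like this might be wired up, here is a minimal sketch of scroll-triggered samples using the Web Audio API. The equal-band mapping, the function names and the guard for non-browser environments are my own assumptions, not the artist's actual code.

```javascript
// Sketch: map scroll position to one of N audio samples.
// The bucketing function is pure so it can run anywhere; the
// Web Audio wiring below only runs in a browser.

// Divide the page into `count` equal bands and return the band index.
function sampleIndexFor(scrollY, pageHeight, count) {
  const clamped = Math.min(Math.max(scrollY, 0), pageHeight - 1);
  return Math.floor((clamped / pageHeight) * count);
}

if (typeof window !== "undefined") {
  const ctx = new AudioContext();
  const samples = []; // decoded AudioBuffers, loaded elsewhere
  window.addEventListener("scroll", () => {
    const i = sampleIndexFor(
      window.scrollY,
      document.body.scrollHeight,
      samples.length
    );
    const src = ctx.createBufferSource();
    src.buffer = samples[i];
    src.connect(ctx.destination);
    src.start(); // every scroll gesture fires a sample: the "performance"
  });
}
```

Because every scroll event can retrigger playback, even idle browsing becomes a score; that is the gesture-to-sound coupling the text describes.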
Dune pt. 1 for your listening +pleasure+ through this link. 
Using similar code to that from earlier in the week, I replaced .mp3s with .mp4s. Test left and right eyes. 
8th Nov - 14th Nov 
Sarah Levinsky & Adam Russell - Tools that Propel 
TONIGHT @8pm Tues 14th Nov // YouTube live stream below... 
Day 5 // 
(Adam) It seems very fitting that as I was writing this update about trying to remember all the changes made in the last couple of days, I accidentally deleted my draft post with no backup, and had to remember what I wrote about trying to remember things. In fact this is a crude example of my core interest in Tools that Propel. Sedimentary layering of action upon memory becoming memory driving action becoming memory, a recursive folding back-and-forth over time, supported by some kind of inscription or mark making. Does it matter that we cannot remember what we just said, or wrote, or what movement we just performed? A tool can prompt us by playing back recordings of our past actions, but these recordings can never really encompass 'what we were doing' in the past. However this is not a problem, since what really matters is for the playing back of recordings to become a part of 'what we are doing' in the present. I was reminded today of the following quote: 
 
"This table bears traces of my past life, for I have carved my initials on it and spilt ink on it. But these traces in themselves do not refer to the past: they are present" M. Merleau-Ponty (2002 / 1962) Phenomenology of Perception p.479 
Day 4 // 
Experimenting with various different video outputs/aesthetic choices. Looking at the impact on choreographic decision making and the performance space. 
 
Day 3// 
Thinking about group improvisation with Tools that Propel. How to relate to it when there are other bodies in the space and it is not directly tracking you. What is the materiality of the sensor/camera itself? What is the importance of interacting with the system as a material body rather than just a reflective mirror/projection/distortion of your live movement? Keir mentions he thinks that there are three different performers within the improvisation.... the 'active performer' who knows that they are being tracked and is very deliberately front facing... the 'subactive performer' who is dancing with the active performer and trying to become the one in control, the one being tracked... the 'passive performer' who knows that they are in the dance/composition but not being tracked, who is interested in creating (incidental) presence in the memories. What is the sensor, or Tools that Propel, judging as 'important', and what becomes interesting because it is captured even though it wasn't focussed upon...? How does this infect/affect the dancer? There are lots of conversations about what the system reveals to them, and how it isn't what they thought they wanted to focus on in their movement but there is information in it for them to use... 
 
We also started to explore how Tools that Propel might forget memories not simply based on being the 'oldest' memory (i.e. forgetting number 1 when number 21 is made, and so on) but on the basis of being the most used. There is a question over whether this should be based on the number of times the memory has been played or on the time spent in the memory.  
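The two forgetting policies under discussion can be sketched as follows. The data shapes (`createdAt`, `playCount`, `secondsPlayed`) are illustrative assumptions of mine, not the actual Tools that Propel implementation.

```javascript
// Sketch of two forgetting policies for a fixed-size pool of movement
// memories (shapes assumed; not the actual Tools that Propel code).
// Each memory records how often and how long it has been replayed.

function evictOldest(memories) {
  // Original behaviour: memory 1 is forgotten when memory 21 arrives.
  return memories.reduce((a, b) => (a.createdAt < b.createdAt ? a : b));
}

function evictMostUsed(memories, byTime = false) {
  // Alternative: forget the memory the system keeps returning to,
  // measured either by play count or by total seconds spent in it.
  const usage = (m) => (byTime ? m.secondsPlayed : m.playCount);
  return memories.reduce((a, b) => (usage(a) >= usage(b) ? a : b));
}
```

The interesting tension is that the second policy erodes exactly the material the system considers most significant, which fits the residency's theme of productive failure.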
Day 2 // 
Today we were joined by two new dancers (Keir Clyne and Katherine Sweet) as well as most of those from yesterday. It has been so interesting working with the dancers with Tools that Propel. The system has become a choreographic collaborator, and each time the dancers improvise with it we learn more about its potential. Yesterday there were interesting discoveries about what was happening for each of them on their first encounter with it: some of them wanted to cheat the system, creating new movement to try to ensure that it didn’t recognise them, breaking their own natural movement pathways by exploring new trajectories, while others talked about retracing their steps, getting lost in a maze and moving through the data. As they learnt to play the system, or interact with Tools that Propel as a collaborator or dance partner, they have become more playful and more sensitive to the potential of their exchange with the ‘decisions’ made by the system. They were developing duets with it, each with a different motivation or task – for example to try to focus on exploring an emotional expression from the encounter, to focus on the incidental or the chance element – what the system deems is an important memory to play back – or to make decisions about how much to engage or reject its offerings/decisions/memories. 
 
Today we explored the idea of feeding it with a choreographic phrase that each dancer had already developed. This felt very different – like playing amongst saturated memories – and suggested the potential of it to make their choreographic decision-making more complex, impacting on the ways they thought about the material, its dynamics, its directions, where it took place in space. They each seemed to have a different relationship with the system, and sometimes that seemed to be formed by a number of variables: for example how much it seemed to be struggling with the tracking of their movement (some movement seemingly being more easily aligned by it), or how closely it tracked it, or the fact that it seemed sometimes to ignore or not respond to a particular type of movement, as well as the motivations of each dancer towards it. There were bugs in the system today too – but this is an enquiry which accepts them and allows that to be part of what produces the new movement. Sadly, due to a bug we lost most of the session footage from this task. 
(the video above shows side-by-side comparison of the input and output video streams; our movers could only see the right-hand image) 
 
Day 1 // 
We spent much of today in the studio introducing Tools that Propel to five dance students who will be assisting our residency over the coming week: Rebecca Moss, Brandon Holloway, Holly Jones, Maria Evans and Yi Xuan Kwek. 
 
We worked on solo and group improvisations, exploring different ways of working with the system. We began to prototype a multi-body tracking setup but this is not quite ready to show yet. By the end of the afternoon we switched back to single-body tracking and were encouraging our movers to 'compete for focus' by moving up close to the Kinect sensor and getting in each other's way. This created an interesting in-out motion which we hadn't seen so much previously, and more chaotic entanglements of multiple bodies in shot. It was often unclear just who was currently driving the system. 
 
 
For some months we have been working together to develop an interactive digital environment called Tools that Propel. 
 
Tools that Propel is an interactive video installation, or tool, that invites participants to evade stable classification of their movements as they improvise with it. To reveal their own live reflection, the participant must present the system with motion and gestures that are not recognised. If it recognises and has tracked their gestures, they find themselves engaging with similar footage from their own recent past as well as the traces of movement made by other people who have interacted with the system before them. Participants can try to bring back these recordings, creating an onscreen choreography from present and past movement, personal and collective memories. 
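The interaction described above suggests a simple core loop, sketched here under assumptions of my own (a generic distance function and threshold stand in for whatever the real system does with Kinect skeleton data):

```javascript
// Sketch of the core loop as described: if the incoming gesture
// matches a stored memory closely enough, play that memory back;
// otherwise show the live reflection and store the new gesture.
// `distance` and `threshold` are illustrative assumptions.

function step(gesture, memories, distance, threshold) {
  let best = null;
  let bestD = Infinity;
  for (const m of memories) {
    const d = distance(gesture, m);
    if (d < bestD) { bestD = d; best = m; }
  }
  if (best !== null && bestD < threshold) {
    return { mode: "playback", memory: best }; // recognised: replay the past
  }
  memories.push(gesture); // unrecognised: it becomes a new memory
  return { mode: "mirror", memory: null }; // and the mover sees themselves
}
```

The inversion is the point: familiar movement summons other bodies and other times, while only unfamiliar movement earns you your own reflection.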
1st Nov - 7th Nov 
Ian Keaveny - The digital past is a foreign language  
 
My work is based on forcing software and hardware to the point of failure, through hex editing (changing hex values in a video or audio file), sonification (opening a video or image in an audio editor), misinterpretation (similar to sonification, but interpreting, say, a text file as sound) and the exploitation of hardware faults found in older computers when mismatched with more modern operating systems: what is known as glitch art. The idea for my residency started with a simple question: what would the Internet look like to Windows 95, and how would it read it, given how much the Internet has changed visually and technically since then? What kind of work could I make with the programs of that era? If the digital past is a foreign language, could I learn to speak it again, and, more importantly, where would I like to go today? 
The installation process sometimes feels like shamanism: coaxing old hardware back into life, revisiting the digital past as an archaeological process involving failure, burnt-out PSUs and case-cut fingers. The video is of the installation process of Win95, recorded using a webcam then cut into images; each image was converted to PPM format, hex edited, then reassembled using ffmpeg in Linux Mint. The video was then datamoshed using a Ruby script and finally rescued using Flowblade. 
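The hex-editing step in that pipeline can be sketched as a small byte-corruption pass. The header offset, the stride and the XOR operation here are illustrative choices of mine, not the values used in the work:

```javascript
// Sketch of the hex-editing step: corrupt bytes in a media file while
// leaving the header intact, so decoders still attempt to read it.
// Offset and stride are illustrative, not the artist's values.

function glitchBytes(buf, headerBytes, every) {
  const out = Buffer.from(buf); // copy; keep the source file pristine
  for (let i = headerBytes; i < out.length; i += every) {
    out[i] = out[i] ^ 0xff; // invert one byte at a fixed stride
  }
  return out;
}

// Usage (Node), against a PPM frame as in the pipeline above:
//   const fs = require("fs");
//   fs.writeFileSync("glitched.ppm",
//     glitchBytes(fs.readFileSync("frame.ppm"), 64, 100));
```

Sparing the header is what makes the result "broken but playable": the decoder accepts the file, then stumbles over the corrupted payload.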
Having installed it, what will I do with it, and what does my desktop sound like? 
I found an obscure program designed for blind people to use as a kind of radar; misusing it, I explored the desktop and found I could create feedback loops and disintegrating icons and text, like a continuous asemic dialogue. 
Internet Explorer 3 vs the web: browser error in rendering text. 
What am I seeing? How do I fix this? Do I want to fix this? (Browser rendering error, Internet Explorer 3.) 
Today I was lost in the screensaver maze. 
So I looked at the maze as sound: through Processing, a webcam fed the output of the screensaver in motion to a second computer running a modified pixelsort script, with sound added from the earlier video and the result upscaled. 
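A "modified pixelsort script" of the kind mentioned typically does something like the following single-row pass. This greyscale version is a simplified sketch of the general technique, not the artist's Processing code:

```javascript
// Sketch of one pixel-sort pass: within each row, runs of pixels
// brighter than a threshold are sorted by brightness, smearing detail
// into the ordered streaks characteristic of pixel-sorted images.
// Pixels are greyscale values 0-255 here for simplicity.

function pixelSortRow(row, threshold) {
  const out = row.slice();
  let start = null;
  for (let i = 0; i <= out.length; i++) {
    const bright = i < out.length && out[i] > threshold;
    if (bright && start === null) start = i;  // a bright run begins
    if (!bright && start !== null) {          // run ends: sort it in place
      const run = out.slice(start, i).sort((a, b) => a - b);
      out.splice(start, i - start, ...run);
      start = null;
    }
  }
  return out;
}
```

Run over every row of a frame, this turns moving maze walls into the flowing vertical smears seen in pixel-sorted video.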
Browser render error and misdirection 
Percent percent percent percent dollardollardollar +++++////// This is an earlier video of the Win95 maze screensaver, sonified using Audacity then re-rendered using Flowblade (broken files need fixing). The sound was created using a text-to-speech engine fed with a screen-grab image rendered into text using pinxy (an ancient ASCII art maker): image as text as sound, screensaver as mapped sound as video. 
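Sonification of this sort amounts to wrapping raw bytes in an audio container so an editor like Audacity will open them as sound. A minimal sketch of my own, assuming unsigned 8-bit mono PCM and an arbitrary sample rate:

```javascript
// Sketch of sonification: treat raw image bytes as unsigned 8-bit PCM
// samples and prepend a minimal WAV header so an audio editor accepts
// them. The 8 kHz sample rate is an arbitrary choice.

function toWav(bytes, sampleRate = 8000) {
  const header = Buffer.alloc(44);
  header.write("RIFF", 0);
  header.writeUInt32LE(36 + bytes.length, 4); // RIFF chunk size
  header.write("WAVE", 8);
  header.write("fmt ", 12);
  header.writeUInt32LE(16, 16);          // fmt chunk size
  header.writeUInt16LE(1, 20);           // PCM
  header.writeUInt16LE(1, 22);           // mono
  header.writeUInt32LE(sampleRate, 24);
  header.writeUInt32LE(sampleRate, 28);  // byte rate (1 byte per sample)
  header.writeUInt16LE(1, 32);           // block align
  header.writeUInt16LE(8, 34);           // 8 bits per sample
  header.write("data", 36);
  header.writeUInt32LE(bytes.length, 40);
  return Buffer.concat([header, Buffer.from(bytes)]);
}
```

Audacity's raw import does the same job without a header, which is why "opening an image in an audio editor" works at all: pixel data simply becomes waveform data.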
Exploring the extras hidden away on the installation CD (slitscan of Hover). 
Language-activated desktop. 
Partly sunny (HTML code and text from MSN via Internet Explorer 3, turned into speech/chant); the video is an asciified version of a video found on the installation disc, turned into Dirac then hex-edited. 
Video taken from the installation disc, chopped into stills using an ASCII script in Processing, reassembled as MP4, then chopped into stills and run through the GLIC encoder in Processing, finally reassembled and uploaded. Happy Days in an alternate reality. 
Where would you like to go today? 
With the way that I work, and by using different codecs and methods of attacking those codecs (the above is WebM, one of my favourites), similar source material can give widely differing results, each codec having its own texture and breaking points. Again, the source is a Win95 screensaver read to file using an HD webcam on a secondary computer, then hex edited and recaptured during playback using a screen-capture program; broken files will often not re-encode correctly but will play in, say, VLC or mpv. The sound is a series of texts (some system files from Win95 read by a text-to-speech engine) played with in Audacity and then layered. 
Access denied. Wait, what just happened there? 