<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Julian Burgess]]></title><description><![CDATA[Computational Arts MA - Goldsmiths ]]></description><link>https://aubergene.com/</link><image><url>https://aubergene.com/favicon.png</url><title>Julian Burgess</title><link>https://aubergene.com/</link></image><generator>Ghost 3.42</generator><lastBuildDate>Sun, 19 May 2024 21:58:01 GMT</lastBuildDate><atom:link href="https://aubergene.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Dreamspace]]></title><description><![CDATA[<p><em>Dreamspace</em> is an exploration of the collective unconscious through computational art. Over one hundred thousand online dream journal entries were compiled and processed using the Word2Vec machine-learning algorithm to create a vector space of one hundred dimensions. This was then projected in 3D and a dynamic planisphere map was created</p>]]></description><link>https://aubergene.com/dreamspace/</link><guid isPermaLink="false">5f24a264144fd20001cbc4e3</guid><category><![CDATA[Goldsmiths]]></category><dc:creator><![CDATA[Julian Burgess]]></dc:creator><pubDate>Mon, 28 Sep 2020 16:00:00 GMT</pubDate><media:content url="https://aubergene.com/content/images/2020/09/IMG_0781.JPG" medium="image"/><content:encoded><![CDATA[<img src="https://aubergene.com/content/images/2020/09/IMG_0781.JPG" alt="Dreamspace"><p><em>Dreamspace</em> is an exploration of the collective unconscious through computational art. Over one hundred thousand online dream journal entries were compiled and processed using the Word2Vec machine-learning algorithm to create a vector space of one hundred dimensions. 
This was then projected in 3D and a dynamic planisphere map was created which represented the words used to describe people’s dreams as stars and the relationships between them as constellations.</p><figure class="kg-card kg-embed-card"><iframe src="https://player.vimeo.com/video/460332089?app_id=122963" width="1280" height="720" frameborder="0" allow="autoplay; fullscreen" allowfullscreen title="dreamspace"></iframe></figure><h3 id="introduction">Introduction</h3><p>During the early days of the coronavirus lockdown in the UK, many people reported having strange dreams. This led me back to an idea I have dwelt on since childhood: the nature of consciousness and what separates one mind from another.</p><p>I discussed the idea with my crit group at Goldsmiths and felt it had real promise. The next morning a friend from undergrad days, whom I hadn’t been in touch with for many years, contacted me out of the blue to say that I had appeared in his dream the night before. He is a practising Buddhist, and so we discussed my idea and forms of consciousness at some length. This event cemented in my mind that the project was the right choice.</p><h3 id="concept-and-background-research">Concept and background research</h3><p>My ideas centred on showing our dreams as the product of one collective entity. I wanted the work to focus less on the specifics of any individual dream and instead to consider the environment of dreaming as a whole. Each night we all escape to the 'dreamspace', where our minds are released from the constraints of conscious thought. In our dreams, there is no work to be done.</p><blockquote>Your vision will become clear only when you can look into your own heart. Who looks outside, dreams; who looks inside, awakes. — C. G. 
Jung</blockquote><p>Talking through my idea with friends, the same book was independently recommended three times to me: <em>The Hero with a Thousand Faces</em> (Campbell, 1949) which explores the ideas of myths in society as a form of collective memory. It also looks at the role of symbols and archetypes informed by the ideas of Carl Jung. I wanted my work to draw on these ideas and to pose questions about consciousness and the environment of dreams.</p><p>In producing sketches, I often returned to node-based graphs of relationships and from here I felt that perhaps creating a map to the collective unconscious would be a great starting point for the journey.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://aubergene.com/content/images/2020/09/Screenshot-2020-07-22-at-00.51.38-copy.png" class="kg-image" alt="Dreamspace" srcset="https://aubergene.com/content/images/size/w600/2020/09/Screenshot-2020-07-22-at-00.51.38-copy.png 600w, https://aubergene.com/content/images/size/w1000/2020/09/Screenshot-2020-07-22-at-00.51.38-copy.png 1000w, https://aubergene.com/content/images/size/w1600/2020/09/Screenshot-2020-07-22-at-00.51.38-copy.png 1600w, https://aubergene.com/content/images/2020/09/Screenshot-2020-07-22-at-00.51.38-copy.png 1920w" sizes="(min-width: 720px) 720px"><figcaption>experimental 3D projection revolving around brain</figcaption></figure><p>A friend had recently seen Susan Hiller’s <em>Dream Mapping</em> (1973) in which the artist created thematic maps from the dream descriptions shared in the morning conversations of a group of seven dreamers and found shared features between the dreams. I intended to do something similar but on a larger scale of sampling and using just textual descriptions, where the words for entities would create the distances of the space.</p><p>Visually, I was inspired by the work of Joseph Cornell and in particular his reworkings of celestial planispheres and star charts. 
Cornell was himself interested in surrealism and the realm of dreams and was inspired by the novella <em>Aurélia</em> (Nerval, 1854), which recounts, through dreams and hallucinations, the author’s descent into madness over his infatuation with his muse.</p><h3 id="technical">Technical</h3><p>The process of creating this work was divided into three parts: fetching and generating the data, displaying it on screen, and plotting and laser cutting the prints.</p><p>To gather the data, I decided to scrape around 20,000 dream journals from DreamBank's online collection. I later added around 30,000 more journal entries from the Sleep and Dream Database, and another 50,000 from the Dream Journal Ultimate app public feed. In the final version I had 100,140 dream journal entries, totalling 17,114,790 terms with a unique vocabulary of 118,730 terms. Unfortunately, it isn’t easy to estimate how many people contributed to this corpus since on the original websites some dreams are attributed to groups and some are anonymous.</p><p>Getting hold of these journal entries was relatively straightforward, as I have previous experience of scraping data; cleaning the text, however, was quite involved, since the journals had been compiled from various surveys and had inconsistent formatting and metadata within the entries.</p><p>For analysis of the dreams I initially looked at the Linguistic Inquiry and Word Count (LIWC) text analysis programme, after seeing it used in one of artist Lauren Lee McCarthy’s projects, but it’s only available under a proprietary license and didn’t seem well suited to my idea, as it mostly focuses on classification of fragments of text.</p><p>I then started looking at natural language processing (NLP) algorithms and found Word2Vec and GloVe. I found there was an existing Node module for Word2Vec and it was quite easy to get the demo working. 
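<p>Under the hood, the trained model maps every word to a plain array of numbers, and "how related are two words?" reduces to the cosine of the angle between their vectors. A self-contained sketch with toy four-dimensional vectors (real vectors would come from the trained 100-dimension model; the values here are invented for illustration):</p>

```javascript
// Cosine similarity: 1 means the vectors point the same way, 0 means unrelated.
function dot(a, b) {
  return a.reduce((sum, v, i) => sum + v * b[i], 0);
}

function cosineSimilarity(a, b) {
  return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));
}

// Hypothetical toy vectors: "dream" and "sleep" point in similar directions,
// "car" does not.
const vectors = {
  dream: [0.9, 0.8, 0.1, 0.0],
  sleep: [0.8, 0.9, 0.2, 0.1],
  car:   [0.1, 0.0, 0.9, 0.8],
};

console.log(cosineSimilarity(vectors.dream, vectors.sleep) >
            cosineSimilarity(vectors.dream, vectors.car)); // true
```

<p>The node-word2vec module wraps this kind of query in its model methods, so in practice you ask the model for similar words rather than computing the cosines by hand.</p>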
The Word2Vec algorithm is interesting because it uses two-layer neural networks that are trained to reconstruct linguistic contexts of words. I trained it on my initial set of dreams using 100 dimensions and then extracted those values.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://aubergene.com/content/images/2020/09/Screenshot-2020-07-09-at-10.43.03.png" class="kg-image" alt="Dreamspace" srcset="https://aubergene.com/content/images/size/w600/2020/09/Screenshot-2020-07-09-at-10.43.03.png 600w, https://aubergene.com/content/images/size/w1000/2020/09/Screenshot-2020-07-09-at-10.43.03.png 1000w, https://aubergene.com/content/images/size/w1600/2020/09/Screenshot-2020-07-09-at-10.43.03.png 1600w, https://aubergene.com/content/images/size/w2400/2020/09/Screenshot-2020-07-09-at-10.43.03.png 2400w" sizes="(min-width: 720px) 720px"><figcaption>projecting points using t-SNE</figcaption></figure><p>My next problem was to reduce the 100 dimensions to three dimensions. I had previously seen t-distributed stochastic neighbor embedding (t-SNE) used in projects and thought it would be a great fit in this case as it would allow me to reveal the semantic relationships between the words. There were a lot of libraries which could do the calculations but I found most of them were either hard to get working or had a very slow runtime. 
I found the TensorFlow version worked really well and so I stuck to using that.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://aubergene.com/content/images/2020/09/Screenshot-2020-07-16-at-22.53.49-1.png" class="kg-image" alt="Dreamspace" srcset="https://aubergene.com/content/images/size/w600/2020/09/Screenshot-2020-07-16-at-22.53.49-1.png 600w, https://aubergene.com/content/images/size/w1000/2020/09/Screenshot-2020-07-16-at-22.53.49-1.png 1000w, https://aubergene.com/content/images/2020/09/Screenshot-2020-07-16-at-22.53.49-1.png 1320w" sizes="(min-width: 720px) 720px"><figcaption>experimentation using a 3D projection</figcaption></figure><p>I found that projecting to lower dimensions requires quite a lot of tuning, both of the parameters and of the input values. There was no real advantage in projecting the full 10,000 values, as the result tended to look messy and to cluster too tightly around certain very common terms. In the end I curated a set of 1,004 terms to project; I picked this number because it was the number of fixed stars identified by Tycho Brahe in his star table of 1598. I used a mix of heuristics to identify which terms to include, such as TF-IDF (Robertson, S. E., and K. Spärck Jones), a score of how important a term is to an individual document relative to the whole corpus.</p><p>I then took the 3D projected coordinates, normalised them using Three.js vectors and fitted them to the World Geodetic System (WGS84) model to turn them into the GeoJSON format.</p><p>For creating the map projections, I used D3.js, a JavaScript library I am very familiar with and which has a huge number of projections available. For this project, I chose the stereographic projection, which is commonly used for star maps. 
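<p>D3 provides this as <code>d3.geoStereographic()</code>; the underlying spherical formulas are compact enough to sketch directly. A self-contained version (coordinates in degrees, projecting onto a plane tangent to the sphere at the given centre; D3 additionally handles scaling, clipping and full rotations, which this sketch omits):</p>

```javascript
// Stereographic projection of a point (lon, lat) onto a plane tangent
// at (lon0, lat0): the same math that d3.geoStereographic wraps.
const rad = (d) => (d * Math.PI) / 180;

function stereographic([lon, lat], [lon0, lat0]) {
  const l = rad(lon), p = rad(lat), l0 = rad(lon0), p0 = rad(lat0);
  const k = 2 / (1 + Math.sin(p0) * Math.sin(p) +
                 Math.cos(p0) * Math.cos(p) * Math.cos(l - l0));
  return [
    k * Math.cos(p) * Math.sin(l - l0),
    k * (Math.cos(p0) * Math.sin(p) -
         Math.sin(p0) * Math.cos(p) * Math.cos(l - l0)),
  ];
}

// The projection centre maps to the origin of the chart:
console.log(stereographic([0, 51.5], [0, 51.5])); // [ 0, 0 ]
```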
I tried out a number of rotations and settled on an axial tilt of 51.5°, as this is the latitude of Goldsmiths, where I would be exhibiting the maps.</p><figure class="kg-card kg-gallery-card kg-width-wide kg-card-hascaption"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2020/09/Screenshot-2020-07-17-at-23.46.34-1.png" width="2346" height="1338" alt="Dreamspace" srcset="https://aubergene.com/content/images/size/w600/2020/09/Screenshot-2020-07-17-at-23.46.34-1.png 600w, https://aubergene.com/content/images/size/w1000/2020/09/Screenshot-2020-07-17-at-23.46.34-1.png 1000w, https://aubergene.com/content/images/size/w1600/2020/09/Screenshot-2020-07-17-at-23.46.34-1.png 1600w, https://aubergene.com/content/images/2020/09/Screenshot-2020-07-17-at-23.46.34-1.png 2346w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2020/09/Screenshot-2020-07-17-at-01.07.09-2.png" width="1750" height="974" alt="Dreamspace" srcset="https://aubergene.com/content/images/size/w600/2020/09/Screenshot-2020-07-17-at-01.07.09-2.png 600w, https://aubergene.com/content/images/size/w1000/2020/09/Screenshot-2020-07-17-at-01.07.09-2.png 1000w, https://aubergene.com/content/images/size/w1600/2020/09/Screenshot-2020-07-17-at-01.07.09-2.png 1600w, https://aubergene.com/content/images/2020/09/Screenshot-2020-07-17-at-01.07.09-2.png 1750w" sizes="(min-width: 720px) 720px"></div></div></div><figcaption>Early projected views of Dreamspace</figcaption></figure><p>I used the open-source framework Svelte to help manage the creation of JavaScript components, which gave me a lot of flexibility to compose the code in different ways.</p><p>I wanted to dynamically create a constellation for any given dream using the words in the projected map. 
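<p>One way to generate such a constellation's edges, assuming the Delaunay triangulation of the projected points is already available (e.g. from d3-delaunay; the function and parameter names here are illustrative, not the project's actual code):</p>

```javascript
// Urquhart-graph idea: remove the longest edge of every Delaunay triangle,
// then discard any remaining edge outside an acceptable length range.
function edgeKey(a, b) { return a < b ? `${a}-${b}` : `${b}-${a}`; }

function constellationEdges(points, triangles, minLen, maxLen) {
  const length = ([a, b]) => Math.hypot(
    points[a][0] - points[b][0], points[a][1] - points[b][1]);
  const dropped = new Set();
  const kept = new Map();
  for (const [a, b, c] of triangles) {
    const edges = [[a, b], [b, c], [c, a]];
    // Mark the longest edge of this triangle for removal.
    const longest = edges.reduce((e1, e2) => length(e1) >= length(e2) ? e1 : e2);
    dropped.add(edgeKey(...longest));
    for (const e of edges) kept.set(edgeKey(...e), e);
  }
  return [...kept.values()].filter((e) =>
    !dropped.has(edgeKey(...e)) &&
    length(e) >= minLen && length(e) <= maxLen);
}
```

<p>Dropping the longest edge of every triangle is what turns a dense triangulation into the sparse, constellation-like chains.</p>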
To do this, I created a Delaunay triangulation of the selected points and kept the Urquhart subset of its edges; I then discarded any connections that fell outside the range of acceptable lengths. This gave a nice-looking set of connections, similar to the constellations you might see in a star map.</p><p>For creating the prints, I used Matt DesLauriers’ canvas-sketch library, which helps by generating a preview, setting various defaults and converting units to millimetres. I already had a plotter (Roland DPX-3300) but unfortunately it’s rather old, predates the SVG standard and only understands HPGL. To get around this, I had previously written a library, Canvas Polyline, which flattens HTML Canvas drawing commands into a series of simple lineTo and moveTo commands. In the past this has worked well, but this was by far the most complex output I’ve plotted so far, and I discovered a few bugs and missing features in my existing code along the way.</p><p>Plotting is a slow process, but it’s enjoyable to watch and gives degrees of freedom that aren’t easily available with other printing methods. I decided to plot on black card using a mix of white, silver and gold pens (Uni-ball Signo Broad). It took quite a few attempts to find settings that worked well for these. In the end, I created two sets of final prints and then took these to the Hatchlab at university, where I used the laser cutter to burn out holes for the stars. 
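<p>The Canvas-to-HPGL translation mentioned above can be sketched as follows. PU (pen up) and PD (pen down) are standard HPGL mnemonics; the 40-units-per-mm scale is a common HPGL resolution but, like the command list format, it is an illustrative assumption rather than Canvas Polyline's actual output:</p>

```javascript
// Translate flattened Canvas-style path commands into HPGL for a pre-SVG
// pen plotter: PU (pen up) for moveTo, PD (pen down) for lineTo.
const UNITS_PER_MM = 40; // assumed plotter resolution, check the device manual

function toHPGL(commands) {
  const out = ["IN;", "SP1;"]; // initialise, select pen 1
  for (const { type, x, y } of commands) {
    const px = Math.round(x * UNITS_PER_MM);
    const py = Math.round(y * UNITS_PER_MM);
    out.push(`${type === "moveTo" ? "PU" : "PD"}${px},${py};`);
  }
  out.push("PU0,0;", "SP0;"); // park the pen when done
  return out.join("");
}

// A 10mm horizontal stroke:
console.log(toHPGL([
  { type: "moveTo", x: 0, y: 0 },
  { type: "lineTo", x: 10, y: 0 },
]));
// IN;SP1;PU0,0;PD400,0;PU0,0;SP0;
```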
I then created a cardboard jig to hold in place the LED lights used to back-light the stars once the prints were framed.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://aubergene.com/content/images/2020/09/IMG_0821-2.JPG" class="kg-image" alt="Dreamspace" srcset="https://aubergene.com/content/images/size/w600/2020/09/IMG_0821-2.JPG 600w, https://aubergene.com/content/images/size/w1000/2020/09/IMG_0821-2.JPG 1000w, https://aubergene.com/content/images/size/w1600/2020/09/IMG_0821-2.JPG 1600w, https://aubergene.com/content/images/2020/09/IMG_0821-2.JPG 1969w" sizes="(min-width: 720px) 720px"><figcaption>Map of <em>Dreamspace</em></figcaption></figure><h3 id="future-development">Future development</h3><p>There is a lot of scope for future development of this work. One advantage of the project’s broad scope was that I could develop it in a variety of directions and then narrow the focus as we got closer to the deadline and had more certainty about what form our degree show would take.</p><p>Setting aside Covid-19 constraints, I also looked at allowing direct user interaction. I would probably have allowed users to select dreams, and perhaps words, directly using a mouse and keyboard. I did some user testing of letting people enter their own dreams; however, this rarely led to a good interaction, since people would usually type out a dream as a single sentence of around 15 words, which gave very limited scope to draw a pleasant constellation.</p><p>I would like to develop the plots further. I think small versions could have more appeal for people to buy for their homes, as I had a number of enquiries along these lines. I would also like to try plotting twelve views of <em>Dreamspace</em> rotated in 30° steps, instead of just the two views rotated 180° apart.</p><p>I also worked on a 3D representation of <em>Dreamspace</em>. 
My plan was to project it onto a wall or ceiling near the rest of the works, but I didn’t feel the space in which I was exhibiting was suitable, so I didn’t develop it further.</p><h3 id="self-evaluation">Self evaluation</h3><p>Overall, I was very pleased with the outcome of this project. The hardest problem was not knowing if and how our degree show would go ahead, how my work might be exhibited, and what form would work best under those circumstances.</p><p>The core concept had many strands that could be explored, so there were several areas I developed quite far that could have been used had circumstances been different.</p><p>I was pleased that I got the backlighting of the prints working, but it took a lot more time than I anticipated. I think it might have been better if I could have exhibited in a darker space, as this would both have accentuated the backlighting and reduced the reflection on the glass. I think it also could have worked well to have projected the <em>Dreamspace</em> spheres much larger, giving a different feel to the work.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://aubergene.com/content/images/2020/09/IMG_0745.JPG" class="kg-image" alt="Dreamspace" srcset="https://aubergene.com/content/images/size/w600/2020/09/IMG_0745.JPG 600w, https://aubergene.com/content/images/size/w1000/2020/09/IMG_0745.JPG 1000w, https://aubergene.com/content/images/size/w1600/2020/09/IMG_0745.JPG 1600w, https://aubergene.com/content/images/size/w2400/2020/09/IMG_0745.JPG 2400w" sizes="(min-width: 720px) 720px"><figcaption>reflections made it challenging to photograph the work</figcaption></figure><p>As I said previously, I found it didn't work well to allow users to directly enter their dreams to be visualised within <em>Dreamspace</em>. 
However, given more time, I would like to explore working with the Dream Journal Ultimate app team to be able to access their private API so that I could add timestamps to the latest dream journal entries to situate the work more in the present moment.</p><p></p><blockquote>Be not afeard; the isle is full of noises,<br>Sounds and sweet airs, that give delight, and hurt not.<br>Sometimes a thousand twangling instruments<br>Will hum about mine ears; and sometime voices,<br>That, if I then had waked after long sleep,<br>Will make me sleep again: and then, in dreaming,<br>The clouds methought would open, and show riches<br>Ready to drop upon me; that when I waked,<br>I cried to dream again.<br><br>— Caliban, in Shakespeare's <em>The Tempest</em></blockquote><p></p><h3 id="references">References</h3><p>Materials and libraries</p><ul><li>Svelte — https://svelte.dev</li><li>Node Word2Vec — https://github.com/Planeshifter/node-word2vec</li><li>D3.js — https://d3js.org</li><li>Canvas-sketch — https://github.com/mattdesl/canvas-sketch</li><li>Canvas-polyline —  https://github.com/aubergene/canvas-polyline</li><li>Noto Serif — https://fonts.google.com/specimen/Noto+Serif</li><li>Big Shoulders Display https://fonts.google.com/specimen/Big+Shoulders+Display<br></li></ul><p>Works consulted</p><ul><li>Campbell, Joseph. (1956). <em>The Hero with a Thousand Faces</em>. New York: Meridian.</li><li>Crary, Jonathan. (2013). <em>24/7: Late Capitalism and the Ends of Sleep</em>. New York: Verso.</li><li>Encyclopaedia Britannica. <em>Astronomical Map — The Constellations and Other Sky Divisions</em>. [online] Available at: www.britannica.com/science/astronomical-map/The-constellations-and-other-sky-divisions#ref510210. [Accessed 25 Sept. 2020].</li><li>Hiller, Susan. Susan Hiller website [online] available at: www.susanhiller.org, www.susanhiller.org/otherworks/dream_mapping.html. [Accessed 13 Aug. 2020].</li><li>Hoving, Kirsten A. (2009). 
<em>Joseph Cornell and Astronomy: A Case for the Stars</em>. Princeton: Princeton University Press.</li><li>Jung, Carl Gustav. (1966). 'On the Relation of Analytical Psychology to Poetry'.</li><li>Kandinsky, Wassily. (2013). <em>Point and Line to Plane</em>. Mansfield Center: Martino Publ.</li><li>Marenko, Betti. (2015). 'When making becomes divination: Uncertainty and contingency in computational glitch-events.' [pdf] London: Design Studies. Available at: ualresearchonline.arts.ac.uk/id/eprint/8663/. [Accessed 13 Aug. 2020].</li><li>Meulen, B.C. ter, et al. (2009). 'From Stroboscope to Dream Machine: A History of Flicker-Induced Hallucinations.' [pdf] European Neurology, vol. 62, no. 5, pp. 316–320, DOI: 10.1159/000235945. Available at: researchgate.net/publication/26789124_From_Stroboscope_to_Dream_Machine_A_History_of_Flicker-Induced_Hallucinations.  [Accessed 31 Mar. 2020].</li><li>Mikolov, Tomas, et al. (2013) 'Efficient Estimation of Word Representations in Vector Space.' [pdf] ArXiv.Org. Available at: arxiv.org/abs/1301.3781.</li><li>Radford, Alec, et al. (2019) Language Models Are Unsupervised Multitask Learners. [pdf] Available at: https://www.semanticscholar.org/paper/Language-Models-are-Unsupervised-Multitask-Learners-Radford-Wu/9405cc0d6169988371b2755e573cc28650d14dfe.</li><li>Robertson, S. E., and Spärck Jones, K. (1994) 'Simple, proven approaches to text retrieval.' [pdf] Available at: www.cl.cam.ac.uk. www.cl.cam.ac.uk/techreports/UCAM-CL-TR-356.html. [Accessed 13 Aug. 2020].</li><li>Rushkoff, Douglas. (2011). <em>Program or Be Programmed: Ten Commands for a Digital Age</em>. New York: Soft Skull Press.</li><li>Shakespeare, William. <em>The Tempest</em>.</li><li>Walter, W.  Grey. (1961). <em>The Living Brain</em>. Harmondsworth: Penguin Books.</li></ul>]]></content:encoded></item><item><title><![CDATA[JESTLED]]></title><description><![CDATA[<p>JESTLED is an experiment in bridging the real world to virtual reality. 
Oculus Quest controllers are used to conduct a display of vibrant colourful lights along a strip of LEDs.</p><figure class="kg-card kg-embed-card kg-card-hascaption"><iframe src="https://player.vimeo.com/video/415705996?app_id=122963" width="426" height="240" frameborder="0" allow="autoplay; fullscreen" allowfullscreen title="JESTLED"></iframe><figcaption>Video demonstration of JESTLED</figcaption></figure><h2 id="motivation">Motivation</h2><p>During the course, we learnt about using <a href="http://www.wekinator.org/">Wekinator</a> to create machine learning models to turn</p>]]></description><link>https://aubergene.com/jestled/</link><guid isPermaLink="false">5ebf1c4e696bd500014eaaaa</guid><category><![CDATA[Data and Machine Learning for Artistic Practice]]></category><dc:creator><![CDATA[Julian Burgess]]></dc:creator><pubDate>Thu, 07 May 2020 23:12:00 GMT</pubDate><media:content url="https://aubergene.com/content/images/2020/05/Screenshot-2020-05-23-at-20.34.05.png" medium="image"/><content:encoded><![CDATA[<img src="https://aubergene.com/content/images/2020/05/Screenshot-2020-05-23-at-20.34.05.png" alt="JESTLED"><p>JESTLED is an experiment in bridging the real world to virtual reality. Oculus Quest controllers are used to conduct a display of vibrant colourful lights along a strip of LEDs.</p><figure class="kg-card kg-embed-card kg-card-hascaption"><iframe src="https://player.vimeo.com/video/415705996?app_id=122963" width="426" height="240" frameborder="0" allow="autoplay; fullscreen" allowfullscreen title="JESTLED"></iframe><figcaption>Video demonstration of JESTLED</figcaption></figure><h2 id="motivation">Motivation</h2><p>During the course, we learnt about using <a href="http://www.wekinator.org/">Wekinator</a> to create machine learning models to turn complex sensor data into usable output values for artistic creativity. For this project, I wanted to make use of the controllers on the Oculus Quest virtual reality headset. 
I had only recently tried VR and was very impressed at how accurate and responsive the controllers were for use within 3D space and I wanted to see if I could use them for control outside of the virtual world.</p><p>My original plan was to use them to control a musical performance, but I changed my mind to instead use them to manipulate light patterns on an LED strip secured above the window frame in my lounge. I see this initially as a nice home installation but it has a lot of flexibility and could be installed in a public setting.</p><h2 id="background-and-inspiration">Background and inspiration</h2><p>I became aware of programming LEDs when I met Robin Baumgarten at the <a href="https://london.hackspace.org.uk/">London Hackspace</a>. He was working on a project which became <a href="https://wobblylabs.com/projects/wobbler">Line Wobbler</a>. At the time it was in very early stages and wasn’t that impressive, but I next saw it at the Victoria and Albert Museum as part of their <a href="https://www.vam.ac.uk/blog/projects/parallel-worlds-2019-a-conference-on-videogame-design-and-culture">Parallel Worlds exhibition</a> and it was amazing. The colours are bright and refresh rates are very fast on a strip of LEDs, and so they can be used in creative ways.</p><p>I used an LED strip for my previous project - <a href="https://vimeo.com/379264886">City Sunrise</a> for Physical Computing as part of this degree course. Although I was happy with the outcome, I learned quite a lot about using LED strips which I wanted to take forward into another project. Previously I used 5v strips which meant I wasn’t able to power all the LEDs to full brightness simultaneously. For this project, I used a 12v strip which overcomes this limitation and produces full vibrant colours.</p><p>I also switched from using a Metro M0 Express (similar to Arduino) to using a Raspberry PI v4 (RPI). 
I had heard that connecting LEDs to an RPI wasn’t possible, but it turns out it can be done by turning off the soundcard and redirecting that output to the LEDs as a pulse-width modulation (PWM) signal. I did some tests, found it worked, and decided to proceed with the RPI, as it was much easier to work with since it has WiFi built in and I could even remote desktop to it.</p><p>I was also inspired by <a href="http://www.graffitiresearchlab.com/blog/projects/laser-tag/">L.A.S.E.R Tag</a> from Graffiti Research Lab. This project allowed people to graffiti using a high-powered projector and a laser. The software would track where the laser had been and use this point as a cursor to control the projection of light. I like the connection of the physical touch with the remote action.</p><p>In a different vein, I saw <a href="http://imogenheap.com/">Imogen Heap</a> perform at the <a href="https://www.roundhouse.org.uk/whats-on/2019/imogen-heap/">Roundhouse</a>, where she used a set of programmable gloves as her instrument. I presume they employ a form of ML similar to Wekinator, since she performed a wide variety of gestures to drive musical and lighting outputs. The lighting setup and gestures were quite specific to the venue, so I would imagine the model was trained specifically for the performance. It was an impressive setup and worked very well on stage.</p><h2 id="implementation">Implementation</h2><p>I started experimenting to see if OSC output from the Quest was even possible. I was fortunate that someone had already written QuestOSCTransformSender and I was able to side-load it onto the Quest.</p><p>The output is sent as six different OSC messages, with the first string argument used to differentiate between them. It sends position and rotation as both global and local transforms for the headset (HMD) and both controllers. 
Since I didn’t plan to wear the headset, I wrote a Node.js application to receive the OSC messages and filter them down to the local values for both handsets. I then combined these into a single message of 14 floats for the local positions and rotations of both controllers. I looked at using WekiInputHelper for these adjustments, but it didn’t seem possible to split and combine different messages.</p><p>There are seven variables for each controller: the position (x, y, z) plus a four-component rotation quaternion (x, y, z, w); in practice the w value tracked the rotation of the controller along the axis of the person’s wrist.</p><p>Before proceeding further, I wrote code to collect the OSC values and export them for analysis in existing software, so that I had an idea of what kind of values to expect. I wrote the values to sample-data.csv (included in the zip) and used <a href="https://apps.apple.com/us/app/wizard-statistics-analysis/id495152161?mt=12">Wizard - Statistics &amp; Analysis</a> to analyse it.</p><p>The following three charts show some of the analysis. I was able to see that the values are distributed around zero as the origin and range roughly ±1 from there. The wrist rotation looked like it might be a useful feature, so I decided to include it in my OSC messages. 
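<p>The filter-and-combine step described above can be sketched as follows (the OSC transport is omitted, and the device labels here are assumptions rather than the sender's exact address strings):</p>

```javascript
// Keep only the two controllers' local transforms and merge them into the
// single 14-float message for Wekinator. Each transform is 7 floats:
// position x,y,z plus rotation quaternion x,y,z,w.
const latest = { left: null, right: null };

function onOscMessage(device, values) {
  if (device === "left_local") latest.left = values;
  if (device === "right_local") latest.right = values;
  // Other devices (HMD, global transforms) are ignored.
}

function combined() {
  if (!latest.left || !latest.right) return null; // wait for both hands
  return [...latest.left, ...latest.right]; // 14 floats
}
```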
The data seems somewhat noisy; however, from playing games on the Quest I’ve found the tracking to be exceptionally good, so I put the noise partly down to minor movements in my hands and body while conducting the tests.</p><figure class="kg-card kg-gallery-card kg-width-wide"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2020/05/rpy-by-lpy-1.png" width="880" height="770" alt="JESTLED" srcset="https://aubergene.com/content/images/size/w600/2020/05/rpy-by-lpy-1.png 600w, https://aubergene.com/content/images/2020/05/rpy-by-lpy-1.png 880w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2020/05/lpy-by-lpx-1.png" width="880" height="770" alt="JESTLED" srcset="https://aubergene.com/content/images/size/w600/2020/05/lpy-by-lpx-1.png 600w, https://aubergene.com/content/images/2020/05/lpy-by-lpx-1.png 880w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2020/05/lrw-by-lrx-1.png" width="880" height="770" alt="JESTLED" srcset="https://aubergene.com/content/images/size/w600/2020/05/lrw-by-lrx-1.png 600w, https://aubergene.com/content/images/2020/05/lrw-by-lrx-1.png 880w" sizes="(min-width: 720px) 720px"></div></div></div></figure><p>From here I used Wekinator to make some rough models so I could check that moving the controllers produced something like the intended effect. It seemed to work well, with occasional inconsistencies, which felt like a reasonable result, so I moved on to coding the LEDs for output.</p><figure class="kg-card kg-image-card"><img src="https://aubergene.com/content/images/2020/05/IMG_20200415_010110.jpg" class="kg-image" alt="JESTLED"></figure><p>I had ordered a 12v LED strip from China early in the year and was lucky that it arrived just before the lockdown began. 
After some experimentation I got it working with my Raspberry PI using this <a href="https://github.com/jgarff/rpi_ws281x">PWM driver</a> and, better still, I found that a <a href="https://github.com/tom-2015/rpi-ws2812-server">server already existed</a>, so I could communicate with it over TCP.</p><p>I tested sending commands over TCP using Node.js and that worked well too. This helped me a lot, as it’s the language I’m most familiar with.</p><p>I started by writing code that would accept a float from 0 to 1 and map it to a position along the LED strip. Once I had that working I was able to test end-to-end using Wekinator, and it roughly worked.</p><p>From here it was a case of adding the second controller and making incremental improvements to the code. I imported the D3 library for its excellent colour handling, which I used for interpolating between colours and for <a href="https://en.wikipedia.org/wiki/Gamma_correction">adjusting the gamma</a>, which is critical when using LEDs since their response curve is non-linear.</p><p>I wrote code which listens to the output from Wekinator in a similar way to a game engine, using two independent loops. The “game” loop runs as fast as it can and receives messages, which it uses to update positions with a time delta so that it is independent of how fast the hardware is running. The render loop runs every 20 milliseconds, taking the current state and turning it into commands for the LEDs. I found that this gave excellent responsiveness without commands queuing up on the Raspberry PI.</p><p>I ended up with three variables that could be set: the x position of the blob of light, the size of the blob, and the colour.</p><h2 id="model-training-and-usage">Model training and usage</h2><p>So far I had been testing with direct input to simulate the outputs I would expect from Wekinator. 
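<p>That direct-input path can be sketched end to end: a 0–1 float positions a blob of light along the strip, and each colour channel is gamma-corrected before being sent. (The strip length, the linear falloff and the gamma value of 2.2 are illustrative assumptions, not the project's exact settings.)</p>

```javascript
// Map a 0-1 control value onto an LED strip, with gamma correction:
// LED response is non-linear, so linear RGB values look washed out without it.
const NUM_LEDS = 144; // illustrative strip length
const GAMMA = 2.2;    // a common display gamma, used here as an assumption

const gammaCorrect = (channel) => // channel in 0-255
  Math.round(255 * Math.pow(channel / 255, GAMMA));

function renderBlob(position, size, [r, g, b]) {
  const centre = position * (NUM_LEDS - 1);
  const pixels = [];
  for (let i = 0; i < NUM_LEDS; i++) {
    // Simple linear falloff from the blob centre, clamped at zero.
    const intensity = Math.max(0, 1 - Math.abs(i - centre) / size);
    pixels.push([r, g, b].map((c) => gammaCorrect(c * intensity)));
  }
  return pixels;
}
```

<p>A frame built this way can then be serialised into whatever command format the LED server expects over TCP.</p>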
I needed an extra set of hands, so for the training phase my girlfriend used the Oculus controllers and I took care of the recording with Wekinator.</p><p>I tried a polynomial regression initially for the x-position, since I thought that the radius of the arm would mean a curve would be the best fit, but after a lot of testing found that using linear regression with just a single variable and data only recorded for the extremities of the LED strip gave the best result for both performance and accuracy. The formula came out as 1.2826 × lpx + 0.5865 (where lpx is the x-value from the handset). Cross-validation gave an RMS error of 0.01.</p><p>For size I used the height of the controllers; however, the rotation of someone’s hands varies quite a lot depending on how high they hold their hands and on whether their hands are in front of them or to the side. For this, I settled on using a neural network regression with the three controller rotation values and one hidden layer. We trained at three different heights for different sizes across the range of x-position values.</p><p>For colour, I initially wanted to use the rotation of the wrist, but during training I found that rotating the wrist would confuse the previous model for controlling size. After dwelling on this issue for some time I realised that it was because the origin of the position values from the controller is set slightly above the hand.</p><p>I didn’t have enough time to work out how to fix this in Unity, so instead I looked for a different mechanism. I came up with the idea of using a palette of colours which the user could point to on the ground. This could be set up as physical objects and you select the colour by simply pointing to an object of the colour you want.</p><p>I used a classification model for this with five classes. Class 1 was anything above waist height and in the code this simply preserved the current colour. The other colour classes were then created by pointing downwards. 
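</p><p>A nearest-neighbour lookup of this kind is only a few lines of code (an illustrative sketch with made-up training points and labels — the real model lived in Wekinator):</p>

```javascript
// 1-nearest-neighbour classification: return the label of the closest
// training example. The training points here are invented for
// illustration and pair a [pitch, height] controller reading with a class.
const examples = [
  { input: [0.0, 1.2], label: 'keep-current' }, // hands above waist height
  { input: [-1.2, 0.6], label: 'red' },         // pointing down at one spot
  { input: [-1.2, 0.3], label: 'green' },       // pointing down, further away
];

function classify(input) {
  let best = null;
  let bestDist = Infinity;
  for (const ex of examples) {
    // Euclidean distance between the reading and the training example.
    const dist = Math.hypot(...ex.input.map((v, i) => v - input[i]));
    if (dist < bestDist) { bestDist = dist; best = ex.label; }
  }
  return best;
}

console.log(classify([-1.1, 0.55])); // 'red'
```

<p>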
I used a k-nearest neighbour model with one neighbour for this, since the boundary didn’t need to be precise and it was clear when you changed class and how to change back.</p><p>One thing we discovered during training was that turning 360° would invert some of the controls. It was very confusing as it happened only occasionally and I initially thought the model had gone wrong — it took a while to realise that it could be fixed by just turning back the other way.</p><h2 id="self-evaluation">Self Evaluation</h2><p>Overall I’m extremely happy with this result. It’s really fun to use and gives immediate pleasure. I would have liked to do more user testing but social distancing made that impossible. Prior to taking this course, I would have just used simple mapping of the values, which would have worked OK for the x-position, but everything else would have been nearly impossible without using Wekinator for machine learning. Now that the model is set up, if I move the equipment then retraining takes only a couple of minutes.</p><p>If I had more time I would have recompiled the Unity project to fix the transform of the wrist rotation point so that I could use that as a useful input. 
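</p><p>The fix would amount to re-projecting the reported origin down to the wrist: rotate the fixed offset between the tracked point and the hand by the controller’s current rotation, then subtract it. A two-dimensional sketch of the idea (pitch only; the 5cm offset is an assumption, and the real fix would be a full 3D transform in Unity):</p>

```javascript
// The controller reports a point slightly above the hand. To recover the
// wrist pivot, rotate the assumed fixed offset by the current pitch and
// subtract it from the reported position. (2D sketch: y = up, z = forward.)
const OFFSET_Y = 0.05; // assumed 5cm above the hand

function wristPivot(pos, pitchRad) {
  // Rotate the (0, OFFSET_Y) vector by pitch in the y–z plane.
  const dy = OFFSET_Y * Math.cos(pitchRad);
  const dz = OFFSET_Y * Math.sin(pitchRad);
  return { y: pos.y - dy, z: pos.z - dz };
}

// With no rotation, the pivot sits exactly 5cm below the tracked point.
console.log(wristPivot({ y: 1.0, z: 0.0 }, 0)); // { y: 0.95, z: 0 }
```

<p>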
I would also love to spend more time on adding extra effects to the LEDs.</p><h2 id="appendix">Appendix</h2><h3 id="equipment-software-and-architecture">Equipment, software and architecture</h3><p>For this project I used the following:</p><p>Hardware</p><ul><li>Oculus Quest headset and controllers</li><li>Raspberry Pi v4</li><li>WS2812 300-LED 12V strip with power adapter</li><li>MacBook Pro laptop</li></ul><p>Third-party software</p><ul><li>Wekinator - For training and running machine learning models to transform input values to output values<br><a href="http://www.wekinator.org/">http://www.wekinator.org/</a></li><li>QuestOSCTransformSender - For emitting Oculus Quest controller data in OSC <a href="https://github.com/sh-akira/QuestOSCTransformSender">https://github.com/sh-akira/QuestOSCTransformSender</a></li><li>Rpi-ws2812-server - For controlling WS2812 over TCP on Raspberry Pi<br><a href="https://github.com/tom-2015/rpi-ws2812-server">https://github.com/tom-2015/rpi-ws2812-server</a></li><li>Osc-js - For listening and transforming OSC data from the Quest, sending to Wekinator, and then receiving from Wekinator<br><a href="https://www.npmjs.com/package/osc-js">https://www.npmjs.com/package/osc-js</a></li><li>D3.js - For colour control and interpolation<br><a href="https://github.com/d3/d3">https://github.com/d3/d3</a></li><li>Node.js v12 - including the FS, Net and Performance modules from the standard library<br><a href="https://nodejs.org/docs/latest-v12.x/api/fs.html">https://nodejs.org/docs/latest-v12.x/</a></li></ul><h3 id="architecture">Architecture</h3><p>The values from the Oculus Quest controllers are emitted by QuestOSCTransformSender and sent to my MacBook, where server-listen-oculus.js receives them. The original messages are sent separately for the left and right controllers and include a string to specify which controller and seven float values. 
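</p><p>Reshaping the two messages into one is straightforward; a sketch of the idea (the OSC plumbing is omitted, and the float contents are whatever QuestOSCTransformSender reports):</p>

```javascript
// Buffer the latest reading from each controller and emit a combined
// 14-float list once both sides have been seen.
const latest = { left: null, right: null };

function onControllerMessage(hand, floats) {
  latest[hand] = floats; // seven floats per controller
  if (latest.left && latest.right) {
    return [...latest.left, ...latest.right]; // 14 floats for Wekinator
  }
  return null; // still waiting for the other controller
}

console.log(onControllerMessage('left', [1, 2, 3, 4, 5, 6, 7]));             // null
console.log(onControllerMessage('right', [8, 9, 10, 11, 12, 13, 14]).length); // 14
```

<p>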
I turn these into a single message which has 14 floats for both controllers combined, as this makes it easier to deal with in Wekinator.</p><p>I use the models in Wekinator to transform the data for the outputs and this is then sent to server-listen-wek.js, which uses those values to control parameters within led.js, which then sends messages over TCP to rpi-ws2812-server running on the Raspberry Pi. This then sends PWM signals over pin 18, which controls the WS2812 LEDs directly.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://aubergene.com/content/images/2020/05/sMYynCT1F6gUQkxeS8D-dCQ.png" class="kg-image" alt="JESTLED"><figcaption>Architecture diagram</figcaption></figure><h3 id="running-the-project"><br>Running the project</h3><p>The code is available at <a href="https://github.com/aubergene/jestled">https://github.com/aubergene/jestled</a>.</p><p>You will need Node.js v12, Wekinator, a Raspberry Pi v4 running rpi-ws2812-server and a strip of WS2812 LEDs connected and working. It might well work with other versions of software and hardware but hasn’t been tested. Additional instructions can be found in README.md.</p>]]></content:encoded></item><item><title><![CDATA[Sonicode]]></title><description><![CDATA[<p>Sonicode is a performance piece producing audio-visual work derived from barcodes. The barcode is input using a wireless scanner and informs a rhythmic pattern made of audio samples that I have collected from items around my flat. 
There are four primary tracks into which a barcode can be entered to create the rhythm of the work.</p>]]></description><link>https://aubergene.com/sonicode/</link><guid isPermaLink="false">5ebf225a696bd500014eaafd</guid><category><![CDATA[Programming for Performance and Installation]]></category><dc:creator><![CDATA[Julian Burgess]]></dc:creator><pubDate>Wed, 29 Apr 2020 23:16:00 GMT</pubDate><media:content url="https://aubergene.com/content/images/2020/05/Screenshot-2020-05-23-at-20.38.44.png" medium="image"/><content:encoded><![CDATA[<img src="https://aubergene.com/content/images/2020/05/Screenshot-2020-05-23-at-20.38.44.png" alt="Sonicode"><p>Sonicode is a performance piece producing audio-visual work derived from barcodes. The barcode is input using a wireless scanner and informs a rhythmic pattern made of audio samples that I have collected from items around my flat. There are four primary tracks into which a barcode can be entered to create the rhythm of the work, and a further four tracks which allow binary mixing of the input signals to produce further sound output.</p><figure class="kg-card kg-embed-card kg-card-hascaption"><iframe src="https://player.vimeo.com/video/413352406?app_id=122963" width="1280" height="720" frameborder="0" allow="autoplay; fullscreen" allowfullscreen title="sonicode"></iframe><figcaption>Sonicode demonstration performance</figcaption></figure><h2 id="background">Background</h2><p>I’ve had a fascination with barcodes since childhood. 
I remember the scene in Disney’s <a href="https://youtu.be/aQUrAh7GRqo?t=1296">The Computer Wore Tennis Shoes</a> (1995) where Dexter Riley (Kirk Cameron) has been transformed into having a human/computer hybrid brain as a result of an electrical storm and is suddenly capable of amazing acts of memory and computation.</p><figure class="kg-card kg-embed-card"><iframe width="459" height="344" src="https://www.youtube.com/embed/hdrWJa1bdsY?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><p>He picks up a jerky stick from a news-stand and is able to “read” the barcode to realise the cost is 89¢. Even as a child I was sceptical that a barcode was readable in this way, as I (correctly) thought there was a direct mapping to the decimal number printed below the barcode and that it only contains a reference number from which the price could be looked up.</p><p>Later, I created my first barcode for a zine I worked on with friends back in 2001. The code is derived from the <a href="https://en.wikipedia.org/wiki/International_Standard_Serial_Number">ISSN</a>, and it made it easier for us to get the zine accepted for sale at larger shops.</p><p>My interest was recently reawakened when reading <em>Code - The Hidden Language of Computer Hardware and Software</em> (Charles Petzold), which begins by explaining the history of encoding systems such as Morse code and braille, and later the UPC barcode standard. I finally had a deep understanding of how the barcode worked!</p><p>A barcode represents an interesting interface between humans and code. Invented in the 1950s, it wasn’t until the 1980s that it started to become very popular as the costs associated with the technology decreased. 
Visually they are easy to recognise in form, but almost impossible to “read” or decode by eye – unless assisted by Hollywood magic.</p><p>Barcodes appear as a series of bars of differing thickness, but are actually composed of bars of uniform thickness where the bars representing the 1s of the binary string are printed. Each decimal digit uses 7 bits, which at first seems unnecessarily large, but actually there are 3 bit patterns to represent the same decimal value. The patterns are divided into two halves (either side of the central control pattern); the left half has odd parity and the right half even parity. This allows the scanner to determine if the code is upside down and reverse it if required.</p><h2 id="concept">Concept</h2><p>When I’ve asked friends what they see when looking at a barcode, their answer always involves a series of thick and thin lines. However to the scanner the lines are of uniform width, and a thick line is simply several adjacent lines.</p><p>I’m interested in exploring the idea of treating adjacent lines as a single thick line for rhythmic interpretation. I also want to look at using barcodes for direct waveform creation.</p><p>I intend to focus on the binary pattern of the barcode, its decimal representation being only of secondary interest. I’m also interested in the dynamics of how barcodes might combine. There are three control codes which mark the start, middle and end of the barcode, and so these form a natural synchronisation point.</p><p>If time allows, I hope to look at visualising the barcode and how visuals could be used to contextualise the audio generated.</p><p>I’ve also been interested in the relations between the barcode, the UPC number and the product to which it is attached. 
The numeric association seems essentially meaningless, a result of the visual expression of the algorithm and the numeric id which was chosen; however, there are some subtle patterns which indicate the genre of product and the <a href="https://en.wikipedia.org/wiki/List_of_GS1_country_codes">country of origin</a>. It is perhaps possible that a well-tuned ear might be able to identify aspects of the code from the audio output. It could be interesting to tie the products attached to the barcode into the piece, but I think this would have to be left for future exploration.</p><h2 id="inspiration">Inspiration</h2><p>I’ve been greatly inspired by the work of Ryoji Ikeda. Last year I was fortunate to see one of his performances at the Barbican. The show was in two parts, the first performed live and unamplified using human clapping and cymbals. The second part was a recorded performance from his datamatics series. Both these works play very heavily on the overlapping of independent rhythms and the temporal and physical distances that get highlighted with the binaural interplay of the sounds arriving in our ears.</p><p>I was also very inspired by the work of Ei Wada, who recently released an amazing video performance of creating live music using barcode scanners. This was around the time I was reading <em>Code</em> and it fed into my idea of using barcodes for this project. 
In his video, he is actually mostly using fiducials and it seems the reflected light pattern from the scanner is feeding the audio.</p><h2 id="technical-implementation">Technical implementation</h2><figure class="kg-card kg-image-card"><img src="https://lh6.googleusercontent.com/2DnyAsZRBChneND66MhWQy_wTDr24HUX8oaNwNT_KrnHeHx6KUxFF_tdJaozJZkE7HiS4r9TjP_AnOotfSebvsBEwJlYtxsdq3aU7KPbNAZrwrqCeGisGa8Rq_q1JxJStSmx9F2d" class="kg-image" alt="Sonicode"></figure><p>I picked up a basic understanding of how barcodes work from reading <em>Code - The Hidden Language of Computer Hardware and Software</em> (Charles Petzold). I chose to focus on the <a href="https://en.wikipedia.org/wiki/Universal_Product_Code#Encoding">UPC-A</a> barcode standard as it had a good encoding reference and I created a very basic binary pattern for a single 7-bit value (see left).</p><p>This worked, but it wasn’t going to be practical to encode the entire algorithm using toggles and gates. I tried a few things before settling on using coll to output the binary list for each digit’s encoding.</p><p>This worked well and I used it to get a first sense of translating a barcode to audio.</p><p>My barcode scanner works like a keyboard which types the barcode very rapidly. I found that by using zl.group I could store the 13 digits of a barcode and push them as a single list.</p><p>I discovered that using a multislider I could display the barcode within Max. This was exciting but I realised that my encoding wasn’t correct, since the displayed barcode didn’t match the original one I had scanned. I needed to switch to using <a href="https://en.wikipedia.org/wiki/International_Article_Number">EAN-13</a> as this is the standard which most products use for generating barcodes. However, it was more complex as it has differing encoding patterns for the first half of the code depending on the first digit. 
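</p><p>The rule can be captured in a couple of lookup tables: the first digit is not drawn at all, but selects which parity pattern of L and G codes encodes the next six digits. A sketch of that stage in JavaScript (tables transcribed from the published EAN-13 encoding; the function names are my own):</p>

```javascript
// EAN-13 left half: the first digit chooses the L/G parity pattern for
// the next six digits (the right half always uses R codes, not shown).
const PARITY = ['LLLLLL', 'LLGLGG', 'LLGGLG', 'LLGGGL', 'LGLLGG',
                'LGGLLG', 'LGGGLL', 'LGLGLG', 'LGLGGL', 'LGGLGL'];
const L_CODES = ['0001101', '0011001', '0010011', '0111101', '0100011',
                 '0110001', '0101111', '0111011', '0110111', '0001011'];
// Each G code is its L code reversed and complemented.
const G_CODES = L_CODES.map(code =>
  [...code].reverse().map(bit => (bit === '0' ? '1' : '0')).join(''));

function encodeLeftHalf(ean) {
  const pattern = PARITY[+ean[0]];
  return [...ean.slice(1, 7)]
    .map((digit, i) => (pattern[i] === 'L' ? L_CODES : G_CODES)[+digit])
    .join('');
}

// 42 bits: six digits at 7 bits each; the first digit only sets the parity.
console.log(encodeLeftHalf('4006381333931').length); // 42
```

<p>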
This took up most of the coding time I spent on this project and resulted in the following four abstractions.</p><ol><li>barcode-digits-to-bin-list.maxpat<br>Takes a single digit and the encoding sequence, and returns the binary encoding as a list of 7 bits.</li><li>upc-to-bin84.maxpat<br>Takes the full 13-digit EAN, splits it and then, using barcode-digits-to-bin-list, turns it into an 84-bit list (this is the most complex part).</li><li>bin84-to-barcode.maxpat<br>Takes the 84 bits from the upc-to-bin84 abstraction and adds control codes and padding to make 101 bits.</li><li>upc-to-barcode.maxpat<br>A convenience wrapper to make it easier to group together the full encoding and reset.</li></ol><p>I was very pleased when my encoding was fully working and could be verified by using a scanning app on my phone.</p><p>Now that I had a working binary list I wanted to be able to sonify more than one barcode at once. I had the idea that they should behave like tracks into which I could enter a barcode and the sound would be produced on each beat. I initially did this using a metro, but found that the barcodes would go out of sync, especially when adding a new barcode to a track.</p><p>Due to these frustrations, I started to look at using MSP (signals). I knew what I wanted to produce: a sound which was only active during the 1s in my binary list. I tried using a complex set of line~ objects, and then tried using techno~. I made some progress but it wasn’t sounding how I wanted. I abandoned this approach as I came up with a better way of using metro; however, I think it is possible and I would like to get it working as it presents interesting possibilities.</p><p>The new approach was working well and I had my four tracks. I now needed more samples. Due to the Covid-19 lockdown I was spending almost all my time in my flat and so I decided to try producing my own. 
I recorded these using my Røde VideoMic and Olympus Dictaphone – probably not ideal equipment but it was what I had available. I made 30 samples in total and gave them minimal processing in Audacity. I like that my samples have an “amateur” quality to them and that I can identify the item which produced the sound.</p><p>I moved on to visualising the barcode. I have experimented with shaders previously and we looked at them in class, so I initially tried that. I got some very basic black and white pulses, which I think could look good projected at a live performance, but I wanted to try for a smooth continuous stream of bars. I next tried using mgraphics and was able to do basic things like moving a rectangle in a fairly easy way, but it was hard to build it up dynamically, and I was going to need around fifty rectangles. I ended up using JSUI as this allowed me to use mgraphics with JavaScript, a language I’m already familiar with. This worked well and I settled on producing an accurate rendering of the barcodes with a line to indicate the current progress through the track. I felt this was useful as it visually connects the audio output and, more importantly, shows that a single “thick” bar is actually composed of consecutive 1s, so multiple beats will be played.</p><p>Ideally I would continue to develop more visualisations from the binary representation and also a way to switch between them. I would have liked to make it so the barcode could use a jit.window, which didn’t seem possible with JSUI.</p><h2 id="performance-notes">Performance notes</h2><p>My performance begins with me scanning a barcode which appears within the Max patcher. I click the toggle and it begins to play each bar as a clap sound. Once the barcode has finished I add a second barcode to the next track and we can hear both sounds playing in an almost syncopated style across the stereo channels. 
I add one more barcode on each of the next loops until all four tracks have a barcode.</p><p>I now add a mix track on each loop. The barcodes are played verbatim, but the mixes allow expr objects to remix the patterns. A barcode has no inherent meaning; its representation is designed to achieve accurate scanning. The encoding is not two’s complement like many binary encodings, and so there is a large latent space between all the valid barcodes. This allows me to dive in and consider ideas such as the logical union of two barcodes.</p><p>The piece grows in complexity and intensity to the point where it is hard to pick out the rhythm of each track, but the shared gaps of the control codes now form dominant points along the segments.</p><h2 id="what-the-audience-will-experience-if-it-were-publicly-presented">What the audience will experience if it were publicly presented</h2><p>I envisage that I would be standing behind a table and would have a variety of products with barcodes that I could scan to produce the audio. I would play through several single barcodes to show the slight differences between different products’ rhythms. It would also be fun if I could take an item from the audience, perhaps a beer can, and scan it – to show that it isn’t “fake” and that the audio is truly derived from the barcode.</p><h3 id="appendix"><br>Appendix</h3><p>The source code is available at</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/aubergene/sonicode"><div class="kg-bookmark-content"><div class="kg-bookmark-title">aubergene/sonicode</div><div class="kg-bookmark-description">Max 8 barcode thingy. 
Contribute to aubergene/sonicode development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/favicons/favicon.svg" alt="Sonicode"><span class="kg-bookmark-author">aubergene</span><span class="kg-bookmark-publisher">GitHub</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://avatars0.githubusercontent.com/u/1710?s=400&amp;v=4" alt="Sonicode"></div></a></figure>]]></content:encoded></item><item><title><![CDATA[Travelling]]></title><description><![CDATA[Travelling is an experimental virtual reality experience where the user is transported through a fantasy landscape. They have the ability to control their speed and to plant trees and also grow apples on the trees they have planted.]]></description><link>https://aubergene.com/travelling/</link><guid isPermaLink="false">5ebf175f696bd500014eaa2d</guid><category><![CDATA[3D Virtual Environments and Animation]]></category><category><![CDATA[virtual reality]]></category><category><![CDATA[A-Frame]]></category><dc:creator><![CDATA[Julian Burgess]]></dc:creator><pubDate>Fri, 17 Jan 2020 23:40:00 GMT</pubDate><media:content url="https://aubergene.com/content/images/2020/05/Screenshot-2020-05-15-at-23.33.09.png" medium="image"/><content:encoded><![CDATA[<img src="https://aubergene.com/content/images/2020/05/Screenshot-2020-05-15-at-23.33.09.png" alt="Travelling"><p><em>Travelling</em> is an experimental virtual reality experience where the user is transported through a fantasy landscape. 
They have the ability to control their speed and to plant trees and also grow apples on the trees they have planted.</p><figure class="kg-card kg-embed-card kg-card-hascaption"><iframe src="https://player.vimeo.com/video/385582501?app_id=122963" width="426" height="240" frameborder="0" allow="autoplay; fullscreen" allowfullscreen title="Travelling"></iframe><figcaption>Video demonstration and user testing</figcaption></figure><p>Try it out at <a href="https://aubergene.github.io/travelling/">https://aubergene.github.io/travelling/</a></p><h2 id="development">Development</h2><p>The app was designed and built for the Oculus Quest using the <a href="https://aframe.io">A-Frame</a> JavaScript library which is built upon the new <a href="https://www.w3.org/TR/webxr/">WebXR</a> web standard and also upon <a href="https://aframe.io/examples/showcase/helloworld/">Three.js</a> a WebGL rendering library. I used VS Code as my text editor and live-server to locally host the files. I was then able to access the app through the VR mode of the Quest’s browser.</p><h2 id="background-and-inspiration">Background and inspiration</h2><p>For my individual VR project I wanted to create an experience inspired by Rez, a game initially released for PlayStation 2 by Sega in around 2001. I first played Rez many years ago and it really stuck with me and I feel it has lots of elements with great potential that I think could be developed further.</p><p>Rez has a fairly minimalistic aesthetic which uses low polygon 3D models often rendered as wireframes with blending effects and trails. 
To me the style captures the zeitgeist of the cyberpunk ideas around the time of the new millennium.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://lh5.googleusercontent.com/mGyp-xPOqhiTHl33vIhoxRcWc3wwob8kapJHXOUTl5JjGkY2IRxos1IwXFmXxqRYE7aQI8ZPeifdB1hQS5AVMSMawTGDGtH0G9Gqiud_U6ERoxxTOZrrcW96vBvvAlC-q7BX95rj" class="kg-image" alt="Travelling"><figcaption>Rez (PlayStation 2 - 2001)</figcaption></figure><p>The player is represented by a humanoid avatar and they progress through the game on rails, encountering enemies which can be destroyed. This is the primary mechanism of interaction and is performed by holding the X button, moving the onscreen cursor over the enemy and then releasing the button to shoot. You can select up to eight enemies simultaneously, and there is also a power-up called <em>overdrive</em>, which will automatically target and shoot enemies.</p><p>As the form of interaction is quite simple, the player can relax and just drift through the game, especially in the mode which gives the player unlimited health, which is called <em>Traveling</em> and from which I take the name for this project.</p><p>In 2016 Rez was re-released as <em>Rez Infinite</em>, which has support for VR and was made available for the PlayStation 4 and later the HTC Vive, Oculus Rift and Google Daydream HMDs.</p><p>I tried out <em>Rez Infinite</em> on the <em>Google Daydream</em> and I enjoyed it a lot. As it is a port of the original game, the graphics are practically identical; however, VR gives it a very different mode of interaction.</p><p>One benefit of playing in VR is that it is much easier, since previously you had to control the cursor using a joystick. 
Now you can simply point at the enemies, which is a much more natural gesture and something you can perform very quickly and easily.</p><p>However, a big problem with <em>Rez Infinite</em> is that the movement of the player is on rails, and this also hasn’t been updated since the original, which means that the user’s viewpoint is suddenly rotated without their control when moving through the scenes. I am a fairly seasoned user of VR but I found it so disconcerting that I kept my free hand on a stationary object to help me cope with the in-game direction changes. It would be much better if the user stayed on a straight trajectory (as happens for much of the early stages), or if the changes in direction followed a smooth curve, as they currently snap too quickly to the new angle.</p><h2 id="design-implementation">Design &amp; Implementation</h2><p>My approach to design was very iterative. I knew I wanted certain elements, such as having the user follow a fixed path, and for it to feel smooth and natural. I wanted there to be interactive elements within the world that the user could control with their hands. However, I didn’t have a particular visual idea that I wanted to achieve, since I was completely new to 3D modelling and wasn’t sure what I’d be able to achieve or what tools I could use to get there.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://lh5.googleusercontent.com/D4sdcF-apyBzA9dylC8DkwJiK9bJv7-zv9hMuwOaeWL9JrFFUfaxQbomHzJXzAQ-jMa9409xd0_Z0_66pzZOss86-NdOPOqg6G__fZEmsxcG-rrr97GheBKY22u5kKPaWOfaM6Zf" class="kg-image" alt="Travelling"><figcaption>Hello World! of A-Frame</figcaption></figure><p>I started with the <a href="https://aframe.io/examples/showcase/helloworld/">Hello World</a> example in A-Frame and gradually adapted it as I started to understand how it worked. 
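</p><p>A-Frame’s raycaster component does the intersection maths for a laser pointer, but the underlying test is simple enough to sketch — a pointer is a ray, and an object’s bounding sphere either blocks it or not (an illustrative sketch, not A-Frame’s actual code; all values are invented):</p>

```javascript
// Ray–sphere test: does a laser-pointer ray (unit direction) hit an
// object's bounding sphere? This is the geometric check a raycaster
// performs for each intersectable entity in the scene.
function rayHitsSphere(origin, dir, centre, radius) {
  // Vector from the ray origin to the sphere centre.
  const toCentre = centre.map((c, i) => c - origin[i]);
  // Length of its projection onto the ray direction.
  const along = toCentre.reduce((sum, v, i) => sum + v * dir[i], 0);
  if (along < 0) return false; // sphere is behind the pointer
  const distSq = toCentre.reduce((sum, v) => sum + v * v, 0) - along * along;
  return distSq <= radius * radius;
}

// Pointing straight ahead (-z) at a tree centred five metres away:
console.log(rayHitsSphere([0, 0, 0], [0, 0, -1], [0, 0, -5], 1)); // true
console.log(rayHitsSphere([0, 0, 0], [0, 0, -1], [3, 0, -5], 1)); // false
```

<p>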
I began by adding VR controls with laser pointers and then learning how to write code to deal with objects as you intersected them.</p><p>I then worked on adding objects to the world dynamically. I initially used the <a href="https://aframe.io/docs/1.0.0/components/pool.html">pool</a> component for creating objects, since it is efficient with memory; however, I later found that it imposed various constraints and that memory and performance actually weren’t an issue, as my world was quite simple.</p><p>I used the A-Frame Environment Component to create a basic world. Since it has a lot of configuration parameters I really began to like it, and ended up using it as the basis for the environment. Ideally, with more time, I would have investigated the code further so that I could have even greater control of the environment.</p><p>I found that colour was very important to the mood of the game. Initially I had been using a black background with highly saturated coloured objects, somewhat in the theme of Rez. However, I found it really difficult to make it feel good: the lighting was complex and it just didn’t feel immersive, or that I was fully getting the place illusion.</p><p>I tried out other environments with the plugin and found the trees felt much nicer, and it also gave me the idea of being able to plant trees in the environment. I picked a calming pink colour palette and set the user so they started on one side of the world and would progress towards the other. As the speed of travel was slow I didn’t work on trying to stop the user from reaching the edge of the world, but ideally I would have a much longer generative world that wouldn’t have limits.</p><p>I tried modelling trees, plants and rocks in Tilt Brush on the Quest; it was really fun and I was happy with the results. However, I had problems importing them as assets into my app. 
In the end I settled on using existing assets created by Google, as they worked well, along with a basic geometric shape I had created in Blender.</p><h2 id="webxr">WebXR</h2><p>The idea of VR for the web has been around for a while, with <a href="https://en.wikipedia.org/wiki/VRML">VRML</a> being an early incarnation. I was vaguely aware of A-Frame before starting the course and I became more interested as I researched it. I am already quite proficient in JavaScript so I wanted to make use of my skills and knowledge there. I also thought it would be more likely that I would continue to use a web-based VR platform once my course had finished.</p><p>I was worried that it was a risk to try and learn a new framework in a short time, but I quickly found that a lot of the concepts we had learned in Unity carried over to A-Frame, and writing JavaScript was easier for me.</p><p>A-Frame was also a good choice for development as I had bought an Oculus Quest, for which, although I had managed to compile builds using Unity, there was no live view available, so each change required compiling and uploading, which was very slow. A-Frame could be served on my development machine; I could then use the Oculus browser to visit the page and it would immediately reload whenever I saved changes.</p><p>I used VS Code as my editor and Firefox and Chrome for testing on my development machine. I also used the A-Frame Inspector, which made debugging much easier.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://lh5.googleusercontent.com/gHlJhOQSavZzFTBmbEcLxrdQAbgiOONavL6HUp2mr9ZfUuqshhVxIsA1hV6l1hGrQh88f-cDrmdwHxIvriFd8yFLZf8vWhf1p8RI5G16T1vrMYxc2PeOusFGFyTMlTo3548_RJT_" class="kg-image" alt="Travelling"><figcaption>Keita Ikeda testing Travelling on the Oculus Quest</figcaption></figure><h2 id="user-testing">User testing</h2><p>I spent a full day user testing Travelling on fellow students and recorded six of them. 
I made some adaptations to the app as the day progressed based on their feedback, and fixed some bugs.</p><p>I didn’t have any specific goals that I wanted the users to achieve and I was happy to just see how they enjoyed the experience and what thoughts they had.</p><p>I would initially set up the VR scene and enable recording within the headset and on a video camera. Unfortunately, I had technical issues recording some of the sessions so wasn’t able to sync HMD footage for all the participants. I let users explore the space unassisted to see what they did and whether they saw the instructions, and would then guide them through a second run.</p><h2 id="evaluation">Evaluation</h2><p>I had tried out my app myself as I went along and felt it had improved a lot, but I was really interested to see what other people thought of it.</p><p>When I had been using Tilt Brush (a 3D drawing application by Google) it had a very nice tutorial which introduced a concept where help messages for the tool you were using were displayed when you looked at the back of the controller. I successfully managed to place instructions in the same location within my app. I placed text telling the user to “Turn your hands towards you to see instructions”.<br></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://lh6.googleusercontent.com/6UWcoJo5t9tHyeQKsfanperJSYKQso1ed-JvdrKscQFceOPH6tnLHLD4N7Er87yq5Mce0lT6_5h3rzjSGB3rtvzH-pBMpEgSqWvTd5RmrPEaTPpCcHd9BKJi4YC_SyTvifaQYJ88" class="kg-image" alt="Travelling"><figcaption>Google Tilt Brush - turn for hidden help info</figcaption></figure><p>During testing I found that nobody understood this instruction; however, once shown it, everyone seemed to like the idea that instructions were placed in this location. Perhaps this will develop to become a standard mode of interaction in VR for finding help. Many actions aren’t intuitive in a 2D environment, such as double-clicking, but make sense once you have learned the behaviour. 
I think users would have found it if I’d had an animation showing them how to rotate the controllers and given instructions to “look on the back of the controllers”.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://lh3.googleusercontent.com/lkxly2651wOl0plGtuFW-7R4jglpjfAn4LCBJ5NQGdY6MZ90B2KJfQmol-b3nWInHRwgZAIUOXGzsZey8l_5_aORvDgiihkdK8HWgbe4GGtGUzoGLvuVrixQcOqBSz5jIdmuZ7Hw" class="kg-image" alt="Travelling"><figcaption>Turning controls within my app</figcaption></figure><p>The users had a lot of feedback regarding the virtual environment and interaction. The <strong>most commonly requested feature</strong> was to be able to add more types of trees or plants, and then to be able to control their colour. The users reported that the world seemed plausible even though the tree model was very basic, and they liked the simplicity.</p><p>There was mixed feedback about the control of movement in the game. The controls were not very intuitive and I would have liked to improve them, but found the coding difficult. I made it so that the user could only progress forwards; this was partly to avoid them going in the wrong direction and off the edge of the world early in the experience.</p><p>It would be quite easy to allow two-way travel, but I think forwards-only is a key part of the experience as it makes the world very simple for the user. Perhaps a timeline-scripted app could have resting points where the user would be given more time to interact with the immediate environment.</p><h2 id="conclusion">Conclusion</h2><p>Overall I am really pleased with the outcome of the app. I had a lot of very positive feedback from my test group, and everyone played with it for around seven minutes, which was far longer than I was expecting.</p><p>There were a lot of features I wanted to add, mostly around interaction, but the most requested feature was simply more variety of trees and plants that you could create.
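As an illustration of the forwards-only movement mentioned earlier, a requested movement step can be clamped to the path's forward direction. This is a hedged JavaScript sketch with hypothetical names, not the actual component from the app:

```javascript
// Sketch (not the app's actual code): clamp a movement step so the user
// can only progress forwards along the path.
// `forward` is a unit vector along the path; `step` is the movement the
// controller input requested this frame (x/z ground plane).
function clampForwards(step, forward) {
  // Project the requested step onto the forward direction...
  const along = step.x * forward.x + step.z * forward.z;
  // ...and discard any backwards component.
  const distance = Math.max(0, along);
  return { x: forward.x * distance, z: forward.z * distance };
}
```

Applied each frame, this also prevents sideways drift off the edge of the world, since only the along-path component of the input survives.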
Doing this user testing was very helpful, as I probably wouldn’t have concentrated on that feature otherwise.</p><p>I found A-Frame really good to use, and it is a tool which I will continue to use. It has a nice simple system, familiar from coding HTML, and gave great results.</p><p>I’ve learned a lot about VR from the course; the theory side is very interesting. I didn’t realise before how nuanced interaction in VR is: on the one hand we are very sensitive to physical sensations relating to rendering, such as frame lag and not having 6DOF, but on the other hand users will easily accept variations in scale. I was also very interested in how key our perception of shadows is to VR, and would like to investigate that further and continue reading about VR research.</p><h2 id="addendum">Addendum</h2><p>Creative Commons references for libraries and assets used within this game</p><p>Models</p><ul><li>Tree - <a href="https://poly.google.com/view/6pwiq7hSrHr">https://poly.google.com/view/6pwiq7hSrHr</a></li><li>Apple - <a href="https://poly.google.com/view/5hRReRDr0v4">https://poly.google.com/view/5hRReRDr0v4</a></li></ul><p>Libraries</p><ul><li>A-Frame - <a href="https://aframe.io">https://aframe.io</a></li><li>A-Frame Environment - <a href="https://github.com/supermedium/aframe-environment-component">https://github.com/supermedium/aframe-environment-component</a></li><li>A-Frame Haptics - <a href="https://supermedium.com/superframe/components/haptics/">https://supermedium.com/superframe/components/haptics/</a></li><li>A-Frame Inspector - <a href="https://github.com/aframevr/aframe-inspector">https://github.com/aframevr/aframe-inspector</a></li></ul><p>Sounds</p><ul><li>Shooting/trigger sound - <a href="http://soundbible.com/930-Gun-Silencer.html">http://soundbible.com/930-Gun-Silencer.html</a></li></ul>]]></content:encoded></item><item><title><![CDATA[City Sunrise]]></title><description><![CDATA[A project to create an ambient interface using LEDs to show
various environmental data]]></description><link>https://aubergene.com/city-sunrise/</link><guid isPermaLink="false">5ebf157ec453090001c495b4</guid><category><![CDATA[Physical Computing]]></category><category><![CDATA[arduino]]></category><category><![CDATA[LEDs]]></category><dc:creator><![CDATA[Julian Burgess]]></dc:creator><pubDate>Fri, 20 Dec 2019 23:25:00 GMT</pubDate><media:content url="https://aubergene.com/content/images/2020/05/city-sunrise.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://aubergene.com/content/images/2020/05/city-sunrise.jpg" alt="City Sunrise"><p>A project to create an ambient interface using LEDs to show various environmental data.</p><figure class="kg-card kg-embed-card"><iframe src="https://player.vimeo.com/video/379264886?app_id=122963" width="426" height="240" frameborder="0" allow="autoplay; fullscreen" allowfullscreen title="City Sunrise"></iframe></figure><ul><li>Watch the video <a href="https://vimeo.com/aubergene/city-sunrise">https://vimeo.com/aubergene/city-sunrise</a></li><li>See the code <a href="https://github.com/aubergene/city-sunrise/">https://github.com/aubergene/city-sunrise/</a></li></ul><h2 id="background">Background</h2><p>I wanted to create a light display as a site-specific installation to fit over my window, which is five metres wide. I bought a rope of 300 addressable NeoPixel LEDs and started to think about what I could sense, what I would want to know and how I might want to see it displayed.</p><p>I was inspired by <em>Light Tower</em> 1972-2016 by <a href="https://www.philipvaughan.net" rel="nofollow">Philip Vaughan</a>, which was mounted on the roof of the Hayward Gallery from 1972 to 2016. I fondly remember seeing the work daily when I was studying at King’s College and regularly walking over Waterloo Bridge.</p><p>We were already using the Adafruit Metro M0 in class, so I decided to use a weather-related sensor for my project and found the BME680 from Pimoroni.
It returns four readings which I was interested in: temperature, humidity, air pressure and volatile organic compounds (VOC - a proxy measure for certain air pollution). I also bought a small OLED screen, which I found incredibly helpful for debugging purposes as well as a nice addition for information output. It was slightly tricky getting both the BME680 sensor and the OLED screen working since they used the same pins, but I found out the sensor can use the I2C interface and that it has two addresses, which can be switched by soldering closed a jumper on the back.</p><figure class="kg-card kg-gallery-card kg-width-wide"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2020/05/temperature-by-time-1.png" width="1046" height="938" alt="City Sunrise"></div><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2020/05/pressure-by-time-1.png" width="1046" height="938" alt="City Sunrise"></div><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2020/05/humidity-by-time-1.png" width="1046" height="938" alt="City Sunrise"></div></div></div></figure><p>Once I had the sensor working I logged some measurements via the serial interface using a small NodeJS script. I was then able to get some idea of the range of values I could expect and the rates of change. Thinking about a strip of LEDs as output, I came up with the following ways of expressing data:</p><ul><li>Static colour and change of colours over time</li><li>Movement of pixels, as individual cells, or as larger groups</li><li>Difference in colour to neighbours</li><li>Animation by blinking and fading pixels</li><li>Brightness of pixels and change of brightness over time</li></ul><p>I wrote a bunch of test scenes for the LEDs and learned how to use VS Code as an editor for Arduino code.
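The first of these ideas, expressing a reading as a static colour, amounts to mapping the expected sensor range linearly onto a hue range. This is a JavaScript sketch of that mapping (hypothetical function name and range values; the installation itself runs Arduino C++ with FastLED):

```javascript
// Sketch: map a temperature reading onto an LED hue, blue (cold) to red
// (hot). The min/max bounds are illustrative, standing in for the range
// observed in the logged serial data.
function temperatureToHue(tempC, minC = 5, maxC = 35) {
  // Clamp the reading into the expected range...
  const t = Math.min(Math.max(tempC, minC), maxC);
  // ...then scale to 0..1 and map onto hue 240 (blue) down to 0 (red).
  const scaled = (t - minC) / (maxC - minC);
  return Math.round(240 * (1 - scaled));
}
```

The same shape of mapping works for humidity, pressure or VOC readings; only the bounds and the output property (hue, brightness, blink rate) change.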
I found it was much easier to simulate the sensor inputs using a variable resistor, and used that to cycle through the range of inputs and outputs. I used a button to switch between scenes. It was a challenge writing my code so that each scene could run and still return to the main loop to check for button pushes and the value of the variable resistor. I split my code into different header files; I'm not sure it was the best solution, as there are a lot of global variables, but it made the code more manageable. If I had more time I would like to look at using C++ classes, and especially creating a class which would easily allow me to keep running/longer-term averages of the sensor data so that I could show rates of change.</p><p>I learned that powering so many LEDs at maximum brightness is a big technical challenge in its own right, and that placed some constraints on my ideas for how I could use the lights for output. I would also have liked to use a board with WiFi, as then I could connect to internet APIs to get weather data and predictions; however, this was more complex still for coding and power, as those boards seemed to only output 3.3V and I needed 5V.</p><p>Overall I'm really happy with the outcome and I would really like to continue to work on it and perhaps overcome some of the limitations I reached.</p><h2 id="wiring-diagram">Wiring Diagram</h2><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://aubergene.com/content/images/2020/05/city-sunrise_v1-2.png" class="kg-image" alt="City Sunrise"><figcaption>Fritzing diagram of circuit</figcaption></figure><p>I later changed to use a rotary switch, which I was unable to find in Fritzing, so the diagram still shows the variable resistor</p><h2 id="plans-for-development">Plans for development</h2><ul><li>Add rain sensor YL38</li><li>Add real-time clock HW-84</li><li>Create C++ class to track average sensor values over time</li><li>Switch to WiFi-enabled board and connect to web APIs to get
weather data and forecasts</li></ul><h2 id="parts-list">Parts list</h2><ul><li>Adafruit Metro M0</li><li>Adafruit BME680 Sensor - temperature, humidity, air pressure, VOC</li><li>SSD1306 Monochrome OLED screen, 128x32 pixels</li><li>WS2811 "NeoPixel" rope 300 pixels</li><li>HW-040 Rotary encoder and switch</li><li>1000µF 6.3V capacitor</li><li>5V 2A DC power supply</li></ul><h2 id="libraries">Libraries</h2><ul><li>FastLED - <a href="https://github.com/FastLED/FastLED">https://github.com/FastLED/FastLED</a></li><li>Adafruit BME680  - <a href="https://github.com/adafruit/Adafruit_BME680">https://github.com/adafruit/Adafruit_BME680</a></li><li>Adafruit SSD1306 - <a href="https://github.com/adafruit/Adafruit_SSD1306">https://github.com/adafruit/Adafruit_SSD1306</a></li></ul><p>I also pasted code into my project in the following files and adapted as needed. Please see inside files for license info.</p><ul><li><code>src/screen.h</code> - Example OLEDs based on SSD1306 - Limor Fried/Ladyada</li><li><code>src/rotator.h</code> - New Rotary Encoder Debounce - by Yvan / <a href="https://Brainy-Bits.com" rel="nofollow">https://Brainy-Bits.com</a></li><li><code>src/leds.h</code> - adapted from FastLED example</li><li><code>src/gamma8.h</code> - found in FastLED examples</li><li><code>src/button.h</code> - adapted from <a href="http://www.arduino.cc/en/Tutorial/Debounce" rel="nofollow">http://www.arduino.cc/en/Tutorial/Debounce</a></li><li><code>src/bme680.h</code> - adapted Adafruit BME680 example</li></ul><h2 id="notes">Notes</h2><ul><li>Powering neopixels using levelshifter for 3.3 -&gt; 5v <a href="https://learn.adafruit.com/neopixel-levelshifter" rel="nofollow">https://learn.adafruit.com/neopixel-levelshifter</a></li></ul>]]></content:encoded></item><item><title><![CDATA[Genetic Construction]]></title><description><![CDATA[Genetic Construction is a mobile created using genetic and linear algorithmic processes. 
It draws on ideas from the constructivist movement of a century earlier, looking at art grounded in the material reality of space and time.]]></description><link>https://aubergene.com/genetic-construction/</link><guid isPermaLink="false">5ebf0e5b5e27000001ef04d0</guid><category><![CDATA[Summer Term Projects]]></category><dc:creator><![CDATA[Julian Burgess]]></dc:creator><pubDate>Mon, 16 Sep 2019 17:00:00 GMT</pubDate><media:content url="https://aubergene.com/content/images/2019/09/IMG_2789-2.JPG" medium="image"/><content:encoded><![CDATA[<img src="https://aubergene.com/content/images/2019/09/IMG_2789-2.JPG" alt="Genetic Construction"><p>Genetic Construction is a mobile created using genetic and linear algorithmic processes. Taking inspiration from the works of Munari and Calder, it also draws on ideas from the constructivist movement of a century earlier, looking at art grounded in the material reality of space and time.</p><figure class="kg-card kg-embed-card"><iframe src="https://player.vimeo.com/video/360093847?app_id=122963" width="426" height="240" frameborder="0" allow="autoplay; fullscreen" allowfullscreen title="Genetic Construction"></iframe></figure><h2 id="introduction">Introduction</h2><p>The work comprises three types of element: the equilateral triangular masses, the pivotal beams, and the thin transparent connecting monofilament threads. The elements are represented by a tree structure within the code, where calculations determine the position of the pivot point along each beam to ensure the tree-like structure will balance. A population of random variations is generated and fed into a genetic algorithm which selects the fittest individuals to proceed to the next generation.
It seeks to reduce any clash or entanglement of the components whilst also looking to optimise the diversity of masses and complexity of structure.</p><p>After many generations of genetic selection, I picked an individual from the population and used the generated blueprint to laser cut the pieces and assemble the mobile structure. It is painted with blackest black BLK 3.0 paint, as I wanted the piece to be invisible, for it to create a subtractive, negative space where its presence is that of what is removed from view — like floating voids in space.</p><h2 id="concept-and-background-research">Concept and background research</h2><p>As a part-time student I have the luxury of having two summer term projects. This year I knew in advance that I wouldn't be able to attend the end-of-year art show as I would be going to <a href="https://ars.electronica.art/outofthebox/en/">Ars Electronica</a>. Therefore I wanted to create a project that I could be certain would be ready before I left and that would work seamlessly at the show in my absence.</p><p>I had seen the <a href="https://www.tate.org.uk/whats-on/tate-modern/exhibition/alexander-calder-performing-sculpture">Alexander Calder: Performing Sculpture</a> show at Tate Modern in 2015 and loved it. I've been fascinated by kinetic art since I was a child and have admired the works of Bruno Munari and Alexander Calder, among other kinetic artists.
It had been a long-standing ambition to create mobiles, and this seemed like the ideal opportunity to begin my explorations.</p><figure class="kg-card kg-gallery-card kg-width-wide kg-card-hascaption"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2019/09/calder-tate-modern-1.jpg" width="2194" height="2194" alt="Genetic Construction"></div><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2019/09/gabo.jpg" width="2565" height="2565" alt="Genetic Construction"></div><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2019/09/IMG_20190808_204025.jpg" width="2465" height="2465" alt="Genetic Construction"></div></div></div><figcaption>Works by Calder, Gabo and Munari</figcaption></figure><h2 id="technical">Technical</h2><p>My work on this project was broadly split across two parts: writing software to generate the layout for a mobile, and the physical construction of the mobile. Both presented challenges which I didn't expect.</p><p><strong>m × d</strong> (moment = mass × distance)</p><p>I initially thought that creating an algorithm to balance a beam would be easy, since it should simply be applying the above law we were taught during GCSE physics: a beam balances when mass × distance is equal on each side of the pivot. However, it was more complex than I thought, as these physics exercises usually assume a perfectly straight, massless beam, and in real life the mass of the beam matters!</p><p>I researched balanced construction techniques and discovered the whippletree, which is used when tethering horses (and in other situations) to spread forces evenly.</p><p>I experimented iteratively with balancing structures made from materials found around the home.
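The complication can be made concrete. Balancing moments about the pivot, including the beam's own mass acting at its midpoint, gives a closed form for the pivot position. This is an illustrative JavaScript sketch (hypothetical function name, not the project's actual code):

```javascript
// Pivot position along a uniform beam of length `len` with its own mass
// `beamMass`, a mass `leftMass` hanging at one end (x = 0) and `rightMass`
// at the other end (x = len). Balancing moments about the pivot x:
//   leftMass * x = rightMass * (len - x) + beamMass * (len / 2 - x)
// Solving for x:
function pivotPoint(leftMass, rightMass, beamMass, len) {
  return (len * (rightMass + beamMass / 2)) / (leftMass + rightMass + beamMass);
}
```

With `beamMass` set to zero this reduces to the ideal massless-beam case from the physics exercises; with a real beam, the pivot shifts towards the beam's midpoint.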
I was somewhat surprised by how delicate it was to find the pivot point.</p><figure class="kg-card kg-embed-card kg-card-hascaption"><blockquote class="imgur-embed-pub" lang="en" data-id="a/I4tUJrZ"><a href="https://imgur.com/a/I4tUJrZ">View post on imgur.com</a></blockquote><script async src="//s.imgur.com/min/embed.js" charset="utf-8"></script><figcaption>My cat inspects the initial prototype</figcaption></figure><p>While performing these physical experiments, I also began trying to work out how to draw a mobile. I thought I might be able to use L-systems again, as I had used them before in my theory project for drawing snowflake shapes. However, this wasn't easy, since the recursive nature of L-systems made calculating the balance point complex. Instead, I decided to use a tree structure with the assistance of D3 Hierarchy.</p><figure class="kg-card kg-embed-card kg-card-hascaption"><blockquote class="imgur-embed-pub" lang="en" data-id="a/FyDhDn1"><a href="https://imgur.com/a/FyDhDn1">View post on imgur.com</a></blockquote><script async src="//s.imgur.com/min/embed.js" charset="utf-8"></script><figcaption>I then went on to produce masses of known relative weight so I could measure the pivot point</figcaption></figure><p>I decided to create my project using JavaScript, since it is the language with which I am most familiar and I wasn't expecting any significant performance issues.
Also, JavaScript and the web as a platform have advanced considerably in the last few years, which has made writing more complex projects much easier.</p><figure class="kg-card kg-gallery-card kg-width-wide kg-card-hascaption"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2019/09/l-system-02-2.png" width="500" height="204" alt="Genetic Construction"></div><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2019/09/l-system-01-3.png" width="254" height="197" alt="Genetic Construction"></div><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2019/09/Screen-Shot-2019-07-11-at-22.01.43-3.png" width="738" height="406" alt="Genetic Construction"></div></div></div><figcaption>Early experiments with tree-based layouts</figcaption></figure><p>I made some fairly basic SVGs of mobiles using D3 and decided to try out laser cutting them at the Hatchlab. It was not a success at all! I hadn't yet accounted for the mass of the beam in my code, plus there were scaling issues which meant it was far off balance. As I work full-time, getting access to the laser cutter was tricky, so I had to do a lot of measuring and experimenting at home in the evenings.
Towards the end of the project, I started using the laser cutter at the London Hackspace, as I could get late-night access and there was no queue of fellow students.</p><figure class="kg-card kg-gallery-card kg-width-wide kg-card-hascaption"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2019/09/IMG_20190829_175037.jpg" width="2494" height="2494" alt="Genetic Construction"></div><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2019/09/IMG_20190814_170544-2.jpg" width="2865" height="2865" alt="Genetic Construction"></div></div></div><figcaption>Laser cutting my first experimental output, not a success!</figcaption></figure><p>With some maths advice from my girlfriend, I figured out how to account for the mass of the beam in the equation, and the images the code produced started to at least look like they should balance.</p><p>I decided to split my code up into modules and used Svelte to assist with this. It then became much easier to separate the model code controlling the dimensions of the tree from the code which rendered the tree to the screen.
I was then also able to add another renderer, optimised for cutting the mobile from sheets of acrylic.</p><figure class="kg-card kg-gallery-card kg-width-wide kg-card-hascaption"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2019/09/Screenshot-2019-09-10-at-23.30.20.png" width="1996" height="1124" alt="Genetic Construction"></div><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2019/09/Screenshot-2019-09-10-at-23.30.11.png" width="1138" height="698" alt="Genetic Construction"></div></div></div><figcaption>Elevation view showing how it should look and vital measurements, and plan view optimised for laser cutting</figcaption></figure><p>I now had some output which looked promising, so I started to add variability to the generation of mobiles. I used dat.GUI to make a basic editor for playing around with variables. In the end, I settled on the following variables:</p><ul><li>Mass - since I was using a uniform material, this translated directly into the area of the hanging shape</li><li>Beam length - as moment is equal to mass × distance, the beam length has as big an effect as the mass, but visually it is perhaps the strongest variable</li><li>Drop length - the drop length matters for preventing masses crashing into each other, but in terms of mass I ignored it for simplicity, as the monofilament I was using is very light compared to the rest of the mobile</li><li>Branching - I had two variables to control how much the tree would branch to the left and the right.
In some sense left and right doesn't matter, since the mobile can rotate 180°, reversing each beam, but in practice the mobile tends to find equilibrium near the original angle</li><li>Pointy top - as I was using only equilateral triangles, I wanted to have control over which masses were pointing up and down, especially as an up/down pair can be nearby without clashing<br> </li></ul><figure class="kg-card kg-gallery-card kg-width-wide kg-card-hascaption"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2019/09/mass.gif" width="600" height="360" alt="Genetic Construction"></div><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2019/09/beamLength.gif" width="600" height="360" alt="Genetic Construction"></div><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2019/09/dropLength.gif" width="600" height="360" alt="Genetic Construction"></div></div></div><figcaption>Adjusting mass, beam length and drop length</figcaption></figure><p>I then got quite stuck on creating the genetic algorithm. I wanted the tree to be of variable depth, so the number of beams and nodes would need to vary, and I wanted to apply further variations to each beam and node. I chatted to Andy Lomas about this problem, as all the examples I had seen used fixed-length DNA, and he advised me to stick with that principle, as otherwise breeding and mutating the DNA would become complex.
I was using a lot of random function calls within my code, which meant each time I ran it I got a completely different tree, so there was nothing to evolve from or towards.</p><figure class="kg-card kg-gallery-card kg-width-wide kg-card-hascaption"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2019/09/IMG_20190831_115443.jpg" width="4032" height="3024" alt="Genetic Construction"></div><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2019/09/IMG_20190831_115351.jpg" width="4032" height="3024" alt="Genetic Construction"></div><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2019/09/IMG_20190901_110110.jpg" width="4032" height="3024" alt="Genetic Construction"></div></div></div><figcaption>Various failures with getting the pivot point correct</figcaption></figure><p>I made a small improvement by switching my random function to be seeded, so that I could pick a given seed and then adjust the amount that the DNA applied to the phenotype of the mobile. However, this was still far from ideal, as the same seed always produced the same tree, so the genetic algorithm could only seek to find the best parameters for that tree. There was no way of merging two seeds together without ending up with a completely different tree which was nothing like either of the two parents. I struggled with this idea for ages and then eventually realised I could use random noise! I changed my code so that the seed now just generated the noise, and then all the randomness came from the position on the noise. Now when two DNAs were merged they could become closer to each other on the continuously varying noise function. Sometimes this still meant wild changes, but it could allow them to settle and find local maxima.</p><p>Now that I had my genetic algorithm working, I set about tuning my fitness function.
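The noise-based DNA merging described above can be sketched as follows. This is an illustrative JavaScript sketch, using a simple deterministic value noise as a stand-in for the pf-perlin package the project actually used; all function names are hypothetical:

```javascript
// Genes are positions on a continuous noise function, so averaging two
// parents' positions yields a child whose phenotype lies "between" theirs,
// rather than being unrelated to both.

// Simple deterministic value noise: hashed integer lattice values,
// smoothly interpolated (a stand-in for Perlin noise).
function hash(seed, i) {
  let h = (seed * 374761393 + i * 668265263) >>> 0;
  h = Math.imul(h ^ (h >>> 13), 1274126177) >>> 0;
  return (h >>> 8) / 0xffffff; // 0..1
}
function noise(seed, x) {
  const i = Math.floor(x);
  const f = x - i;
  const s = f * f * (3 - 2 * f); // smoothstep interpolation
  return hash(seed, i) * (1 - s) + hash(seed, i + 1) * s;
}

// A DNA is a list of positions; the phenotype reads gene values off the noise.
function phenotype(seed, dna) {
  return dna.map((pos) => noise(seed, pos));
}

// Merging two DNAs moves each gene position toward the other parent's,
// so children stay close to both parents on the noise function.
function merge(dnaA, dnaB) {
  return dnaA.map((pos, k) => (pos + dnaB[k]) / 2);
}
```

Because the noise is continuous, a small move along it produces a small change in the phenotype, which is what lets the population settle into local maxima instead of jumping to unrelated trees.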
Initially, I had just been using a simple function to get the total mass of the mobile to target a given number. An early and important idea was to use the fitness function to remove or reduce the chance of parts of the mobile clashing or bumping into each other. I thought about how to achieve this and came up with the plan to render the mobile to an HTML Canvas with 50% transparency and then count the number of pixels which had opacity above 50%, normalised by the total area of the mobile. The fitness would target reducing this overlap area to zero.</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://aubergene.com/content/images/2019/09/evolve-1.gif" class="kg-image" alt="Genetic Construction"></figure><p>This plan worked out pretty well. The above animation shows some of the results. I made the vertical drops much thicker than they would be in the physical version so that the algorithm would "try harder" to remove this overlap too. It worked pretty well but tended towards finding very simple, small mobiles, so I extended the fitness to prefer more nodes and a larger diversity of sizes, which increases the complexity and interest.</p><h2 id="future-development">Future development</h2><p>I plan to continue developing this work as I've found it really exciting and had lots of positive feedback about it so far. The natural progressions I would like to make are to extend the range of shapes for the masses, initially to cover other regular geometric shapes, but then ideally any shape, including calculating the negative mass of holes. I would also change my code to deal directly with mass instead of area so that I could incorporate other items, such as found objects, into my mobiles.
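The overlap-counting part of the fitness function described above can be sketched as follows. This is an illustrative JavaScript sketch operating on a raw RGBA buffer standing in for the pixel data read back from the canvas; the function name is hypothetical:

```javascript
// Each shape is drawn at 50% opacity, so wherever two or more shapes
// overlap the accumulated alpha rises above 50%. Counting those pixels,
// normalised by the total painted area, measures how badly parts clash.
// `rgba` is a flat [r, g, b, a, r, g, b, a, ...] pixel buffer, 0..255.
function overlapFraction(rgba) {
  let painted = 0;
  let overlapping = 0;
  for (let i = 3; i < rgba.length; i += 4) {
    if (rgba[i] > 0) painted++;       // pixel covered by at least one shape
    if (rgba[i] > 127) overlapping++; // alpha above 50%: shapes overlap here
  }
  // Normalise by the total painted area; 0 means no clashes at all.
  return painted === 0 ? 0 : overlapping / painted;
}
```

The fitness then simply drives this fraction towards zero alongside the other terms (target mass, node count, size diversity).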
I would also like to be able to extend the total size and mass of the mobiles - this will require a lot more research into the materials, since my prototypes had already reached the limits at which the beams began to bend. I would also like to develop a clearer narrative behind the mobiles produced from this work, so that I could use the concept of balance to compare and contrast ideas and representations as part of the manifest form of the mobile.</p><h2 id="self-evaluation">Self evaluation</h2><p>Overall I was very pleased with the outcome and amount achieved with this project. Every step was more complex and took longer than I had hoped for, but I eventually got to where I wanted. I would have liked to have made a bigger mobile, but I was already close to the constraints of the materials, which caused the beams to begin to bend.</p><h2 id="references">References</h2><ul><li>Realist manifesto - <a href="http://www.terezakis.com/realist-manifesto.html">http://www.terezakis.com/realist-manifesto.html</a></li><li>D3 - <a href="https://d3js.org">https://d3js.org</a> use of d3-hierarchy, d3-randomLogNormal, d3-scaleLinear, d3-extent, d3-descending</li><li>GeneticJS - <a href="https://github.com/subprotocol/genetic-js">https://github.com/subprotocol/genetic-js</a> code to run genetic algorithms</li><li>PF Perlin - <a href="https://www.npmjs.com/package/pf-perlin">https://www.npmjs.com/package/pf-perlin</a> for generating Perlin noise</li><li>Svelte - <a href="https://svelte.dev">https://svelte.dev</a> for component architecture</li><li>Lodash - <a href="https://lodash.com">https://lodash.com</a> for debounce and object clone</li></ul><p>Thanks to Brittany Harris for helping me with maths, code reviews and general help and patience with this project</p>]]></content:encoded></item><item><title><![CDATA[FaceTime]]></title><description><![CDATA[FaceTime is an interactive timepiece which incorporates the viewer's face into the time display.
It is built using OpenFrameworks with face-tracking video processing.]]></description><link>https://aubergene.com/facetime/</link><guid isPermaLink="false">5ebf0e5b5e27000001ef04d1</guid><category><![CDATA[Workshops in Creative Coding]]></category><dc:creator><![CDATA[Julian Burgess]]></dc:creator><pubDate>Sat, 18 May 2019 17:00:00 GMT</pubDate><media:content url="https://aubergene.com/content/images/2019/09/Screenshot-2019-09-20-at-23.32.55.png" medium="image"/><content:encoded><![CDATA[<img src="https://aubergene.com/content/images/2019/09/Screenshot-2019-09-20-at-23.32.55.png" alt="FaceTime"><p>FaceTime is an interactive timepiece which incorporates the viewer's face into the time display. It is built using OpenFrameworks with face-tracking video processing.</p><figure class="kg-card kg-embed-card"><iframe src="https://player.vimeo.com/video/337172888?app_id=122963" width="426" height="240" frameborder="0" allow="autoplay; fullscreen" allowfullscreen title="FaceTime"></iframe></figure><h2 id="introduction">Introduction</h2><p>FaceTime has two scenes: the first shows a black screen with the current time as a series of straight lines connected to a bright red mouth shape. The mouth tracks that of the viewer, and when they open their mouth very wide it makes the lines thicker and brighter. Scene two tracks the face of the viewer, crops an image around their face, and redraws this on the screen multiple times with a delay and also behind each numeral of the clock display.</p><h2 id="concept-and-background-research">Concept and background research</h2><p>The work of <a href="https://www.instagram.com/p/Bwjr3LWjAQp/">Zach Lieberman</a> inspired me greatly. He has done a fantastic amount of work around face tracking, where the viewer's face or parts of it are remixed and altered in real time. Recently some of these have been added to Instagram as filters, which are now very popular.
I am also really interested in time as a concept, how we relate to it, and in timepieces which show the passing of time as an integral part of the artwork. I wanted this work to incorporate those ideas. The focus of my idea was that as a person views the timepiece, they become integrated with it. Ideally, the duration spent looking at the work would be reflected back in the amount of time their face continues to appear in the work after they have left.</p><h2 id="technical">Technical</h2><p>This project was built using OpenFrameworks. I initially used the <a href="https://github.com/kylemcdonald/ofxFaceTracker/">ofxFaceTracker</a> addon, but switched to <a href="https://github.com/HalfdanJ/ofxFaceTracker2">ofxFaceTracker2</a> as I found it had better performance. It also had the ability to recognise more than one face at a time, though I didn't get around to using this feature in the end, and found that it had a severe performance impact when I tried it out. I also added <a href="https://en.wikipedia.org/wiki/Adaptive_histogram_equalization">CLAHE</a> to improve contrast in the image before applying face detection. I learned about CLAHE from this helpful <a href="https://medium.com/@zachlieberman/m%C3%A1s-que-la-cara-overview-48331a0202c0">blog post by Zach Lieberman</a> on a project he had implemented which uses face tracking. Adjusting contrast is especially important if I want to install the work in an area which has natural lighting, since the contrast of the faces can be extremely variable depending on lighting conditions.</p><p>I extensively used the ofxDatGui addon for dynamically controlling variables within the code. It was really helpful, as compile times are quite long, so this makes tuning and experimenting with variables much simpler.
ofxDatGui also has folders, which I used by passing a reference to each scene on startup; this made it much easier to work on a single scene or part of the code. I found writing the code quite challenging, as I'm not very familiar with C++ and its differences relative to JavaScript, which I use a lot more. I spent a lot of time getting the numerals of the clock to align so that they didn't jump around, since the "1" numeral is thinner than the "8". In the end, I made equal-sized blocks in which each numeral would be centred, with an adjustable width for the block that contains the colon (pictured below). It was a huge amount of coding work for something which looks very simple, so perhaps I missed a trick, or there is a good library which could have done most of this for me. I also had problems with the text in the mouth clock scene. There I was basing my code on the OrganicText example we developed in Term 1. The trouble was that the code for creating points measures the rendered width of a glyph, so again an "8" is wider than a "1", and I wanted to render the same text again over the top. In the end, I stored the width of the "8" during the setup code and then reused it. However, it still needed a little offset to adjust depending on screen size. I used a Logitech external webcam as I found it had better resolution and contrast than the built-in iMac webcam. </p><figure class="kg-card kg-image-card"><img src="https://aubergene.com/content/images/2020/05/digit-blocks.png" class="kg-image" alt="FaceTime"></figure><h2 id="future-development">Future development</h2><p>There's a lot more development I could add to this work. I started on the ability to switch scenes, but only had time to implement two. Ideally, I would add many more scenes, with more dynamic variability within each. 
I'd also like to look at varying the parameters of a scene with the time, so that the piece reflects when the person is viewing it: at night it would show a different style of interaction from daytime, and perhaps a different style again in winter compared to summer.</p><h2 id="self-evaluation">Self evaluation</h2><p>Overall I was pleased with the work. At the exhibition, it was really enjoyable to watch people interacting with the work without knowing that I was the author, and to see how they understood and worked with the piece. The mouth clock scene seemed more popular than the webcam delay, so I left it mostly playing that scene. People understood the mode of interaction quickly, but the face tracking itself wasn't that reliable. Another library, <a href="https://github.com/TadasBaltrusaitis/OpenFace">OpenFace</a>, might be more reliable, but it wasn't clear how easily I could use it with my existing OpenFrameworks code, so I didn't try it. My code was quite well structured, but I'm still struggling to understand how to pass references around in C++, which limited some of the modularity of the code and also made it harder to reuse code between scenes.
</p><h2 id="references">References</h2><ul><li>ofxFaceTracker - <a href="https://github.com/kylemcdonald/ofxFaceTracker">https://github.com/kylemcdonald/ofxFaceTracker</a></li><li>ofxFaceTracker2 - <a href="https://github.com/HalfdanJ/ofxFaceTracker2">https://github.com/HalfdanJ/ofxFaceTracker2</a></li><li>ofxDatGui - <a href="http://braitsch.github.io/ofxDatGui/">http://braitsch.github.io/ofxDatGui/</a></li><li>CLAHE implementation code adapted from <a href="https://gist.github.com/gu-ma/eae2f72e740631a31b20eb8b2810c370">https://gist.github.com/gu-ma/eae2f72e740631a31b20eb8b2810c370</a></li></ul>]]></content:encoded></item><item><title><![CDATA[Snowflake Generation]]></title><description><![CDATA[A look at the meaning of uniqueness through an investigation of Twitter's algorithm and generative snowflakes]]></description><link>https://aubergene.com/snowflake-generation/</link><guid isPermaLink="false">5ebf0e5b5e27000001ef04d2</guid><category><![CDATA[Computational Arts-Based Research and Theory]]></category><dc:creator><![CDATA[Julian Burgess]]></dc:creator><pubDate>Mon, 13 May 2019 17:00:00 GMT</pubDate><media:content url="https://aubergene.com/content/images/2019/09/snowflake.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://aubergene.com/content/images/2019/09/snowflake.jpg" alt="Snowflake Generation"><p>While researching for my end-of-term project I came upon Twitter’s <a href="https://blog.twitter.com/engineering/en_us/a/2010/announcing-snowflake.html">blog post</a> announcing the creation of an algorithm called <em>Snowflake</em>, used to generate a unique ID for each tweet. I was intrigued and so began to look into how the algorithm worked. I kept thinking about the word snowflake and the recent rise of its use as a pejorative term, as a label for a generation of people, and its underlying usage as a metaphor for uniqueness. 
In this paper, I am going to investigate the materiality and semiotics of snowflakes and uniqueness, and look at generative artworks involving snowflakes. I will use this exploration as a basis for creating my own generative snowflake artwork. </p><h2 id="physical-phenomena">Physical phenomena</h2><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://aubergene.com/content/images/2020/05/SnowflakesWilsonBentley.jpg" class="kg-image" alt="Snowflake Generation"><figcaption><p> Bentley, Wilson A. Studies among the snow crystals during the winter of 1901-2 </p></figcaption></figure><h3 id="in-nature">In nature</h3><p>The adage that no two snowflakes are alike is often used to illustrate the huge degree of variety that exists within the natural physical environment. A large snowflake can be observed with the naked eye to be a complex and intriguing shape.</p><blockquote>“I do not believe that even in a snowflake this ordered pattern exists at random” — Johannes Kepler, 1611</blockquote><p>Kepler noted the regular six-fold symmetry of snowflakes, taking it as evidence of supreme reason in the creator’s design. The phenomenon of their uniqueness was first truly captured in 1885 by Wilson Bentley, when he developed a photomicrography technique to take photos of individual snowflakes. He went on to take over five thousand such images, producing various publications of them<sup><a href="http://doc.gold.ac.uk/compartsblog/index.php/work/snowflake-generation/#ftnt1">[1]</a></sup>. </p><p>Scientific research indicates that this idea of uniqueness is almost certainly correct, at least once snowflakes have developed beyond tiny crystals<sup><a href="http://doc.gold.ac.uk/compartsblog/index.php/work/snowflake-generation/#ftnt2">[2]</a></sup>. 
Indeed, we can consider snowflakes to be a paradigm of uniqueness<sup><a href="http://doc.gold.ac.uk/compartsblog/index.php/work/snowflake-generation/#ftnt3">[3]</a></sup>; so apt is the metaphor that it seems unlikely something else could replace it. </p><p>The uniqueness of a snowflake comes from the way in which it is formed in the upper atmosphere, where tiny variations in temperature and humidity affect its growth and ultimate shape. If we follow this thread a little further, we can think about the structure of the water molecules which form the snowflakes, comprised of two hydrogen atoms and one oxygen (the electron bonds are what cause snowflakes to have six-fold symmetry, often with a hexagonal shape); a step further and we can look at a single hydrogen atom with its single electron shell. In some sense we are problematising the concept of uniqueness: each atom is distinct and cannot become anything else, but at the same time atoms are also uniform, and distinguishing one from another pushes the limits of our cosmic understanding. In a final step, if we rewind time to the Big Bang, then all matter and energy are reduced to a singularity and the concept of uniqueness is moot. We can think of all future uniqueness as unwinding in a thread in time from this point, spiralling outwards.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://aubergene.com/content/images/2020/05/Great_Britain_Snowy.jpg" class="kg-image" alt="Snowflake Generation"><figcaption><p> Satellite image of snow-covered Great Britain on 7 January 2010, taken by MODIS on NASA's Terra satellite. </p></figcaption></figure><p>Let us consider this photo of the United Kingdom, blanketed by heavy snowfall in January 2010, and try to think of the number of individual snowflakes which collectively form such a scene. Now instead of scaling the snowflakes up so as to be visible, as in Bentley’s photomicrography, we scale in the other direction. 
This leads to another, almost paradoxical aspect of uniqueness: when scaled to both a vast number and a tiny size, the individuality can be lost and a different materiality appears. A blanket of pure white snow has a uniform surface topology, and yet beneath a microscope it yields a different surface of endless individuality. I wanted to capture the essence of this seeming contradiction in my artwork, for it to have this dual-aspect relationship of uniformity at a distance and uniqueness in proximity.</p><h3 id="in-art">In art</h3><p>In art, uniqueness has an interesting relationality. Famous artworks are often copied, both by other artists and as souvenirs, but usually the copying begins earlier, with the artist themselves producing sketches and maquettes to shape the idea. The piece itself is in a state of relational flux between concept and concrete, with the artist as mediator. For a physical work, the piece materially changes over time, as paint and varnish age, and even in how the piece is hung, in which frame, and against what background. All of these contextual elements add to the uniqueness of a work in terms of how one experiences it. In <em>The System of Objects</em>, Baudrillard says:</p><blockquote>“a luxury car is in a red described as 'unique'. What 'unique' implies here is not simply that this red can be found nowhere else, but also that it is one with the car's other attributes: the red is not an 'extra'.”</blockquote><p>Mass-produced woodblock and screen prints always have tiny differences in offsets, inks and pigments, making each one unique. However, when we use the word unique in this context we would usually mean differences which would be clearly discernible to the naked eye (or perhaps to another sense, such as audible differences). Such minute variations might be considered a distinction without a difference. Where should the line be drawn? 
Our journey through the physical world shows there is no obvious point, and that it is purely the subjective concern of the artist. We should also consider what constitutes a work of art which we can individuate. </p><p>In <em>The Uniqueness of a Work of Art</em> (Meager, 1958) the differences between Michelangelo's David and Milton’s Paradise Lost are considered as spatiotemporal works. What constitutes a poem? The first written form, a printed copy, a recitation or a recording of a reading by the author? Through this lens, we can now also re-evaluate the statue as a work under slow but constant change, both in how it is viewed and in the context and location in which it exists, but also in the physical changes from pollution and restoration. In this way, we can consider any work to be a series of manifestations.</p><blockquote>“a copy of David provided it is a good one and produced by hand, is also a work of art” — Meager, 1958</blockquote><p>Many works play on the idea of uniqueness and uniformity. I was drawn to Ai Weiwei’s Sunflower Seeds (2010), which features around one hundred million hand-painted porcelain sunflower seeds, each appearing both identical and unique<sup><a href="http://doc.gold.ac.uk/compartsblog/index.php/work/snowflake-generation/#ftnt4">[4]</a></sup>. Up close you see the unique identity of each seed; at the middle distance you see them as a collection, and the sensorial experience of the sounds of walking on them (when permitted) becomes prominent. The view from a distance is a uniform textural topology, where the colour becomes the dominant feature and the shadows and highlights are formed from the perturbations in the topology. </p><p>It is perhaps more appropriate to think of specialness as a relationship: we think of a person or item as special relative to a group, and the relationships to and within the group define the hierarchical value of these relations. 
This relationality is, of course, temporal, as a thing’s specialness waxes and wanes with the zeitgeist of the day more than from the ravages of time itself upon the nature of the thing. </p><h2 id="in-language">In language</h2><p>While the use of snowflake as a metaphor for uniqueness dates back to Bentley’s photographs, a new usage appeared in 1983, where it identifies a person as having a unique personality<sup><a href="http://doc.gold.ac.uk/compartsblog/index.php/work/snowflake-generation/#ftnt5">[5]</a></sup>. This usage was in turn mocked in <em>Fight Club</em> (Palahniuk, 1996), where Tyler Durden states, during a scene initiating new members into the club:</p><blockquote>“You are not a beautiful and unique snowflake. You are the same decaying organic matter as everyone, and we are all part of the same compost pile.” — Fight Club, 1996</blockquote><p>Since then the term has changed again, becoming a pejorative for a person seen as overly sensitive or easily offended. The original notion of a snowflake’s uniqueness has been displaced by allusion to its fragility<sup><a href="http://doc.gold.ac.uk/compartsblog/index.php/work/snowflake-generation/#ftnt6">[6]</a></sup>. </p><p>As an insult it is effective: a broad way of criticising someone for being too sensitive while in turn claiming your own toughness. 
It’s an idea that is broad-ranging, easy to remember and has <a href="https://www.gq.com/story/why-trump-supporters-love-calling-people-snowflakes">spread widely, especially amongst the alt-right</a>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://aubergene.com/content/images/2020/05/new-yorker-snowflake.jpg" class="kg-image" alt="Snowflake Generation"><figcaption><p> The New Yorker, April 2019 <a href="https://www.instagram.com/p/BwpInaGD__Q/">instagram.com/p/BwpInaGD__Q/</a> </p></figcaption></figure><h2 id="twitter-s-snowflake-algorithm">Twitter’s snowflake algorithm</h2><p>Within the digital realm, uniqueness is much easier to discern, since all information is represented as binary. If the boundary is agreed, then bitwise comparison will reveal whether two items are identical. This also gives rise to the notion that within a bounded digital topology only a finite number of unique states are possible within this mimesis of nature. </p><p>This extends to any type of coded information, even natural forms such as DNA. Here we find that exact duplication is actually not common at all, since even <a href="https://www.nytimes.com/2008/03/11/health/11real.html">identical twins have subtly different DNA</a>. Even when organisms are directly cloned, they continue their vitality on different paths. This was strikingly demonstrated in <a href="http://www.deeproot.com/blog/blog-entries/onetrees-the-forgotten-tree-art-project">One Trees by Natalie Jeremijenko</a>, where one thousand Paradox Walnut trees were cloned and planted in pairs around the San Francisco Bay Area. Despite all being genetically identical, the trees grew, lived and died in quite different ways. </p><p>Twitter launched in 2007 and first gained popularity at the SXSW conference. The number of tweets grew from around 5,000 per day in 2007 to 300,000 a year later, and to 50 million by 2010. 
The rapid growth meant that Twitter had a lot of problems scaling their service to work consistently, and the “fail whale”, an image which appeared on the error page, became almost as well known as their iconic bird logo<sup><a href="http://doc.gold.ac.uk/compartsblog/index.php/work/snowflake-generation/#ftnt7">[7]</a></sup>.</p><p>The snowflake algorithm was created to solve the problem of efficiently creating unique IDs across different servers running in different geographic locations, where they would at times lose contact with each other (network partition)<sup><a href="http://doc.gold.ac.uk/compartsblog/index.php/work/snowflake-generation/#ftnt8">[8]</a></sup>. I emailed <a href="https://twitter.com/rk">Ryan King</a>, one of the lead developers at the time, to confirm whether the algorithm’s name reflected the way each ID, like a snowflake, is unique despite being formed independently. He responded:</p><blockquote>“Yup, that was it. not much more complicated than that other than that there was a preexisting, related project called ‘snow goose’. Called such because it was a migration project.”</blockquote><p>To give a brief forensic analysis of Twitter’s snowflake: we have 64 bits in total, which break down as 41 bits of millisecond precision from Twitter’s custom epoch (giving 69 years’ worth of possible IDs), 10 bits to identify the server which generated the ID, and a 12-bit sequence to give a chronology to IDs generated within the same millisecond. The first bit is unused, since by convention it denotes a negative number. </p><p>So for Twitter, we have around 9.2×10<sup>18</sup> possible IDs, approximately the same as the number of grains of sand on Earth. However, the vast majority of these possible IDs will never be created, owing to how server IDs are allocated and to the capacity being designed to exceed expected usage. 
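<p>The 64-bit anatomy just described can be sketched directly in code. This is an illustrative reconstruction from the bit widths above, not Twitter's actual implementation; the epoch constant is the value published in the open-sourced snowflake code:</p>

```python
TWEPOCH = 1288834974657  # Twitter's custom epoch in milliseconds

def make_snowflake(timestamp_ms, worker_id, sequence):
    # 41 bits of milliseconds since the epoch | 10-bit server/worker ID
    # | 12-bit per-millisecond sequence; the sign bit stays zero.
    assert 0 <= worker_id < (1 << 10) and 0 <= sequence < (1 << 12)
    return ((timestamp_ms - TWEPOCH) << 22) | (worker_id << 12) | sequence

def split_snowflake(snowflake_id):
    return (
        (snowflake_id >> 22) + TWEPOCH,  # millisecond timestamp
        (snowflake_id >> 12) & 0x3FF,    # server/worker ID
        snowflake_id & 0xFFF,            # sequence within the millisecond
    )

# The 63 usable bits give the roughly 9.2 * 10^18 possible IDs
# mentioned above: 2**63 == 9223372036854775808.
```

<p>Because the timestamp occupies the high bits, sorting IDs numerically also roughly sorts tweets chronologically, which is part of what made the scheme workable across partitioned servers.</p>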
In this sense, the snowflake algorithm defines a latent temporal topographic realm of unique IDs to which tweets are attached.</p><p>If we take a look at the component parts of the anatomy above, then we see that uniqueness is only guaranteed for the whole, and that within the ID there are relationships which are formed. Many snowflakes will share the same server ID, and when the service is busy we can expect many to be created within the same millisecond. This has similarities to the work Serendipity, created by Kyle McDonald<sup><a href="http://doc.gold.ac.uk/compartsblog/index.php/work/snowflake-generation/#ftnt9">[9]</a></sup> using Spotify data to show when two people somewhere in the world both played the same music track at the same time. Following this idea, we find the thing-power of the snowflake ID. The algorithm has been created and the initial parameters set, but the creation of IDs within the latent space is manifested through and entangled with the survival of the body it serves (Twitter), or beyond: since the algorithm was released under an open-source licence, this vitality can spread and outlive its original host.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://aubergene.com/content/images/2020/05/serendipity.png" class="kg-image" alt="Snowflake Generation"><figcaption><p> <a href="https://vimeo.com/293645693">Serendipity</a> (2014) — Kyle McDonald </p></figcaption></figure><p>This leads me to think about the content of the Tweet, which so far I’ve ignored. At the time of the creation of the snowflake algorithm, a Tweet was limited to 140 characters (it is presently 280 characters). 
The latent topology for the content of what could be tweeted is far greater than the space of the snowflake IDs, meaning that the content of each tweet could be unique. In practice, however, we don’t just tweet random assortments of letters (or at least not all the time), so we can expect that many tweets with identical content do exist, not to mention retweets, which are a practice of doing exactly that.</p><h2 id="computational-art">Computational Art</h2><p>In my artwork, I wanted to visualise the relationship between the underlying digital topology of the Twitter snowflake IDs and the materiality of Bentley’s photomicrography. I wanted the work to present at both micro and macro scale, showing each ID as a unique snowflake pattern which is hyper-programmably generative and can be called to produce output for any given ID, disintermediating the content and Twitter itself. Then, by producing the output for many IDs, I would hope to get the blanket uniformity where the details meld away and we see the work as a blanket of snowflakes.</p><p>I decided to create snowflake patterns using a Lindenmayer system (L-system), a simple recursive coding system which can be translated into pen-tool movement commands. I started with the Koch snowflake<sup><a href="http://doc.gold.ac.uk/compartsblog/index.php/work/snowflake-generation/#ftnt10">[10]</a></sup>, which is a fractal. I then adapted it to add higher-level complexity, which also removes the fractal nature: it is no longer self-similar, although it can still be iterated to an unlimited degree.</p><p>I added multiple terms to my snowflake code that allowed me to change its appearance by varying the length of different segments. These can then be tied to the anatomical parts of the snowflake ID that I outlined earlier. I wanted to look at using the geographic server ID component to surface a psychogeography linking the locations of the servers, as opposed to the location of the user who sent the tweet. 
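<p>The basic L-system machinery is only a few lines: repeatedly rewrite a string, then interpret it as pen movements. A minimal sketch using the classic Koch snowflake (axiom F--F--F, rule F → F+F--F+F, 60-degree turns), before any of the adaptations described above:</p>

```python
import math

def expand(axiom, rules, iterations):
    """Rewrite the axiom string, replacing each symbol via its rule."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

def to_points(s, step=1.0, angle=60.0):
    """Interpret the string as pen movements: F draws, + and - turn."""
    x, y, heading = 0.0, 0.0, 0.0
    pts = [(x, y)]
    for ch in s:
        if ch == "F":
            x += step * math.cos(math.radians(heading))
            y += step * math.sin(math.radians(heading))
            pts.append((x, y))
        elif ch == "+":
            heading += angle
        elif ch == "-":
            heading -= angle
    return pts

koch = expand("F--F--F", {"F": "F+F--F+F"}, 2)  # 48 segments after two rewrites
```

<p>Varying the segment lengths, as described above, is then a matter of parameterising the step taken for each F.</p>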
Producing the complete algorithm is complex, since the latent space is so large (63 bits). I would particularly like to look at the ordering: since there will be many snowflakes that are superficially similar, it would be good if they aren’t clustered together.</p><p>I initially concentrated on plotting snowflakes using my obsolete Roland DPX-3300 plotter, which I have reanimated by writing software to bridge compatibility with the HTML5 Canvas API. Using the plotter is also interesting since a reasonable degree of randomness creeps in from the nature of pen on paper: minute bumps and grit in the paper.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://aubergene.com/content/images/2020/05/snowflake.jpg" class="kg-image" alt="Snowflake Generation"><figcaption><p>Snowflake (2019) — Julian Burgess</p></figcaption></figure><p>I later found time to laser-cut a single one of my generated snowflakes into clear acrylic. I really like this aesthetic, as it gives a slight impression of them being made of ice, and an idea of temporality in their composition through the changing structure of the refracted light. Ideally, I would like to laser-cut a much larger number into bigger sheets of acrylic so I could get the micro and macro scales which I talked about earlier; however, it has been tricky to balance the time against my job commitments.</p><h2 id="conclusion">Conclusion</h2><p>Snowflakes are an apt metaphor for uniqueness, perhaps the perfect one. However, defining uniqueness in a physical sense is slippery, since we can continually reduce an object to its component parts and it becomes problematic to find a boundary.</p><p>In the digital domain, we can easily define binary equality between entities, and much software relies on this mechanism. We have a tightly defined binary topology from which all digital media grows. We can also create things which are indistinguishable to humans but wildly different in the underlying digital domain. 
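<p>That last point is easy to demonstrate with Unicode confusables; this tiny example (not part of the original project) shows two strings that look identical to a human but differ in the underlying bytes:</p>

```python
# The first string ends with the Latin letter "e", the second with the
# Cyrillic letter "е" (U+0435); most fonts render them identically.
latin = "snowflake"
cyrillic = "snowflak\u0435"

print(latin == cyrillic)         # False: bitwise they are different
print(latin.encode("utf-8"))     # b'snowflake' (9 bytes)
print(cyrillic.encode("utf-8"))  # b'snowflak\xd0\xb5' (10 bytes)
```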
This is a simplified universe that we can inspect down to the bit, the lowest conceptual level. As the physical construction of computers has developed, this milieu is hidden in increasingly tiny electronic components, and data is strewn in an entanglement across many layers of hardware and distant networked devices in the real world, all working to support our parallel cyberspace.</p><p>In an artistic context, uniqueness is perhaps impossible to consider outside of conceptual space. All works are transient and subject to changes in their surroundings and interpretation, which change their relationships to other works and the degree of similarity. In that sense, uniqueness really comes from the totality of the vast web of relationships stemming from the piece to the artist and to the viewer; the work is the unique focal point of these relationships and embodies them.</p><h2 id="annotated-bibliography">Annotated bibliography</h2><p>Meager, R. “The Uniqueness of a Work of Art.” <em>Proceedings of the Aristotelian Society</em>, vol. 59, 1958, pp. 49–70. JSTOR, <a href="http://www.jstor.org/stable/4544604">www.jstor.org/stable/4544604</a>.</p><p>Meager considers what sort of thing an individual artwork must be and how its uniqueness can be evaluated. Although the work was written at the beginning of the postmodernist art movement, it takes a philosophical approach, as one would expect from the Aristotelian Society. Of interest to me was its approach of looking at the temporal nature of all works and at figurations of a work as copies: what can be considered original and what a copy, where an artist repeatedly attempts the same work, and where another artist copies a work as a work in its own right. Setting a comparative framework only works if characteristics can be agreed upon, which is ultimately difficult.</p><p>Lin, Y and Vuillemot, R. <em>Twitter Visualization Using Spirographs</em>. 
Leonardo 2016 49:2, 170-172 <a href="https://www.mitpressjournals.org/doi/abs/10.1162/LEON_a_01063">https://www.mitpressjournals.org/doi/abs/10.1162/LEON_a_01063</a></p><p>The piece describes the Spirograph: the mechanism, a brief history of how they are produced, and why its structure leads to a unique signature of geometric shape outputs. The Spirograph has been used before to create patterns similar to those found in nature, and in artworks such as Lauren Thorson’s “First 24 Hours of Spring”. In this work they selected three parameters to study how adjustments lead to outputs similar to various types of flower petal arrangements seen in nature. The output they produced is best seen as an animation, but static images show the change in the rhythm of the conference for which they produced the artwork.</p><p>Cramer, Florian. <em>Words Made Flesh: code, culture, imagination</em>. Rotterdam: Piet Zwart Institute. 2005</p><p>Cramer looks at the origins of the idea of code as executable instructions, long before the invention of computers, considering the practices of music, magic and religion. He looks at the computer as a performer of commands that previously were the work of humans, including thinking processes. The instructions we give to computers are now in a formalised language inscribed in code. Symbols have cultural significance, and these work their way into the computer languages we create, forming persistent structures within the code in how they mediate between human and machine.</p><h2 id="references">References</h2><ul><li>Kepler, Johannes. <em>The Six-Cornered Snowflake</em>. Oxford: Clarendon P, 1966. Print.</li><li>Baudrillard, Jean. <em>The System of Objects</em>. London: Verso, 2005. Print.</li><li>Palahniuk, C. (1996). <em>Fight Club</em>. New York: W. W. Norton &amp; Company.</li><li>Bentley, Wilson A. 
<em><a href="https://siarchives.si.edu/sites/default/files/pdfs/WAB_Snow_1902.pdf">Studies among the snow crystals during the winter of 1901-2</a></em> with additional data collected during previous winters and twenty-two half-tone plates. In Annual Summary of the Monthly Weather Review for 1902. Washington, DC: Government Printing Office, 1903. </li><li>Page, Mark, Jane Taylor, and Matt Blenkin. "<em>Uniqueness in the forensic identification sciences—fact or fiction?</em>." Forensic Science International 206.1-3 (2011): 12-18.</li><li>Snowflake - <a href="https://github.com/twitter-archive/snowflake">https://github.com/twitter-archive/snowflake</a> <br><a href="https://developer.twitter.com/en/docs/basics/twitter-ids.html">https://developer.twitter.com/en/docs/basics/twitter-ids.html</a></li><li><a href="https://www.wired.com/2012/12/algorithmic-snowflakes/">https://www.wired.com/2012/12/algorithmic-snowflakes/</a></li><li>Theory of the Snowflake Plot and Its Relations to Higher-Order Analysis Methods - Leonardo <br><a href="https://www.mitpressjournals.org/doi/pdf/10.1162/0899766053723041">https://www.mitpressjournals.org/doi/pdf/10.1162/0899766053723041</a></li></ul><h2 id="works-consulted">Works consulted</h2><ul><li>Gravner, Janko, and David Griffeath. "Modeling snow crystal growth II: A mesoscopic lattice map with plausible dynamics." Physica D: Nonlinear Phenomena 237.3 (2008): 385-404.</li><li>Rosenberger, Robert, and Peter-Paul Verbeek. Postphenomenological Investigations: Essays on Human-Technology Relations. Lanham: Lexington Books, 2015. Print.</li><li>Da Costa, Beatriz, and Kavita Philip. Tactical Biopolitics: Art, Activism, and Technoscience. Cambridge, Mass: MIT Press, 2008. Print.</li><li>Stengers, Isabelle. 
“Thinking with Whitehead - a Free and Wild Creation of Concepts.” Harvard University Press, 2014.</li><li>Reconstructing Twitter's Firehose <a href="https://news.ycombinator.com/item?id=19266823">https://news.ycombinator.com/item?id=19266823</a></li><li>Sunderman, Vida. Tatted Snowflakes. Dover Publications, 2012. Internet resource.</li><li>Paperfold snowflakes <a href="https://robbykraft.com/snowflakes/">https://robbykraft.com/snowflakes/</a></li><li>Almond, Darren. The Principle of Moments, 2019 - artwork<br><a href="https://www.studiointernational.com/index.php/darren-almond-the-principle-of-moments">https://www.studiointernational.com/index.php/darren-almond-the-principle-of-moments</a></li></ul><hr><p><a href="http://doc.gold.ac.uk/compartsblog/index.php/work/snowflake-generation/#ftnt_ref1">[1]</a> Bentley, Wilson A. "<a href="https://siarchives.si.edu/sites/default/files/pdfs/WAB_Snow_1902.pdf">Studies among the snow crystals during the winter of 1901-2</a> with additional data collected during previous winters and twenty-two half-tone plates." In Annual Summary of the Monthly Weather Review for 1902. Washington, DC: Government Printing Office, 1903.</p><p><a href="http://doc.gold.ac.uk/compartsblog/index.php/work/snowflake-generation/#ftnt_ref2">[2]</a> <a href="https://www.nationalgeographic.com/science/2007/02/no-two-snowflakes-the-same/">https://www.nationalgeographic.com/science/2007/02/no-two-snowflakes-the-same/</a></p><p><a href="http://doc.gold.ac.uk/compartsblog/index.php/work/snowflake-generation/#ftnt_ref3">[3]</a> Page, Mark, Jane Taylor, and Matt Blenkin. "<em>Uniqueness in the forensic identification sciences—fact or fiction?</em>." 
Forensic science international 206.1-3 (2011): 12-18.</p><p><a href="http://doc.gold.ac.uk/compartsblog/index.php/work/snowflake-generation/#ftnt_ref4">[4]</a> <a href="https://www.tate.org.uk/whats-on/tate-modern/exhibition/unilever-series/unilever-series-ai-weiwei-sunflower-seeds&amp;sa=D&amp;ust=1557793522640000">https://www.tate.org.uk/whats-on/tate-modern/exhibition/unilever-series/unilever-series-ai-weiwei-sunflower-seeds</a> </p><p><a href="http://doc.gold.ac.uk/compartsblog/index.php/work/snowflake-generation/#ftnt_ref5">[5]</a> <a href="https://blog.oxforddictionaries.com/2018/01/30/oed-new-words-mansplain-hangry-snowflake/&amp;sa=D&amp;ust=1557793522638000">https://blog.oxforddictionaries.com/2018/01/30/oed-new-words-mansplain-hangry-snowflake/</a></p><p><a href="http://doc.gold.ac.uk/compartsblog/index.php/work/snowflake-generation/#ftnt_ref6">[6]</a> <a href="https://blog.oxforddictionaries.com/2018/01/30/oed-new-words-mansplain-hangry-snowflake/&amp;sa=D&amp;ust=1557793522639000">https://blog.oxforddictionaries.com/2018/01/30/oed-new-words-mansplain-hangry-snowflake/</a></p><p><a href="http://doc.gold.ac.uk/compartsblog/index.php/work/snowflake-generation/#ftnt_ref7">[7]</a> <a href="https://venturebeat.com/2017/11/24/why-the-artist-behind-twitters-fail-whale-thinks-you-should-treat-art-as-a-currency/&amp;sa=D&amp;ust=1557793522636000">https://venturebeat.com/2017/11/24/why-the-artist-behind-twitters-fail-whale-thinks-you-should-treat-art-as-a-currency/</a></p><p><a href="http://doc.gold.ac.uk/compartsblog/index.php/work/snowflake-generation/#ftnt_ref8">[8]</a> <a href="https://blog.twitter.com/engineering/en_us/a/2010/announcing-snowflake.html&amp;sa=D&amp;ust=1557793522637000">https://blog.twitter.com/engineering/en_us/a/2010/announcing-snowflake.html</a></p><p><a href="http://doc.gold.ac.uk/compartsblog/index.php/work/snowflake-generation/#ftnt_ref9">[9]</a> <a href="https://vimeo.com/293645693">https://vimeo.com/293645693</a></p><p><a 
href="http://doc.gold.ac.uk/compartsblog/index.php/work/snowflake-generation/#ftnt_ref10">[10]</a> <a href="https://en.wikipedia.org/wiki/Koch_snowflake">https://en.wikipedia.org/wiki/Koch_snowflake</a></p>]]></content:encoded></item><item><title><![CDATA[violence & expérience]]></title><description><![CDATA[<p>This week we looked at speculative practices, ideas around thinking into the future and imagining other pasts and futures, writing practice as time travel. </p><p>One of the ideas which caught my attention was thinking how to write with gaps — leaving space for things which are indeterminate. I find that</p>]]></description><link>https://aubergene.com/week-20/</link><guid isPermaLink="false">5ebf0e5b5e27000001ef04cd</guid><category><![CDATA[Computational Arts-Based Research and Theory]]></category><dc:creator><![CDATA[Julian Burgess]]></dc:creator><pubDate>Wed, 27 Mar 2019 21:56:43 GMT</pubDate><media:content url="https://aubergene.com/content/images/2019/03/PANO_20190321_202048.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://aubergene.com/content/images/2019/03/PANO_20190321_202048.jpg" alt="violence & expérience"><p>This week we looked at speculative practices, ideas around thinking into the future and imagining other pasts and futures, writing practice as time travel. </p><p>One of the ideas which caught my attention was thinking how to write with gaps — leaving space for things which are indeterminate. I find that one of the key elements of the written word, and why I often enjoy the book more than the film, is that there are gaps left for you to fill. What did this character look like, how did their voice sound, how did they walk, what were their hands like and their touch?</p><p>I was interested in the word <em>violence</em> which our tutor used in reference to critical fabulation, an idea from Saidiya Hartman. The violence of the archive, I believe, was the phrase.
She then further explained the idea of finding silenced histories and then imagining what they might have been. The <a href="https://en.oxforddictionaries.com/definition/violence">OED</a> gives the following two definitions:</p><ol><li>Behaviour involving physical force intended to hurt, damage, or kill someone or something.</li><li>Strength of emotion or of a destructive natural force.</li></ol><p>Both can perhaps be relevant. We can think about archives of objects from colonial collections which were taken by force, or indeed may even be the bodily remains of those encounters. We can also think about the nature of archiving itself, about how we force a tracking number and label onto each object, and try to fit it into a given ontology perhaps very distant and removed from the world in which it originally lived, now almost more of a fiction than a discernible place in space and time. </p><p>How to move between the making (for research) and the writing to expand what we are making (expanding the world we are making too).</p><p>We heard about the memory modules used for the Apollo missions, which were woven from copper threads. The process had been called the <em>LOL Method</em>, which at the time stood for Little Old Ladies. It was interesting to think about as a fabulative method for making a surface for storing computer memories. It complicates the intellectual processes of early computation with that of traditional manual labour. I was reminded of when I visited a lace-making museum in Bruges. They had demonstrations from a friendly group of women who meet regularly to work on their lace. The process is fascinating to watch: they rapidly move bobbins over each other to form the lace. Patterns are written out in the form of a chart which has colour coding to show which threads move in which patterns.
In many ways, it's very similar to how memory is managed within a computer, as they pass from left to right moving the bobbins and then repeat back the other way (there are many variants on this).</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://aubergene.com/content/images/2019/03/IMG_20180630_151255.jpg" class="kg-image" alt="violence & expérience"><figcaption>Lace at the <a href="https://www.visitbruges.be/en/kantcentrum-lace-centre">Kantcentrum (Lace Centre) — Bruges</a></figcaption></figure><p>Embracing these connections through the design and engineering of a project relates labour and thinking practice. Often today they are separate; what does it mean to have separate manufacturing processes? This type of manufacturing values process over outcome: the lace is sometimes sold, but profit is not the motivation for those in the group. </p><p>We also thought about a game of <a href="https://en.wikipedia.org/wiki/Cat%27s_cradle">cat's cradle</a>, and its similarities to the open source model. Code is complex and interwoven, and yet usually is the work of many people, passing their work on to the next person for further contributions, adding, changing and removing. Some companies use open source as much for the process as for the outcome, to show your work as you go along. In the daylight of scrutiny, do developers write better code or think more about their practice? In my experience they do, but it isn't, of course, a silver bullet and won't work in isolation if other parts of the process aren't in harmony.</p><p>What does it mean to make conclusions or speculations? (Chris Salter.) At the end of the journey, we expect a resolution, a conclusion, a "what have we learned" from these works in the making.</p><p>Flow of matter. Differences in science: science as a practice which tries to manipulate and measure matter. Art is often about framing the flow of matter. 
</p><p>Towards the end of the lecture, we heard about using the word <em>expérience</em> - the French use of the word is apparently of interest to Chris Salter. It grabbed my attention as this week I had been reading <em>Thinking with Whitehead</em> (Isabelle Stengers, Harvard University Press, 2011). In chapter one she talks about using “expérience” as a substitute for “awareness”, as there is no direct French word for the English <em>(la nature est ce dont nous avons l'expérience dans la perception)</em>, that is, “nature is what we have experience of in perception”. It's interesting that the word for experience is somewhat nuanced, describing an experience without reference to consciousness, and it makes one think about the true nature of experience. </p><h3 id="writing-exercise">Writing exercise</h3><p>The green and yellow LEDs on either side of the RJ45 network socket danced rapidly and wildly back and forth; the fan spun at maximum velocity, desperately trying to keep the tweet-harvesting machine within its recommended operating temperature limits. The harvester had been set running a few weeks earlier, its purpose: collecting tweets relating to the impending doom of Brexit. When it began its earnest task the flow of tweets was a weak stream, but now it raged like an angry, engorged torrent, tearing...  </p>]]></content:encoded></item><item><title><![CDATA[doors left unopened]]></title><description><![CDATA[<p>I was originally planning to write about some of the other ideas which came to mind whilst researching my end of term project.
However, along the way I changed my mind and switched the title to one of my alternative ideas, so instead I'm using this space to write about why</p>]]></description><link>https://aubergene.com/unopened-doors/</link><guid isPermaLink="false">5ebf0e5b5e27000001ef04c7</guid><category><![CDATA[Computational Arts-Based Research and Theory]]></category><dc:creator><![CDATA[Julian Burgess]]></dc:creator><pubDate>Sun, 24 Mar 2019 23:10:00 GMT</pubDate><media:content url="https://aubergene.com/content/images/2019/03/DSCF2685.JPG" medium="image"/><content:encoded><![CDATA[<img src="https://aubergene.com/content/images/2019/03/DSCF2685.JPG" alt="doors left unopened"><p>I was originally planning to write about some of the other ideas which came to mind whilst researching my end of term project. However, along the way I changed my mind and switched the title to one of my alternative ideas, so instead I'm using this space to write about why I switched and the progress I had made.</p><p>My original idea was to look at human-computer communication with specific reference to the use of robots (chatbots) to influence humans, and how humans can be programmed and ideas spread in a viral-like way. I wanted to focus my research by looking at Twitter and the use of chatbots to influence and program people, and was interested in the term psychosis; however, during my tutorial it became clear that it's a complex and loaded term and would probably not be appropriate to use in this context. The focus shifted to look more specifically at how ideas spread in humans, and how the Twitter milieu enables and positions chatbots relative to humans.</p><p>My interest was sparked by the large amount of speculation about the use of robotic agents to influence and subvert people's minds in relation to the UK's European Referendum (Brexit) and the United States' 2016 Presidential Election. I also have a long-standing background interest in fictional speculation around computerised cognitive control.
A long time ago I read <em>Snow Crash</em> (Stephenson, 1992), which revolves around the idea of a special code, a virus which, when read by humans within cyberspace, kills its victims in real life.</p><p>I talked about my ideas with my classmates and they proffered lots of useful books and articles. I read <em>Manufacturing Consent: The Political Economy of the Mass Media</em> (Herman and Chomsky; annotated bibliography below). Then the tragic Christchurch mosque shootings in New Zealand, which were <a href="https://www.bbc.co.uk/news/technology-47583393">live-streamed on Facebook</a>, occurred. It was a shocking event and, though it isn't directly related to my research, it just felt too close, and I don’t feel like I’m qualified to talk about it. The relations between social media and mental health are very complicated, and I no longer felt I could face researching or writing about this topic, especially relating to the use of chatbots to influence people's minds.</p><p>Along the way I picked up other ideas which, I admit, are lighter and easier to address.</p><blockquote>Time present and time past<br>Are both perhaps present in time future,<br>And time future contained in time past.<br>If all time is eternally present<br>All time is unredeemable.<br>What might have been is an abstraction<br>Remaining a perpetual possibility<br>Only in a world of speculation.<br>What might have been and what has been<br>Point to one end, which is always present.<br>Footfalls echo in the memory<br>Down the passage which we did not take<br>Towards the door we never opened<br>— Burnt Norton, T.S. Eliot</blockquote><h2 id="annotated-bibliography">Annotated Bibliography</h2><p>Cramer, Florian. <em>Words Made Flesh: Code, Culture, Imagination</em>. Rotterdam: Piet Zwart Institute, 2005.</p><p>Cramer looks at a wide variety of computer-related communication, both as text and code.
They highlight and contrast the practical forms relating to the computer and the human cognitive and social aspects. I was drawn to this as it looks into the differences and similarities of the human and computer when it comes to textual communication, the cultural aspects of code, and the ideas of executing code and control structures.</p><p>Herman, Edward S., and Noam Chomsky. <em>Manufacturing Consent: The Political Economy of the Mass Media</em>. New York: Pantheon Books, 1988. Print. </p><p>Herman and Chomsky outline the scope and workings of mass media and then quickly address the core ways in which they see it as a tool of mass manipulation: “the media serve, and propagandize on behalf of, the powerful social interests that control and finance them”. They then enumerate the key ways in which they consider this takes place. The areas of interest to me were chapter 3, on legitimizing versus meaningless third-world elections, which shows examples where elections for various regimes are praised or degraded within popular western media outlets. Chapter 1.4 deals with “Flak and the enforcers: the fourth filter”, showing how doubt can easily be cast on legitimate and factual reporting in order to reduce its impact and confuse the situation.</p><p>Rushkoff, Douglas, and Leland Purvis. <em>Program or Be Programmed: Ten Commands for a Digital Age</em>. Berkeley, CA: Soft Skull Press, 2011. Print. </p><p>Rushkoff has written a chatty and concise guide based on a talk originally given at the South by Southwest conference. He looks at the fundamentals of human language and how it compares with computer language, talking about the speed at which information now travels and how this creates a cybernetic organism which networks and collectively thinks in more advanced ways than we do as individuals. He looks at the persistent and omnipresent nature of the internet as a complete single system.
Later he addresses the role of choice within systems: if you can't program, then you have reduced agency to create your own choices and are stuck with the choices of those who created the systems you end up using. </p><h2 id="works-referenced">Works Referenced</h2><ul><li>Stephenson, Neal. <em>Snow Crash</em>. New York: Bantam Books, 1993. Print. </li><li>Bastos, Marco. <em>The Brexit Botnet and User-Generated Hyperpartisan News.</em> Social Science Computer Review, Feb 2019.</li><li>Bunz, Mercedes. "When Algorithms Learned How to Write." In <em>The Silent Revolution</em>, Palgrave Pivot, London, 2014.</li><li>Lin, Yu-Ru, David Lazer, and Nan Cao. <a href="https://www.mitpressjournals.org/doi/abs/10.1162/LEON_a_00573">"Watching How Ideas Spread Over Social Media."</a> Leonardo, 2013, Vol. 46:3, page 277.</li><li>Orwell, George, Ben Pimlott, and Peter H. Davison. <em>Nineteen Eighty-Four</em>. London: Penguin Books in association with Secker &amp; Warburg, 1989. Print. </li><li>Žižek, Slavoj, and Sophie Fiennes. <em>The Pervert's Guide to Cinema</em>. 2016. (film)</li><li>Curtis, Adam. <em>Hypernormalisation</em>. 2016. (film)</li></ul>]]></content:encoded></item><item><title><![CDATA[Horniman X Goldsmiths]]></title><description><![CDATA[<p>On Thursday I took part in a <a href="https://www.horniman.ac.uk/visit/events/horniman-x-goldsmiths">late night exhibition at the Horniman Museum</a> in south London. I worked with classmates from the course: <a href="https://ckarpodini.wixsite.com/christinakarpodini">Christina Karpodini</a>, Hazel Ryan and <a href="https://romainbiros.com/">Romain Biros</a>.
Our proposal, <em>Overcurrents</em>, came from looking at the rich ecology of the museum's aquarium and thinking</p>]]></description><link>https://aubergene.com/overcurrents/</link><guid isPermaLink="false">5ebf0e5b5e27000001ef04ce</guid><category><![CDATA[Workshops in Creative Coding]]></category><category><![CDATA[Computational Arts-Based Research and Theory]]></category><dc:creator><![CDATA[Julian Burgess]]></dc:creator><pubDate>Sat, 23 Mar 2019 00:50:00 GMT</pubDate><media:content url="https://aubergene.com/content/images/2019/03/ezgif.com-optimize.gif" medium="image"/><content:encoded><![CDATA[<img src="https://aubergene.com/content/images/2019/03/ezgif.com-optimize.gif" alt="Horniman X Goldsmiths"><p>On Thursday I took part in a <a href="https://www.horniman.ac.uk/visit/events/horniman-x-goldsmiths">late night exhibition at the Horniman Museum</a> in south London. I worked with classmates from the course: <a href="https://ckarpodini.wixsite.com/christinakarpodini">Christina Karpodini</a>, Hazel Ryan and <a href="https://romainbiros.com/">Romain Biros</a>. Our proposal, <em>Overcurrents</em>, came from looking at the rich ecology of the museum's aquarium and thinking about the threats those environments are currently facing. We chose to focus on plastic pollution of the world's water systems.</p><figure class="kg-card kg-gallery-card kg-width-wide kg-card-hascaption"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2019/03/IMG_20190315_193523.jpg" width="4032" height="3024" alt="Horniman X Goldsmiths"></div><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2019/03/IMG_20190321_204052.jpg" width="4032" height="3024" alt="Horniman X Goldsmiths"></div></div></div><figcaption>Our cyber jellyfish mirrored the real jellyfish in the Horniman aquarium</figcaption></figure><p>Our work was in three parts.
Two were projection displays built using <a href="https://openframeworks.cc/">OpenFrameworks</a>. I worked on the interactive jellyfish display with Romain. We wanted a rough simulation of the jellyfish, but with interactivity-driven behaviour. I remembered a <a href="http://marcinignac.com/blog/cindermedusae-making-generative-creatures/">blog post from Marcin Ignac on creating generative jellyfish</a>, which was really helpful and which we used as a starting point. For interaction we decided to use the Kinect, so we could sense people in the dark conditions of the aquarium. We added interactions so that plastic bottles would appear in the water and float upwards when sudden movement was detected, and so that the jellyfish would move away from the point nearest the display. Christina, using Max/MSP, added sounds which played in accordance with the interactions and really enhanced the atmosphere of the work.</p><figure class="kg-card kg-gallery-card kg-width-wide kg-card-hascaption"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2019/03/IMG_20190321_220209.jpg" width="4032" height="3024" alt="Horniman X Goldsmiths"></div><div class="kg-gallery-image"><img src="https://aubergene.com/content/images/2019/03/IMG_20190321_184841-1.jpg" width="4048" height="3036" alt="Horniman X Goldsmiths"></div></div></div><figcaption>World map showing ocean currents of plastic pollution</figcaption></figure><p>Our second display was mostly coded by Hazel and was projected on to an existing world map. We wanted to show the <a href="https://response.restoration.noaa.gov/about/media/visualizing-how-ocean-currents-help-create-garbage-patches.html">location and patterns</a> of the vast patches of plastic pollution that have formed in the oceans.
We again used OpenFrameworks, this time with the ofxPiMapper addon, which we had become familiar with during term 1, as this made it much easier to align the projection to the complex shape of the curved wall we were projecting on to. Again Christina added sounds to enhance the piece.</p><p>In addition to the projection displays, we had collections of plastic that we had gathered from the banks of the Thames (including an old shopping trolley) and the Essex shore, along with recycling collected from home. These were both scattered around the aquarium and displayed within glass cases, to contrast the beauty of the natural environments that the aquarium reproduces with the ugliness of our waste plastics.</p><p>The work for the show was really tiring, but we learnt a lot and had fun. We were only given one hour to install the show, which was a very short time frame. We did one rehearsal but it was still a challenge. Our code wasn't the tidiest, and even though we used a bunch of adjustable variables with <a href="https://github.com/braitsch/ofxDatGui">ofxDatGui</a>, there were still lots of things I would have liked to tweak and the code was becoming rather spaghetti. It was quite shocking how much plastic I collected from my regular recycling as part of this project, and also how anything we had to buy for the project was wrapped in single-use plastic.
Obviously individual actions are part of the solution to reducing plastic usage, but we also desperately need legislation and much firmer action from large companies that manufacture plastic if we're to preserve the amazing ecology of the world's oceans.</p><h3 id="further-links">Further links</h3><ul><li><a href="https://www.theoceancleanup.com/">https://www.theoceancleanup.com/</a></li><li><a href="http://plasticadrift.org/">http://plasticadrift.org/</a></li><li><a href="https://response.restoration.noaa.gov/about/media/visualizing-how-ocean-currents-help-create-garbage-patches.html">https://response.restoration.noaa.gov/about/media/visualizing-how-ocean-currents-help-create-garbage-patches.html</a></li></ul>]]></content:encoded></item><item><title><![CDATA[Guest lecture - Matthew Yee-King]]></title><description><![CDATA[<p>We had a guest lecture today from <a href="http://yeeking.net">Matthew Yee-King</a>. It was really cool to hear about his background across biology, music and technology and how he had tied it all together in producing <a href="http://www.yeeking.net/evosynth/">EVOSYNTH</a>, a genetic synthesiser. You can generate a bunch of synths and then breed them together (or</p>]]></description><link>https://aubergene.com/yee-king/</link><guid isPermaLink="false">5ebf0e5b5e27000001ef04cc</guid><category><![CDATA[Workshops in Creative Coding]]></category><dc:creator><![CDATA[Julian Burgess]]></dc:creator><pubDate>Mon, 18 Mar 2019 23:24:53 GMT</pubDate><media:content url="https://aubergene.com/content/images/2019/03/Screenshot-2019-03-18-at-23.27.11.png" medium="image"/><content:encoded><![CDATA[<img src="https://aubergene.com/content/images/2019/03/Screenshot-2019-03-18-at-23.27.11.png" alt="Guest lecture - Matthew Yee-King"><p>We had a guest lecture today from <a href="http://yeeking.net">Matthew Yee-King</a>.
It was really cool to hear about his background across biology, music and technology and how he had tied it all together in producing <a href="http://www.yeeking.net/evosynth/">EVOSYNTH</a>, a genetic synthesiser. You can generate a bunch of synths and then breed them together (or singularly, it's asexual) and get new resulting synths. It's really neat when, on this course, I see professional work and can now understand how I might approach building something similar. Obviously he did an entire PhD on the subject, so I wouldn't expect to reach that level, but just walking around the foothills helps me better appreciate the mountain. Also, it's written in JavaScript, which gave me a break from reading C++ 😉.</p><figure class="kg-card kg-embed-card"><iframe width="480" height="270" src="https://www.youtube.com/embed/pvsdGvzWYy4?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure>]]></content:encoded></item><item><title><![CDATA[What is it like to be a thing?]]></title><description><![CDATA[<p>This week's lecture was titled “What is it like to be a thing?” and drew from <em>Alien Phenomenology, or What It’s Like to Be a Thing</em> — Ian Bogost and <em>Vibrant Matter</em> — Jane Bennett.</p><p>I found the lecture really interesting and I felt like I understood it much better than</p>]]></description><link>https://aubergene.com/week-18/</link><guid isPermaLink="false">5ebf0e5b5e27000001ef04cb</guid><category><![CDATA[Computational Arts-Based Research and Theory]]></category><dc:creator><![CDATA[Julian Burgess]]></dc:creator><pubDate>Sat, 09 Mar 2019 15:13:38 GMT</pubDate><media:content url="https://aubergene.com/content/images/2019/03/IMG_20181123_192506-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://aubergene.com/content/images/2019/03/IMG_20181123_192506-1.jpg" alt="What is it like to be a thing?"><p>This week's lecture was titled “What is it
like to be a thing?” and drew from <em>Alien Phenomenology, or What It’s Like to Be a Thing</em> — Ian Bogost and <em>Vibrant Matter</em> — Jane Bennett.</p><p>I found the lecture really interesting and I felt like I understood it much better than the early lectures in term one. We looked at using the idea as a tool, to think and write from the perspective of the “thing”, to avoid human exceptionalism and to try to move past what it is to think as a human.</p><blockquote>If a lion could speak, we could not understand him<br>— Wittgenstein</blockquote><p>This quote came to my mind, and when I asked Helen (our tutor) about it, she explained that we can challenge the idea and use the framework as a tool to imagine the <em>experience</em> of being the <em>thing</em>, not necessarily trying to communicate as the <em>thing</em> or to understand it, but just to move outside ourselves, which made sense to me. </p><p>It felt like a lot of the items often had human connections, from found items of street trash (rubbish) on the small and familiar scale, to the more exotic ideas of the flotsam and jetsam of space junk. In nature we thought about the vastness in scale and energy of volcanoes and the long temporality of the shifting of tectonic plates.</p><p>We thought about the idea of what it is like to be a computer, which Ian Bogost writes about in his book. I like this idea very much; as a child I was always very curious about how electronics worked, and I would enjoy taking apart radios and other broken electronic items, fascinated by the array of electronic components, which to me looked like a city, with myself cast as a giant overlord. It was interesting to essentially come back around to this same idea, of how a capacitor might feel, or the personality of a relay switch (somewhat bipolar perhaps?). Another world I often imagined myself into as a child was that of a pinball table, which I think was probably largely influenced by the Sesame Street counting song.
It's something I like about this course: combining the way that engineers understand how something works with trying to understand what something experiences. </p><figure class="kg-card kg-embed-card"><iframe width="480" height="270" src="https://www.youtube.com/embed/VOaZbaPzdsk?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><p>I liked the poetic form that Jane Bennett employed, listing the items with rich description. I'm currently evaluating ideas for my research project and thought this might be an interesting form to take as part of the write-up. I also started to think about the idea of trying to understand the experience of an algorithm. I've been looking at <a href="https://blog.twitter.com/engineering/en_us/a/2010/announcing-snowflake.html">Twitter's Snowflake algorithm</a> and so started to ponder how it might feel to be the algorithm. Watching a binary clock rapidly escalating, feeling that continual background pulse alongside the rise and fall of the sequential ID as the human events flow through me. I found it hard to think about existing simultaneously across many different servers, all working together and yet each part separate (by design), each in a sense unaware of the others and yet systematically all one and the same.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://aubergene.com/content/images/2019/03/IMG_20181123_192506.jpg" class="kg-image" alt="What is it like to be a thing?"><figcaption>Mesopotamian duck weights - British Museum</figcaption></figure><p>Towards the end of the lecture we watched a video of a talk by <a href="https://en.wikipedia.org/wiki/Katherine_Behar">Katherine Behar</a>, where she briefly touched on a wide range of her works. What particularly caught my attention was when she mentioned weights.
She had an image of a Mesopotamian duck weight and it took me a while to recall where I had seen it before. I thought perhaps we had covered it in some way in class, but it was actually from the research trip I did with Izzy to the British Museum for the <a href="http://doc.gold.ac.uk/compartsblog/index.php/work/encoding-humans-knots-and-threads/">group project which focused on Quipu</a>. The curation of the gallery meant that ancient artefacts relating to literature, monetary accounting and the measurement of weights were all close to each other. Recordings and representations of the immediate world around them form a lot of the focus of early human artefacts (along with wildly imaginative and decorative objects). The duck weights caught my eye as most of the other forms in the section appear very functional, yet these have an amazing aesthetic. Viewing them again made me think further about their role and purpose as units of measure, and how their experience changes when they are no longer deemed suitable for that purpose, as she found with broken and modified weights; this could happen within the lifetime of the artefact's original creator. Today we store most information in a binary representation, often now in the “cloud”, which both by name and nature is very nebulous and much harder to reason about and imagine being, when compared with a duck.</p><h2 id="visits">Visits</h2><p>This week's lecture reminded me of a symposium I went to late last year — <a href="https://www.serpentinegalleries.org/exhibitions-events/symposium-shape-circle-mind-fish-we-have-never-been-one">The Shape of a Circle in the Mind of a Fish</a>. I didn't attend the whole thing and only saw a few talks.
I really enjoyed the talk below from (my friend) Leah Kelly on <a href="https://en.wikipedia.org/wiki/Tetrodotoxin">TTX</a> in puffer fish, and how in a very real sense we (humans) are all connected to other life in deep and complex ways.</p><figure class="kg-card kg-embed-card"><iframe width="480" height="270" src="https://www.youtube.com/embed/videoseries?list=PLLrFzV6gBibfXb1sqZA_3AHKZHP_l23Wu" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><p>Earlier last week I went to my first “crit”. It's an interesting word which holds power, and it has been bandied around a lot by students on the course in the last few weeks, with declarations that we want or need them and questions about how, when and where they would take place. I remember in one of my first jobs, when I was sitting with my manager, she picked up the phone and said “hello, yes, I'm busy in a meeting”; it was then that I realised that the word <em>meeting</em>, which had this special power, was similar, in that it’s actually something quite ordinary once you try it. The crit seemed to go very well, with lots of interesting feedback, and I hope I contributed to that. I now look forward to receiving my own crits in the summer term.</p>]]></content:encoded></item><item><title><![CDATA[escape into dream]]></title><description><![CDATA[<p>This week we looked at computational art and post-phenomenology. To be honest, although I'd heard of the word phenomenology I didn't really know what it meant, so I had to look that up to start with and then add it to my (secret) list of words I've learnt during this module.
So</p>]]></description><link>https://aubergene.com/week-17/</link><guid isPermaLink="false">5ebf0e5b5e27000001ef04c6</guid><category><![CDATA[Computational Arts-Based Research and Theory]]></category><dc:creator><![CDATA[Julian Burgess]]></dc:creator><pubDate>Thu, 07 Mar 2019 00:52:00 GMT</pubDate><media:content url="https://aubergene.com/content/images/2019/03/PANO_20180507_195543.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://aubergene.com/content/images/2019/03/PANO_20180507_195543.jpg" alt="escape into dream"><p>This week we looked at computational art and post-phenomenology. To be honest, although I'd heard of the word phenomenology I didn't really know what it meant, so I had to look that up to start with and then add it to my (secret) list of words I've learnt during this module. So it sounds like it relates to a mode of study which concentrates on consciousness and the direct experience of objects. </p><p>The study of technology in terms of the relations between human beings and technological artefacts, focusing on the various ways in which technologies help to shape relations between human beings and the world. They do not approach technologies as merely functional and instrumental objects, but as mediators of human experiences and practices.</p><p>Not looking at things as pre-given, but the relationships define both object and subject</p><p>Relational frameworks</p><p>Not a bridge, but a fountain</p><p>Material artefacts deserve philosophical attention</p><p>MRI image as a picture of the brain, but look also at what relationships it has.
The ideas of medicine, of visualising</p><p>black mysticism (Moten)</p><p>Don Ihde - four types of relationships</p><ul><li>Embodiment relations<br>users develop bodily perceptual relationships with devices<br>technology changes your perception of the world<br>how you view through a camera, or tell the time by a watch<br>how does a pair of glasses change our perception of the world (<a href="https://en.wikipedia.org/wiki/Lygia_Clark">Lygia Clark</a>)</li><li>Hermeneutic relations<br>(how we interpret things)<br>Looking at a watch face, we interpret the hands, or the display, and our experience is transformed<br>Thinking that midday is when the hands point to 12<br>That the day is divided into 24 hours<br>A watch is easy to interpret, but an MRI is hard<br><a href="https://www.serpentinegalleries.org/exhibitions-events/pierre-huyghe-uumwelt">Pierre Huyghe - Uumwelt</a></li><li>Alterity relations<br>The moment when we relate to something in the manner in which we relate to other humans<br>When Ihde wrote about it, his examples included the ATM, but now we have things like Amazon's Alexa<br>Feminist Internet project at <a href="https://twitter.com/otheragent">UAL - Charlotte Webb</a><br>We don't think it <em>is</em> a human, just that we relate to it as if it were human<br>(Q. how would this relate to animals?)</li><li>Background relations<br>devices that shape our experiences that we don't notice<br>refrigerator, air conditioning - at a distance, or ambient<br><a href="http://www.katherinebehar.com/art/high-hopes-deux/index.html">Katherine Behar - High Hopes</a> (Roombas with rubber tree plants) - background relations with technology<br>the plants change the relationship with the Roomba<br>a shifting of how things are defined, or of what we think things are</li></ul><p>Thinking about the above in terms of relations between humans and technology</p><p>VR works - postphenomenological</p><p>You can't escape human relationship.
About moving in and out of worlds and bodies</p><p>Slavoj Žižek on David Lynch’s Blue Velvet</p><blockquote>The logic here is strictly Freudian, that is to say we escape into dream to avoid a deadlock in our real life. But then, what we encounter in the dream is even more horrible, so that at the end, we literally escape from the dream, back into reality. It starts with, dreams are for those who can not endure, who are not strong enough for reality. It ends with, reality is for those who are not strong enough to endure, to confront their dreams.</blockquote><h2 id="visits">Visits</h2><figure class="kg-card kg-image-card"><img src="https://aubergene.com/content/images/2019/03/00016IMG_00016_BURST20190222213801.jpg" class="kg-image" alt="escape into dream"></figure><p>I saw Massive Attack play at the O2 Arena. It was the twenty-year anniversary of the release of their album Mezzanine. I had previously seen them play at the Armoury in NYC. </p><p>Stuff about visuals and Adam Curtis</p>
We talked about the curatorial</p>]]></description><link>https://aubergene.com/week-16/</link><guid isPermaLink="false">5ebf0e5b5e27000001ef04c8</guid><category><![CDATA[Computational Arts-Based Research and Theory]]></category><dc:creator><![CDATA[Julian Burgess]]></dc:creator><pubDate>Sun, 24 Feb 2019 22:29:22 GMT</pubDate><media:content url="https://aubergene.com/content/images/2019/02/Screenshot-2019-02-24-at-22.21.11.png" medium="image"/><content:encoded><![CDATA[<img src="https://aubergene.com/content/images/2019/02/Screenshot-2019-02-24-at-22.21.11.png" alt="Curatorial concerns"><p>This week we had a guest lecture by <a href="https://cargocollective.com/RF">Rachel Falconer</a>. She introduced herself with a brief run-down of her background in curating across various art institutions.</p><p>The lecture talked through a number of computational art exhibitions dating from the nineteen-sixties to the present day. We talked about the curatorial aspects of the shows and what we felt worked well and what didn't. </p><p>Interestingly, one of the featured shows was the <a href="https://anthology.rhizome.org/">Net Art Anthology by Rhizome</a>, which was showing at the <a href="https://www.newmuseum.org/">New Museum</a> in New York, which by luck I had visited earlier that week. The show was a small collection of works, and we discussed how useful and relevant the physical presentation of computers from the late 90s was as part of the art form, both as aesthetics and in making a show about <em>net</em> art primarily in a physical location. </p><p>In person I wasn't really taken with the show; the piece which I spent the most time with was <a href="http://art.teleportacia.org/exhibition/give_me_time__this_page_is_no_more/">Give me time/This page is no more</a> - Olia Lialina, 2015. The work is a series of 35mm slides shown in pairs featuring pages from the now-defunct Geocities site.
One side of the pair shows pages of people "moving in" to <a href="https://en.wikipedia.org/wiki/Yahoo!_GeoCities">Geocities</a>, typically with excited messages of "content soon to come", and the other slide shows people leaving the site, with farewells and links to alternative hosting. The slides evoked a fair amount of nostalgia for me, as my first website was hosted with Geocities: it was a fan website for Portishead, which is now gone, but perhaps I can <a href="https://boingboing.net/2018/05/04/a-search-engine-for-old-geocit.html">resurrect it some day</a>. It's very hard to capture how different the web was back then: a much more handcrafted and amateur experience, very optimistic, perhaps naïvely so.</p><p>We also talked about <a href="http://bak.spc.org/">Backspace</a>, which was an amazing collection of early online art. I discovered them by wandering along the South Bank of the Thames in 1997, when I had just started university. The site still runs pretty well considering the distance in time, and it continues to inspire me with ideas which can now be realised in new ways.</p><p>After the lecture we spent time in small groups talking about ideas for curating digital art shows and what we might do that would be different. We talked about the idea of temporality, both in how works are initially created, such as live performances, and in how works can be displayed in a temporal context, such as <a href="https://www.tate.org.uk/whats-on/tate-modern/exhibition/christian-marclay-clock">The Clock</a> by Christian Marclay.
We imagined ideas such as an exhibition set within the time frame of a single second of a computer: many billions of mathematical operations spread out across a human time span. In many ways the disconnect in both size and speed is one of the hardest aspects of computation to grasp, and I think it is what makes computers appear so magical.</p><h2 id="gallery-visits">Gallery Visits</h2><p>In addition to the New Museum, during my trip to New York I also visited <a href="https://bitforms.art/">Bitforms</a>, where they had a show, <em><a href="https://bitforms.art/exhibitions/rozin-2019">Sol</a></em>, with three pieces by Daniel Rozin. I think it is fair to say that Daniel is very well known within the digital art community for his "mirror" pieces, an early version of which is on <a href="http://www.smoothware.com/danny/woodenmirror.html">display at the ITP program</a> in NYC, where I did a summer school program a while ago. It's a really fun and interesting use of computation, as it's so immediately familiar and invites everyone to interact with the mirrors. In this show there was a sunset video mirror in the entrance, and although I'm sure it would be complex, I now felt that with the coding experience I had gained in Theo's classes I would at least be able to embark on something similar myself.</p><figure class="kg-card kg-embed-card"><iframe width="459" height="344" src="https://www.youtube.com/embed/AqALsicPFPE?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><p>I also visited the <a href="http://sfpc.io/">School for Poetic Computation</a>, where they had an <a href="http://sfpc.io/codepaper/">exhibition on paper folding and cutting</a>, which was fascinating.
The mathematics is really quite complex, and there were both interesting physical paper works and computer simulations of folding.</p><figure class="kg-card kg-embed-card"><iframe width="459" height="344" src="https://www.youtube.com/embed/fZnyqWpTpoY?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure>]]></content:encoded></item></channel></rss>