
Who forgot to plug in the audience?


The return of aesthetics in a post-digital paradigm

 

In relation to art and aesthetics, the difference between the digital and the post-digital designates two different ways of ascribing meaning to concrete works of art – two different paradigms of thinking about art. As this article will demonstrate, the digital paradigm is governed by a technological point of departure, whereas the post-digital one is characterized by a focus on experience and use. Both paradigms have their pros and cons, depending on which dimensions of the work of art one wishes to investigate. Drawing especially on the ideas of Immanuel Kant, Dominic McIver Lopes and Domenico Quaranta, this article analyses and compares how the two paradigms relate to the concept of aesthetics.

 

Why aesthetics?

Why focus on aesthetic experience in the first place? Why not just investigate and interpret the concrete works of art? The radical answer to that question is: Because the work of art in itself does not exist. By this I mean that whenever we assume that we talk about a specific work of art, we really talk about a number of different, culturally constructed phenomena depending on who ‘we’ are. Whether we take as an example a piece of net art or a marble sculpture, it can be considered, for instance, as pure conceptualization on the side of the artist (Kosuth), as mimetic representation of reality (Plato), as an evolutionary step in human knowledge (Hegel), as significant form (Bell), as created by geniuses (Kant), as an act of communicating feelings from the artist to an audience (Tolstoy), as good or poor social/cultural critique (Adorno), as indistinguishable from the artist’s life (Vasari), as a text open for reading that bears no relation to the artist (Barthes), as bourgeois/commercial ideology (Berger), as that which is accepted by the art institution (Dickie, Bourdieu) – not to mention the original/copy question, first raised by Walter Benjamin, which has been reinforced in the era of digital technology.[1]

Therefore, it is impossible to essentially pin down a specific work of art as something that exists as one clear-cut object/phenomenon/process/action/relation ready for ‘pure’ interpretation and analysis. In other words, all discussions on concrete works of art are based (sometimes unknowingly) on certain theoretical points of departure – even if the focus of the discussions themselves is down to earth and does not seemingly involve theory.

Hence, the fact that I insist on focusing on aesthetics in the following comparative analysis of the digital and the post-digital paradigm is not because it is the right way to consider those paradigms, but because it is – as the article shall demonstrate – a fundamentally relevant issue that the digital and the post-digital paradigm approach differently. Before moving into more detailed analyses of the notion and role of aesthetics within the paradigms of the digital and the post-digital, a few overall comments (which will be elaborated later) on their diachronic and synchronic relationship are in order: To some extent the paradigms follow chronologically, in the sense that the digital paradigm emerged vaguely with the avant-gardes of the early 20th century and then had its most profound period from the 1990s up to the beginning of the millennium, when the post-digital paradigm gradually took over.[2] I prefer, however, the notion ‘paradigm’ to ‘period’, since to a very large extent we are dealing with two fundamentally different ways of comprehending aesthetics related to digital technology, which, therefore, run parallel.

 

Art and aesthetics of a digital paradigm

The digital paradigm’s notion of aesthetics is characterized by two things: Border crossing and a technological focus. The border crossing is to be understood in the sense that the digital paradigm challenges the borders between traditional institutions and disciplines and, hence, does not seem to differentiate between ‘aesthetics’, ‘art’, and ‘culture’, insofar as these terms are used more or less synonymously to describe new experiments or practices that make use of digital technology. As an example of this characteristic, Stephen Wilson’s book Information Arts (2002) carries the subtitle: Intersections of art, science, and technology. Wilson states that ‘Information Arts can be seen as an investigation of these moving boundaries [between art and techno-scientific inquiry] and the cultural significance of including techno-scientific research in a definition of art.’ (Wilson 2002, 18).

A significant achievement of the digital paradigm is its ability to transgress traditional borders and look beyond the narrow institutional confinements of Art with a capital A when focusing on aesthetics – thus, it becomes possible to consider a theme like, for instance, surveillance from a number of different points of view (cultural, technical, artistic, political, etc.). In this sense, the digital paradigm is in accordance with classic Kantian aesthetics, according to which the aesthetic judgement of taste is applicable to all sorts of phenomena from different domains and not just to art. (Kant 1790, § 48)

Closely related to this refreshingly unorthodox border crossing, the second characteristic of the digital paradigm is that digital technology in itself becomes its centre of attention. This means that digital technology and media are the elements that fixate the meaning of the paradigm – or constitute it – whereas art and aesthetics do not play central roles. Therefore, when art or aesthetics are considered from the point of view of the digital paradigm, they are subsumed – along with other cultural/social/political modes of expression – under the primacy of digital technology rather than treated as governing concepts in themselves.

On the surface, it would seem that aesthetics as understood within the digital paradigm relates to Kantian aesthetics in the same way Visual Culture studies relates to the discipline of Art History: By proposing a radically different perspective on a well-known subject matter while at the same time using this new perspective to expand the scope of that subject matter to include phenomena (like traffic signs, fashion, reality shows etc.) that are not included in the original discipline of Art History. Thus, within the digital paradigm, the notion of ‘aesthetics’ covers a vast area, from recommendations for webpage design, to copyright issues related to music software, to wearable technology, to computer games etc., while aesthetics in the classic sense of the philosophy of the beautiful, the sublime, art etc. plays a minor role. As Carsten Strathausen puts it in 2009:

 

‘The nascent aesthetics of new media is variously named “rational aesthetics” (Claudia Giannetti) or “info-aesthetics” as well as “post-media aesthetics” (Lev Manovich) or “techno-aesthetics” (Peter Weibel) […] “Rational,” “info-,” or “techno-” aesthetics is thus informed by the history of science and engineering rather than that of philosophy and politics. Its heroes are Boscovich, Boole, Turing, and Bense instead of Aristotle, Kant, Hegel, or Adorno.’ (Strathausen, 59)

 

What Strathausen points to and criticizes is that what he terms ‘new media’ – and this article terms a ‘digital paradigm’ – actively proposes a radical replacement of one discourse of aesthetics (the classic) with another, new discourse closely tied to the subject matter of digital technology. Hence, aesthetics becomes identical to the subject matter of the work (which, in the digital paradigm, is identical to its technical properties) instead of being a philosophical perspective applied to a work (and its subject matter, its technical properties etc.).

Survey books on new media art or digital art are organised either as descriptions/analyses of individual artists or works, or according to technological subgenres like ‘video art’, ‘network art’, ‘interactive art’, ‘telepresence’, etc. (see for instance Rush, Giannetti, Tribe & Jana, Paul, Shanken, Wilson). Consequently, in a digital paradigm, analyses and debates on the role of new technology in art have (had) an overall essentialist character, in the sense that the questions asked basically centre around: what is “interactive”, or “networked”, or “digital” (etc.) art?

Though the above questions are good and relevant, they lack one important component that it is highly appropriate to investigate in a post-digital paradigm, namely: According to whom? Or in other words: From which specific subject position are such questions asked? From the position of the artist, the curator/critic, the user, the implied audience or the actual audience? When it is not made explicit which subject positions are addressed in analyses of new art forms, the results of those analyses are staged as virgin-born truths radiating from the works of art. As a result, attempts to critically investigate tendencies across different works of art do not distinguish between the specific technical features applied in a work of art and what is actually encountered by the average member of the audience.

Art404: "Five Million Dollars 1 Terabyte"


Consider, for instance, the work “5 Million Dollars, 1 Terabyte” by Art 404 (exhibited at Transmediale 2012), which consists of a black terabyte hard drive exhibited in a vitrine. No matter how hard we look at, smell, taste, listen to or touch the hard drive, we will never be able to extract the most important feature of this work of art – the decisive factor that transforms the terabyte from a dull object of everyday life and that potentially gives rise to aesthetic experience for the audience: The fact that this particular hard drive contains illegally downloaded material worth five million dollars. The only way of becoming aware of this crucial piece of information is by reading the catalogue text or visiting Art 404’s website. Thus, in reality there is a gap between the experience gained from actually encountering the work in the gallery and from reading about it – a gap that is not really addressed in aesthetic research within the digital paradigm, since it interprets works of art according to technological features and thereby confuses these different subject positions.

The subject position of the audience especially seems to be neglected in the digital paradigm, insofar as aesthetic analyses assume audience experiences to be identical to the artist’s intention, the curatorial/critical framing, or theoretical accounts of the technical characteristics and potentials of new art types. In the digital paradigm, if the use of a specific technology in a work of art is considered to have interactive, or critical, or alienating potentials, it is more or less automatically assumed that the audience’s/users’ experiences correspond to those potentials, without much attention being paid to the fact that different contexts and subject positions invite different aesthetic considerations. In this sense, aesthetic research within a digital paradigm is governed by techno-essentialism rather than contextualism.

 

Art and aesthetics of a post-digital paradigm

As mentioned, a digital and a post-digital paradigm to a large extent co-exist. One piece of evidence of their parallel existence is the recurrent lament over the gap between the world of mainstream contemporary art and the ‘ghetto’ of new media art, digital art or similar terms for technologically informed prefix art. (Quaranta, 2013) We may consider the digital paradigm the academic equivalent of new media art – we may even claim that the digital paradigm has created new media art as a practice that differs from mainstream contemporary art – and the post-digital paradigm as affiliated with mainstream contemporary art, insofar as the post-digital paradigm is not concerned with specific technologies or materials.

Obviously, different media have been in fashion on the contemporary art scene in different periods (happenings were more popular in the 70s whereas painting was more popular in the 80s), but contemporary art as a discourse is governed by art as its point of fixation, and not by any specific technology, just as the post-digital paradigm does not favour a technology that succeeds digital technology. Since the post-digital paradigm is also a post-media, or in this context a post-technological, paradigm, we may ask what kind of nodal point fixates its meaning as a discursive field – and this question is not easily answered. Whereas the digital paradigm is focused on digital technology, the driving force of the post-digital seems first and foremost to be a refusal to focus automatically on digital technology, rather than a positive subscription to anything specific. The post-digital paradigm, however, is very outspoken in relation to contemporary art, in the sense that curators and critics within the art field have explicitly articulated views in favour of a post-digital paradigm (see Quaranta).

One significant potential of applying a post-digital perspective to works of art, as well as to other objects or phenomena, is that it paves the way for once again considering the genuinely aesthetic potentials of works that make use of new media and technology – without automatically subjecting aesthetic experience to technology. Hence, we may now put ‘naïve’ questions to the field of contemporary art, such as: Are new media of aesthetic relevance in a work of art if they go unnoticed by the audience? How do we elaborate on the fact that the same work of art potentially gives rise to different kinds of aesthetic experiences depending on which subject positions (artist, curator/critic, user, audience) engage with the work and in what manners (as intended by someone else or not)? And how do we consider the aesthetic appeal of works of art whose medium is not accessible to our physical senses?

In order to investigate such aesthetic questions thoroughly, it is necessary to insist that the subject positions of artist and audience are separated, as they are in Kantian aesthetics. As demonstrated above, a digital paradigm is in accordance with Kant’s paragraph 48 with respect to the separation of art and aesthetics, because it does not confine aesthetics to the domain of art – however, it subsumes aesthetics under the governance of technology, which means that the aesthetic judgement is not given the free play Kant assigned to it. In the very same paragraph, Kant makes another significant distinction that is of relevance here for two reasons:

First, Kant describes how aesthetic taste is at work on the side of the artist when he creates his work insofar as he ‘checks his work [against manifold examples from art or nature]; and after many, often toilsome, attempts to content taste he finds the form which satisfies him.’ Kant then crucially states: ‘But taste is merely a judging and not a productive faculty’. In other words: Even when the artist judges his own work during its production, he does so by stepping back from the work ‘after he has exercised and corrected it’ (Kant) in order to create the distance necessary for passing an aesthetic judgement of taste, before stepping towards the work to once again correct it. The artist thus oscillates between two different subject positions: That of the immediate creator and that of the contemplative judge, of which only the latter, according to Kant, is able to pass an aesthetic judgement of taste on the artefact that is being created. In this sense aesthetics is always implicitly an aesthetics of reception – even when it is part of an overall production process.

Now, the fact that Kant defined aesthetics as a matter of reception in 1790 does not automatically render it relevant today. After all, why should we still insist on a separation between the artist and the audience when, for instance, the fields of new media art and relational aesthetics are in many cases characterised by participation and interactivity that result in co-creation to the extent that such a distinction might seem irrelevant? For instance, the Ars Electronica Prix category of ‘Digital Communities’ consists of works in which such a distinction may seem absurd, since the digital communities function collectively in the participants’ everyday life.

One example could be the 2013 Golden Nica winner “El Campo de Cebada”, the name of an enclosed city square in Madrid, where residents and the council work together – on the physical place and via online social media – to define the use of the square. (Fischer-Schreiber, 200-203) No artist or artist group is credited for this ‘work’, since it is a genuinely collective project. Now, participating in “El Campo de Cebada” may (or may not) result in aesthetic reflective judgements among the individuals who engage in the project on an everyday basis in Madrid, as accounted for above with reference to Kant, but the moment the project is framed by Ars Electronica as an outstanding work belonging to the ‘Digital Communities’ category, a non-creating audience is created for the project, and it becomes an object for potential aesthetic reflective judgement for that audience too.

In fact, the very act of presenting or exhibiting the project within an art (or at least cultural) institutional framework – like Ars Electronica – turns the prime purpose of “El Campo de Cebada” into one of prompting aesthetic reflection rather than immediate function – even if it is the functional dimensions that, contemplated from the point of view of an audience subject position, prompt reflection. Whereas in Madrid the square is inhabited, in the context of Ars Electronica it is ‘exhibited’, and this act of exhibiting alone automatically installs “El Campo de Cebada” as an object for potential reflective aesthetic judgement of taste by subject positions that differ from the work’s immediate producers. Hence, at least three different subject positions are at work in the case of “El Campo de Cebada”: The active participants who create the phenomenon, the active participants who step back to contemplate the phenomenon (who in flesh and blood are identical to the first position), and the audience at Ars Electronica who contemplate the project presented to them.

Second, especially in the realm of so-called new media art, there is more than one audience subject position. As lucidly accounted for by Dominic Lopes, in interactive art we may distinguish between the ‘user’ (who explores a work by generating displays in a prescribed manner) and the ‘audience’ (who explore a work by watching users generate displays by interacting with it). (Lopes, 2010) Similar distinctions have been made between ‘visitors’ and ‘shy visitors’ to exhibitions of interactive art (Scott et al., 2013), and between audience members acting as ‘object signs’ and ‘meta signs’ respectively when experiencing digital art (Qvortrup, 2004). Thus, in many cases we may add yet another subject position to the three detected above in relation to “El Campo de Cebada”, because the overall category of audience is often split into (at least) two different subject positions.

The difference between Lopes’ two subject positions of user and audience can be illustrated with reference to the work “OCTO P7C-1” (exhibited at Transmediale 2013). The work (produced by the Telekommunisten group) consisted of a spectacular, seemingly chaotic network of yellow plastic tubes that criss-crossed the entire main venue of the Transmediale Festival and worked as an ‘Intertubular Pneumatic Packet Distribution System’, which enabled visitors to communicate between different locations at the festival by way of sending written notes or small objects through the tube system.

 

OCTO at Transmediale 2013


In the exhibition, Lopes’ term ‘users’ describes those visitors who engaged actively with “OCTO P7C-1” by, for instance, writing/drawing/crafting messages for the postal tubes or sending/receiving such messages by communicating commands to the OCTO staff working the distribution centre. The distinctive sound accompanying each packet’s travel through the tube system, the messages, the conversations between users and OCTO workers etc. are all different kinds of audible, visual and sensual displays by which the user gradually explores physical and semiotic dimensions of the work (and potentially enters into aesthetic relations with it).

In addition to the user, who acts in accordance with a prescribed manner staged by the creators of the work, the subject position of what Lopes terms ‘audience’ is of relevance when investigating the aesthetic implications of a work like OCTO. The audience do not engage directly with the work like the users do, but they watch how users interact with OCTO and they observe how displays are generated as a result of this interaction. As such, the audience explores the work too, albeit in a different manner than users (and may enter into aesthetic relations with the work).

The reason that the subject position Lopes calls ‘audience’ has been left out of the equation in the digital paradigm is that the potential aesthetic reflective judgement tied to this subject position does not fit a techno-essentialist view on new media art. An audience may experience what might be intended by the artist or described by a curator as an ‘interactive, networked installation’ in a very non-interactive, non-networked manner. And even ‘users’, who do interact actively with a work, may have aesthetic experiences that differ from the technologically defined ones governing a digital paradigm. While we may think that this is a problem, because it means that something has gone wrong in the course of communicating fully the essence of the work to the audience, this article will conclude by pointing out why such a ‘mis-communication’ is a good thing, and why the digital paradigm to a large extent ought to support it.

 

Conclusion

First of all, to challenge the close interpretative connection between creator, technical properties of the work, and audience that governs the digital paradigm is in perfect accordance with Roland Barthes’ account of the birth of the reader and the Death of the Author, and with Michel Foucault’s subsequent distinction between the author – in flesh and blood – and the author function – as an important, yet virtual, character. (Barthes, 1999; Foucault, 1991) When Barthes and Foucault articulated the radical break between artist and audience, the work was simultaneously transformed into text – a transformation that actually fits very well with the digital paradigm, since it is the same transformation strategy the digital paradigm itself applies to phenomena and artefacts that, according to a more traditional point of view, belong to different domains of engineering, art, politics, science, etc. Within the digital paradigm, traditional meanings of such different phenomena and artefacts are disregarded in favour of new, progressive acts of interpretation that focus on new, technological dimensions and their wider implications.

In other words: The digital paradigm in itself transforms works to texts in order to read them. And this is why it is a strange paradox that the digital paradigm does not seem to allow the same post-structural practice to unfold with regard to the works of art that it, so to speak, adopts (or monopolizes) as the paradigm’s own by incorporating them in books and exhibitions on ‘digital art’ or ‘new media art’.

Apart from the theoretical critique of a digital paradigm – that it does not do justice to the post-structural ideas of separating and acknowledging the functions of different subject positions – another paradox, related to concrete artistic practices, is at work in the digital paradigm. Namely, especially when it comes to works of contemporary art that make use of new media and technologies, it seems obvious that the cultural and institutional uncertainties surrounding the works may in fact boost the potential for ‘readers’ to gain aesthetic experiences from encountering such works, precisely because there is no overall concept by which the works might be comprehended rationally: Oil paintings and marble sculptures are conventionally framed and pinned down as ‘works of art’ that we are meant to appreciate as such. Hence, the insistence in Kantian aesthetics that the subject’s aesthetic judgement of taste is governed by a reflective rather than a determining relation to the object encountered (Kant, 1790: §4) may be compromised when the object is fixed by one specific institutional framing established over a long period. In contrast to paintings or sculptures, many of the objects, designs, events, phenomena, hacks, etc. taken under the wings of the digital paradigm have tremendous aesthetic potential due to the institutional and cultural ambiguity they (still) possess. It seems, therefore, paradoxical when survey books, analyses, critics or curators within a digital paradigm attempt to account for the aesthetic characteristics of such works by subsuming them under determined technological categories.

Therefore, one significant advantage of moving from a digital to a post-digital paradigm is that a post-digital paradigm enables us to approach art in a more open and critical way than what has been practiced in the digital paradigm. Specifically, a post-digital paradigm allows us to seriously plug in the subject positions of the audience when we conduct aesthetic research and analysis of contemporary works of art that make use of or refer to digital technology.

 

References:

Barthes, R.: Image, Music, Text, 1999 [1977], Noonday Press. “The Death of the Author”, pp. 142-148 and “From Work to Text”, pp. 155-164.

Fischer-Schreiber, I. (ed.): CyberArts 2013, 2013, Hatje Cantz.

Foucault, M.: “What is an Author?” [1969] in The Foucault Reader (ed.: Rabinow), 1991, London: Penguin, 101-120.

Giannetti, C.: Ästhetik des Digitalen, 2004, Springer.

Lopes, D.: A Philosophy of Computer Art, 2010, Routledge.

Paul, C.: Digital Art, 2008, Thames & Hudson.

Quaranta, D: Beyond New Media Art, 2013, Link Editions

Qvortrup, L.: “Det gode digitale kunstværk”, in Digitale verdener (ed.: Engholm & Klastrup), 2004, Gyldendal: 119-142

Rush, M.: New Media in Art, 1999 + 2005, Thames & Hudson.

Scott, S.; Hinton-Smith, T.; Härmä, V; and Broome, K.: “Goffman in the Gallery: Interactive Art and Visitor Shyness” in Symbolic Interaction, 2013, Vol. 36, Issue 4: 417-438.

Shanken, E. (ed): Art and Electronic Media, 2009, Phaidon.

Strathausen, C.: “New Media Aesthetics”, 2009, in Koepnick & McGlothlin (eds.): After the Digital Divide?, Camden House.

Tribe, M. & Jana, R.: New Media Art, 2006, Taschen.

Wilson, Stephen: Information Arts – intersections of art, science, and technology, 2002, Cambridge (MA): MIT Press.

Wilson, Stephen: Art + Science Now, 2010, Thames & Hudson

www.telekommunisten.net/octo/ (visited 6 Oct. 2013)

 



[1] Regarding the original/copy issue in relation to digital imagery see Boris Groys, “From Image to Image File – and Back: Art in the Age of Digitalization” in Groys: Art Power, 2008, MIT Press, 83-91

[2] A brief historiography of ’new media art’ and ’the post-digital condition’ is provided by Domenico Quaranta in his book Beyond New Media Art, 2013, Brescia: Link Editions, pp. 23-26 and 199-207 respectively.

Post Digital Publishing, Hybrid and Processual Objects in Print

Introduction.

This paper analyses the evolution of printed publishing under the crucial influence of digital technologies. After discussing how a medium becomes digital, it examines the ‘processual’ print, in other words, print which embeds digital technologies in the printed page. The paper then investigates contemporary artists’ books and publications made with software that collects content from the web and conceptually renders it in print. Finally, it explores the early steps taken towards true ‘hybrids’, or printed products that incorporate content obtained through specific software strategies – products which seamlessly integrate medium-specific characteristics with digital processes.

How a medium becomes digital (and how publishing did).

For every major medium (vinyl and CDs in music and VHS and DVD in video, for example) we can recognize at least three stages in the transition from analogue to digital, in both production and consumption of content.

The first stage concerns the digitalization of production. It is characterized by software beginning to replace analogue, chemical or mechanical processes. These processes are first abstracted, then simulated, and then restructured to work using purely digital coordinates and means of production. They become sublimated into the new digital landscape. This started to happen with print at the end of the seventies, with the first experiments with computers and networks, and continued into the eighties with so-called “Desktop Publishing”, which used hardware and software to digitalize print production (the “prepress”), a system perfected in the early nineties.

The second stage involves the establishment of standards for the digital version of a medium and the creation of purely digital products. Code becomes standardized, encapsulating content in autonomous structures which are universally interpreted across operating systems, devices and platforms. This is a definitive evolution of standards meant for production purposes (consider PostScript, for example) into standalone standards (here the PDF is an appropriate example, enabling digital “printed-like” products), which can be defined as a sub-medium intended to deliver content within specific digital constraints.

The third stage is the creation of an economy around the newly created standards, including digital devices and digital stores. One of the very first attempts to do this came from Sony in 1991, which tried to market the Sony Data Discman as an “Electronic Book Player” – unfortunately using a closed format which failed to become broadly accepted. Nowadays the mass production of devices like the Amazon Kindle, the Nook, the Kobo, and the iPad – and the flourishing of their respective online stores – has clearly accomplished this task [1]. These online stores are selling thousands of e-book titles, confirming that we have already entered this stage.

The processual print as the industry perceives it (entertainment).

Not only are digitalization processes yet to kill off traditional print, but they have also initiated a redefinition of its role in the mediascape. If print increasingly becomes a valuable or collectable commodity and digital publishing also continues to grow as expected, the two may more frequently find themselves crossing paths, with the potential for the generation of new hybrid forms. Currently, one of the main constraints on the mass-scale development of hybrids is the publishing industry’s focus on entertainment.

Let’s take a look at what is happening specifically in the newspaper industry: on the one hand we see up-to-date printable PDF files to be carried and read while commuting back home in the evening, and on the other hand we have online news aggregators (such as Flipboard and Pulse) which gather various sources within one application with a slick unified interface and layout. These are not really hybrids of print and digital, but merely the products of ‘industrial’ customisation — the consumer product ‘choice’ of combining existing features and extras, where the actual customising is almost irrelevant.

Even worse, the industry’s best effort at coming to terms with post-digital print (print embedding some active digital qualities) is currently the QR code – those black-and-white pixelated square images which, when read with the proper mobile phone app, give the reader access to content (usually a video or web page). This kind of technology could be used much more creatively, as a means of enriching the process of content generation. For example, since the content they point to is retrieved over the network, printed books and magazines could include QR codes as a means of providing new updates each time they are scanned – and these updates could in turn be made printable or otherwise preservable. Digital publications might then send customised updates to personal printers, using information from different sources closely related to the publication’s content. This could potentially open up new cultural pathways and create unexpected juxtapositions [2].
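As a purely illustrative sketch of this idea (not any existing product), a QR code printed on a page could simply encode a URL at which a publisher serves that page’s latest updates. The endpoint and URL scheme below are assumptions; the example assumes the third-party Python library qrcode (installed with Pillow support).

```python
# Minimal sketch: a QR code per printed page that points to an update feed.
# Assumes the third-party "qrcode" library (pip install qrcode[pil]).
# The URL scheme and endpoint are hypothetical, for illustration only.
import qrcode

def make_update_code(publication_id: str, page: int, out_path: str) -> None:
    """Render a QR code linking a printed page to its latest online updates."""
    url = f"https://example.org/updates/{publication_id}/page/{page}"  # assumed endpoint
    img = qrcode.make(url)   # encode the URL as a scannable image
    img.save(out_path)       # drop the PNG into the page layout

make_update_code("post-digital-print", 42, "page42_qr.png")
```

Each scan would then fetch whatever the publisher currently serves at that address, which is what makes the printed page ‘processual’ rather than fixed.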

Printing out the web.

Many possibilities emerge from the combination of digital and print, especially when networks (and therefore infinite supplies of content that can be reprogrammed or recontextualized at will) become involved. A number of different strategies have been employed to assemble information harvested online in an acceptable form for use in a plausible print publication.

One of the most popular of these renders large quantities of Twitter posts (usually spanning a few years) into fictitious diaries. My Life in Tweets by James Bridle is an early example, realized in 2009 [3]. The book compiled all of the author’s posts over a two-year period, forming a sort of intimate travelogue. The immediacy of tweeting is recorded in a very classic graphical layout, as if the events were annotated in a diary. Furthermore, various online companies have started to sell services appealing to the vanity of Twitter micro-bloggers, for example Bookapp’s Tweetbook (printing your tweets as a book) or Tweetghetto (a poster version). A minimal sketch of this general strategy follows below.
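The sketch below illustrates only the general ‘tweets as diary’ strategy, not Bridle’s actual tool or any commercial service: it reads an exported archive of posts (the file name and JSON structure are assumptions) and groups them by day into diary-style entries ready to be poured into a page layout.

```python
# Sketch of the "tweets as diary" strategy. The archive format is assumed:
# a JSON list of objects with "created_at" (ISO date string) and "text".
import json
from collections import defaultdict
from datetime import datetime

def tweets_to_diary(archive_path: str) -> str:
    with open(archive_path, encoding="utf-8") as f:
        tweets = json.load(f)

    days = defaultdict(list)                       # group posts by calendar day
    for t in tweets:
        day = datetime.fromisoformat(t["created_at"]).date()
        days[day].append(t["text"])

    pages = []
    for day in sorted(days):                       # chronological diary order
        entries = "\n".join(f"  - {text}" for text in days[day])
        pages.append(f"{day:%d %B %Y}\n{entries}")
    return "\n\n".join(pages)                      # plain text for the book layout

print(tweets_to_diary("tweet_archive.json"))
```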

Another very popular “web sampling” strategy focuses on collecting amateur photographs, with or without curatorial criteria. Here we have an arbitrary narrative, employing a specific aesthetic in order to create a visual unity that is universally recognizable due to the ubiquity of online life in general, and especially the continuous and unstoppable uploading of personal pictures to Facebook.

A specific sub-genre makes use of pictures from Google Street View, reinforcing the feeling that the picture is real and has been reproduced with no retouching, while also reflecting on the accidental nature of the picture itself. Michael Wolf’s book a series of unfortunate events points to our evident and irresistible fascination with “objets trouvés”, a desire that can be instantly and repeatedly gratified online [4].

Finally, there’s also the illusion of instant-curation of a subject, which climaxes in the realization of a printed object. Looking at seemingly endless pictures in quick succession online can completely mislead us about their real value. Once a picture is fixed in the space and time of a printed page, our judgments can often be very different.

Such forms of “accidental art” obtained from a “big data” paradigm can lead to instant artist publications such as Sean Raspet’s 2GFR24SMEZZ2XMCVI5… A Novel, which is a long sequence of insignificant captcha texts, crowd-sourced and presented as an inexplicable novel in an alien language [5].

There are traces of all the above examples in Kenneth Goldsmith’s performance Printing Out The Internet [6]. Goldsmith invited people to print out whatever part of the web they desired and bring it to the LABOR art space in Mexico City, where it was exhibited for a month (which incidentally also generated a number of naive responses from environmentally concerned people). The work was inspired by Aaron Swartz and his brave and dangerous liberation of copyrighted scientific content from the JSTOR online archive [7].

It is what artist Paul Soulellis calls “publishing performing the Internet” [8].

Having said all this, the examples mentioned above are yet to challenge the paradigm of publishing – maybe the opposite. What they are enabling is a “transduction” between two media. They take a sequential, or reductive, part of the web and mould it into traditional publishing guidelines. They tend to compensate for the feeling of being powerless over the elusive and monstrous amount of information available online (at our fingertips), which we cannot comprehensively visualize in our mind.

Print can be considered the quintessence of the web: it distributes a smaller quantity of the information available on the web, usually in a longer and much better edited form. So the above-mentioned practices sometimes indulge in something like a “miscalculation” of the web itself – the negotiation of this transduction reduces the web to a finite printable dimension, denaturalizing it. According to Publishers Launch Conferences’ co-founder Mike Shatzkin, in the next stage “publishing will become a function… not a capability reserved to an industry…” [9]

Hybrids: the calculated content is shaped and printed out.

This “functional” aspect of publishing can, at its highest level, imply the production of content that is not merely transferred from one source to another, but is instead produced through a calculated process in which content is manipulated before being delivered. A few good examples can be found in pre-web avant-garde movements and experimental literature in which content was unpredictably “generated” by software-like processes. Dada poems, for example, as described by Tristan Tzara, are based on the generation of text, arbitrarily created out of cut-up text from other works [10]. A member of the avant-garde literature movement Oulipo created a similar concept later: Raymond Queneau’s Cent Mille Milliards de Poèmes is a book in which each page is cut into horizontal strips that can be turned independently, allowing the reader to assemble an almost infinite quantity of poems, with an estimated 200 million years needed to read all the possible combinations [11]. That an Oulipo member created this was no accident – the movement often played with the imaginary of a machinic generation of literature in powerful and unpredictable ways.
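The order of magnitude of that estimate can be checked with simple combinatorics; the reading pace of one poem per minute, around the clock, is an assumption used only to reconstruct the figure.

```latex
% 10 base sonnets, 14 lines each, every line interchangeable across the sonnets:
\[
  \underbrace{10 \times 10 \times \dots \times 10}_{14\ \text{lines}} = 10^{14}\ \text{poems}
\]
% At an assumed pace of one poem per minute, read continuously:
\[
  \frac{10^{14}\ \text{minutes}}{60 \times 24 \times 365\ \text{minutes/year}}
  \approx 1.9 \times 10^{8}\ \text{years} \approx 190\ \text{million years}
\]
```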

Contemporary experiments are moving things a bit further, exploiting the combination of hardware and software to produce printed content that also embeds results from networked processes, and thus getting closer to a true hybrid form.

Martin Fuchs and Peter Bichsel’s book Written Images is an example of the first ‘baby steps’ of such a hybrid post-digital print publishing strategy [12]. Though it’s still a traditional book, each copy is individually computer-generated, thus disrupting the fixed ‘serial’ nature of print. Furthermore, the project was financed through a networked model (using Kickstarter, the very successful ‘crowd-funding’ platform), speculating on the enthusiasm of its future customers (and in this case, collectors). The book is a comprehensive example of post-digital print, through the combination of several elements: print as a limited-edition object; networked crowd-funding; computer-processed information; hybridisation of print and digital – all residing in a single object – a traditional book. This hybrid is still limited in several respects, however: its process is complete as soon as it is acquired by the reader; there is no further community process or networked activity involved; once purchased, it will forever remain a traditional book on a shelf.
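The principle of per-copy generation (not the actual code behind Written Images, which is not reproduced here) can be sketched as follows: the copy number seeds a deterministic generative process, so every printed copy is unique yet reproducible. The drawing routine below is invented for illustration and assumes the Pillow imaging library.

```python
# Sketch of the per-copy generation principle only; the drawing routine is
# invented and unrelated to Written Images' actual generator. Assumes Pillow.
import random
from PIL import Image, ImageDraw

def render_copy(copy_number: int, size: int = 600) -> Image.Image:
    """Seed a generative drawing with the copy number: unique but reproducible."""
    rng = random.Random(copy_number)              # same copy number -> same image
    img = Image.new("RGB", (size, size), "white")
    draw = ImageDraw.Draw(img)
    for _ in range(200):                          # scatter 200 random circles
        x, y = rng.randrange(size), rng.randrange(size)
        r = rng.randrange(5, 60)
        grey = rng.randrange(0, 256)
        draw.ellipse((x - r, y - r, x + r, y + r), outline=(grey, grey, grey))
    return img

render_copy(copy_number=17).save("copy_017.png")  # one plate per printed copy
```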

A related experiment has been undertaken by Gregory Chatonsky with the artwork Capture [13]. Capture is a prolific rock band, generating new songs based on lyrics retrieved from the net and performing live concerts of its own generated music lasting an average of eight hours each. Furthermore the band is very active on social media, often posting new content and comments. But we are talking here about a completely invented band. Several books have been written about them, including a biography, compiled by retrieving pictures and texts from the Internet and carefully (automatically) assembling them and printing them out. These printed biographies are simultaneously ordinary and artistic books, becoming a component of a more complex artwork. They plausibly describe a band and all its activities, while playing with the plausibility of skillful automatic assembly of content.

Another example of an early hybrid is American Psycho by Mimi Cabell and Jason Huff [14]. It was created by sending the entirety of Bret Easton Ellis’ violent, masochistic and gratuitous novel American Psycho through Gmail, one page at a time. They collected the ads that appeared next to each email and used them to annotate the original text, page by page. In printing it as a perfect-bound book, they erased the body of Ellis’ text and left only the chapter titles and constellations of their added footnotes. What remains is American Psycho, told only through its chapter titles and annotated, relational Google ads. Luc Gross, the publisher, goes even further in predicting a more pervasive future: “Until now, books were the last advertisement-free refuge. We will see how it turns out, but one could think about inline ads, like product placements in movies etc. Those mechanisms could change literary content itself and not only their containers. So that’s just one turnover.”

Finally, why can’t a hybrid art book be a proper catalogue of artworks? Les Liens Invisibles, an Italian collective of net artists, have assembled their own, entitled Unhappening, not here not now [15]. It contains pictures and essential descriptions of 100 completely invented artworks, consistently assembled from images, generated titles and short descriptions, including years and techniques for every “artwork”. Here a whole genre (the art catalogue or artist monograph) is brought into question, showing how a working machine, properly instructed, can potentially confuse a lot of what we consider “reality”. The catalogue, indeed, looks and feels plausible enough, and only those who read it very carefully can have doubts about its authenticity.

Conclusions.

Categorising these publications under a single conceptual umbrella is quite difficult. Even if they are not yet as dynamic as the processes they incorporate, it is not trivial to define any of them as either a ‘print publication’ or a ‘digital publication’ (or a print publication with some digital enhancements). They are the result of guided processes and are printed as very original (if not unique) static repositories, more akin to archives of calculated elements (produced in limited or even single copies) than to classic books, which confirms their particular status. The dynamic nature of publishing can less and less be defined in terms of the classically produced static printed page. This computational characteristic, embedded at the proper level, may well lead to new types of publications. It can help hybrid publications function as both: maintaining their role as publications while also serving as the most up-to-date static picture of a phenomenon in a single copy or a few copies, like a tangible limited edition. And since there is still plenty of room for exploration in developing these kinds of processes, it is quite likely that computational elements will extensively produce new typologies of printed artifact, and in turn new attitudes and publishing structures. Under those terms, the final, definitive digitalization of print may produce very original and still partially unpredictable results.

References and Notes

1. Sony Data Discman <http://en.wikipedia.org/wiki/Data_Discman>, accessed 1 July 2013.

2. Alessandro Ludovico, Post-Digital Print – The Mutation of Publishing Since 1894 (Eindhoven, The Netherlands: Onomatopee, 2012).

3. James Bridle (2009), <http://booktwo.org/notebook/vanity-press-plus-the-tweetbook/>, accessed 1 July 2013.

4. Michael Wolf (2010), <http://photomichaelwolf.com/#asoue/14>, accessed 1 July 2013.

5. Sean Raspet (2013),  <http://thehighlights.org/wp/captcha>, accessed 1 July 2013.

6. Kenneth Goldsmith (2013), <http://printingtheinternet.tumblr.com/>, accessed 1 July 2013.

7. Connor Kirschbaum, “Swartz indicted for JSTOR theft. Digital activist gained access through MIT network drops” The Tech (2011), <http://tech.mit.edu/V131/N30/swartz.html>, accessed 1 July 2013.

8. Paul Soulellis, “Search, compile, publish.” (2013), <http://soulellis.com/2013/05/search-compile-publish/>, accessed 1 July 2013.

9. Mike Shatzkin, “Atomization: publishing as a function rather than an industry” (2013), <http://www.idealog.com/blog/atomization-publishing-as-a-function-rather-than-an-industry/>, accessed 1 July 2013.

10. Florian Cramer, “Concepts, Notations, Software, Art” (2002), <http://www.netzliteratur.net/cramer/concepts_notations_software_art.html>, accessed 1 July 2013.

11. Raymond Queneau (1961), <http://en.wikipedia.org/wiki/Hundred_Thousand_Billion_Poems>, accessed 1 July 2013.

12. Martin Fuchs, Peter Bichsel (2011), <http://writtenimages.net/>, accessed 1 July 2013.

13. Gregory Chatonsky (2009), <http://chatonsky.net/project/capture/>, accessed 1 July 2013.

14. Mimi Cabell, Jason Huff (2010), <http://www.mimicabell.com/gmail.html>, accessed 1 July 2013.

15. Les Liens Invisibles (2013), <http://www.atypo.org/it/work/unhappening-not-here-not-now/>, accessed 1 July 2013.

What is “Post-digital”?

Typewriters versus memes

"You’re not a real hipster – until you take your typewriter to the park"

In January 2013, a picture of a young man sitting on a park bench while typing on a mechanical typewriter went viral on the popular website Reddit. It had been designed in the typical style of an "image macro" or "meme" (Klok 16-19): On top of the photograph, bold white letters in the Impact typeface sarcastically stated that "You’re not a real hipster […] until you take your typewriter to the park".

The meme, which continued to make waves until late in 2013 (Hermlin), emblematizes the rift between digital and post-digital cultures. Imageboard memes are arguably the best example of a contemporary popular mass culture born on the Internet. They differ from older popular forms of visual culture such as comic strips because they are anonymous creations that even gave birth to the Anonymous movement, as described in (Klok 16-19). Furthermore, they are based on creation by users, disregard of intellectual property, viral dissemination among users and potentially infinite repurposing and variation (through collage or different lettering). As small, low-resolution image files, they favor speed of creation and dissemination over traditional publishing processes, with their slower speeds of creation, editing and distribution.

The meme image of the typewriter hipster is a negative self-reflection, since it shows the opposite of itself. In a strict technical sense, even a mechanical typewriter is a digital writing system (as explained later in this text) and embodies, by virtue of its keyboard, the immediate prehistory of personal computer systems, including the one on which the lettering for the image meme was typed.

In a colloquial sense, however, this machine is "analog" because it does not contain computational electronics. In the year 2013, choosing a mechanical typewriter instead of a mobile computing device is, as the image suggests, no longer a sign of being old-fashioned, but a conscious decision against electronics. It questions the view that computers, as meta-machines, represent obvious technological progress and are therefore the logical upgrade of any older media technology – much in the same way as using a bike today questions the older ideology that the car is a rationally superior means of transportation.

Typewriters are not the only media that have been revived as literally post-digital devices: vinyl records, lately also audio cassettes, analog photography and artists’ printmaking should be named, too. And when looking at the work of contemporary young artists and designers, including art school students, such media are vastly more popular than making, for example, image memes.1

Post-digital: a term that sucks but is useful

1. Disenchantment with "digital"

Through my student Marc Chia – now Tara Transitory, performing under the moniker One Man Nation – I was first confronted with the term "post-digital" in 2007. My first reflex was to dismiss it as moot in an age of cultural, social and economic ruptures driven to a major extent by computational digital technology. Today, in the age of ubiquitous mobile devices, drone wars and the gargantuan data operations of Google, the NSA and other global players, it may appear even more questionable than in 2007: as either ignorance of our times or a Thoreauvian-Luddite withdrawal from them.

More pragmatically, "post-digital" could be understood as a moniker for a contemporary disenchantment with digital information systems and media gadgets, and for a time in which the fascination with them has become historical (just as the dotcom age ultimately became historical in the 2013 novels of Thomas Pynchon and Dave Eggers). After Edward Snowden’s disclosures of all-pervasive digital surveillance, this disenchantment has grown from a niche “hipster” phenomenon into a mainstream position that will likely impact all cultural and business practices built upon networked electronic devices and Internet services.

2. Revival of "old" media

While Thoreauvian-Luddite withdrawal might appear tempting for many, it is naive. For the arts, it boils down to the 19th century Arts and Crafts movement repeating itself, with its program of handmade production as resistance to industrialization. It is undeniably at work in today’s renaissance of artists’ printmaking, handmade film labs, limited vinyl editions, the rebirth of the audio cassette, mechanical typewriters, analog cameras and synthesizers. An empirical study our research center in Rotterdam conducted among Bachelor students from most art schools in the Netherlands showed a clear preference for working with non-electronic media among contemporary young artists and designers. About 70% of them would rather make a poster than a website if they had a choice (van Meer, 14). Experimentation with digital technology has almost completely transitioned towards engineering schools, and is often considered commercial and mainstream by arts students.

post-what?

think postcolonial, not post-histoire

On closer inspection, however, the dichotomy between digital big data and neo-analog DIY is not as clear-cut as it may first seem, and this gives the attribute "post-digital" more significance than that of a sloppy descriptor for a trend in contemporary culture:

This age is clearly not a post-digital age – neither with regard to technological developments, nor from a historico-philosophical (geschichtsphilosophische) perspective.

Regarding the latter, (Cox, xxx) offers a valid critique of the term "post-digital" as a questionable continuation of other historico-philosophical nouns prefixed with "post", from postmodernity to posthistoire. However, "post-digital" can be more pragmatically and meaningfully defined within popular cultural and colloquial frames of reference, both in regard to the prefix "post" and to the notion of "digital". Rather than "postmodernity" and "posthistoire", the reference of the "post" prefix could be post-punk, punk culture continued in ways that were both punk and not; post-communism, as it is still the reality in former Eastern bloc countries; postcolonialism; and, to a lesser extent, the post-apocalyptic, whose modern iconography was established by the Mad Max films in the 1980s. These terms do not suggest that the apocalypse is over, but that it has transformed from rupture into an enduring condition (or from Ereignis to Being).

Popular takeaway restaurant in Rotterdam echoing a part of 19th century Dutch colonial history, in which members of the Chinese minority from Java/Indonesia were brought as contract workers to a government-run plantation in Suriname


None of these words – post-punk, post-communism, postcolonialism, the post-apocalyptic – would be done justice if one read them as Hegelian notions. Rather, they describe cultural shifts and ongoing mutations: Postcolonialism does not mean the end of colonialism akin to Hegel’s and Fukuyama’s "end of history", but quite on the contrary its transformation into less clearly visible power structures that are still in place, have left their mark on languages and cultures, and most importantly still govern geopolitics and global production chains. In this sense, the post-digital condition is the post-apocalyptic condition after the computerization and global digital networking of communication, technical infrastructures, markets and geopolitics.

"digital" as sterile high tech

The second half of the word "post-digital" refers to a popular cultural – rather than scientific or media theoretical – definition of "digital", the kind of connotation illustrated by contemporary Google image search results on the word "digital":

google.nl image search result for “digital”, 10/2013


"Post-digital" first of all describes any media aesthetics leaving behind those clean high tech and high fidelity connotations. The word was coined by musician Kim Cascone in 2000 in relation to glitch aesthetics in contemporary electronic music (Cascone, 12). In the same year, the Australian sound and media artist Ian Andrews broadened it into a "post-digital aesthetics" that rejects the "idea of digital progress" and "a teleological movement toward ‘perfect’ representation" (Andrews).

In other words, Cascone and Andrews primarily thought of "post-digital" as an antidote to techno-Hegelianism. Their papers were firmly based on the culture of audiovisual production where "digital" had long been synonymous with "better": the launch of the Fairlight sound sampler in 1979, of the digital audio CD in 1982 and the MIDI standard in the same year, software-only digital audio workstations in the early 1990s, real-time programmable software synthesis with Max/MSP in 1997. Such teleologies are still effective in video and TV technology, with the ongoing transitions from SD to HD and 4K, from DVD to BluRay, 2D to 3D, always sold with the same narrative of innovation, improvement, and higher fidelity reproduction. By rejecting this, Cascone and Andrews opposed the paradigm of good technical quality altogether. "Post-digital" was a confusing coinage in Cascone’s paper because the glitch music it covered and advocated actually was digital, even based on specifically digital sound processing artifacts. But just like post-punk as a reaction to punk, Cascone’s notion of the "post-digital" might best be considered a reaction to an age where even tripods are being sold with "digital" stickers attached in order to suggest that they are new, superior technology:

"digital" tripod


"digital" as low-quality trash

Such post-digital rejections of high tech oddly coincide with post-digital rejections of digital low quality: the persisting argument that vinyl LPs sound better than CDs, let alone mp3s; that film slides look better than digital photographs, let alone smartphone snapshots; that 35mm film projection looks better than digital cinema projection, let alone bittorrent video downloads or YouTube; that paper books are richer media than websites and e-books; and that something typed on a mechanical typewriter has more value than a throwaway digital text file, let alone e-mail spam. In fact, the glitch which Cascone advocates as something "post-digital" is exactly the kind of digital trash that "post-digital" vinyl listeners dismiss.

against the universal machine

But no matter whether they reject high fidelity or trash, both post-digital attitudes dismiss the idea of the digital computer as the universal machine – and hence digital computational devices as all-purpose media.

Cascone’s "post-digital" resistance to digital high tech echoed older forms of resistance to formalist, mathematically driven progress narratives in music; particularly, the opposition to serialist composition in 20th century contemporary music, which started with John Cage, continued with the early minimal music of La Monte Young and Terry Riley, and did not end with improvisation/composition collectives such as AMM and Cornelius Cardew’s Scratch Orchestra. The serialism of Stockhausen, Boulez and their contemporaries was digital in the most literal sense of the word: It broke down all parameters of musical composition into computable values in order to process them by means of numerical transformations. In the later era of mass consumer media technology, computation shifted from a means of composition to a means of signal processing, and from audiovisual production to audiovisual reproduction. (Sometimes involving the same companies, such as Philips, which founded a studio for contemporary electronic music in the 1950s and co-developed the audio CD in the early 1980s.)

Most serialist music, however, was not electronic but composed with pen and paper and performed by orchestras. This reveals a crucial issue: unlike in its colloquial meaning (which includes its common understanding in the arts and humanities), "digital" does not necessarily involve electronics. In this sense, the technical-scientific notion of "digital" can – paradoxically enough – be applied to devices that would be called "post-digital" in the arts and humanities. By virtue of its differentiated letters, the hipster's mechanical typewriter is a "digital" system according to information science and analytical philosophy (Goodman, 161), "analog" by virtue of its mechanics for the anonymous creator of the hipster meme, and maybe "post-digital" for an art curator.

What is post-digital then?

(The following is an attempt to recapitulate and order observations gathered in previous publications.2)

post-digital = post-digitization

Going back to Cascone and Andrews, but also to post-punk, postcolonialism and Mad Max, "post-digital" most simply describes the messy state of media, arts and design after their digitization, or at least after the digitization of crucial parts of their communications. Sentiments of disenchantment and skepticism may add to the mix, but not necessarily so. Sometimes, "post-digital" can mean the opposite. Contemporary visual art, for example, has only slowly begun to accept net artists as regular contemporary artists (and among them rather those whose work is white cube-compatible, like Cory Arcangel's), but its discourse and networking have profoundly changed through the e-flux mailing list, art blogs and the electronic e-flux journal. These media have largely superseded paper art periodicals in circulation, power and influence, at least for the art system's in-crowd of artists and curators. Likewise, paper newspapers have become post-digital, or post-digitization, media wherever they shift their own emphasis from news (for which the Internet is faster) to investigative journalism and commentary, like The Guardian in its coverage of the NSA's PRISM program.

post-digital = anti-"new media"

"Post-digital" thus refers to a state where disruption through digital information technology has already occurred – which can mean, as for Cascone, that it is no longer perceived as disruptive. Therefore, "post-digital" is positioned against the notion of "new media". At the same time, as its negative mirror, it exposes (arguably even deconstructs) the latter's hidden teleology: if "post-digital" evokes critical reactions concerning the philosophy of history inscribed into the prefix "post", then it also reveals a previous lack of such criticality towards the older yet no less Hegelian term "new media".

post-digital = hybrids of "old" and "new" media

"Post-digital" describes a perspective on digital information technology that is no longer focused on technical innovation or improvement, but rejects such innovation narratives. Consequently, it eradicates the distinction between "old" and "new" media, in theory as well as in practice. Kenneth Goldsmith notes that his students "mix oil paint while Photoshopping and scour flea markets for vintage vinyl while listening to their iPods" (Goldsmith, 226). Working at an art school, I observe the same. Young artists and designers choose media for their particular material aesthetics, including artifacts, whether these result from analog material qualities or from digital processing. Lo-fi misbehavior is embraced no matter whether it is digital glitch and jitter, as in Cascone's music, or analog grain, dust, scratches and hiss – a form of practical exploration and research that approaches materials through their misbehavior. It is a post-digital hacker attitude of taking systems apart and using them against their design intentions.

Cassette Store Day: 2013 riff on the Record Store Day


post-digital = retro?

No doubt, post-digital mimeograph printmaking, audio cassette production, mechanical typewriter experimentation and vinyl DJing overlap with hipster retro media trends, including digital simulations of analog lo-fi in popular smartphone apps such as Instagram, Hipstamatic and iSupr8. On the other hand, there is a qualitative difference between using superficial and stereotypical ready-made effects and the thorough work and study required to make "vintage" media work again, driven by a desire for non-formulaic aesthetics.

Still, such practices can only be meaningfully called post-digital when they do not simply revive older media technologies, but functionally repurpose them in (critical) relation to mainstream digital media technologies: zines that become anti- or non-blogs, vinyl as anti-CD, cassette tapes as anti-mp3, analog film as anti-video.

post-digital = "old" media used like "new media"

At the same time, ethics and cultural conventions that became mainstream with Internet communities and Open Source culture are retroactively applied to the making of non- and post-digital media products. A good example is the collaborative zine convention, a thriving subculture documented, among others, on the blog fanzines.tumblr.com. These events, where people gather to collectively make and exchange zines, are the perfect opposite of the zine cultures of the post-punk 1980s and 1990s, where most zines were hyper-individualistic product and personality platforms of one maker. If one maps Lev Manovich's new media taxonomy of "Numerical Representation", "Modularity", "Automation", "Variability" and "Transcoding" (Manovich, The Language of New Media, 27-48) onto a contemporary zine fair or mimeograph community art space, then "modularity", "variability" and – in a more loosely metaphorical sense – "transcoding" would still apply to the contemporary cultures of working with these "old" media. In these cases, "post-digital" usefully describes "new media"-cultural approaches to working with so-called "old media".

DIY vs. corporate instead of "new" versus "old" media

When hacker-style and community-centric ways of working are no longer tied to specific technologies, but can equally be found in computer labs and at zine fairs, the classical dichotomy of "old" and "new" media, analog and digital, shifts to a new differentiation between shrink-wrapped and do-it-yourself culture. No mainstream medium embodies this better than the magazine and web site Make, published by O'Reilly since 2005 and instrumental in founding the contemporary maker movement. Make covers 3D printing, Arduino hardware hacking, FabLab technology, as well as classical DIY and crafts, and hybrids between them.

Conversely, the 1990s/early 2000s equation that "old" mass media such as newspapers, movies, television and radio are corporate, while "new media" such as web sites are DIY, is no longer true, ever since user-generated content has been co-opted into corporate social media and mobile apps. The Internet as a self-run alternative space – central to many activist and artists' online projects from The Thing onwards – is no longer intuitive for anyone born after 1990. For younger generations, the Internet is largely identical to corporate, registration-only services.3

Semiotic shift to the indexical

The Maker movement, whether in FabLabs or at zine fairs, embodies a shift from the purely symbolic, as privileged in digital systems (for which the login is the perfect example), towards the indexical: from code to traces, and from text to context. 1980s post-punk zines, for example, resembled manifestos such as those of the Berlin Dadaists in the 1920s, and 1980s Super 8 films made as part of the Cinema of Transgression and other post-punk movements created underground narratives against mainstream cinema. The majority of contemporary zines and experimental Super 8 films, however, tend to shift from content to pure materiality, where the medium, such as paper or celluloid, indeed is the message; from semantics to pragmatics, and from metaphysics to ontology.4

When ‘post-digital’ is ‘digital’ and vice versa

misunderstandings of "digital" as binary and electronic

From a technological and scientific point of view, the word "digital" is wrongly understood and used by Cascone. That also applies to most of what is commonly labelled "digital art", "digital media" and "digital humanities". If something is "digital", it neither has to be electronic, nor involve binary zeros and ones. It does not even need to be attached to electronic computers or any other kind of computational device.

Conversely, analog does not mean non-computational or pre-computational, since there are also analog computers. (Using water and two measuring cups for computing additions and subtractions – of quantities that can't be exactly counted – is a simple example of analog computing.) "Digital" simply means that something is divided up into exactly countable units – countable with whatever system one uses, whether zeros and ones, decimal numbers, strokes on a beer mat or the digits of one's hand. (Which is why "digital" is called "digital"; in French, for example, the word is "numérique".) Therefore, the Western alphabet is a digital system, the movable types of Gutenberg's printing press constitute a digital system, the keys of a piano are a digital system, and Western musical score notation is digital aside from such non-discrete value instructions as adagio, piano, forte, legato, portamento, tremolo and glissando. Floor mosaics made from monochrome tiles are digitally composed images. These examples show, too, that "digital" never exists in any perfect form but is always abstracted and idealized from matter that, by nature and the laws of physics, has chaotic properties and often ambiguous states5.

misunderstandings of "analog" as non-binary and non-electronic

"Analog" conversely means that something has not been chopped up into discrete, countable units, but consists of a signal that has no discrete units and changes gradually and continuously – such as a sound wave, light, a magnetic field (on an audiotape, but also on a computer hard disk), the electrical flows in any circuit including computer chips, or a painted color gradient. Goodman therefore defines analog as "undifferentiated in the extreme" and "the very antithesis of a notational system" (Goodman, 160).

The fingerboard of a violin is analog because it is fretless – undivided; the fingerboard of a guitar is digital because frets divide it into single notes. What is commonly called "analog" photographic and cine film is actually a hybrid of analog and digital: the particles of the film emulsion are analog, because they are undifferentiated blobs in organic-chaotic order and not reliably countable like pixels; the single frames of a film strip are digital, since they are discrete, chopped up and unambiguously countable.

The only ordering principle of analog signals is their analogy: their physical mimesis of the signals they reproduce. In the case of the photographic emulsion, the distribution of the otherwise chaotic particles mimics the distribution of the light rays making up the image the human eye sees; on the audiotape, the decreasing and increasing magnetization of the otherwise chaotic iron or chrome particles mimics the rising and falling of the sound wave it reproduces.

Technically, there are no such things as "digital media" and "digital aesthetics"

This means that media, in the technical sense of storage, transmission, computation and display devices, are always analog: the electricity in a computer chip is analog because its voltage can have arbitrary, undifferentiated values between its minimum and maximum, just like a fretless violin string. Only through filtering can one make a certain range of high voltage correspond to a "zero" and a certain range of low voltage to a "one". Hardware defects can make bits flip and turn zeros into ones. The sound waves produced by a sound card and a speaker are analog, etc. (This is what Kittler refers to, albeit opaquely, when arguing that in computing "there is no software" (Kittler, 81-90).) An LCD screen is a hybrid digital-analog system because its display has discrete, countable, single pixels, but the light they emit constitutes an analog continuum.

There is hence no such thing as digital media, only digital or digitized information: chopped-up numbers, letters, symbols and whatever other abstracted units as opposed to continuous, wavelike signals such as physical sounds and visuals. Most "digital media" devices are really analog-to-digital-to-analog converters: An mp3 player with a touchscreen interface, for example, takes analog, non-discrete gesture input, translates it into binary control instructions that trigger computational information processing of a digital file, ultimately decoding it into analog electricity that another analog device, the electromagnetic mechanism of a speaker or headphone, turns into analog sound waves.
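To make the analog-to-digital-to-analog chain described above concrete, here is a minimal sketch – not taken from any device's actual firmware – of what digitization amounts to: a continuous signal is sampled at discrete moments, each sample is quantized to one of a finite set of countable levels, and an approximation of the original is later reconstructed from those numbers. The sample rate, the 3-bit depth and the sine-wave "signal" are arbitrary illustrative choices.

import math

SAMPLE_RATE = 8            # samples per second (deliberately coarse)
BITS = 3                   # 3 bits -> 8 countable amplitude levels
LEVELS = 2 ** BITS

def analog_signal(t: float) -> float:
    """Stand-in for a continuous voltage: a 1 Hz sine wave in [-1, 1]."""
    return math.sin(2 * math.pi * t)

def quantize(value: float) -> int:
    """Map a continuous value in [-1, 1] onto one of LEVELS integer steps."""
    return int(round((value + 1) / 2 * (LEVELS - 1)))

def reconstruct(step: int) -> float:
    """Turn the countable step back into an approximate continuous value."""
    return step / (LEVELS - 1) * 2 - 1

for n in range(SAMPLE_RATE):
    t = n / SAMPLE_RATE
    original = analog_signal(t)
    digital = quantize(original)          # what gets stored: a countable number
    approximation = reconstruct(digital)  # what the speaker eventually receives
    print(f"t={t:.3f}  analog={original:+.3f}  digital={digital}  back={approximation:+.3f}")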

The same principle applies to almost any so-called digital media device no matter whether a photo or video camera or a military drone. As soon as something becomes perceivable, it takes the form of non-discrete waves. Therefore, anything aesthetic (in the literal sense of aisthesis, perception) is analog by strict technical definition.

digital = analog = post-digital…?

"Digital art" that would be based on the above rigorous technical definition of "digital" would likely be called "post-digital" or even "retro analog" by art curators and humanities scholars: stone mosaic floors made from Internet image memes, for example, mechanical typewriter installations6 or countdown loops running in Super 8 or 16mm film projection.

The everyday colloquial meaning of "digital" is metonymical: anything connected to computational electronic devices – even if it is a tripod. This notion has mostly been cultivated by product marketing and advertising. In their own name, the "digital humanities" have simply taken it over without questioning it. By challenging uncritical notions of digitality, "post-digital" art, design and media (whether or not one should technically call them post-digital) often make up for a lack of scrutiny among "digital media" critics and scholars.

Revisiting the hipster meme

The alleged typewriter hipster later turned out to be a writer who made a living from custom-written stories that he offered passers-by for sale. The meme picture had been taken from an angle that left out his sign "One-of-a-kind, unique stories while you wait". In an article for the web site The Awl, he recollects how it made him "An Object Of Internet Ridicule" and a target of open hatred.7 Knowing the complete story, his decision to take a mechanical typewriter to the park was pragmatically the best: electronic equipment (a laptop with a printer) would have been cumbersome to set up, run on battery power and keep safe from rain and theft, while handwriting would not have been easily readable enough and would have lacked the appearance of a professional writer's work.

C.D. Hermlin, the alleged "typewriter hipster"


If he had been an art student, even in a media arts program, the typewriter would still have been the right choice for this project. It is a post-digital choice because it didn't default to a "new media" device for the sake of its contemporariness. It also exemplifies post-digital hybridity of "old" and "new" media, since the writer advertises his Twitter account "@rovingtypist" and conversely uses this account to promote his story-writing service. He repurposed the typewriter from a prepress tool into a personalized small press, giving the "old" technology a new function relative to "new media" and exploiting qualities in it that make up for the latter's deficiencies. At the same time, he applies a "new media" sensibility to "old media" use: user-customized products, created in a social environment, with voluntary amounts of payment. Or rather, the notion of community media versus mass media has flipped, so that typewriters represent the former while participatory web sites have turned into the likes of Reddit and replaced yellow press mass media – including mob hatred incited by willful misrepresentation.

Desires for agency

Cascone and Andrews partly contradicted themselves when they coined the notion "post-digital" in the year 2000. On the one hand, they rejected "new media" advocacy; on the other, they heavily relied on it. Cascone's paper drew on Nicholas Negroponte's Wired article "Beyond Digital" (Negroponte), Ian Andrews' paper on Lev Manovich's "Generation Flash", an article that promoted the very opposite of the analog/digital, retro/contemporary hybridizations associated with the term "post-digital" today (Manovich, Generation Flash). If post-digital cultures are made up of, metaphorically speaking, postcolonial practices in a communications world taken over by the military-industrial complex of only a handful of global players, then the post-digital can most simply be described as mental opposition to phenomena like Ray Kurzweil's and Google's Singularity University, the Quantified Self movement, sensor-controlled "Smart Cities" and similar dystopian techno-utopias.

Nevertheless, Silicon Valley utopias and post-digital subcultures (whether in Detroit, Rotterdam or elsewhere) have more in common than it might seem. Both are driven by fictions of agency.8 There's a fiction of agency over one's body in the 'digital' Quantified Self movement, a fiction of the self-made in the 'post-digital' DIY and Maker movements, a fiction of a more intimate working with media in 'analog' handmade film labs and mimeograph cooperatives. They stand for two options of agency: over-identification with systems, or skepticism towards them. Each is, in its own way, symptomatic of system crisis. It is not a crisis of one or the other system, but a crisis of the very paradigm of "system" and its legacy from cybernetics – a legacy which (starting with their very names) neither "digital" nor "post-digital" succeeds in leaving behind.

Works cited

Andrews, Ian. "Post-digital Aesthetics and the return to Modernism." (2000) Web. December 2013 http://www.ian-andrews.org/texts/postdig.html

Cascone, Kim. "The Aesthetics of Failure: ‘Post-Digital’ Tendencies in Contemporary Computer Music." Computer Music Journal, 24.4 (2000): 12-18. Print.

Cox, Geoff. "Prehistories of the Post-digital: some old problems with post-anything." (2013) Web. December 2013

Cramer, Florian. "Post-Digital Aesthetics." Jeu de Paume le magazine, May 2013. Web. December 2013 http://lemagazine.jeudepaume.org/2013/05/florian-cramer-post-digital-aesthetics/

Cramer, Florian. "Post-Digital Writing." electronic book review, December 2012. Web. December 2013 http://electronicbookreview.com/thread/electropoetics/postal

Eggers, Dave. The Circle. New York: Knopf, 2013. Print.

Goldsmith, Kenneth. Uncreative Writing: Managing Language in the Digital Age. New York: Columbia UP, 2011. Print.

Goodman, Nelson. Languages of Art. Indianapolis/Cambridge: Hackett, 1976. Print.

Hermlin, C.D. "I Am An Object Of Internet Ridicule, Ask Me Anything." The Awl, 18 September 2013. Web. December 2013 http://www.theawl.com/2013/09/i-was-a-hated-hipster-meme-and-then-it-got-worse

Kittler, Friedrich. "There Is No Software." Stanford Literature Review 9 (1992): 81-90. Print.

Klok, Timo. "4chan and Imageboards." post.pic. Ed. Research Group Communication in a Digital Age. Rotterdam: Piet Zwart Institute, Willem de Kooning Academy Rotterdam University, 2010: 16-19. Print.

Manovich, Lev. "Generation Flash." (2002). Web. December 2013 http://www.manovich.net/DOCS/generation_flash.doc

Manovich, Lev. The Language of New Media. Cambridge, MA: MIT, 2002. Print.

Negroponte, Nicholas. "Beyond Digital." Wired 6.12 (1998). Web. December 2013 http://web.media.mit.edu/~nicholas/Wired/WIRED6-12.html

Pynchon, Thomas. Bleeding Edge. London: Penguin, 2013. Print.

Van Meer, Aldje. "I would rather design a poster than a website." Willem de Kooning Academy Rotterdam University, 2012-2013. Web. December 2013 http://www.iwouldratherdesignaposterthanawebsite.nl, http://crosslab.wdka.hro.nl/ioi/C010_folder.pdf

(With cordial thanks to Wendy Hui Kyong Chun, Nishant Shah, Geoff Cox, Søren Pold, Stefan Heidenreich and Andreas Broeckmann for their critical feedback.)


  1. As empirically researched for Dutch art school students by (van Meer).

  2. (Cramer, Post-Digital Writing), (Cramer, Post-Digital Aesthetics).

  3. In a project on Open Source culture with Bachelor students from the Willem de Kooning Academy Rotterdam organized by Aymeric Mansoux, it turned out that a number of students believed that web site user account registration was a general feature and requirement of the Internet.

  4. It’s debatable to which degree this reflects the influence of non-Western, particularly Japanese (popular) culture on contemporary Western visual culture, particularly in illustration (which amounts to a large share of contemporary zine making). This influence even more clearly exists in digital meme and imageboard culture.

  5. Even the piano, if considered a medium, is digital only to the degree that its keys implement abstractions of its analog-continuous strings.

  6. Such as – six years before the typewriter hipster meme – Linda Hilfling’s contribution to the exhibition MAKEDO at V2_, Rotterdam, 29-30 June 2007.

  7. (Hermlin) writes: "Someone with the user handle 'S2011' summed up the thoughts of the hive mind in seven words: 'Get the fuck out of my city.' Illmatic707 chimed in: 'I have never wanted to fist fight someone so badly in my entire life.'"

  8. This is how (van Meer), coordinator of CrossLab at Willem de Kooning Academy Rotterdam, interprets art students’ preference for working non-electronically and "rather make a poster than a website".

Prototyping the Future of Arcade Cabinet Emulation (v1.5a)

Introduction:

 

This paper is a background research piece into the development of an interactive installation that prototypes a possible future trajectory for arcade videogame emulation. The project aims to explore how the experience of interfacing with complete arcade videogame cabinets can be recreated in virtual reality space. As an interactive experience it is intended not just to authentically recreate the audiovisual aesthetics of the videogame input and feedback mechanisms, but also the full physical design of the cabinet, including the appearance of the enclosed game circuitry. This virtual arcade cabinet exists in a digital construct that emulates the ambience of a videogame arcade, presenting the situated experience of coin-op gaming to the user complete with its original surrounding environment.


Emulation as Platform Augmentation:

 

An emulator is a software or hardware system that recreates the architecture of one computer system on another platform. Through the virtual machine of an emulator it is possible to experience a computer system transplanted as a subroutine of a more advanced platform, whether it be hardware or software based. They are computers within computers.

 

Emulation is a legal grey area, and is tolerated to an extent by the owners of the emulated system. Upon boot-up the MAME emulator presents a splash screen reminding the user that they must legitimately own a copy of the game ROM they are about to load. In practice, however, most users don't actually own the rare and costly game PCBs that physically contain the game code. Instead they simply use an online search engine to obtain the required ROM files illicitly.

 

Emulators replicate the functionality of a past platform while also leveraging the additional affordances offered by the emulation host. For example, MAME features a memory editor and disassembler that allows users to edit a game's code as it runs, viewing changes to the end-user experience immediately. In this case the emulator takes a system that was designed purely for the 'play only' consumer space and augments it with a developer-level interface. With the additional use of an assembler package and an EPROM burner, it is possible to transfer this new code creation to an EPROM chip, and in turn to an arcade PCB, thus allowing the hacked game to be played through the original arcade hardware platform.

 

When a game originally designed for playback on a cathode ray tube display is presented through the clear viewfield of an LCD or LED display, it gains pixel-sharp clarity, but also loses part of the original monitor colouration that was taken into consideration by game designers. The CRT filter built into the Atari 2600 emulator Stella addresses this issue, allowing for image ghosting and colour mixing that helps to partially mask the system's high level of sprite flickering. Similarly, the SLG-1000 hardware device by Arcade Forge recreates the scanlines of bulky CRT tubes on flat panel HD displays, improving aesthetic authenticity when playing classic games by turning an outdated display limitation into an essential feature.
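As a rough illustration of what the simplest form of scanline emulation does – this is not Stella's or the SLG-1000's actual algorithm – the following sketch upscales a low-resolution grayscale frame and darkens every other output row to suggest the gaps between a CRT's scanlines. The tiny example frame and the darkening factor are invented for the demonstration.

def add_scanlines(frame, scale=2, darken=0.5):
    """Upscale a grayscale frame (list of rows of 0-255 ints) and dim odd output rows."""
    output = []
    for row in frame:
        # duplicate each pixel horizontally to upscale the row
        upscaled_row = [pixel for pixel in row for _ in range(scale)]
        for y in range(scale):
            if y % 2 == 0:
                output.append(list(upscaled_row))                       # bright line
            else:
                output.append([int(p * darken) for p in upscaled_row])  # dark gap
    return output

if __name__ == "__main__":
    frame = [
        [0, 255, 0],
        [255, 0, 255],
    ]
    for row in add_scanlines(frame):
        print(row)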


The Physiology of an Arcade Cabinet:

 

In comparison to home computers and videogame consoles, the underlying technology powering arcade videogame platforms is less well known. Each arcade PCB is a standalone computer. These devices range from bespoke PCBs for single games such as Pong, to boards based upon home console technologies, like the Sega Naomi, which is closely related to the Sega Dreamcast console, to adapted PC-compatible machines.

 

One main unifying standard between the disparate hardware types is the JAMMA standard. It is not the only standard of its kind, but it is the most prolific. Up until 1985 arcade game manufacturers used a variety of different wiring systems in the design of their cabinets. This lack of hardware interchangeability led to increased costs for arcade owners, who had to replace entire cabinets each time they bought a new game. The JAMMA standard, agreed by the Japan Amusement Machinery Manufacturers Association, introduced a 56-pin connector for connecting game PCBs to cabinets, allowing the exchange of JAMMA PCBs between compatible machines in a manner similar to swapping a game cartridge on a home console system. These pins allow the connection of the power supply, speakers, monitor, coin-slot switch, and the action buttons and joysticks or other controller peripherals.

 

Structurally, arcade cabinets are unglamorous, built from the same materials as their kitchenware namesakes. Indeed, Atari's Irish operation in the 1970s bought a local furniture manufacturer to produce arcade cabinets for the European market (McCormack). Wear and tear on these wooden frames in the arcade environment has led to high collectors' prices for well-preserved originals. This battle damage adds character, but is also a problem for their preservation. Rust, chipped fiberboard, and split veneers all add up to heavy restoration projects worthy of a Discovery Channel show.

 

An arcade cabinet is a host shell for the game logic contained on the arcade board, and in many cases the design of this enclosure adds an additional level of atmosphere and immersion to the game that is difficult to recreate outside of its natural environment. At the most basic level, these enhancements typically amount to cabinet artwork and an illuminated title marquee that seek to sell the game narrative to prospective punters. At the high end of the market arcade games move close to simulator territory, adding enhancements such as hydraulics and force feedback. Many of the arcade cabinet designs by Yu Suzuki for Sega meet this level.


Recreating the Arcade Cabinet as a Digital Artifact:

 

While working at Sega Japan, Yu Suzuki was responsible for the design of several of Sega's arcade hits, including Hang On (1985), Afterburner (1987), ThunderBlade (1987), and Out Run (1986). Each of these games was offered in a simple stand-up (SD) and a sit-down deluxe (DX) model. The deluxe models all brought a high level of technical and aesthetic polish to their cabinet design. For instance, the deluxe model of Hang On takes the shape of a 500 lb reproduction of a Ducati motorcycle, which the player must lean left and right upon to steer. It is a game that demands that the player move their whole body weight to control it.

 

Suzuki’s emphasis on the physical design of the arcade game recognises that the cabinet is the most immediate part of a game's 'attract mode': “with arcade games, the cabinet is the most important thing. When you see a cabinet, that’s usually when you decide whether you want to play a game or not… The form is the most important thing when you buy a car, right?” Yu Suzuki, Sega (ブライアン・アッシュクラフト and Snow 131–132).

 

In the pioneering 3D sandbox games Shenmue (1999) and Shenmue II (2001) on the Sega Dreamcast console, Yu Suzuki recreated a number of his coin-operated arcade videogames in virtual space1. The interactive 3D renderings of his deluxe arcade cabinets include the aforementioned Hang On and Out Run, in addition to Space Harrier (1985), which is widely credited as the first sit-down arcade cabinet. Each game is a full emulation of the original system, and the player can walk around the virtual space and inspect the design and artwork of the arcade cabinets from different angles, all while sampling the ambiance of a 1980s Japanese arcade amusement centre.

 

Upon starting each virtual arcade game, the player viewpoint switches from a third-person perspective to a view in which the arcade monitor completely replaces the playfield. The design decision to momentarily switch out of the surrounding environment and allow the diegetic onscreen space of the emulated system to take over the host game's screen space is understandable, since these sub-games are not critical to the overall narrative. Also, the 1998 Dreamcast hardware was already pushed to its maximum when emulating the aforementioned arcade games, so adding any image filtering or other graphical embellishments would have been beyond its capabilities.


This perspective on the monitor is developed a step further in the arcade games included as part of Grand Theft Auto: San Andreas2. When a player steps up to a coin-op to play either Let's Get Ready to Bumble, Go Go Space Monkey, or Duality, the screen is taken over by the coin-op, except that unlike Shenmue the view takes a step backwards. GTA:SA acknowledges the medium of the CRT screen, showing the tube's curvature as well as the surrounding plastic bezel.

 

GTA:SA modder ThePaddster has modified the arcade machine textures from San Andreas, replacing them with the artwork for Bally Midway's Mortal Kombat (1992)3. Unfortunately the modification does not change the sub-games, but the effect of changing the cabinet graphics is interesting and a tangible step towards a customisable virtual arcade, where game ROMs manifest as digital game cabinets in a 3D space instead of 2D images in a folder.

 

In a visual and touchscreen interface style common to mobile and tablet conversions of arcade and console titles, Capcom's Mega Man II on iPhone4 uses an onscreen representation of the arcade cabinet facade to frame its emulated Nintendo Entertainment System game. This style of virtual arcade machine takes a further step back from the monitor than GTA:SA, incorporating a joystick control panel as well as the game logo embedded into a representation of an arcade cabinet marquee. The additional graphics also form a necessary visual filler between the game's original display ratio and the widescreen aspect of the iPhone.

 

The next logical step in improving the experiential and aesthetic quality of the virtual arcade machine is to take an additional step back in perspective to encompass both the onscreen space and the peripheral vision of the player. While this expanded view adds distractions to the sub-game experience, it can be argued that blocking out the immediate environmental ambience causes existing virtual coin-op gaming experiences to lose a level of reality and authenticity.


Post-Digital Emulation: Reproduction, Ritual, and the Third State:

 

A post-digital emulator is as much about preserving human ritual as it is about reproducing interaction aesthetics and game logic. By expanding beyond the glass space of the screen interface to include the situated play environment of the arcade coin-op as part of the interface, VRAME enables extra performative layers of interaction unique to the arcade game centre environment.

 

The VRAME prototype allows the user to start and continue games by inserting virtual coin tokens, and games are changed by exchanging virtual JAMMA PCBs. The user engages with the machine while adjusting their viewing angle to a comfortable vantage point, and plays while surrounded by the ambience of the arcade gaming space. This style merges the arcade environment of Shenmue with its separate arcade screen view.

 

Emulation is by nature a fuzzy approximation, because of the noise that separates the ethereal digital copy, in terms of fidelity, from its real-world analog source. Even in the case of an emulation platform that perfectly reproduces the sonic and visual properties of the legacy system to the end user, the underlying computational technology imitates rather than replicates.

 

The Field Programmable Gate Array (FPGA) chip is a chameleon component that can be reshaped by code to mirror the schematics of classic platforms. The FPGA-based MIST console emulates the Commodore Amiga and Atari ST home computers while adding the affordance of flash memory storage. However, the miniature form factor reduces the physical appearance of the machine to a nondescript box without any of the outward design character of the original 16-bit computers. The user of an emulator receives ease of use and new nondiegetic operator acts such as a pause function (Galloway), while sacrificing constraints that defined the nature of the original system.

 

The Third State refers to the experience of using an emulator for someone who has never experienced the original artefact. For the 'experienced' user, memory fills in the gaps of imperfection in the user experience; it is coloured by their sense of context. Conversely, a lack of linkage to the source experience creates a new situation. The gaps between the analog and digital versions of the given experience are less obvious, leading to the flaws in resolution becoming features. We can see this in the use of pixel-sharp imagery from the 8-bit era. The sharp right angles of pixel art in its post-digital context contradict the blurry, phosphor-tinged images viewed by gamers during the early years of videogaming, yet are accepted as authentic by the 8-bit revival audience.


Considerations in Prototyping a VR Arcade Machine Emulator:

 

A prototype aims to provide the experience of using a technology, whilst not necessarily using the same technology as the envisioned end product. It is intended as a demo of an arcade emulation style that goes beyond displaying the arcade artwork in a 2D form, instead actually wrapping it around a 3D model of the particular coin-op machine, while allowing the player to view the inside of the arcade machine.

 

At the time of writing, the powerful and affordable Oculus Rift development kit has made virtual reality a viable option, over two decades after the first commercial attempts at immersive VR. By using a virtual reality headset the user can experience the playfield from a real-world perspective.

 

If used as part of the digital arcade prototype, this would allow momentary glances at the digital arcade cabinet's control panel and frame during gameplay. The player could also opt to move away from the screen and inspect the cabinet internally, viewing the PCB from the perspective of the arcade operator while accessing information on its hardware specifications.

 

The Computerspielemuseum in Berlin has a Pong cabinet with plexiglass fitted to the back so that visitors can view the circuitry of the machine. This is an important consideration, as the electronics of this artifact are as noteworthy a part of the interface as the controllers and audiovisual feedback. A complete VR arcade cabinet simulator should include an option to view the internal structure of the cabinet itself.

 

This internal view of the digital arcade cabinet serves three purposes. Firstly, it provides an operator-level interface for the user, beyond the game calibration screens that allow operators to change in-game variables such as the default number of lives and difficulty levels. Secondly, it demystifies the internal structure of the arcade machine, presenting the internal aesthetics of the wiring and circuitry as a visible and essential part of the overall cabinet build. The third advantage is that it provides an historical and educational document of the machine hardware that is impervious to wear and tear.

 

A real consideration, if this concept prototype were to become an actual emulation system, is the workload involved in sourcing and producing 3D models. Emulation software relies on community effort for the continued updating of the source code, as well as the procurement of less legal items such as ROM files, game artwork, and instruction manuals. For a 3D arcade cabinet emulator to succeed, it would need an open format that allows the community to create their own 3D cabinets, complete with exterior artwork and interior game wiring and PCBs.
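Purely as a thought experiment, such an open cabinet-description format might look something like the sketch below. None of these field names come from VRAME, MAME or any existing format; they only indicate the kind of information – exterior meshes and artwork, interior wiring and PCB models, control layout – that a community-maintained format would have to carry.

# Hypothetical cabinet description; every key and file name here is invented.
CABINET_DESCRIPTION = {
    "name": "Generic JAMMA upright",
    "rom_set": "example",                 # ROM set the emulator should load
    "exterior": {
        "model": "cabinet_upright.obj",   # 3D mesh of the wooden shell
        "side_art": "side_art.png",
        "marquee": "marquee.png",
    },
    "interior": {
        "pcb_model": "jamma_pcb.obj",     # visible circuitry for the internal view
        "wiring_loom": "loom.obj",
        "notes": "56-pin JAMMA edge connector to control panel and monitor",
    },
    "controls": {
        "joystick": "8-way",
        "buttons": 2,
        "coin_slots": 1,
    },
}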

 

The VRAME environment is a virtual shell through which MAME or a similar emulator is experienced while embedded into a 3D representation of an arcade cabinet. It uses the open source 3D engine and game creation tool Blender, along with the Oculus Rift virtual reality headset. Blender allows the use of VNC sessions as textures. VNC stands for Virtual Network Computing, a system that allows a user to view and control a computer desktop remotely from another system. In VRAME the texture that represents the arcade screen connects to a system running the MAME emulator. All input commands sent to the interactive scene built in Blender are rerouted to the MAME system, which broadcasts all audiovisual feedback back to the user through the virtual screen's VNC session texture.
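The input-rerouting principle can be sketched in a few lines. This is not VRAME's actual code; both the vncdotool calls and the MAME default key bindings used below should be read as assumptions made for the sake of illustration. Control events captured in the 3D scene are translated into key events on the machine that runs MAME behind a VNC server, whose framebuffer in turn textures the virtual screen.

from vncdotool import api

# Map virtual-cabinet controls to the keyboard keys MAME listens for by default
# (5 = insert coin, 1 = player 1 start, arrow keys = joystick, Ctrl = button 1).
# Both these bindings and the vncdotool key names are assumptions.
CONTROL_TO_MAME_KEY = {
    "COIN": "5",
    "START": "1",
    "JOY_LEFT": "left",
    "JOY_RIGHT": "right",
    "BUTTON_1": "ctrl",
}

def forward_control(client, control, pressed):
    """Translate a cabinet control event from the 3D scene into a VNC key event."""
    key = CONTROL_TO_MAME_KEY.get(control)
    if key is None:
        return  # unmapped control, ignore
    if pressed:
        client.keyDown(key)
    else:
        client.keyUp(key)

if __name__ == "__main__":
    # "mame-host:0" is a placeholder for the machine running MAME behind a VNC server.
    client = api.connect("mame-host:0")
    forward_control(client, "COIN", True)    # drop a virtual coin token
    forward_control(client, "COIN", False)
    forward_control(client, "START", True)   # press player 1 start
    forward_control(client, "START", False)
    client.disconnect()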

 

In an exhibition setting, the VRAME installation consists of a minimal pedestal containing a harness for the VR headset along with a control panel using physical game controls. Visually it appears as an arcade cabinet that has been significantly minimalised. A square outline on the ground is used to reflect the immaterial object now built in virtual space. When the user steps up to the pedestal, they don the VR headset and find themselves standing in front of a full arcade cabinet.

 

The second option is to remove the controls, instead using a wireless gesture-capturing system to match the player's hand movements to a 3D representation of their hands in 3D space, registering collisions with the digital renderings of the control panel. Both options have their pros and cons. The gesture-based version keeps the physicality of the emulated control system purely digital, in a malleable, ethereal state. On the other hand, the tangible controller adds a grounded, solid, yet distant link between the playing human and the cyber arcade cabinet.


Galloway, Alexander R. Gaming: Essays on Algorithmic Culture. Minneapolis: University of Minnesota Press, 2007. Print.

McCormack, Jamie. “Atari and Ireland.” Game Developers Ireland. N. p., 13 Nov. 2008. Web. 1 Dec. 2013.

ブライアン・アッシュクラフト, and Jean Snow. Arcade Mania! = ゲーセン・マニア: The Turbo-charged World of Japan’s Game Centers. Tokyo; New York: Kodansha International : Distributed in the United States by Kodansha America, 2008. Print.


1. http://youtu.be/gBwkVo7logk

2. http://youtu.be/9Nu93tumooU

3. http://youtu.be/XrqeQrulwXE

4. http://youtu.be/dpRkHiARlso

Post-Digital is Post-Screen – Shaping a New Visuality – Josephine Bosma

If the interest in the post-digital seems to point at anything, it is that the usefulness of the digital as a discursive element in analyzing the impact and place of technology in society and culture is waning. Digital technologies, on the other hand, only grow and proliferate. This raises the question: why do we need or want to discuss matters in terms of a post-digital condition if digital media do not seem to be losing any ground? I look at this issue in the context of art. Here, the digital realm tends to be perceived as screen-based. This tendency is validated by popular approaches in media art, most notably that of Lev Manovich. One could argue, however, that the screen is not the most important part of a digital computer, and thus also not of digital media. Paul E. Ceruzzi states in his History of Computing that the computer can be defined in various ways. One definition is that the computer is a system applicable to many different tasks, even beyond a ‘purely technical arena’. Another is that it is a social construct (Ceruzzi 4). This means the definition and shape of a computer is flexible, both technologically and socio-culturally. A screen-based analysis of art in this context literally glosses over the issues in this area, and makes certain works partially or completely ‘invisible’. The development of a post-digital media theory may help us break away from a dominant screen-based analysis of art in the context of digital media. The issue here is not one of medium specificity though. The aim is to develop a more comprehensive view of specific works and practices to depart from in criticism, theory, and education.

Not only does the screen get overvalued; what is not directly visible is also less likely to get noticed. Additional problems for art in the context of digital media seem to be the visual impermeability or the spatial dispersion of specific works and practices. What I mean by visual impermeability is the presence of somehow ‘hidden’ structures, like network technologies, code and software processes, and even indirect influences of the Internet or of computer technology, in specific works of art. Spatial dispersion, on the other hand, points to works in which the various elements of a work are out of reach physically, hiding them in another way. In the case of networked installation art or performance they are in another space, another town, or another country (Malpas 109; Shanken 35). In conceptual or tactical applications of networked space there often is only a second-degree, and thus also distant, network connection (Greene 119; Cramer, “Anti-Media” 221). With art consisting entirely of code executed in a computer, the work of art is not just hidden inside the fiber and plastics of a machine, but it is also spatially dispersed in terms of the time consumed and the movements, inside and outside the computer, produced in the process (Arns 198; Goriunova, Shulgin, “Read_Me 2004 Edition” 20).

Art created in the context of digital media generally possesses a high degree of openness, because it is often time-based, interactive (Paul 23), and interdisciplinary, or what Frank Popper calls ‘poly-artistic’ (131). The shape of the works described above asks for a perspective that reaches not only beyond the screen, but also takes into account the instability and interdisciplinary basis of the works in question. Earlier approaches suggest using Jack Burnham’s Systems Aesthetics (Shanken Digital Arts and Culture Conference 2009) or Callon and Latour’s Actor Network Theory (ANT) (Lichty ISEA2011) as a basis. The enduring prevalence of the visual arts in contemporary art institutions and exhibitions suggests, however, that developing a view beyond the screen asks for an alternative visual approach, rather than a predominantly conceptual or actor-network approach. The work of Rudolph Arnheim offers a possible basis for an overarching theory of a new visuality in his book Visual Thinking (274). Arnheim describes how a non-retinal way of seeing exists in science, where the knowledge of the existence of events, structures and objects often precedes or even constitutes their visibility. It potentially connects conceptual and scientific approaches, including the still relevant methodologies based on Systems Aesthetics and ANT, to the visual domain.

At the same time there is of course a level of abstraction in all art, including the examples used here, which cannot be described in terms of a visualization derived from scientific knowledge or insight alone. What is needed is an elaboration of the notion of the expanded image towards forms of imagination that combine the actual and the immeasurable, or the poetic. By including conceptual visualizations of actual or virtual (i.e. possible) events, systems or objects in an understanding of visual art, the space of interpretation and engagement with art should be enriched rather than limited. An understanding of how material dimensions of a work of art expand, exist, or behave beyond the line of sight, and in the case of digital art beyond the screen, need be no more prescriptive concerning interpretation or appreciation than seeing a painting or a sculpture. I see my work as an addition to the discussions about a new approach to, or interpretation of, materialism in art and media theory (Daston 14; Parikka, “New Materialism as Media Theory” 99; Dolphijn, van der Tuin 98; Barret, Bolt 3), because of the unwanted but inescapable battle about ‘what matters’ in art, a battle one has to fight in new media art all too often (Graham, Cook 6). The work of Alexander Galloway is also an ongoing inspiration to explore the connections and crossovers between the digital and the old-fashioned ‘Real’, and this text borrows heavily from his The Interface Effect. A revaluation of the material dimensions of art and culture seems at hand, and nowhere does it seem more urgent than in the fast-growing digital domain.

The perceptual model borrowed from Arnheim needs to be understood in all its variability if it is to be used for art. Refinements from specific fields and sub-fields of media theory, contemporary philosophy, the media art field and the contemporary art field are necessary to complete any picture of art after the collapse of the digital screen: the post-digital sphere.

The Bright and Blinding Screen

In her book Where Art Belongs the art writer Chris Kraus puts what she calls ‘digital forms’ in the same realm as video (119). She is but one of many critics and theorists who describe art in the digital realm in terms of the image and the screen (Bourriaud 69; Foster 105; Jameson 110; Krauss 87; Virilio 14; Rancière 9). The manner in which it is described is almost always negative. Computers are described as the present-day epitome of Guy Debord’s The Society of the Spectacle, or as problematic because they are prolific image-copying machines. Virilio, in all his poetic paranoia, expresses this feeling precisely: ‘What was still only on the drawing board with the industrial reproduction of images analysed by Walter Benjamin, literally explodes with the ‘Large-Scale Optics’ on the Internet, since telesurveillance extends to telesurveillance of art.’ (14)

This superficial view of the computer and of digital media in general is supported, or at least barely countered, by influential writers from the media art field. Lev Manovich’s bestseller The Language of New Media describes the computer almost entirely in terms of cinema. Even the chapter called The Operations, which follows a chapter on screens, focuses solely on image editing and image sequencing (117). In his book The Interface Effect Alexander Galloway starts off with a respectful yet also critical analysis of Manovich’s cinematic approach to new media. Galloway takes his criticism of this approach further by connecting it to another popular approach, that of remediation (20). The theory of remediation draws a straight line from medieval illustrated manuscripts to linear perspective painting to cinema to television and lastly to digital media (Bolter, Grusin 34). The radical transformations brought on by digital technology are explained only by stating it ‘can be more aggressive in its remediation’ (Bolter, Grusin 46). Galloway however observes that, far from remediating a visual language like that of cinema, the computer ‘remediates the very conditions of being itself’ (21). In terms of art practice this means that digital media remediate art as is, with all its complexities and contradictions. Digital media however do so from their own form of Dasein, which comes to be through their design and application.

The focus on the screen therefore is not a problem produced by digital technologies per se. To find a possible cause and solution for this problem it seems more appropriate to approach it as a continuation and amplification of issues in art criticism and cultural theory at large. Though a variety of approaches to discuss art involving digital technologies exists (Blais, Ippolito 17; Cramer 8; Popper 89; Bazzichelli 26; Holmes 14), “no clearly defined method exists for analyzing the role of science and technology in the history of art” as a whole (Shanken 44). Edward Shanken notes how, after the heyday of modern art, historians stopped describing technological developments in art (45). It is in this period especially that digital technologies have prospered exponentially. This change in art historical method seems to have created a lack of analytical tools to grasp the realities of art in the age of digital media. What the ongoing screen-based analysis of digital media shows is that this causes the consistent variability of the digital in art to go largely unnoticed.

Visualization of Highly Complex Forms

The illusory malleability and disappearance of digital media in the remediation of being Galloway describes should not be interpreted as digital technologies having no form. What Galloway’s Interface Effect means for art is that the art object exists within a complex system of elements that are technological and political at once. A certain amount of institutionalization slips into the deepest layers of life and practice through everyday tools for expression, production, and recollection. Galloway speaks of an ‘anti-anthropocentrism of the realm of practice’ (22). We run our economic, cultural, social, and military environments increasingly in collaboration with machines, rather than simply using those machines. For art this means we have bypassed the stage of the medium almost completely. Art exists within an ecosystem of humans and machines, whereby the latter reproduce their design in the way in which they compose an outcome. Though digital technologies are human-made and can be subjected to a huge variety of possible applications and couplings, their underlying structures are created with and from a mathematical efficiency that is highly rigid. Galloway illustrates this quite literally by discussing the way the Internet itself is visualized through various digital imaging software. Galloway implicitly criticizes screen-based analysis of digital media technologies when he reveals how all visualizations of the Internet look more or less the same (83). Analyses and views of art and culture today based on images and imaging alone miss the point. He calls for ‘a poetics as such for this mysterious new machinic space’. Galloway writes: ‘Offering a counter-aesthetic in the face of such systematicity is the first step toward building a poetics for it, a language of representability adequate to it’ (99).

Galloway’s call for a poetics as such for digital environments is a challenge to Jacques Rancière, who in his book The Future of the Image discusses the unrepresentable today in terms of violent images (109), but completely overlooks the challenges concerning acts of violence in today’s information society, and how to represent these new forms of violence (Galloway 91). The difficulty of representing events, shapes, and practices within the digital realm is however not limited to those of violence. The highly varied field of art practices is one of the many events and practices that escape simple imaging in digital media environments. The merging of machine space and, in this case, art practice asks for a visualization method that is simultaneously applicable to both. Within a context that is deeply connected to the scientific realm, applying a form of visualization common in science seems fitting.

In his book Visual Thinking the psychologist and art theorist Rudolph Arnheim describes various forms of visualization, one of which is that of scientific speculation and knowledge. It boils down to ‘seeing’ things you know are there but which cannot be seen by the naked eye. It is not a form of imaginative mental construction of unreal events or phenomena. Arnheim calls such visualizations ‘models for theory’ (274). He describes examples of how such models appear in the natural sciences and geometry. Even if he uses examples from the hard sciences, his approach to scientific visualizations is largely psychological (275). He explains how every scientific model of an ‘invisible’ event or object is never static or stable, as it is based on a mixture of theory, observation, experience, and psychology. In other words, these visualizations are as much subjective as they are objective views of events, phenomena, or objects that exist beyond the reach of the human eye.

As an illustration: Galileo not only had to battle church dogmas. He also had to constantly challenge his own, learned modes of perception, and in the end he did not completely succeed. Galileo refused to accept that planets rotated around the sun in ellipses rather than in circles. His refusal was based on cultural notions of an underlying perfection existing in all of God’s creation, and ellipses were considered imperfect. Arnheim quotes Erwin Panofsky pointing out that the ellipse, the distorted circle, ‘was as emphatically rejected by High Renaissance art as it was cherished in mannerism’ (278).

Models for Theory and Interpretation

A method of visualization based on that of science therefore is not prescriptive, but flexible and even dynamic. Works of art can still be explored from different perspectives, for the development of which intuition, theory and physical experience are combined. According to Arnheim, in a scientific form of visualization ‘all shapes are experienced as patterns of forces and are relevant only as patterns of forces’ (276). The shapes he refers to do not need to be physical. ‘The kind of highly abstract pattern I have been discussing is applicable to non-physical configurations as readily as to physical ones, because there again the concern is with the pattern of forces, a purpose best served by exactly the same means’ (Arnheim 279-280). Pictures, models, or visualizations developed from interpreting these patterns of forces depend on former experiences and intellectual, cultural, or emotional preconceptions of the beholder.

To illustrate how this can play out: whereas Jacques Rancière describes the future of the image and representation in terms of ‘machines of reproduction’ (9), Galloway looks at the same surface and sees what he calls The Interface Effect, which is an effect ‘of other things, and thus tells the story of the larger forces that engender them’ (preface). One sees a copy and editing tool, the other a change in what images represent. Different positions and different levels of knowledge can produce subtle differences in experience. Yet even a highly informed viewing of, say, a network installation piece may still evoke a variety of interpretations and readings.

Artistic practice is at least as varied as that of science. Not just any model for theory will fit every individual work. Which specialism to approach an individual work from depends on obvious indications or pretheoretical intuitions about the disciplinary realm this work most clearly seems based in. When an artist presents his own software as a work, the obvious choice could be to approach this work from computer linguistics and literary theory, as well as from art. When the emphasis in a work is on achieving some kind of political or social effect, the obvious choice might be to include a tactical media perspective, in which a political and a technological analysis of media technologies is mixed, in an interpretation. Though in practice most works of art in the context of digital media will turn out to need an interdisciplinary approach, the ‘remediation of being’ Galloway describes does seem to preserve a continuation of the same diversity we find all through art practice, even if certain visible elements appear the same (the presence of computers, cables, screens, windows on a screen, predominant formats for sharing texts, etc.).

Literature on art in this context shows a variety of forms, of which a poetic use of code (Baumgärtel 11; Goriunova, Shulgin 4; Arns 194; Cramer, “Words Made Flesh”, 8), a sculptural use of networks (Popper 181; Weiß 175; Shanken 140), and conceptualist practices (Greene 9; Holmes 20; Hand 10) are examples that show the heterogeneity of the field. I concentrate on these, while being aware of the interdisciplinary character of each work in these areas, and of the physical and conceptual overlaps between them. What all have in common is of course a connection to the digital field. This means all include some form of application of, or reference to, executable code.

Visual Thinking in Action: Code Art

Various authors have described the deep entrenchment of code in culture and society, and its defining role in new systems of power (Galloway, Thacker 30; Galloway 54; Wark [029]). Others have emphasized the generative aspect of code, and its prominence outside institutional realms (Arns 201; Goriunova, Shulgin 6). Some even go as far as describing code art as a virus, or as an antibody against a sick culture (Blais, Ippolito 17). What is clear from all descriptions of code art is that it cannot be represented on a retinal plane in its entirety, or in its full capacity. Code as a written text, deep within a computer or presented on screen or paper, encompasses a potential activity that cannot be grasped from a literal reading or retinal observation alone. Code is perceived through textual representation, as screen-based results of software, through its effects within a physical environment, or through all of these. To create a visualization, a ‘model for theory’, it is however necessary to be fully aware of the potential activity inherent to any work of code art. Visualizing the work in full force would entail movement through time and space, however minimal, in the machine or subject it runs on, as well as its relation to cultural, social, and political realms.

Let us take a work like Jaromil’s Forkbomb as an example: a highly aesthetic and minimal string of code designed to replicate itself endlessly. When seeing it displayed as text, as it was painted on a wall at Transmediale 2012, we could admire the beauty of the string of signs. Awareness of it being a piece of executable code of a very specific kind, a fork bomb virus, however leads us beyond this relatively simple visible dimension. We could imagine a proliferation of that string of code in the shape of maybe a family tree, much like the poetic experiments Florian Cramer describes (“Words Made Flesh”, 94), but constantly splitting, moving, growing. We could at the same time see the hard disc working away and filling up, its design standardized so as to allow indeterminate applications and thus also viruses, along the lines of the observations in Matthew Fuller’s Media Ecologies (93). We could wait to see how much time it takes for the computer to crash, placing it in both the media archeological domain and the new materialism described by Jussi Parikka (97). We could also see a computer failing at being a productive machine in terms of expectations of what its purpose is, in ways Galloway describes (22).
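
To make this ‘constantly splitting’ concrete, the short Python sketch below simulates the generational doubling of processes that a fork bomb sets in motion. It is an illustration only, not a reproduction of Jaromil’s actual code, and the process-table limit of 32,768 slots is a hypothetical figure chosen for the example; no processes are actually created.

# A minimal simulation, not Jaromil's code: it counts how a single process
# would multiply if every running process forked a copy of itself once per
# generation. Nothing is actually forked.
PROCESS_TABLE_LIMIT = 32768   # hypothetical limit of available process slots

processes = 1                 # the single initial process
generation = 0

while processes < PROCESS_TABLE_LIMIT:
    processes *= 2            # every process splits into two
    generation += 1
    print("generation %d: %d processes" % (generation, processes))

print("process table saturated after %d generations" % generation)

Fifteen generations are enough to exhaust the assumed process table, which is one way of picturing the speed at which the ‘pattern of forces’ of the work unfolds.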

I already mentioned this paper is not a call for a renewed medium specificity per se. What I describe is explicitly also not the splitting of the work into a collection of elements or aspects. In a criticism of influential and limiting art theoretical models, Garry L. Hagberg explains the tendency to downgrade physical forces in a work of art to ‘aspects’ as a justification and reinforcement of institutional approaches to art. Isolating physical traits of a work into separate elements or aspects facilitates an equally isolated, narrow path of interpretation. Yet, he writes, ‘What we call an “aspect” of a thing, in a particular context of perception, is not successfully generalizable’ (502). An interpretation of Forkbomb purely from the angle of visual poetry would effectively block the wide reach of the work from view, as does an approach to it as a virus alone. When ‘the art object is described as having aspects, only a set of which are put forward as candidates’ (Hagberg 502), a work tends to be judged on simple traits: the presence of a screen, be it interactive or not; the production of image cultures; technofetishism; etc. We want to avoid a strategic or simplistic selection of ‘aspects’ coming to ‘constitute the aesthetically relevant part of the work’ (Hagberg 502). What I describe, however, is a pattern of forces, some of which are stronger than others and pull the work in a certain direction, i.e. poetry, sculpture, performance, installation, or activist art.

Conceptualism and the Digital Sphere

The reason I call particular practices conceptualist is that they largely manifest themselves in some form outside of digital media, yet these media do inform their shape. The technology seemingly disappears in them. Maybe more than in other art practices, digital media here ‘remediate the very conditions of being itself’ (Galloway 21). Works range from performance and activist art to sculpture, painting, video, and prints (Holmes 47; Olson 59). Works in this highly diverse group of practices seem to have three things in common: they use the Internet as an information or material resource; they use the Internet as a community space; and they use digital media for publication purposes (Bazzichelli 28; Goriunova 29; Holmes 66; Hand 47). Some works, such as those of the Yes Men/RTMark, are described in books about net art and digital art (Baumgärtel 106; Stallabrass 8; Greene 92; Paul 209). More object-based work, like that associated with the ‘Post-Internet’ label, still largely needs to find its way into the literature. Marisa Olson describes the extensive use of found photography in Post-Internet practices in terms of a revaluation of ‘portraits of the Web’: ‘Taken out of circulation and repurposed, they are ascribed with new value, like the shiny bars locked up in Fort Knox’ (59). Like code art, these two extremes, of activist and object-based art, can only be understood fully from a perspective that takes note of those ‘patterns of forces’ that give them their power.

Sculpture and Performance across Digital Networks

The visualization of digital networks in art requires an explicit visualization of hardware as well as of information flows. In network art installations hardware is essential, and most of it is far beyond sight. Any Internet connection quite easily runs halfway around the world (Terranova 44). The myriad of specific operations required to realize an Internet connection happens almost entirely automatically (Weiß 36). It runs across different national borders in ways largely beyond our control. Internet connections therefore are not neutral, straightforward couplings of machines. Yet Internet connections in works of art are mostly discussed in terms of technology, virtual spaces, and telepresence, and seldom in terms of a visualization of the mixed physical and techno-political essence of the network (Goldberg 3; Popper 363; Shanken 32; Paul 93). I think this is a strange oversight. By making an Internet connection part of a decentralized installation or performance, an artist creates an installation that involves the temporary application of a shared, semi-public infrastructure. By interpreting the ‘patterns of forces’ involved conceptually, spatially, and physically, a larger and less abstract view of this installation emerges.

Finally

I realize I walk a tightrope when I suggest applying Arnheim’s theory of scientific visualization to art. Arnheim has been accused of having a highly formalist approach to art (Fox, NY Times). The chapter ‘Models for Theory’ in Visual Thinking, however, describes a visualization method that leaves more room for subjectivity and interpretation than one would expect. Arnheim extensively describes the subjective development of scientific models (279). He describes them as changing over time and being open-ended. There is never a final outcome, since any visualization in this context concerns phenomenal events that largely escape the eye and will undergo constant re-assessment. I am not proposing to follow Arnheim’s ideas to the letter. I propose to take the concept of a scientific visualization and adapt it to art that involves structures, systems, or processes that are too large, too dispersed, or too small to see with the naked eye.

References:

Arnheim, Rudolf. Visual Thinking. Berkeley: University of California Press. 1969-1997. Print.

Arns, Inke. “Read_me, run_me, execute_me.” Media Art Net 2, Thematische Schwerpunkte. Eds. Frieling, Rudolf, Daniels, Dieter. Vienna: Springer. 2005. 194-208. Print.

Bazzichelli, Tatiana. Networking, The Net as Artwork. Aarhus: Digital Aesthetics Research Center, Aarhus University. 2008. Print.

Blais, Joline, Ippolito, Jon. At the Edge of Art. London: Thames and Hudson. 2006. Print.

Barrett, Estelle, Bolt, Barbara. Carnal Knowledge, Towards a ‘New Materialism’ through the Arts. London: I.B.Tauris, 2013. Print.

Baumgärtel, Tilman. [net.art 2.0], Neue Materialien zur Netzkunst, New Materials Towards Net Art. Nürnberg: Verlag für moderne Kunst Nürnberg. 2001. Print.

Bolter, Jay David, Grusin, Richard. Remediation, Understanding New Media. Cambridge, Mass.: MIT Press. 2002. Print.

Bourriaud, Nicolas. Relational Aesthetics. Dijon: Les Presses du Réel, 1998. Print.

Ceruzzi, Paul E. A History of Modern Computing. Cambridge, Mass.: MIT Press, 2003. Print.

Cramer, Florian. Anti-Media, Ephemera on Speculative Arts. Rotterdam: nai010 Publishers. 2013. Print.

Cramer, Florian. Words Made Flesh, Code, Culture, Imagination. Rotterdam: Piet Zwart online publication, 2005. Web.

Daston, Lorraine. Ed. Things That Talk, Object Lessons from Art and Science. New York: Zone Books, 2004. Print.

Dolphijn, Rick, van der Tuin, Iris. New Materialism: Interviews & Cartographies. Michigan: Open Humanities Press, 2012. PDF. Web. 8 December 2013.

Foster, Hal. The Return of the Real. Cambridge, Mass.: MIT Press, 2001. Print.

Jameson, Fredric. The Cultural Turn, Selected Writings on the Postmodern, 1983-1998. London: Verso. 1998. Print.

Galloway, Alexander. The Interface Effect. Cambridge: Polity Press. 2012. Print.

Galloway, Alexander, Thacker, Eugene. The Exploit, A Theory of Networks. Minneapolis: University of Minnesota Press. 2007. Print.

Goldberg, Ken. “Introduction: The Unique Phenomenon of a Distance.” The Robot in the Garden, Telerobotics and Telepistemology in the Age of the Internet. Ed. Ken Goldberg. Cambridge, Mass.: MIT Press. 2000. Print.

Goriunova, Olga, Shulgin, Alexei. Read_Me 2.3 Reader. Helsinki: NIFCA Publication. 2003. Print.

Goriunova, Olga, Shulgin, Alexei. Read_Me, Software Art and Cultures Edition 2004. Aarhus: Digital Aesthetics Research Centre, University of Aarhus. 2004. Print.

Graham, Beryl, Cook, Sarah. Rethinking Curating, Art after New Media. Cambridge, Mass.: MIT Press, 2010. Print.

Greene, Rachel. Internet Art. London: Thames and Hudson. 2004. Print.

Hagberg, Garry L. “The Institutional Theory of Art.” A Companion to Art Theory. Eds. Paul Smith and Carolyn Wilde. Oxford: Blackwell Publishing, 2002. Print.

Hand, Autumn. Intersecting Art Experiences – Approaching Post-Internet Art as a medium for dialogue in this information age. University of Amsterdam MA New Media paper. 2012.

Holmes, Brian. Escape the Overcode. Eindhoven: Van Abbemuseum Public Research. 2009. Print.

Van Kranenburg, Rob. The Internet of Things – A critique of Ambient Technology and the All-seeing Network of RFID. Amsterdam: Institute of Network Cultures. 2008. Print.

Kraus, Chris. Where Art Belongs. Los Angeles: Semiotext(e). 2011. Print.

Krauss, Rosalind. Perpetual Inventory. Cambridge, Mass.: MIT Press. 2010. Print.

Lichty, Patrick. “Network Culture, Media Art: Cultural Change Dialectics.” ISEA2011. 2011. Web. 7 December 2013.

Mahoney, Michael S. “The Structures of Computation.” The First Computers: History and Architectures. Eds. Raúl Rojas, Ulf Hashagen. Cambridge, Mass: MIT Press, 2002. 17-32. Print.

Malpas, Jef. “Acting at a Distance and Knowing from Afar: Agency and Knowledge on the Internet.” The Robot in the Garden, Telerobotics and Telepistemology in the Age of the Internet. Ed. Ken Goldberg. Cambridge, Mass.: MIT Press. 2000. 108-124. Print.

Manovich, Lev. The Language of New Media. Cambridge, Mass: MIT Press. 2000. Print.

Olson, Marisa. “PostInternet: Art after the Internet.” FOAM International Photo Magazine. Winter 2011/2012. 59-63. Print.

Parikka, Jussi. “New Materialism as Media Theory: Medianatures and Dirty Matter.” Communication and Critical/Cultural Studies Vol. 9, No. 1. March 2012. 95-100. Print and Web.

Paul, Christiane. Digital Art. Revised and Expanded Edition. New York: Thames and Hudson. 2003-2008. Print.

Popper, Frank. From Technological to Virtual Art. Cambridge, Mass: MIT Press. 2007. Print.

Rancière, Jacques. The Future of the Image. London: Verso, 2007. Print.

Shanken, Edward. Art and Electronic Media. London: Phaidon Press. 2009. Print.

Shanken, Edward. “Historicizing Art and Technology: Forging a Method and Firing a Canon.” Media Art Histories. Ed. Oliver Grau. Cambridge, Mass: MIT Press, 2007. 43-70. Print.

Shanken, Edward. “Reprogramming Systems Aesthetics: A Strategic Historiography.” eScholarship, University of California. 2009. Web. 7 December 2013.

Stallabrass, Julian. Internet Art – The Online Clash of Culture and Commerce. London: Tate Publishers. 2003. Print.

Terranova, Tiziana. Network Culture – Politics of the Information Age. London: Pluto Press. 2004. Print.

Virilio, Paul. Art as Far as the Eye Can See. Oxford: Berg. 2005-2007. Print.

Wark, McKenzie. A Hacker Manifesto. Cambridge, Mass.: Harvard University Press. 2004. Print.

Weiß, Matthias. Netzkunst, ihre Systematisierung und Auslegung anhand von Einzelbeispielen. Weimar: Verlag und Datenbank für Geisteswissenschaften. 2009. Print.

Prehistories of the Post-digital: or, some old problems with post-anything – Geoff Cox

According to Florian Cramer, the “post-digital” describes an approach to digital media that no longer seeks technical innovation or improvement, but considers digitization something that already happened and thus might be further reconfigured (Cramer). He explains how the term is characteristic of our time in that shifts of information technology can no longer be understood to occur synchronously – and gives examples across electronic music, book and newspaper publishing, electronic poetry, contemporary visual arts and so on. These examples demonstrate that the ruptures produced are neither absolute nor synchronous; instead they operate as asynchronous processes, occurring at different speeds and over different periods, and are culturally diverse in each affected context. As such, the distinction between “old” and “new” media is no longer useful.

Yet despite the qualifications and examples, there seems to be something strangely nostalgic about the term – bound to older ‘posts’ that have announced the end of this and that. I am further (somewhat nostalgically too perhaps) reminded of Fredric Jameson’s critique of postmodernity, in which he identified the dangers of conceptualising the present historically in an age that seems to have forgotten about history (in The Cultural Logic of Late Capitalism, 1991). His claim was that the present has been colonised by ‘pastness’ displacing ‘real’ history (20), or what we might otherwise describe as neoliberalism’s effective domestication of the transformative potential of historical materialism.

In this short essay I want to explore the connection of this line of thinking to the notion of the post-digital, to speculate on what is being displaced and why this might be the case. It is not so much a critique of the post-digital as an attempt to understand some of the conditions in which such a term arises. Is contemporary cultural production resigned to making empty reference to the past in ‘post-history’, thereby perpetuating both a form of cultural amnesia and an uncritical nostalgia for existing ideas and mere surface images? As Cramer also acknowledges, one of the initial sources of the concept occurs in Kim Cascone’s essay “The Aesthetics of Failure: Post-Digital Tendencies in Contemporary Computer Music” (2000), and it is significant that in his later “The Failures of Aesthetics” (2010) he further reflects on the processes by which aesthetics are effectively repackaged for commodification and indiscriminate use. The past is thereby reduced to the image of a vast database of images without referents that can be endlessly reassigned to open up new markets and establish new value networks.

Layering of covers of key source texts for this article, generated from a script by James Charlton

Posthistory
The Hegelian assertion of the end of history – a notion of history that culminates in the present – is what Francis Fukuyama famously adopted for his thesis The End of History and the Last Man (1992) to insist on the triumph of neoliberalism over Marxist materialist economism. In Fukuyama’s understanding of history, neoliberalism has become the actual lived reality. This is a reference both to Hegel’s Phenomenology of Spirit and to Alexandre Kojève’s Introduction à la lecture de Hegel: Leçons sur “La Phénoménologie de l’Esprit” (1947), and his “postscript on post-history and post-historical animals,” in which he argues that certain aesthetic attitudes have replaced the more traditional ‘historic’ commitment to the truth.

These aesthetic changes correspond somewhat to the way Jameson contrasts conceptions of cultural change within Modernism, expressed as an interest in all things ‘new’, with Postmodernism’s emphasis on ruptures, and what he calls ‘the tell-tale instant’ (like the ‘digital’ perhaps), to the point where culture and aesthetic production have become effectively commodified. He takes video to be emblematic of postmodernism’s claim to be a new cultural form but also reflects centrally on architecture because of its close links with the economy. For critical purposes now, digital technology, even more so than video, seems to encapsulate the kinds of aesthetic mutability as well as economic determinacy he described, in even more concentrated forms. To Jameson, the process of commodification demonstrated the contradictory nature of the claims of postmodernism: for instance, how Lyotard’s notion of the end of grand (totalizing) narratives became understood to be a totalizing form in itself. Furthermore, it seems rather obvious that what might be considered a distinct break from what went before clearly contains residual traces of it (“shreds of older avatars” as he puts it), not least acknowledged in the very use of the prefix that both breaks from and keeps a connection to the term in use.

So rather than a distinct paradigm shift from modernism, he concludes that postmodernism is “only a reflex and a concomitant of yet another systemic modification of capitalism itself” (Jameson xii). Referring to Daniel Bell’s popular phrase ‘postindustrial society’, Jameson instead argues for ‘late capitalism’ (a term allegedly taken from Adorno). This preferred choice of prefix helps to reject the view that new social formations no longer obey the laws of industrial production and so reiterates the importance of class relations. Here he is also drawing upon the work of the Marxist economist Ernest Mandel in Late Capitalism (1978), who argued that this third stage of capital was in fact capitalism in a purer form – with its relentlessly expanding markets and guarantee of the cheapest work-force. If we follow this line of logic, can we argue something similar with the post-digital? What are its residual traces and what is being suppressed? How are new markets and social relations being reconfigured under these conditions?

Determining logic
To begin to think about these questions it should be understood that Jameson adopts Mandel’s ‘periodising hypothesis’ or ‘long wave theory’ of expanding and stagnating economic cycles to explain developmental forces of production. In this unashamedly dialectical model, growth is explained in parallel to the previous period’s stagnation. Three general revolutions in technology are described, in close relation to the capitalist mode of production since the ‘original’ industrial revolution of the later 18th century: Machine production of steam-driven motors since 1848; machine production of electric and combustion motors since the 90s of the 19th century; machine production of electronic and nuclear-powered apparatuses since the 40s of the 20th century (Mandel 119). Correspondingly Jameson characterises these as: market capitalism; monopoly capitalism, or the stage of imperialism; multinational capitalism (35), each expanding capital’s reach and effects. He then relates these economic stages directly to cultural production, as follows: realism – worldview of realist art; modernism – abstraction of high modernist art; and postmodernism – pastiche.

Although this model may seem rather teleological and over-determined on first encounter, he explains that these developments are uneven and layered, without clean breaks as such, as “all isolated or discrete cultural analysis always involves a buried or repressed theory of historical periodization” (Jameson 3). The acknowledgement of what lies historically repressed provides a further link to Hal Foster’s The Anti-Aesthetic, and his defence of Jameson’s adoption of the long wave theory as a “palimpsest of emergent and residual forms” (Foster 207). However, Foster considers it not sensitive enough to different speeds, nor to the idea of ‘deferred action’ (which he takes from Freud’s return of the repressed). This aspect is important to any psychoanalytic conception of time and implies a complex and reciprocal relationship between an event and its later reinvestment with meaning.

This feedback loop (or dialectic) of anticipation and reconstruction is perhaps especially important for understanding the complex symptoms of psycho-social crisis. For instance, to understand the present financial crisis, Brian Holmes traces the cycles of capitalist growth and the depressions that punctuate them by also referring to long wave theory. Rather than Mandel, he refers directly to the Russian economist Nikolai Kondratiev, who identified three long waves of growth underpinned by techno-economic paradigms: “rising from 1789 to a peak around 1814, then declining until 1848; rising again to a peak around 1873, then declining until 1896; and rising once more to a peak around 1920 (followed by a sharp fall, as we know, in 1929).” (Holmes 204) He explains that what Kondratiev discovers is that large numbers of technological inventions are made during the slumps, but only applied during the upsurges (205). This pattern in turn informs Joseph Schumpeter’s influential idea of how innovations revolutionize business practices – what he later calls “creative destruction” and what others have since termed “disruptive innovation” (1995) – to demonstrate how profit can be generated from stagnated markets. Holmes traces the contemporary importance of these concepts to establish how capitalism follows a long wave of industrial development that presents opportunities for social transformation arising from a complex interplay of forces, as innovation is applied: “Investment in technology is suspended during the crisis, while new inventions accumulate. Then, when conditions are right, available capital is sunk into the most promising innovations, and a new long wave can be launched.” (206)

Is something similar taking place with digital technology at this point in time, following the dotcom hype and its collapse? Is the pastiche-driven retrograde style of much cultural production a symptom of this complex interplay of forces, and an indication of a business logic that seeks to capitalize on the present crisis (given the paucity of other options) before launching new innovations on the market? Yet before making such a bold assertion we should also be wary of other determinisms, as the relays of technological innovation alone do not reveal the inner mechanisms of the broken economy; this requires broader analyses that reach beyond technology: “Technology has as much to do with labour repression as it does with wealth and progress. This is our reality today: there is too much production, but it is unaffordable, inaccessible, and useless for those who need it most.” (Holmes 209)

This position seems to concur with the overall problem of endless growth and collapse – the reification of class divisions – where old technologies are repackaged but in ways that serve to repress historical conditions. In a similar vein Jameson would have us conceive of the contemporary phase of capitalism in terms of both catastrophe and progress (Jameson 47). This means to inscribe the possibility of change into the very model of change offered up as unchangeable – or something similarly paradoxical (and dialectical). Other kinds of innovations outside of the capitalist market might be imagined in this way but there also seems to be a problem here in that the very processes have been absorbed back into further stages of social repression.

Postscript
Are these periodisations simply too mechanical, too economically determining? Probably. Indeed, are Marxist theories of capitalist crisis bound to outmoded notions of the development of the forces of production, in order to conceptualise decisive (class) action? That may not be such a bad thing if our memories are fading about what is being displaced and how. Having said this let us perhaps better conclude that economic crises are increasingly subject to the conditions of what Peter Osborne refers to as ‘global contemporaneity’. The suggestion is that neither modern nor postmodern discourses are sufficient to grasp the characteristic features of the historical present. In this view, the contemporary is not simply a historical period per se, but rather a moment in which shared issues that hold a certain currency are negotiated and expanded.

“As a historical concept, the contemporary thus involves a projection of unity onto the differential totality of the times of lives that are in principle, or potentially, present to each other in some way, at some particular time – and in particular, ‘now’, since it is the living present that provides the model of contemporaneity. That is to say, the concept of the contemporary projects a single historical time of the present, as a living present – a common, albeit internally disjunctive, historical time of human lives. ‘The contemporary’, in other words, is shorthand for ‘the historical present’. Such a notion is inherently problematic but increasingly irresistible.” (Osborne)

The term contemporaneity has become useful to deal with the complexities of time and history, if not politics, in ways that neither modernism nor postmodernism seemed able to capture. Beyond simply suggesting that something is new or sufficiently different, the idea of the contemporary poses the vital question of when the present of a particular work begins and ends. Osborne’s point is that the convergence and mutual conditioning of periodisations of art and of the social relations of art have their roots in more general economic and socio-technological processes – and it is this that makes contemporary art possible, in the emphatic sense of an art of contemporaneity.

Thus contemporaneity begins to describe the more complex and layered problem of different kinds of time existing simultaneously across different geo-political contexts. Doesn’t this point to the poverty of simply declaring something as post something else? When it comes to the condition of the post-digital, the analogy to historical process and temporality seems underdeveloped to say the least. The post-digital can be considered to be “badly known,” as Osborne would put it.

References:
Cascone, K. “The Aesthetics of Failure: Post-Digital Tendencies in Contemporary Computer Music.” Computer Music Journal 24.4, Winter 2000. Print.
Cramer, F. “Post-digital Aesthetics,” 2013. Web. http://lemagazine.jeudepaume.org/2013/05/florian-cramer-post-digital-aesthetics/
Foster, H. “Whatever Happened to Postmodernism?” in The Anti-Aesthetic: Essays on Postmodern Culture, New York: The New Press, 2002. Print.
Jameson, F. Postmodernism, or, The Cultural Logic of Late Capitalism, London: Verso, 1991. Print.
Kojève, A. Introduction à la lecture de Hegel: Leçons sur “La Phénoménologie de l’Esprit.” Paris: Gallimard, 1947. Print.
Holmes, B. “Crisis Theory for Complex Societies.” in Bazzichelli, T. & Cox, G. eds., Disrupting Business, New York: Autonomedia, 2013: 199-225.
Mandel, E. Late Capitalism. London: Verso, 1972. Print.
Osborne, P. “Contemporary art is post-conceptual art/L’arte contemporanea è arte post-concettuale”, Public Lecture, Fondazione Antonio Ratti, Villa Sucota, Como, July 2010. Web. http://www.fondazioneratti.org/mat/mostre/Contemporary%20art%20is%20post-conceptual%20art%20/Leggi%20il%20testo%20della%20conferenza%20di%20Peter%20Osborne%20in%20PDF.pdf
Osborne, P. “Contemporaneity and Crisis: Reflections on the Temporalities of Social Change.” Lecture at CUNY Graduate Center, November 2012. Web. http://globalization.gc.cuny.edu/2012/11/videopodcast-peter-osborne-on-contemporaneity-and-crisis/

(With thanks for helpful feedback from Florian Cramer, Robert Jackson and Georgios Papadopoulos.)

 

Digital Money, the end of privacy, and the preconditions of Post-digital resistance

Why not …

“I don’t want to live in a world where everything I do and say is recorded,” said whistleblower Edward Snowden in his recent interview with the Guardian, in order to justify his revelations about the extent of the surveillance and data-mining of communication around the world by the National Security Agency (NSA). The exposures about “Prism”, a surveillance program that allegedly gives the NSA direct access to email and telephone communication both in the United States and abroad, have raised concerns about privacy around the globe, including among some of the US’ closest allies, such as Germany and France. The fears about communication surveillance are fully justified, but there seems to be little concern about the fate of our economic data and how they circulate in electronic networks. Network-based economic transactions are founded on the principle of absolute verifiability and supervision, and in this domain the fear Edward Snowden voices is already becoming a reality. E-commerce and e-banking can exist only because everything is recorded, retrievable, and verified. The same principles applied to conventional banking before, but there is one important difference. The information about electronic transactions is in a format that can be processed by newly available software technologies at low cost and with unprecedented speed, giving insights into individual and collective behavior that can be both economically and politically useful.

What I think is the most obvious conclusion about the NSA surveillance program “Prism” is the complete failure of the rule of law to protect the privacy of citizens, independently of their location or the particular legal safeguards in their jurisdiction. However, the legal status of data about the economic transactions processed by banks and credit card companies does not entail the same degree of protection as private communications, even though bank secrecy laws give a sense of relative safety. Such data are owned both by the organization that processes the transaction and by the transacting parties. The proprietary status of the records of virtual economic transactions makes it likely that the privacy of banking and credit card information will be compromised. The value of such information is already recognized, and it is in many cases used for marketing, for the prediction of price movements, and for the screening of transactions for potential dangers of fraud or default. Economic profiling is on a par with security profiling, yet it is directed not at potentially illegal and socially harmful actions, but at the creation of profit and the exclusion of the economically disadvantaged. The new flows of economic information may raise new barriers to participation in the official banking and monetary system, excluding first the illegal, then the migrant, and potentially the poor and the precarious from accessing the financial system.

Usually the argument used to address privacy concerns such as those raised above is that if somebody has nothing to hide, there is no reason to be afraid. Such an argument is premised on the assumption of a benevolent and, more importantly, of an infallible government. It is not only the case that mistakes can and do happen, even in the most advanced systems of surveillance and processing of economic (and not only economic) information. What is even more troubling is that when such mistakes happen, there is no forum or authority that can be called upon to rectify them. Once our digital profile is rejected by the algorithms of economic profiling, there is nobody, and probably nothing, we could turn to in order to rectify our un-attractiveness as clients, something that can limit our access to credit, insurance, and even to a bank account.

Digital money rising

The revolution in information and communication technologies has facilitated the expansion of electronic payment systems and the organization of new types of payment instruments. Communications have become faster, easier, and safer, but also considerably cheaper. More efficient fund transfer systems have emerged, and as a result direct debits and credit transfers have been expanding at an increasing pace. Card payments have developed by providing added-value services to consumers that rely on the application of novel transaction interfaces, limiting the use of cash and of other paper-based payment technologies and laying the foundations for a cashless society.

With increasing competition from all these new payment media, the use of cash is confined to only a fraction of the total value of monetary transactions, as the recent editions of the Blue and the Red Book indicate (ECB, figures for 2005; CPSS, figures for 2003). Before the introduction of the Euro (in 2000), cash in circulation amounted to only 1.9% of GDP in Luxembourg (the lowest in the union), 2.1% in Finland, 6% in Italy, 6.2% in Germany and 8.9% in Spain (ECB, Blue Book 2003 27; CPSS, 84). In the same year cash in circulation as a share of narrow money (M1) was 0.8% in Luxembourg, 6.5% in Finland, 14.3% in Italy, 21.9% in Germany and 17% in Spain (ECB, Blue Book 2001 figures, 9). These figures imply that most of the economic value is transferred through other payment media, but cash still remains dominant in retail. In the Netherlands 70% of all retail payments in 2001 were made in cash, despite the availability and sophistication of the electronic payment instruments on offer (CPSS, 298); in the UK the same figure was 74% (CPSS, 403). The numbers for 2011 within the EU, less than ten years after the introduction of the Euro, suggest a radical change in the landscape of payment technologies compared with the pre-Euro, pre-SEPA times, and there is a strong tendency towards immaterialization. In 2011 alone, the total number of non-cash payments increased by 4.4% to 24.9 billion. The importance of paper-based transactions continued to decrease, with the ratio of paper-based to non-paper-based transactions standing at around one to five. The number of cards with a payment function in the EU remained stable at approximately 727 million, which amounts to 1.44 payment cards per EU inhabitant. The number of card transactions rose by 8.7% to 37.2 billion, with a total value of €1.9 trillion. Finally, in 2011 alone, the total number of automated teller machines (ATMs) in the EU increased by 0.9% to 0.44 million, while the number of point-of-sale (POS) terminals increased by 3.2% to 8.8 million (ECB, press-release). The average value per card transaction is around €52. Chart 1 below shows the use of the main payment instruments from 2000 to 2011.
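
As a quick back-of-the-envelope check on these figures (a sketch for illustration only, not part of the ECB statistics themselves), the averages quoted above can be recomputed in a few lines of Python from the 2011 numbers:

# Back-of-the-envelope check of the 2011 EU card figures quoted above.
card_transactions = 37.2e9     # number of card transactions in 2011
total_value_eur = 1.9e12       # total value of those transactions, in euro
cards_in_circulation = 727e6   # payment cards in the EU
cards_per_inhabitant = 1.44    # cards per EU inhabitant, as reported

average_value = total_value_eur / card_transactions
print("average value per card transaction: %.1f EUR" % average_value)
# roughly 51 EUR, i.e. 'around EUR 52' once the rounded inputs are taken into account

implied_population = cards_in_circulation / cards_per_inhabitant
print("implied EU population: %.0f million" % (implied_population / 1e6))
# roughly 505 million, consistent with the EU population in 2011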

Chart 1: Use of the main payment instruments in the EU 2000 – 2011 (ECB various publications, estimates of number of transactions in billions)

The phasing out of cash and other paper-based payment instruments raises important theoretical questions both about the nature of money and about the economic relationships in the new network economy. Interfaces, protocols and networks influence the structure of the market, the degrees of participation of different social groups, and also the distribution of social wealth. In addition, the immaterialization of money, brought about by the gradual disappearance of cash, opens new possibilities of bio-political control as well as new forms of suppression and resistance.

Digital Economy and the Bureaucratic control of Participation

The digital revolution has not exhausted all its potential, and the application of information technologies seems to be still expanding, but for some time there has been a discussion about a new phase. The description of the new condition of technological, and consequently of social and economic, development as post-digital refers to the maturation of information and communication technologies and the normalization of their use. We could describe the new condition of sociality as post-digital by referring to a series of new organizing principles. The use of digital technologies becomes pervasive at the same time as it gets normalized and integrated in economic activity. The normalization suggests a series of further consequences for the digital framework of socioeconomic interaction, which include commercialization, the enforcement of common standards that often constrain freedom of expression, surveillance, and the concentration of power over and control of electronic networks in the hands of a limited number of agents. This latter development is especially troubling but also unsurprising, since digital networks have an ingrained tendency towards concentration.

The gradual replacement of the networked computer – the general purpose technology that carried most of the weight of the socio-economic transformation – by other information-processing devices with a more restricted domain of application is a further important indication of the normalization of the ICT revolution. Smart-phones, e-readers, tablets, media players, and game consoles provide more restrictive access to content and to interaction, built around graphic interfaces and allowing limited if any access to their supporting protocols. IT companies, which are simultaneously the producers of the devices and their software and the retailers of the content, have a vested interest in keeping sharing and cooperation among users to a minimum. Controlled consumption, a term used by Henri Lefebvre to describe the bureaucratic control of supply and demand in the affluent society, has assumed a new meaning: it becomes a model of restricted and temporary access to information, conditioned by interfaces and protocols.

In the post-digital age, it is the interface, rather than the personal computer, that emerges as the medium of social participation and consequently as the object of analysis and critique, “for it is the place where flesh meets metal or, in the case of systems theory, the interface is the place where information moves from one entity to another, from one node to another within the system.” (Galloway, 936) If information becomes the main resource and the most valuable commodity, if the economy becomes post-digital, the interface is the most authentic concatenation of technological, social and economic principles. The transformation of individual property rights, and the consequent surveillance for their enforcement, have far-reaching consequences for individual freedom, economic as well as political. The intervention of money in digital exchanges commodifies cultural content by the ascription of prices. Here we allude to the economic function of money as an abstract standard of value (Papadopoulos, 957). In this capacity money supports interfaces of controlled consumption, transforming content into economic value and imposing the rules of market exchange on digital culture (Lefebvre, 9). Controlled consumption regulates the participation of the user by creating artificial constraints in the form of intellectual property rights that are inscribed on digital content.

The Payment Interface and the Constitution of the Subject

The investigation of the contribution of transaction interfaces to the support of the symbolic order should explain how the mystifications and the fetishistic attachments that money encourages are enacted in electronic networks. The informatization of money has increased the control of the master signifier of value over the subject by adding more layers of mediation between the subject and its desire, and new mechanisms of control, intensifying surveillance and normalization. In the current juncture it is important to reflect on how desire and identity are represented, or at least regulated, by the new visual architecture of electronic interfaces. The new graphic interfaces impose a new aesthetic, further normalizing the visual representations of sociality and value. As Anne Friedberg argues, “this remade visual vernacular requires new descriptors for its fractured, multiple, simultaneous, time-shiftable sense of space and time. Philosophies and critical theories that address the subject as a nodal point in the communicational matrix have failed to consider this important paradigm shift in visual address.” (Friedberg, 3) The forced participation in the market, the alienation of desire by the signifier, the inconsistency of the system of prices, the unjust distribution of wealth and resources, and the vacuity of the notion of economic value find their way into the simulated economic systems, into the interfaces of social media and the aesthetics of the over-commercialized Web 2.0.

The ritualistic character of money is manifest in its repetitive and unreflective everyday use. Subjects relate to money on a practical level; theoretical understanding of the meaning and the functions of money comes only later, if at all. The process of acquiring this practical understanding is quite similar to that of language-learning. The subject is socialized in the use of money through guidance and imitation of the shared practices that involve the use of money. The unreflective relation to the monetary system is not limited to the quasi-automatic rule-following of the norms that regulate money, but extends to the acceptance of the dominant discourse about money and its relation to value. The subject may be agnostic about the role of money, the mysteries of economic value or the constitution of the system of prices, but the use of money is a continuous ritual of investiture in the ideological content. Money develops from a mere carrier of its social function, as standard of value and a means of payment, to the dominant organizing force of social interaction. Social relations are mediated and reconfigured through the intermediation of money. The signifying omnipotence of the master signifier is combined with the omnipresence of everyday use, effectively quilting the signifying chain of the system of prices both at the level of meaning and at the level of practice. The distance that the subject may assume from ideological content is neutralized by the reliance on money for social engagement. The intermediation of money in social relations affirms the symbolic order for the subject as well as its mandate inside this order, even despite the subject.

Money is the master signifier and provides the foundational organizing principle in the contemporary configuration of global capitalism. The salience of money is manifest in the dominance of financial speculation over ‘real’ production.[1] Money emerges as the vehicle that realizes the global economy of unequal exchange, and as the instrument that commodifies social relations and regulates bio-politics; it is the signifier par excellence. Money signifies the particular content that hegemonizes the universal ideological construction of capitalism, providing a particular and accessible meaning to economic value, which colors the very universality of the system of prices and accounts for its efficiency. In addition, the use of money involves a ceremony of initiation in the ideological form, an everyday practice that reifies the dominant ideological form in everyday transactions. Money is the signifier/cause of desire, which symbolizes and signifies all commodities, as well as the articulation of desire and lack in the symbolic order of capitalism. Money is “the unconscious sinthome, the cipher of enjoyment, to which the subject is unknowingly subjected” (Žižek, 106) in and by the market.

The interfaces that support the circulation of economic value on the internet are imbued with a complex machinery for hiding things, be it the emptiness of the value form, the self-referentiality of money, or money’s ability to mask its own history of production and the social division of labor that it generates. The success of the interface lies in its ability to regulate information through inscription and execution, which is no doubt both an abstraction and a re-territorialization of the actual circulation of value globally. The structure of electronic payment facilitates the global system of unequal exchange. The relationships between center and periphery, between producers and consumers, between labor and capital, between finance and society are all neutralized by the algorithms of money and networks. The ability of money to reduce all qualities to an absolute quantity is intensified by the capacity of protocols to domesticate social relations. Protocols reproduce the same fetishistic logic as money. “Users know very well that their folders and desktops are not really folders and desktops, but they treat them as if they were – by referring to them as folders and desktops” (Galloway 2006, 329); in the same fashion the semiotic flow of monetary value, be it through PayPal, through MasterCard or through Bitcoin, even though just a simulation, acquires a modicum of reliability through its enforcement and representation as money by the providers of monetary interfaces.

Payment Interfaces and Post-digital Challenges: A Set of Questions

Despite the disillusionment and the concerns about the emergence of a new totalitarian economy of controlled consumption, the new economic condition of digital culture is described by the proponents of the model of controlled consumption as a revolution, with its simulated existence presenting itself as the ultimate reality of value, which tries to render earlier forms of social participation subordinate and even unreal. Starting from this mystification of the effect of digital interfaces on social interaction, this paper aims to raise a series of questions for the analysis of the cultural effects of the mediating function of post-digital interfaces by focusing on their economic, technological and aesthetic conditions of existence. A critique of the new digital architecture of the monetary system and the market should start by investigating the different protocols of digital transactions, focusing on the dynamics of commodification by locating how money intervenes and signals the creation and transfer of economic value. The aim should be a theoretical framework for the analysis of the model of controlled consumption and its dependence on money and its function as a standard of value. The ability of interfaces to impose, both overtly and covertly, new relations of ownership as well as new forms of surveillance suggests their capacities as technologies of biopolitical control of the individual.

The model of controlled consumption is challenged by alternative economies of sharing, gifting, and exchanging based on different standards of value. The critique of money interfaces and controlled consumption should start by studying the collective representations of value in money and the technologies of their dissemination, and by investigating their contribution to the constitution of subjectivity in the digital realm. The shared representations of economic value support consumption and commodification by illustrating the cultural significance of the system of prices. A post-digital critique of money can be developed following a series of questions, the most important of which is how the new visual vernacular of digital monetary interfaces informs and shapes the representations of economic value, and how such representations are challenged and informed by post-digital practices. The answer to this question comes from critical theory and philosophy rather than from economics, building on the literature on the reliance of the economy on representation and signification, and on an extensive literature on the social function of representation that spans from social ontology and psychoanalysis to media theory. The new socio-technological paradigm challenges the cultural foundations of the economy, encouraging new representations of value that fit the format of the new media of circulation and the symbolic universe they inhabit. A post-digital critique of electronic money should try to assemble, organize and interpret the emergent iconographies in an attempt to construct a theoretical framework for the analysis of the new ‘digital’ identity of economic value, investigating both its authoritative expression in the official monetary system and its alternative post-digital configurations.

The analysis of ‘digital value’ should be supported by the study of three interconnected themes of research, combining the methodological framework of interface criticism and the aesthetic analysis of monetary interfaces with a critical perspective on economic discourse. The analysis may start by looking back at the growth of the informational sector of the economy, revisiting the most important episodes and integrating them into the overall trajectory of social development, tracing the relation of value and money to equivalent transformations in language and image. Such a historiography is important to contextualize the role of information about the economy as a separate socio-economic system and to describe its input into social production. In this context the notion of economic value would be central, as well as its transfigurations in the new economic system. Equally important would be the relation between money, language and code, which will support the analysis of the immaterialization of economy and value. The second theme would be the issue of uncertainty and its relation to economic growth. In recent decades the financial markets have thrived on computational models that partly reduce uncertainty to risk, making it manageable. Uncertainty could be considered in two different capacities: it denotes the unpredictability of future outcomes given the availability of information and the resources for processing it in the present, but it also points to a gap between reality and representation, where uncertainty is the part of the undomesticated real that disrupts the relations of our theories to the world. The third part of the analysis will address the dialectical relation between interface criticism and the further development of interfaces, with specific attention to artistic practice and political projects that aim at actual alternatives to the monetary system of valuation and exchange, both within and outside digital networks of participation. Ideally the outcome would be an archeology of digital payment media that is informed by the process of social antagonism. To that effect the project should try to compile a typology of the aesthetic and the operational principles of monetary interfaces, including both their mainstream versions and the critical attempts from the edges of the economic system. The conclusion of the analysis would be a critical history of money and its current reconfigurations in the digital condition.

Interface criticism emerges as a necessary methodology for understanding the conditions of participation in the new social paradigm. Interface criticism addresses the conditioning of human behavior by new technological media, with a specific emphasis on the sensible and persuasive qualities of the interface. Obviously aesthetics and its relation to economics and technology represent an important part of the methodological framework used in interface criticism and a necessary supplement to socio-economic analysis. Here aesthetics is used in three interconnected meanings. Aesthetics denotes sensory perception; an interface has a sensible component in order to create meaning and allow for the interaction between the user and the system that are connected through the interface. A second dimension of the aesthetics of the interface has to do with beauty; interfaces are often designed to be appealing, pleasing, and even seductive in an attempt to address the subject and its desire and to invite interaction. The key here is that the interface is within the aesthetic, not a window or doorway separating the space that spans from here to there. It is a type of aesthetic that implicitly brings together the edge and the center, or the protocol and the node, but one that is now entirely subsumed and contained within the visual architecture of the interface. This tension brings us to the last, and most subversive, possibility in the aesthetic quality of the interface: the notion of aesthetics as artistic production. Art can operate as a force consolidating the power of the interface, but it can also function disruptively, unmasking the limitations and the normativities of the system and acting as the real form of transparency.

Works cited:
Committee on Payment and Settlement Systems. Payment and Settlement Systems in Selected Countries. Basel: Bank of International Settlements, 2003.
Committee on Payment and Settlement Systems, Survey of Electronic Money Developments, Basel: Bank of International Settlements, 2001.
Drucker, Johanna. “The Humanities Approach to Interface Theory.” Culture Machine vol. 12 (2011): 1-20.
European Central Bank. Press-release of the payments statistics. 2011. (Web)

http://www.ecb.europa.eu/press/pr/date/2012/html/pr120910.en.html.

European Central Bank. The Single Euro Payments Area (SEPA): An Integrated Retail Payments Market. Frankfurt: ECB Publications, 2006.
European Central Bank. Payment and Securities Settlement Systems in the European Union. Frankfurt: ECB Publications, 2006.
European Central Bank. Payment and Securities Settlement Systems in the European Union. Frankfurt: ECB Publications, 2004.
European Central Bank. Payment and Securities Settlement Systems in the European Union. Frankfurt: ECB Publications, 2001.
European Central Bank. Report on Electronic Money. Frankfurt: ECB Publications, 1998. (Web) http://www.ecb.int/press/pr980831.htm.
Ferguson, Niall. The Ascent of Money. New York: Penguin, 2008.
Flusser, Vilém. Towards a Philosophy of Photography. London: Reaktion Books, 2000 [1983].
Friedberg, Anne. The Virtual Window; from Alberti to Microsoft. Cambridge: The MIT Press, 2006.
Galloway, Alexander. “The Unworkable Interface”. New Literary History, vol. 39 (2009): 931-955.
Galloway, Alexander. “Language Wants To Be Overlooked: On Software and Ideology.” Journal of Visual Culture, vol. 5 (2006): 315-331.
Galloway, Alexander. Protocol; How Control Exists After Decentralization. Cambridge: MIT Press, 2004.
Galloway Alexander and Eugene Thacker. “Protocol, Control and Networks.” Grey Room, vol. 17 (2003): 6-19.
Genette, Gérard. Paratexts; Thresholds of Interpretation. Cambridge: Cambridge University Press, 1997.
MacAskill, Ewen. “Edward Snowden, NSA files source: ‘If they want to get you, in time they will’.” The Guardian, Monday, 10 June 2013. (Web) http://www.theguardian.com/world/2013/jun/09/nsa-whistleblower-edward-snowden-why.
Papadopoulos, Georgios. Notes towards a Critique of Money. Maastricht: Jan Van Eyck Academy, 2011.
Papadopoulos, Georgios. “Between Rules and Power: Money as an Institution Sanctioned by Political Authority.” Journal of Economic Issues, vol. 43, 4 (2009): 951-969.
Žižek, Slavoj. “Object a in Social Links.” In Clemens, Justin and Russell Grigg, eds. Jacques Lacan and the Other Side of Psychoanalysis: Reflections on Seminar XVII. Durham: Duke University Press, 2006: 107-128.

Trash Versionality for Post-Digital Culture

MEDIA TRASH

Rhetoric

Following a 14-day visit to parts of the UK, the United Nations' special rapporteur on adequate housing, Raquel Rolnik, issued an end-of-mission press statement[1]. Her recommendation was to immediately suspend the UK's social housing welfare reform (known to opponents as the 'Bedroom Tax'). Researched and submitted according to UN protocol (Gentleman), the advice was, however, vehemently rejected by the UK government; the rapporteur's personal and professional credibility were then attacked in the media and elsewhere[2].

Changing dynamics between the public and political spheres are especially visible online, where social media is having an impact in many areas. In one instance a court trial was abandoned after new evidence came to light. This evidence was obtained from a disused Twitter account. Though all charges in the case were dropped, details of the accused were subsequently reported in a national newspaper, in print and on the Web[3]. Legal proceedings have also been derailed because of jurors’ activity on the Internet (Davis). In other circumstances, incautious tweets have resulted in prosecutions for libel (BBC News).

Indispensability

As quickly as attention has switched away from these episodes, they offer us a snapshot of a media landscape in which trash, in the form of dispensable news and information, is merging with public opinion and political rhetoric. The combination of booming mass culture and creativity is now producing a variety of images – including data images – which are not easily locatable within the apparatuses of political, social and economic assemblages. Consequently, these images are open to conjecture. Their position on the continuum between media, platform and network transport renders them equivocal, ambiguous entities, in which identity, trust and authenticity come under review.

In an artwork titled The Formamat (2010), Kripe, Schraffenberger and Terpstra investigate the value individuals place on data they have stored on their mobile devices. The work is a vending machine "…which returns candy in exchange for the deletion of [an individual's] digital data". The authors "…invite people to experience the joy of deletion in a public space and encourage them to think about the value and (in-)dispensability of their files while also researching the subject in a broader sense by storing and analysing their deletion-behavior." (Formamat)

Revision

With hindsight, The Formamat can also be seen to capture uncertainties in our relationship with data; already an unexpected revision can be seen, reformulating the question, not of which, but of whose files are going to be deleted. Taken together with the Internet's long memory – from the Internet Archive's Wayback Machine[4] to playfully macabre, assisted Facebook-identity suicides[5] – this observation underlines the attention now being given to choice and ownership of data. Here, Nissenbaum's notion of contextual integrity is significant: it ties privacy to the appropriate flow of personal information within a given context, rather than insisting on absolute individual control (Nissenbaum).

The perspective might be welcomed by the Sunlight Foundation, known for co-ordinating crowd-sourced analysis of US government records. Transparency initiatives commonly use wikis to manage document revisions made by multiple authors (Sifry). In the case of Wikipedia, software for 'version control' becomes the image of a community and its knowledge, a reflection of that community in code:

“People can and do trust works produced by people they don’t know. The real world is still trying to figure out how Wikipedia works…Open source is produced by people that you can’t track down, but you can trust it in very deep ways. People can trust works by people they don’t know in this low cost communication environment.” (Cunningham qtd in many2many)

Version Control

Other types of version control system (VCS) are useful in co-ordinating software development groups. The Linux kernel project is one example. For this a very specific VCS was created: Git[6] manages all the code for the Linux kernel. It addresses problems of ownership and responsibility with its own purpose-built command, git-blame[7], which reports the author of each line of a file and when it was last changed. Git is a broad framework, designed to address the techno-social problems of making and releasing new versions of the kernel (more than nine million lines of code; the core of the GNU/Linux operating system).

Git was created with security, authentication and traceability as paramount concerns. Contributors to any Git-maintained project are encouraged to advance development by regularly committing smaller changes into a main line of development. Additions and revisions can be written and tested in isolation before being introduced to the main line or 'branch'. Copies of this branch are distributed as changes are pulled back to the computers of other developers while they introduce their own work. Files 'checked out' from the main development tree can be introduced to newly created branches. Typically these are later merged into the project's main branch or abandoned. In some instances, new branches diverge substantially from the main development effort; this is, in essence, project forking. It should be apparent from this summary that talk about governance in Git is necessarily and intrinsically also a discussion about technical operation.
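As a rough sketch of the workflow just described (assuming a hypothetical branch name, fix-scheduler, and file, kernel/sched.c, rather than anything taken from the kernel project's actual contribution guidelines), the basic commands look like this:

git checkout -b fix-scheduler        # create and switch to a new branch for an isolated change
git add kernel/sched.c               # stage the edited file
git commit -m "Describe the change"  # record it as a small, self-contained commit
git checkout master                  # return to the main line of development
git merge fix-scheduler              # fold the branch back into the main line
git blame kernel/sched.c             # report who last changed each line, and when

It is git-blame, in particular, that turns the question of responsibility into a matter of routine technical operation.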

Benevolent dictator workflow (image: Git reference manual)

Issues of governance are also dealt with in creative projects which utilize and discuss version control. Simon Yuill’s Social Versioning System[8] and Matthew Fuller and Usman Haque’s Urban Versioning System 1.0[9] concern the relevance of Free Software principles to consensus and co-operation in design practice:

“…one of the most interesting aspects of open source software is the continuous interleaving of production, implementation, usage and repurposing processes, all of which can and sometimes must be open—not just an “open design” that then gets implemented in a closed manner.” (Fuller and Haque 17)

Soon after Git came GitHub[10]. Using the apparatus (the 'plumbing and porcelain') which comprises the Git software, GitHub establishes a web-based repository for software projects whose source code is publicly released. GitHub has been adopted by a huge and rapidly expanding user community, as a platform for developing and publishing software, as well as a range of other creative works. GitHub provides a large-scale, distributed means to recognize and pinpoint different stages in the production of these works. GitHub has also become home to a mass of never-changing, user-generated software configuration files. In GitHub these can be Git configuration files, stored in a Git repository, on a platform built using Git.
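To illustrate the recursion, here is a sketch of what such a file might contain: a hypothetical user's .gitconfig using standard Git settings, not a file drawn from any particular repository:

[user]
    name = Jane Developer
    email = jane@example.org
[core]
    editor = vim
[alias]
    ; shorthand commands stored as part of the configuration itself
    st = status
    lg = log --oneline --graph

Published in a repository of its own, this small file is versioned by Git, hosted on GitHub, and describes how its author uses Git: configuration all the way down.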

[Image: Matryoshka dolls in Budapest]

SOCIAL OVERLOAD

Ethics and etiquette

Besides the beneficent feelings this sharing of data naturally inspires, proliferating codes also produce tensions. Where interest increases, the scale and relative value of contributions can in turn challenge a project's direction. WikiLeaks' Iraq War Logs are an example where the relevance and reliability of material have been key considerations (Domscheit-Berg). In other guises, this problem of managing contributions has been encountered in projects from Community Memory (an electronic bulletin board), through to contemporary hacker spaces and Open Source tech communities. In all these instances, it seems that mutual agreement – whether or not this has been explicitly defined – is a central issue. Arguments often focus on leadership, personal style and the possibility of 'benevolent dictatorship' (Lovink). Whilst positive feedback, generated by self-enhancing 'recursive geek publics', is not necessarily without drawbacks (Kelty), neither is it clear how this energy can work best – in the case of the Debian Software Project there is the Debian Social Contract[11], and an effective hierarchy to address the flow of contributions. Free Speech has been intrinsic to the development of Free-Libre Open Source Software (Turner). Its influence is evident, for example, in the protocols and conduct written into the Debian Linux Project and embedded in Wikipedia.

Moving away from hacker-styled communities to other kinds of governance, in a study of public sector adoption of open source software, Maha Shaikh describes situations "where information technology and users are not defined outside their relationship but in their relational networks", where the focus moves away from actors, "…towards a more complex, and less defined phenomenon…the interaction". This perspective, emphasizing mutability and becoming, is advantageous to understanding the materializing of public sector adoption of open source software: "performativity leaves open the possibility of events that might refute, or even happen independently of what humans believe or think". We are presented with a different means to envision interaction, "…drawing on ideas of becoming, tracing versus mapping and multiplicity alongside the shared ontology of Actor Network Theory". Shaikh concludes that,

“…the becoming of adoption can be both constrained and precipitated by various forms of materiality (of the assemblage of the open source ecosystem)…open source software – a much touted transparent and open phenomenon – is by its nuanced mutability able to make the process and practices surrounding it less visible.” (Shaikh 123-140)

Beneath the Street, the Network

Revelations following the release of NSA files by Edward Snowden at first underlined governments’ ability to track and target individuals (for example, by following calls and data from mobile phones). Subsequent leaks moved attention somewhat away from wireless networks and ‘eyes in the sky’ to the image of massive submerged and underground data pipes connecting (really big) data centres – routinely serving information to government secret services. Documents detailing these practices provoked strong objections from businesses who insisted on the ‘right to reveal’[12]. This twist on the ‘right to know’ placed mutability and truth centre stage.

Besides this totalizing image of state control and vested corporate interests is the changing interplay between humans, machines and geography. The activities of Anonymous, and of organizations such as WikiLeaks and The Pirate Bay, continue to demonstrate the actually fragmented, disorganized and dis-regulated condition of government and businesses, which are not always pulling in the same direction. Meanwhile, activist groups find identities outside of pre-existing ones (of public friend or foe) as their operations compose new and revised networks; in street action, engagement with news media, and in online provocations.

[Image: Anonymous / Facebook]

In the encounter between Anonymous and their targets, a firmament of politics and identity shows the interconnectedness of free speech and anonymity. Alternatively, the evidence in revelations about state surveillance precisely demonstrates that anonymity is not an essential or intrinsic aspect of digital networks, but rather is a set of standards which in many places are already compromised. Cloud computing, Software as a Service and skeuomorphic interfaces readily belie the real sense in which data is exposed. Alongside the changed connotations of ‘access’, Ted Nelson’s invocation, ‘you must understand computers now’ (Nelson) is renewed by under-reporting in the media (Jarvis).

Abundance and Modification

Anonymous is one contemporary expression of the will to understand computers, as well as differing network forms: in a moment of self-reflexive wonder, members turning up for street protests in February 2008 were themselves surprised – in numerous ways – by the people converging on that day, and by the network image this manifestation bodily performed. In one documentary, protesters describe their feelings of being a part of Anonymous and how, as it entered the world, it came to exist in a significantly new way, for them and others. Information activist Barrett Brown explains:

“Anonymous is a series of relationships. Hundreds and hundreds of people who are very active in it – who have varying skillsets, and who have varying issues they want to advance – these people are collaborating in different ways each day.” (BBC)

New platforms allow recursive representations of existing creative forms, whilst re-versioned political slogans and insider nods – to Surrealist and Situationist imagery – issue from anonymous channels and deviant locations[13]. These creations, designed for modification, are then absorbed into the melee of internet memes and personalities. One notable example of this recursion and modification concerns a prominent UK politician, Ed Balls. In April 2011 he inadvertently tweeted a message consisting of nothing but his own name. This spawned a long chain of varyingly humorous and teasing responses, Facebook likes, as well as many retweets. The action entered meatspace at the time of the original tweet's two-year anniversary, when Ed Balls acknowledged the joke by retweeting the following image:

[Image: Ed Balls retweet]

Reduction and Overloading

In the context of continued economic scarcity, the impact of flourishing social media (and its reflective potential) receives additional validation through public acquisition of artworks such as The Cybraphon[14], through Wikimedia outreach projects[15] and in metric analysis of the public mood via Twitter and the blogosphere[16].

Networks of users can create "fast, fluid and innovative projects that outperform those of the largest and best-financed enterprises" (Tapscott and Williams qtd in Heath Cull 78); as they went about establishing WikiLeaks, the value of such observations was not lost on the core team of Julian Assange and Daniel Domscheit-Berg. Starting with only minimal funds and relying on their own technical expertise, the two activists would typically exaggerate the scale of WikiLeaks (for example by using fictional identities of people working in purely notional departments). During this time Domscheit-Berg used the pseudonym Daniel Schmitt. Assange used his own name, but was occasionally still identified by his old hacker handle (MENDAX) (Domscheit-Berg). Alongside this overloading, re-purposing and extension of identity within WikiLeaks, there has been the task of gathering, sifting and reproducing large quantities of data. This was achieved through various means, partnerships and collaborations. However, Domscheit-Berg's subsequent criticism is that WikiLeaks has essentially always been a network of one (Domscheit-Berg).

Self-loop (image: Stefan Birker)

Anonymous forms lend themselves to analysis rather less than WikiLeaks (their direction primarily being to circumvent and override, to circumvent and override). But what these forms (including memes, Reddit and 4chan forums) do present us with are collaboratively made creative network entities. In the changing dynamic by which these appear, new conventions are being worked out. In these, overloading standards of taste and acceptability are stimulating alternatives to the ordinary narratives of conflict and resolution.

TRASH-VERSIONALITY

Disruptive Convergence

In these forms of representation which we see entering mainstream narratives, a kind of collective and competitive vandalism is esteemed. The multiplicity of voices – for which the expanding net has become more lightning conductor than conduit – increasingly provides its own self-fulfilling cycle of news, serving 24-hour comment and analysis for comment and analysis. A re-writing is under way, in which messages combining text and visual images produce networks within networks. They become the mutable containers of doubt and disinformation, of intent and ignorance:

“…since images are two-dimensional the representations in them form a circle, that is, one draws its meaning from the other, which in turn lends its meaning to the next. Such a relationship of exchangeable meanings is magical.” (Flusser 9)

Diseaseful Media

From miniature artefacts to large network entities, whether as discrete objects or grand-scale public conceptions, the representations and mental images can seem diffuse, untraceable, and in contradictory states. Nodes, which constitute networks, are themselves potentially networks and networks are collapsible forms, in which processes, “…are recurrent [processes]…which typically involve entirely different mechanisms…larger scale assemblages of which some of the members of the original population become component parts.” (de Landa 19). Little wonder if the scale and definition of networks should induce feelings of disorientation, even anxiety.

Overload also gives rise to easeful interactions which go against any supposed disconnection between the Internet and Real Life. In TPB:AFK, Pirate Bay founder Peter Sunde explains assuredly to a Swedish courtroom, "We prefer not to use IRL. We believe the internet is for real"[17]. Whilst the motivation and affiliations of the Pirate Bay trio have remained opaque to state and private prosecutors, in this film the question which achieves over-arching significance is, "Who do you trust?". This may be a point around which easeful interactions revolve; as trojan links to the Internet meme Goatse.cx[18] showed, the merriment of a practical joke can be a hair's breadth away from the abuse of trust.


Fetish

As social media has refreshed the status of the Internet troll, the nuanced subterfuge of social engineering, of spreading Fear, Uncertainty and Doubt, appears diminished. Flames, defamation and libel have become the norm. The specialized rules of email etiquette have evaporated. In the merging of media, products and social interaction, trolling itself has gone viral; self-validating intercourse has been upstaged by social-media-sanctioning broadcast-media discourse. In legal proceedings (as with subterfuge against enemy combatants, and leaders of states), a game of cat and mouse is being played; in litigation, plaintiffs become complicit in a mystifying data hide-and-seek, where bytes are transferred, as if seamlessly across frontiers, until reaching new data housing facilities (fortresses of this age).

Other means of outwitting covetous censorial desires have been conceived. Perhaps none has scored higher than the self-mutilation of computers enacted by The Guardian newspaper in its office basement[19]. Primarily, this was a pragmatic decision, to pre-empt any government moves to obtain data copies held by The Guardian. The strategy was brought sharply into focus when the journalist Glenn Greenwald's partner was subsequently detained by UK border police (Peachey).

ShockBlast: Stockholm nuclear bunker data centre

Objectification

In the drift away from trust towards greater protection, the question "Who is to blame?" is never far from view. The fixation on data and hardware objects, and the advance of our litigious cultures, may contribute to conditions in which bullying can be blended into human interactions. As much as hardware and new platforms may enable discourse, they also become the sites for abuse, where differences between trolling and harassment easily merge. In the UK, during 2013, a number of women in the public eye (among them MPs, campaigners and journalists) became the target of insults and threats intended to silence their contribution to public debates. Often these communications were sent through Twitter. In what is possibly the highest-profile case to date, the abuse followed a successful campaign to ensure that a female historical figure would continue to appear on Bank of England banknotes[20]. Online, the equivocal status of networks is further evident where 'trash-talk' in gaming turns to harassment and 'the gamification of misogyny' (Lewis). In the competition for kudos, questions about the liberating potential of the Internet abound.

Disappearance

Identity fetishism promises certainty in a moment of profound uncertainty and harks back to a time in which physical media trash appeared more present than today; it is a moment where, in many ways, absence may be more desirable than presence. The contradiction in interfaces is that in the moment they renounce claims on materiality, they retain the ability to expose us to actual and perceived threat. Trolls revel in their ability to circumvent blocks, adopt new identities or label messages in ways that reach targets indirectly[21]. The collision between anonymity and free speech makes clear why, for some, disappearance is preferable to the advice 'do not feed the trolls'. In the UK, activist Caroline Criado-Perez was driven to delete her Twitter account after she received a series of rape threats online (Topping); aged 11, Jessica Leonhardt was targeted in an online bullying campaign which included distributing her personal details through social media (Jessi Slaughter); in 2012, as a consequence of bullying which began online and followed her during several years and different schools, the Canadian teenager Amanda Todd committed suicide (Amanda Todd's Death).

AFTERGLOW

Versioning as Method

In a broad sense, and in different domains, we are now seeing truth and responsibility increasingly under review: in the widening push to deliver up-to-the-minute news, the sources and verifiability of content are an ever more present consideration (think of the Yes Men's Bhopal anniversary action[22]). Concern for information ethics, in public and private domains, means questions of accountability and trust (the veracity of versions) gain significant attention: the extended reach of media is changing the act of reflection, propagating images, collectivizing values. In the networked era, reduction is going global.

Away from the context of news and entertainment media, images also circulate in obscure ways. In the apparatuses of political, social and economic assemblages, images now appear as agents. They are the subjects of viral exchange on social networks and potentially convey malicious executable computer code – this is no longer speculation (Tung).

Self-representation is inextricably also re-representation; agency and the individual are reflected in the network. Networks facilitate the dissemination of copies, and copies are also the by-product of networks. These networks are assemblages, collections of objects producing data images on a greater and greater scale. Other assemblages (collections of hardware) are also networks: digital cameras readily produce images in multiple versions. The 'pipeline' model of digital image processing and design studio workflow invokes the trope of 'relation'. In economic and other organizational circuits, dependencies exist which bind assemblages and apparatuses inseparably: images today constitute networks where value, exchange, and mutability are implicit; they are pixel-assemblages to be seen as networks in and of themselves.

Relating Michel Callon's theoretical writing to the 'performativity of networks', Iain Hardie and Donald MacKenzie write,

“For Callon, an actor ‘is made up of human bodies but also of prostheses, tools, equipment, technical devices, algorithms etc’. – in other words is made up of an agencement. The notion…involves a deliberate word-play. Agencer is to arrange or to fit together: in one sense, un agencement is thus an assemblage…The other side of the word-play…is agence, agency” (Hardie and Mackenzie 58)

We can envisage networks as aggregated versions, sites of recursion and reflexivity, in which circular relations establish the inter-relation of medium and method.

Post-irony for a Post-digital age

The activities of comment trolls and websites such as ask.fm demonstrate other ways in which the Internet has become a machine for reflectivity: Interactions dominated by glib and clever epithets invariably promote self-image over self-knowledge (though with notable exceptions[23]). Rhetoric turns the joke upon those who have missed the joke. These episodes thrive on lack of understanding and the connoisseur’s appreciation of the unspoken: The joke is ruined if you spell it out (Harman).

However, the targets of abuse are standing up against such misrepresentation. Their narratives are the alternative versions filling gaps in communication. In this way identities are re-presented; self-images are recomposed. Projects such as unslut[24] serve the same end, allowing individuals to positively re-enact negative stories[25]. Intimate reflections like these are in contrast to celebrity relationships lived through media and social media, where the open-ended repetition of text and image insinuates another kind of performance[26]. Indeed, self-representation can also have a normalizing effect:

“…my proliferation of selfies is a small way of fighting back. The more I look at myself (in a mirror or in pictures), the easier it becomes to accept that this is really me, and this is my skin…I feel that the more pictures I post of me, sure I’m putting myself out there to be judged, but I am also adding to images out there (in the minds of friends and strangers alike) of who I am.” (stuficionado)


As images and self-images re-instate a sense of place, absent themselves from rhetoric and generate their own associations, they obtain a peculiar sense of agency. They are re-entering the world as prosaic reminders of the real – hermetic emblems of an already present, post-ironic post-digital age:

[Three meme images]

Notes

[1] http://www.ohchr.org/en/NewsEvents/Pages/DisplayNews.aspx?NewsID=13706&LangID=E

[2] http://www.dailymail.co.uk/news/article-2418194/Outrage-loopy-UN-inspector-lectures-Britain-Shes-violent-slum-ridden-Brazil-attacks-housing-human-rights.html

[3] http://www.dailymail.co.uk/news/article-2418993/Trial-collapses-men-accused-rape-police-discover-new-evidence-old-Twitter-account-14.html

[4] http://archive.org/web/web.php

[5] http://www.seppukoo.com

[6] http://git-scm.com

[7] http://git-scm.com/docs/git-blame

[8] http://www.spring-alpha.org/svs/

[9] http://www.situatedtechnologies.net/?q=node/85

[10] https://github.com/explore

[11] http://www.debian.org/social_contract

[12] http://blogs.technet.com/b/microsoft_on_the_issues/archive/2013/08/30/standing-together-for-greater-transparency.aspx

[13] http://opgraffiti.deviantart.com/gallery/

[14] http://www.nms.ac.uk/highlights/objects_in_focus/cybraphon.aspx

[15] http://en.wikipedia.org/wiki/Wikipedia:GLAM/NLS

[16] http://www.bbc.co.uk/news/technology-24001692

[17] http://watch.tpbafk.tv/

[18] http://gawker.com/5899787/finding-goatse-the-mystery-man-behind-the-most-disturbing-internet-meme-in-history

[19] http://www.theguardian.com/world/2013/aug/20/nsa-snowden-files-drives-destroyed-london

[20] http://thewomensroom.org.uk/banknotes

[21] “…online abusers continued to find “new and imaginative ways” to contact her, through her blog”. See http://www.theguardian.com/uk-news/2013/sep/03/caroline-criado-perez-rape-threats-continue

[22] http://www.museumofhoaxes.com/hoax/archive/permalink/the_yes_mens_bhopal_hoax

[23] http://www.theguardian.com/technology/2013/sep/09/jake-davis-topiary-lulzsec-answers

[24] http://www.unslutproject.com/

[25] “I felt like the chat box could see me through the computer screen.” See:

http://www.theguardian.com/society/2013/sep/21/unslut-project-against-sexual-bullying

[26] See remarks about Kayne West and Kim Kardashian’s relationship: http://www.theguardian.com/lifeandstyle/2013/oct/27/instagram-selfie-reveal-kim-kardashian-tweet

Works Cited

“Amanda Todd’s Death.” Know your Meme. Web. 5 Dec. 2013.

BBC News. “High Court: Sally Bercow’s Lord McAlpine Tweet Was Libel.” bbc.co.uk/news. British Broadcasting Corporation. 24 May 2013. Web. 29 Nov. 2013.

Brown, Barrett in "How Hackers Changed the World – We Are Legion." Storyville. BBC. 12 Mar. 2013. Television.

Cunningham, Ward quoted in many2many. “Ward Cunningham on the Crucible of Creativity.” corante.com. Corante. 17 Oct 2005. Web. 5 Dec. 2013.

Davis, David. “We Can’t Let 12 Good Men and True Be Undone by the Internet” theguardian.com. Guardian News and Media Limited. 19 Jun. 2011. Web. 5 Dec. 2013.

de Landa, Manuel. New Philosophy of Society: Assemblage Theory and Social Complexity. London: Continuum, 2006. Print.

Domscheit-Berg, Daniel with Klopp, Tina. Trans. Chase, Jefferson. Inside WikiLeaks. London: Jonathan Cape, 2012. Print.

Formamat.com. “About – Formamat.” Web. 29 Sep. 2013.

Flusser, Vilem. Into the Universe of Technical Images. Minneapolis: University of Minnesota Press, 2011. Print.

Gentleman, Amelia. "UN Housing Expert's Call to Axe Bedroom Tax 'a Disgrace' – Senior Tory." theguardian.com. Guardian News and Media Limited. 11 Sep. 2013. Web. 29 Sep. 2013.

Hardie, Iain and MacKenzie, Donald. "Assembling an Economic Actor: The Agencement of a Hedge Fund." The Sociological Review Volume 55:1 (2007): 57-80. Print.

Harman, Graham. Weird Realism: Lovecraft and Philosophy. Alresford: Zero Books, 2012. Print.

Heath Cull, Daniel. The Ethics of Emerging Media. Information, Social Norms, and New Media Technology. London: Bloomsbury Publishing, 2011. Print

Jarvis, Jeff. “I fear the chilling effect of NSA surveillance on the open internet.” theguardian.com. Guardian News and Media Limited. 17 Jun 2013. Web. 29 Sep 2013.

“Jessi Slaughter.” Know your Meme. Web. 5 Dec. 2013.

Kelty, Christopher. Two Bits. The Cultural Significance of Free Software. Duke University Press: Durham NC, 2008. Print, Web.

Lewis, Helen. “Yes, It’s Misogynistic and Violent, but I Still Admire Grand Theft Auto.” theguardian.com. Guardian News and Media Limited. 21 Sept. 2013. Web. 29 Sep. 2013.

Lovink, Geert. Zero Comments. Blogging and Critical Internet Culture. Routledge: Abingdon and New York, 2008. Print.

Nelson, Ted. Computer Lib/Dream Machines. Self-published, 1974. Print.

Nissenbaum, Helen. Privacy in context: technology, policy, and the integrity of social life. Stanford: Stanford Law Books, 2011. Print.

Peachey, Paul. "NSA Leaked Documents Row: Glenn Greenwald's Partner David Miranda Held Notes on How to Crack Computers When Detained at Heathrow." independent.co.uk. Independent Print Ltd. 30 Aug. 2013. Web. 5 Dec. 2013.

Shaikh, Maha. “Mutability and becoming : materializing of public sector adoption of open source software.” Shaping the Future of ICT Research. Methods and Approaches. Volume 389 (2013): 123-140. Print.

Sifry, Micah L. WikiLeaks and the age of Transparency. New Haven: Yale University Press, 2011. Print.

Stuficionado. “Just Shoot Me”. Web. 5 Dec. 2013.

Topping, Alexandra. “Caroline Criado-Perez Deletes Twitter Account after New Rape Threats”. theguardian.com. Guardian News and Media Limited. 6 Sep 2013. Web. 5 Dec. 2013.

Tung, Liam. “BlackBerry Enterprise Server Malicious TIFF Attack Discovered.” ZDNet. CBS Interactive. 19 Feb. 2013. Web. 5 Dec. 2013.

Turner, Fred. From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. University of Chicago Press: Chicago, 2005. Print.

An Ethology of Urban Fabric(s)

‘… no one knows ahead of time the affects one is capable of; it is a long affair of experimentation…’ (Deleuze 1988/1970, p. 125)

With this piece, we wish to open up a patchwork of relational thinking of the ethology of urban fabric(s) from a post-digital perspective. The term 'urban fabric' normally denotes the "physical aspect of urbanism, emphasizing building types, thoroughfares, open space, frontages, and streetscapes but excluding (the) environmental, functional, economic and sociocultural (…)" (Wikipedia), from an ideal top-down perspective (see e.g. Bricoleur Urbanism). Here, however, we would like to explore a non-metaphorical understanding of urban fabric(s), shifting the attention from a bird's eye perspective to the actual, textural manifestations of a variety of urban fabric(s) to be studied in their real, processual, ecological and ethological complexity within urban life. We effectuate this move by bringing into resonance a range of intersecting fields that all deal with urban fabric(s) in complementary ways (interaction design and urban design activism, fashion, cultural theory, philosophy, urban computing).

We wish to underline that this is a conceptually explorative piece written in the first year of the 7-year grant IMMEDIATIONS: Art, Media, Event. Rather than presenting defining arguments, we wish to sketch out a field of questioning that can inform future interventionist or practice-based experimentation – or research-creation – within an academic context. At this moment, we are using the notion of urban fabric(s) to produce conceptual and relational trajectories we want to investigate further during the project. To us, this means following and unfolding the conceptual richness in a number of directions, drawing on the ambiguity of the notion of fabric(s), from textures to textiles, but always in relation to the urban, and within the frame of the post-digital as it is being developed in this publication. 

In this article, rather than attempting to pin down the notion of urban fabric(s) to any absolute definition, we want to open up lines of thought and experimentation around the concept by sketching out possible ethological dimensions to be considered. We take the term ethology from Deleuze’s book on Spinoza, where he states the following:

“Ethology is first of all the study of the relations of speed and slowness, of the capacities for affecting and being affected that characterize each thing. For each thing these relations and capacities have an amplitude, thresholds (maximum and minimum), and variations or transformations that are peculiar to them. And they select, in the world or in Nature, that which corresponds to the thing; that is, they select what affects or is affected by the thing, that moves it or is moved by it. For example, given an animal, what is this animal unaffected by in the infinite world? What does it react to positively or negatively? What are its nutriments and its poisons? What does it “take” in its world? Every point has its counterpoints: the plant and the rain, the spider and the fly. So an animal, a thing, is never separable from its relations with the world. The interior is only a selected exterior, and the exterior, a projected interior. The speed or slowness of metabolisms, perceptions, actions, and reactions link together to constitute a particular individual in the world” (Deleuze [1970] (1988), 125).

Looking into the ethological workings of urban fabrics directs our attention towards a range of possible areas of investigation and propositions, among other things:

- What is the velocity of urban fabric(s)?
- What characterizes urban fabric in terms of amplitude, thresholds, variations, transformations; what affects or is affected by urban fabric(s)?
- What relations and capacities emerge through the processes concerned with the creation and distribution of urban fabric(s)?
- What interfaces between (what kinds of) exterior and interior are produced by urban fabric(s) (animal-organic, skin-textile/skin-city, language-fabric, habit-character)? How does this relate to the intensity in the formation/transformation of habits, perceptions, actions, movements in urban environments?

In the following we will sketch out some lines of thought relating in particular to the first two of these four questions, moving towards propositions for possible forms of experimentation and expositions with urban fabric(s).

VELOCITY of urban fabric(s)
When asking what the velocity of urban fabric(s) might be, two main themes occur: the speed vs. slowness of fashion in the past and the present, and the temporary nature of the built environment in a post-digital perspective.

In fashion, novelty and modernity have been aligned with the shifts and modi of fashion (la mode) since 1850, and considering that the development of capitalism had its take-off in the industrial production of linen by the meter (the Jacquard loom/weave), novelty in fashion has been a very visible force for the understanding of 'time as progress'. The aesthetic novelty in the form of a folding, a lace trimming, a color shade or a cut, in its always renewed relational connectivity with bodies and urban surroundings, has been an essential part of the aesthetic attraction of fashion. In Charles Baudelaire's essay on modernity from 1859, this passion for the transitory, fugitive element is an important indicator of the painter of modern life's ability to be on par with his time:

‘In texture and weave [...] [modern manufacture; our note] are quite different from the fabrics of ancient Venice or those worn at the court of Catherine. Furthermore the cut of skirt and bodice is by no means similar; the pleats are arranged according to a new system. Finally the gesture and the bearing of the woman of today give to her dress a life and a special character which are not those of the woman of the past. In short, for any ‘modernity’ to be worthy of one day taking its place as ‘antiquity’, it is necessary for the mysterious beauty which human life accidentally puts into it to be distilled from it’ (Baudelaire 1859, 13).

To distill beauty from the fugitive moment became the task of Baudelaire himself, as Walter Benjamin has noted in his essays on the relationship between the city of Paris and the modern poet, assembled in The Writer of Modern Life: Essays on Charles Baudelaire (2006). Baudelaire was aware that poetry was just as transitory as fashion, that clothes as well as books were goods at the marketplace, and that he, like the designer of fashion, had to know life as it is lived by the crowd in the streets in order to bring these impressions of the transitory moment into modern art. The city of the new metropole of Paris became a second skin for the reader of modern life. Baudelaire became a forerunner of the material analysis of the culture of modernity, later carried out by Benjamin and Michel Foucault. They both wanted to read modernity by its traces on the skin by digging into the structures and technologies of everyday contemporary life. In his essay "What is Enlightenment?", Michel Foucault comments on Baudelaire's text at length, underlining that his method of unravelling the meaning of modernity is not just being sensitive to 'the fleeting present; it is the will to "heroize" the present', by performing as the so-called 'dandy' who must 'invent himself' in order to produce art that could still affect the masses in the urban environment of the metropole (Foucault 1984). This brings to Foucault's method the necessity to step back from universal values in art and transcendental ideas in philosophy to propose instead his well-known archaeological method and its genealogical research design described as 'experimental': 'it will separate out, from the contingency that has made us what we are, the possibility of no longer being, doing, or thinking what we are, do, or think' (Foucault 1984).

What connects the methods of Baudelaire, Benjamin and Foucault is a search for new beginnings on par and in touch with the textures of the social formation of their own time. This entails a reconsideration of the formative technologies and organizational patterns of society and culture – in order to analytically grasp the material formations of lives lived and performed within systems of fashion, architecture, archival systems etc. But whereas Baudelaire wanted to extract the poetics of modernity from his experiences with (amongst other things) the novelty of fashion, Benjamin wanted to keep open an awareness of the social body involved in the aesthetic experiences of modernity, and Foucault wanted to question the disciplinary, driving forces of power. Foucault’s main question in “What is Enlightenment?” is phrased: ‘How can the growth of capabilities be disconnected from the intensification of power relations?’ (Foucault 1984).

This question must, in a contemporary context, be posed differently, since disconnection in revolutionary terms has declined in favour of an awareness of the relational and affective connections and forces involved in networks that are rapidly becoming the woven fabric of almost all connectivity in society. The disciplinary society of surveillance described by Foucault indeed plays an important part in this fabric, but the relationship between individual and dividual, between speed and slowness, has changed with the overlapping networks. This entails that we can no longer inhabit the position of dandyism, nor extract allegorical connectivities between past and present, nor envisage what the dispositif of our time would look like. The challenge as well as the potential of our time is to acknowledge that each event holds a virtual openness involving past or futurity in the actual change taking place. So, just as each modulation of digital sound or image data changes the whole, each modulation, vibration or stretching of the forces of the velocity of urban fabric(s) affects the whole.

In line with the above arguments, the contemporary recycling of past fashion clothing can be seen as a digging into (imaginary) spaces belonging to older or disappeared spaces and places in the city, forming our experiences of the urban fabric(s) anew. The culture of recycling, reusing and compiling fabrics belonging to different garments and body sizes has developed into a new ecological model of business in which the relational capacities of body and fabric are re-thought and re-worn. This 'slowing down of fashion' in order to focus on affect and appreciate the relational production of spaces and places in connectivity with the ethology of the fabric-becoming-body is further touched upon in the section Relational Capacities.

Focusing on the temporary nature of the built environment, we want to move from a top-down understanding of urban fabric(s) to the actual configurations and compositions of texture and their relation to experience in and of the urban sphere. Here, we are interested in the use of different forms of duration relating to the materiality of the cityscape, as well as in the changes in velocity and perception with the advent of digital activations of the city in the light of urban computing (see e.g. Greenfield & Shepard 2007) through mobile phones, media facades, urban screens and the like. The velocity of the built environment can be sped up or slowed down – disrupted – through the use of digital layers, changing our perception of the built city, as seen in the artistic practices of Rafael Lozano-Hemmer (http://www.lozano-hemmer.com), United Visual Artists (http://www.uva.co.uk/work) and the Graffiti Research Lab (http://www.graffitiresearchlab.com/blog/).

In addition, a range of practices have arisen around the creation of temporary urban spaces, among others the Danish-based Institut for (X), who work actively with emerging spaces in the city as part of their artistic and investigative practice, as seen in the project 'Platform 4' (http://www.detours.biz/projects/platform-4/). For a large part, Institut for (X) use wood to build structures that can easily be dismantled again. Looking at interventionist strategies such as Urban/Guerilla Gardening and Urban/Guerilla Knitting (http://knitthecity.com), it might be argued, from an ethological point of view, that we are witnessing the complexity of the 'speeding up' of the built infrastructure somehow merging with a 'slowing down' through the agency of more or less analog – post-digital? – materials, textures, fabric(s) and data.

The two trajectories presented in this section – concerning the speed vs. slowness of fashion and the temporary nature of the built environment in a post-digital perspective – in particular direct our attention towards the entanglement of human ideas, technologies, market mechanisms, power relations and individual and collective actions continuously shaping – and taking shape from – the urban fabric(s). The next section will further elaborate on this relation drawing in particular on the philosophy of Jacques Rancière and the work of Hito Steyerl to more closely unfold the characterizations of urban fabric(s).

CHARACTERIZATIONS of urban fabric(s)
When attempting to analyze what affects or is affected by urban fabric(s), by looking into what characterizes urban fabric(s) in terms of amplitude, thresholds, variations and transformations, we want to sketch out two (admittedly rather general) points of entry: first, how does the urban fabric affect our ability to act in the city; and secondly, how does it act upon us, and how is this manifested in the fabric?

Considering the first point of entry, we want an ethological understanding of urban fabric(s) to take into account the way in which it distributes the sensible, the aesthetics of the urban fabric(s) (Rancière 2004). The urban fabric(s) conditions our (common) everyday perception of the city and the actions we undertake (or not), on what Brian Massumi terms a microperceptual level, with what might be termed macropolitical implications (Massumi 2009, p. 5). Massumi links the notion of microperception to that of micropolitics, resonating with Rancière's notions of the aesthetics of politics and the politics of aesthetics, where the latter lies '(…) in the practices and modes of visibility of art that re-configure the fabrics of sensory experience' (Rancière 2010, p. 140). To Rancière, these artistic practices of re-configuration can establish a '(…) dissensual re-configuration of the distribution of the common through political processes of subjectivation.' (Rancière 2010, p. 140).

Thomas Markussen, who also builds on the work of Rancière, has explored how this might be investigated through designerly practices of urban activism. According to Markussen, urban design activism 'uses the sensuous material of the city while exploring the particular elements of urban experience' (Markussen 2012, p. 41). He mentions a range of examples, e.g. the Institute for Applied Autonomy's iSee project, allowing people to choose the least surveilled routes through urban spaces (http://www.appliedautonomy.com/isee.html), and Santiago Cirugeda's Recetas Urbanas (Urban Prescriptions), exploring the relation between the regulations of the city municipality and the need for extra room through the construction of scaffolds which are then turned into places of dwelling (http://www.recetasurbanas.net/index1.php?idioma=ENG&REF=1&ID=0003). These projects can be said to experiment with the way in which urban fabric(s) can be renegotiated through artistic and designerly experimentation, highlighting existing distributions of the sensible on a microperceptual and political level, and offering ways for people to engage with the urban fabric(s) in order to act upon them.

The entry into the second point – how urban fabric acts upon us and how this is manifested in the fabric – can be opened by Hito Steyerl's video installation for Documenta XII, 2007, Lovely Andrea: http://www.ubu.com/film/steyerl_andrea.html. In Steyerl's search for an image of Japanese bondage that was taken of her in 1987, she documents on the one hand that power relations within a contemporary visual dominance do create an endless appetite for images of 'truth' and 'freedom', and on the other hand that images can create facts and produce realities that unravel the interconnectedness of bondage and webs. The examples she weaves together are bondage girls, Spiderman and prisoners at Guantánamo Bay. Just as the cobweb serves the purposes of attraction and capture, woven fabrics, web designs and the Internet all leave marks in the skin and connect us to buildings, archives and urban distribution and traffic (cf. trafficking). In Steyerl's case the unraveling of the web actually generates an idea about the scale and amplitude of trades and transactions of bonding. The thresholds that determine Steyerl's access to her own image are spelled out as 'the cameraman' and 'the studio'.

The discursive ownerships belonging to the 1980s are still controlling the entry points to the material archives, but the search engines of the Internet archives have for a long time attracted our appetite for 'new material'. If this material is thought of as all the archives and databases of the Internet, the thresholds are easily identified as Google, Facebook etc. – and the code is the password that includes and excludes. Deleuze wrote in 1990 on the (then future) web control that the code – "one's (dividual) electronic card" – would grant or deny access to "one's apartment, one's street, one's neighborhood", creating a universal modulation ("Postscript on the Societies of Control" https://files.nyu.edu/dnm232/public/deleuze_postcript.pdf). Deleuze compared this modulation, i.e. the processes by which we connect or are denied access to the weave of the Internet archive, to the coils of the serpent – whereas societies based upon the disciplinary systems of control described by Foucault are compared to the ethology of the mole and the molehill. This line of thought makes it possible to think of the serpent in its relation to its coil as a rubbing between two surfaces – the skin and the ground. The friction created is becoming the new fiction, the affective field of creation. The fabric (of the ground) is just as much affected by the skin as the other way around. The skin leaves traces and forms patterns in the fabric (of urbanity, the Internet, the brain) just as the fabric determines the possible coiled movements (of the snake).

Actively experimenting with the distributions of the sensible that characterize urban fabric(s), reconfiguring our possibilities for sensory experience through activist, designerly interventions into, amongst other things, the archives and databases that are increasingly in-forming the patterns of these fabric(s) and our experience of them, is at the core of the general project initiated by this article. Tapping into new frictional and fictional affective fields of creation focuses on uncovering existing amplitudes, thresholds, variations, transformations in the ethological workings of urban fabric(s), which will be developed in relational terms in the next section.

RELATIONAL CAPACITIES of urban fabric(s) (distribution and creation)
Talking about the relational capacities of urban fabric, we want to investigate the creation and distribution of fabric and textiles on a local and global scale. On a global scale, it is possible to look into and critically account for the complex networks of production of fabric – clothes, books, archival material on the Internet, economic transactions – to suggest a starting point. We have not yet developed a vocabulary to address this but are looking for ways to move into these explorations. An example of a recent project that deals with some of these issues is in fact entitled the Urban Fabric Project (www.urbanfabricproject.com). The project focuses on American textile cities and how they have been reshaped as the industries departed, leaving them disenfranchised and struggling. Here, the aim is to show how it is possible to revitalize these cities – but it would also be important to trace and diagram the new globalized systems of distribution and creation emerging from the decline of these American textile cities.

Locally, we are interested in the above-mentioned business models of recycled clothes appearing around flea markets and re-sewing businesses (http://www.melangedeluxe.dk/conditions/). Also, we see examples of shops appearing where you have to donate a piece of clothing to buy a new one, suggesting new forms of distribution and altering power relations. In addition, bringing it back to a global scale, we want to pursue what happens to the recycled clothes, how this can be inserted into other-than-urban loops, and what that might entail. Whereas this might seem rather 'down to earth' or even simplistic following from the previous section, we do see a potential for these investigations to enter more complex conceptual infrastructures through the analysis of, and experiments with, different kinds of creation, distribution and circulation of urban fabric(s). In addition, we wish to explore how this might relate to textures and not only textiles.

Although this might be argued to be the least developed part of the ethology of urban fabric(s), we believe there is great potential in tying these explorations together with the previous sections to allow for a diagrammatic conceptualization of the relational complexity at stake here.

EXTERIOR/ INTERIOR of urban fabric(s) (interfaces)
One way of exemplifying what generates the surface for contemporary interfaces between art and technology is software as a weave of algorithmic codings. In the case of interactive architecture or media facades, where buildings become interfaces and the relation between interior and exterior is broken up, we can argue, with Rancière, that these algorithmic codings are in fact re-distributing the sensible through an (inter)activation of the urban fabric(s):

‘This is not a simple matter of an ‘institution’, but of the framework of the distributions of space and the weaving of fabrics of perception. Within any given framework, artists are those whose strategies aim to change the frames, speed and scales according to which we perceive the visible, and combine it with a specific invisible element and a specific meaning.’ (Rancière 2010, p. 141)

In continuation of this line of thought we might ask: what interfaces between (what kinds of) exterior and interior are produced by urban fabric(s) (animal-organic, skin-textile/skin-city, language-fabric, habit-character)? The animal-organic-artificial relations concern the raw material of the production of fabric (e.g. wool-bamboo-polyester) and its relation to the distribution of the sensible through affective fields. The skin-textile relation activates a thinking of skin and textile as surfaces that co-constitute complex interweavings of texture and fabric, as developed in the previous section through the story of the serpent. The language-fabric relation is etymological and can be used to develop the relation between text and textile, where 'text' has etymological roots in both 'weaving' and 'tissue'. An interesting example here concerns the Mycenaean script of 'Linear B' (approximately 1250 B.C.), in which the content of the communication relates directly to the production of textiles (e.g. how many sheep are needed to produce a garment). This relation between the number of sheep and a garment has long since been lost, but today's fabric of networks has nevertheless opened the possibility of digging into the material relationality involved in interfaces of many kinds. In this project, it is our ambition to generate material fabrics that invite experimentation with the velocities, characterizations and relational capacities of interfaces between animal-organic, skin-textile/skin-city, language-fabric and habit-character.

EXPERIMENTS and EXPOSITIONS
As outlined in this article, we believe urban fabric(s) can be questioned through critical conceptual, artistic and designerly experimentation, bringing forth existing ideological, sometimes totalitarian, distributions of the sensible on a microperceptual and political level, and offering ways for people to act upon the normalized distribution of urban fabric(s) through infra-ordinary micro-revolutions. Concurrently with the conceptual investigations of a possible ethology of urban fabric(s), we are contemplating how to go about this kind of experimentation, which we want to aim at different distributions of the sensible – dissensus – through new interweavings, interactions and interfaces that rupture relations and invent new relationships. Re-thinking the notion of 'fiction', Rancière argues that it is possible to change '…existing modes of sensory presentations and forms of enunciation; of varying frames, scales and rhythms; and of building new relationships between reality and appearance, the individual and the collective' (Rancière 2010, p. 141). In future projects, we want to situate this kind of interventionist or practice-based experimentation within an academic context as a kind of diagrammatic practice of research-creation.

REFERENCES

Baudelaire, Charles [1959] 1964: The Painter of Modern Life and Other Essays. Phaidon Press: London.

Benjamin, Walter 2006: The Writer of Modern Life: Essays on Charles Baudelaire. Harvard University Press.

Deleuze, Gilles and Guattari, Félix. [1980] 1987: A Thousand Plateaus: Capitalism and Schizophrenia II, trans. by Brian Massumi. University of Minnesota Press: Minneapolis.

Deleuze, Gilles [1990] 2002: “Postscript on Societies of Control”. In Thomas Y. Levin, Ursula Frohne and Peter Weibel (eds.): CTRL [SPACE]: Rhetorics of Surveillance from Bentham to Big Brother. The MIT Press: Cambridge, Massachusetts & London, England.

Deleuze, Gilles [1968] 1990: Expressionism in Philosophy: Spinoza. Zone Books: New York.

Foucault, Michel [1978] 1984: “What is Enlightenment?”; in The Foucault Reader. Pantheon Books: New York.

Greenfield, Adam and Shepard, Mark 2007: Urban Computing and Its Discontents. The Architectural League of New York: New York.

Markussen, Thomas 2013: The Disruptive Aesthetics of Design Activism: Enacting Design Between Art and Politics. Design Issues 29:1.

Massumi, Brian 2009: “Of Microperception and Micropolitics.” An interview with Brian Massumi, August 2008. INFLeXions no. 3: Micropolitics: Exploring Ethico-Aesthetics. October 2009.

Massumi, Brian 2002: Parables for the Virtual. Movement, Affect, Sensation. Duke University Press: Durham and London.

Rancière, Jacques [2000] 2008: The Politics of Aesthetics. Continuum: London and New York.

Rancière, Jacques [2003] 2007: The Future of the Image. Verso: London and New York.

Rancière, Jacques [2004] 2010: Dissensus: On Politics and Aesthetics. Continuum Books: London and New York.

 

Dusk to dawn: horizons of the digital/post-digital (2nd draft)

FUTURE SCREENS ARE MOSTLY BLUE
“The equipment-free aspect of reality here has become the height of artifice; the sight of immediate reality has become a blue flower in the land of technology.”
- Walter Benjamin (Writings on Media 35), “The Work of Art in the Age of Its Technological Reproducibility”

Consider the blue flower. Its cold, unnatural luminescence. Its role in the German Romantic tradition of Novalis et al. as absorptive placeholder for romantic longings of a future harmony of Nature and Self. Its sense of otherworldliness, as a result of its relative rarity in nature, acting as prop and stand-in for a striving towards an ungraspable, infinite beyond. A call to the horizon. How in pure sunlight blue fades, and thus the blue flower’s preferred habitat is the threshold moment of evening, the disappearing sunlight slowly draining “warmer” colours of said apparent warmth while also giving the “cooler” colours of the spectrum a certain renewed luminescence in the moody twilit hues of what is known as “the blue hour.”

The seeming lack of more fully saturated colours in nature (as it is presented to human eyes), with its predominance of less vivid browns and greens set underneath a sky of unsaturated blue. The artificial supplementation of this “meagrely endowed” (Finlay 402) natural palette in the form of a primarily synthetic range of often vividly saturated man-made colours, each trying to catch the eye of the second sun that is the human visual cortex in ever more heliotropic stimulation.

Those synthetic blues of technology. Chroma key blue, signifier of a world predestined for post-production. The post-crash blue screen of death. The default “Bliss” wallpaper of Windows XP, one of the most widely embedded images of the digital age, with its pacifying blue-green pastoral… ah, the supreme flattery of Graphical User Interfaces and this particularly memorable “topography of pure departure” (Harpold 239). A fig leaf of an image. 

Tech logo blue. Facebook blue. Soothing, corporate IBM deep blue. The chirpy, social pastel of Twitter blue and the vaguely translucent gradients of iOS 7 blue. A showy blue LED, the engineer’s metonymous accentuation, asserting a certain “technology-ness of technology” (Shedroff & Noessel 43). Blue, blinking Bluetooth, blue. This saturated glow of the digital and its attention economy; ethereal stimulant and banal sedative; blue pill.

So many blue avatars of the digital flowering all around, each striving to stand out and still fit in at the same time. Such is the seeming ubiquity of blue in the land of technology today, and this little prelude on blue is intended simply to give a sense of how blue can be seen to serve as an “index of the zeitgeist” (Fredric Jameson 69), a signifier of the viscous spread of the digital, its ubiquity and sense of givenness. A blue digital banality to which the post-digital would seem, in part, to be a reaction.

BLUE HOURS
As a preformative affix that will lay waste to its stem, the prefix of post- can be seen as signifying a recognition (and even premediation) of collapse. Perhaps it is partly intended to mark out another site of “so many ontological cave-ins,” similar to that which Rosalind Krauss (290), in her essay “Reinventing the Medium,” speaks of in relation to photography’s saturation into mainstream, everyday ubiquity. Drawing on Benjamin’s notion of the “outmoded” object, Krauss describes that particular moment of temporal limbo for a medium in which it takes on a status as outdated but not quite fossilised into what Hertz & Parikka (429) call the “archaeological phase” of a product’s lifecycle. For Benjamin, the onset of obsolescence is of interest due to its revealing of certain aspects of the object in question. By dint of its quality as impotent, denuded and ultimately discarded, the no longer valuable outmoded object can for Benjamin (The Arcades Project, 466 [B1a, 4]) act as a powerful “anti-aphrodisiac.” Or as Julia Cocuzza (8) puts it in her own reading of Benjamin, the outmoded object can be useful in the way that it “informs us of not just what society was, but what society currently is. […] Separated from the whirlwind of popularity and hysterical consumerism, the true gravity of the object’s value is revealed.”

Krauss (295) christens this in-between phase “the twilight zone of obsolescence.” In such a zone the outmoded object may be seen to cast what Benjamin (Selected Writings 209) describes as the “profane illumination” of its own afterlife, radiating an immanent and also potentially critical afterglow, both on its own form and out at the various mythologies it once helped to project. In the case of a media object, its status as medium, as an apparatus with various well- or loosely-defined technical, social, aesthetic, material, economic, institutional and other factors and ideologies that inform its everyday uses, the moment of obsolescence can be said to shed a certain light on these structures in the sense of their very disappearing out of view and noticeable, felt absence. One is reminded of Marshall McLuhan’s (24) vivid image for that transitory moment of visibility that occurs when a previously dominant mode of understanding is made obsolescent by a newly mediated form of understanding: “Just before an airplane breaks the sound barrier, sound waves become visible on the wings of the plane. The sudden visibility of sound just as sound ends is an apt instance of that pattern of being that reveals new and opposite forms just as the earlier forms reach their peak performance.” A rediscovering, even if only for a moment, of a different kind of gravity “outside to the totality of technologized space” (Krauss 304). Death becomes the medium, technology, object.

“Death” here is the obsolescence, the subsidence of a particular form of mediation, and a “blue hour” merely any instance in which a kind of temporal afterglow of mediation is presenced. In their book Life after New Media – Mediation as a Vital Process, Sarah Kember and Joanna Zylinska (55) stress the importance of understanding mediation as being “primarily a temporal, multiagential phenomenon, a process rather than a spatialized and spatializing object.” Thus a particular media form is for them a sustained instance of a temporary “fixing” or “stabilization” of the originary, emergent and ongoing “vital process” (Kember & Zylinska 67) of mediation itself. In this sense we might understand the process of obsolescence as being a draining of the relational vitalities of a particular medium, a process that might also offer up an illuminating afterglow in which the very felt absence of this vitality reminds us how, “Every medium thus carries within itself both the memory of mediation and the loss of mediations never to be actualized” (Kember and Zylinska 21).

While the forward momentum in post-digital seems aimed at getting on with things, the potential in temporarily dwelling on such passing moments of obsolescence lies in how they might prove conducive to tracing the contours of any particular condition of post-. Blue hours, such as those that Benjamin and Krauss outline, can be understood as providing a setting of relatively heightened atmospherics, in which mediation itself can be said to subtly flex the curvature of its horizon in a just noticeable fashion. At such moments, in such a zone, one might – like Newton fanning out the colour spectrum in his darkened room – temporarily suspend, stabilise or distinguish some of the many blended and overlapping rays that inform the so-called technological unconscious, including aspects of the technology itself and also those of the collective unconscious that continues to experience technology in certain instances as a potentially alien second nature (Benjamin, Writings on Media 37) and in others as a naturalised extension of being. As hinted at above, a blue hour of obsolescence might well be compared to the “afterglow” of this year’s Transmediale theme, with its evocation of “the intense red glow of the atmosphere long after sunset (or long before sunrise), when most twilight colours should have disappeared. The afterglow is caused by dust in the high stratosphere, which catches the hues of the twilight arch below the horizon” (“Transmediale 2014”). One should tread carefully in the kind of dramatic theoretical scenes that evocative writers like Benjamin so tantalisingly set, but at the very least, one might be on the lookout for this particular scene of obsolescence, a transition period that might occasionally provide lucid, uncanny or prescient modes for perceiving the previously pervasive or oversaturated qualities of the media object in question, before it eventually subsides as residue back into the more generic atmospherics of mediation, inevitably playing a role, large or small, in the various ecologies that designate visibility, mass, time, space, velocity, value.

ANAMORPHOSIS
Scenes such as these suggest an aspect of something that was always there, awaiting its release. A capacity for rebirth that something like obsolescence, in various guises, can act as thanatological ground for. In order to give a name to this evasive yet potentially emergent quality, one might draw from discussions on anamorphosis, the optical technique of transposing a distorted projection within and according to the norms of the visual logic of linear perspective. In its most usual form, the anamorphic image requires that the viewer adopt a particular viewing angle or viewing device in order to reconstitute and better make out the enclosed anamorphic image (the iconic example of this technique being Hans Holbein the Younger’s 1533 painting The Ambassadors). Similarly, by virtue of its common function of serving as both a memento mori and an embedded augur of the workings of the medium in question, anamorphosis can be understood here as a technique and concept that highlights the emergent potentials of obsolescence and post- via the way in which it can be seen to hint at both the ephemerality and seeming limits of the medium or object under consideration, while also indicating towards other horizons, such as the seemingly innate capacity of images, objects, concepts and mediation itself to accelerate again beyond our ability to keep up with their dynamic potentialities.

Viewed from the perspective of post-digital, a particular point of interest here is the uncomfortable proximity that post- hints at, and which the embedded quality of the anamorphic partly highlights. One could of course stretch beyond a notion of post- to discuss things such as the non-human or even non-digital, but post- signals at least some kind of lingering, umbilical connection between the progenitor and its late-coming prefix. The primary point of enlisting a notion of the anamorphic in this paper is for the way in which the anamorphic is able to act as a potentially unsettling augur embedded within an everyday norm, employing the same tools of the media technique in question to create further indexical yet awry scenes which can tease out the very artificial nature of the everyday perspective in question. Such signallings of a kind of resistant, “anamorphic remainder” (Boluk & LeMieux), in their very dormant yet persistent fashion, can be experienced as a second, potentially alien nature that returns and confronts the mediating and mediated subject with the primacy and weird nature of its own uncanny contortion acts.

Jacques Lacan’s (Four Fundamental Concepts; Ethics) various writings on anamorphosis are worth turning to in this context, especially for the way in which his conception of anamorphosis alerts one to such a sense of alienation that is embedded and closer in the mirror than it appears. A potentially disturbing proximity that hints at topological structures of the self that further Lacanian concepts such as lack and the Real similarly address. In such a Lacanian register, we can return to the spectre of the profane illumination of the obsolete media object and speak of how this illumination can be partly felt as a gaze of said temporarily animated object, in the way that those many scopic rays of desire, as they are mirrored here in oblique, anamorphic fashion, are experienced as being reprojected back out from the obsolete object in question. The “pulsatile” (Lacan, Four Fundamental Concepts 89) afterglow of these possessive, saturated drives casting a dark shadow – the obsolescence of the object and of certain expended investments and energies therein. Those animating lines from Louis Aragon’s poem, ‘Contre-chant’ (counter-melody), that Lacan (Four Fundamental Concepts 79) presents at the beginning of his introductory seminar on anamorphosis:

Toi te tournant vers moi tu ne saurais trouver
Au mur de mon regard que ton ombre rêvée

[Turning towards me you can find
On the wall of my gaze only your dreamt-of shadow]

One might speak of a certain mirror of obsolescence and the “wall” that is this felt gaze of the object, presenting the “annihilating subject” (Lacan, Four Fundamental Concepts 84) with a brief reflection of their own drives and the structures and ideologies which the object has been moving between, is mediated by and yet always can be seen to resist. In the case of a media technology, a moment created by an experience of topological resistance in the overlapping ecologies involved in the medium in question, one in which their relations are temporarily but noticeably distinct and sensate. On the part of the desiring subject, a transitory moment in which said drives are temporarily turned “inside-out,” before escaping again towards the vanishing points of yet further investments of this desire.

BANALITY
“No one really dreams any longer of the Blue Flower. Whoever awakes as Heinrich von Ofterdingen today must have overslept. […] No longer does the dream reveal a blue horizon. The dream has grown gray. The gray coating of dust on things is its best part. Dreams are now a shortcut to banality.”
- Walter Benjamin (Writings on Media 236), “Dream Kitsch – Gloss on Surrealism”

In his writing on surrealism and kitsch, Benjamin (Writings on Media 236-38) highlights how the Surrealists, in their crosshatching of the dream world with the objects, furnishings and “cheap maxims” of the everyday, “are less on the trail of the psyche than on the trade of things.” At the pinnacle of such a practice, “the topmost face on the totem pole is that of kitsch. It is the last mask of the banal, the one with which we adorn ourselves, in dream and conversation, so as to take in the energies of an outlived world of things.” In the face of its own unsettling anamorphic alterity and obsolescing drive, the digital subject has shown an impulsive readiness to latch onto the banal. Something like Instagram unleashes the social practices of digital photography with a few select visual filters that aestheticise the temporal through a technique of “fauxstalgia” (Memmott) that masks something like the selfie in sufficiently profane illumination. At the same time, online meme ecologies act as conductors of a craving for a replicable, utilitarian vernacular of rough and ready image macros that can serve as express circuits to banality.

One feature of banality here is this very compressed, easily circulated quality it latches onto. The meme, in its cultural form, readily co-evolves with technological provisions such as network bandwidth constraints, easily replicable digital formats, the highly-greased and quickly churning gears of social media platforms and so on. They partake in the naturalised “trade of things” in the digital and provide a vernacular “mask of the banal” similar to that which Benjamin describes. Indeed, while one might speak of many of the predominant digital platforms of the contemporary moment as wolves in sheep’s clothing – such as Google, Facebook, Amazon and others, with their cheery doodles and plain vanilla shopfront windows – the banal can be understood to act as a similar masking of certain more subversive strands of cultural expression. Luke Munn speaks of a “post-internet play” that is “often operating within technology frameworks in a collaborative or even playful approach (mitspielen), utilising the logic of branding and co-option for their own benefit.” Certainly it was in a somewhat similar vein of mischief that the Surrealists were carrying on.

In something like the popular surge towards the accessible photo filters of Instagram, one senses a kind of part defence mechanism, part tactical countering at play in the employment of a filtered mask of the banal. To begin with, there is the much commented upon way in which the applying of a filter casts an artificial aesthetic of age and materiality upon these digital photos that are bound for an almost immediate obsolescence due to the abundance and digitally proliferated nature of the streaming content interfaces and ecologies into which they enter. Such a filtering might be understood as signalling a certain self-awareness on the part of Instagram users, or a more subliminal acknowledgement of the anamorphic, mise-en-abyme-like hall of mirrors and saturation of the ever-proliferating qualities of the digital, which an apotropaic mask of the banal can both assuage and also potentially reenergize in its ability to tease out those memes and digitally-informed vernaculars which are felt to be of particular communicative power. Indeed, in many cases the banal has a knack for plucking out the cultural markers of the contemporary moment, in which one can often sense an embedded, self-aware and even implied or charged critical commentary within.

Now almost a decade on since Tim O’Reilly’s formulating of the rise of “Web 2.0,” in the mainstreaming of things like user-generated 4chan memes into mass market forums such as daily morning news shows and Facebook wall posts, one senses a kind of moment of popular, collective self-awareness – “Oh Internet” – in regards to this saturation of the digital. We are all producing “internet-aware art” (Guthrie Lonergan, in McHugh 10) now, and everything is potentially possessed with a degree of understanding from the digital, to the point where saying so carries little value. Is any kind of “blue spill” of the digital even noticed anymore? Each discrete part, each ecology, readily overlaps on the other. And overlaps, and overlaps. In such a condition, the emphasis seems no longer to be on startling juxtapositions of everyday objects such as the Surrealists were after, but rather in the increasingly natural, i.e. banal, overlap of what was previously felt as unnatural. If anything, in such a landscape the anamorphic might be said to be itself yet another potential mask of the banal with which one might adorn oneself. Thus, perhaps, the trendings of memes such as “creepypasta,” “weird Twitter” and all things H.P. Lovecraft. In response to the viscous spread of the digital, its seeming horror vacui (“fear of empty space”) and kitsch-like lack of restraint and drive to cover every niche and corner with its own internet of things, why not adopt the recycling tactic of a banal ecology (or garbology) of memes in which one can make oneself at home, or indeed tactically mask other manoeuvres within?

This very ubiquitous exchange of the banal in the digitally informed ecologies of the moment could be seen to have a certain resilience when viewed through a lens of the post-digital. In formulations such as those of the theorists above, one senses a recurring theme of resistance on the part of these digitally informed media objects – and subjects. Florian Cramer (“Anti-Media, Ephemera on Speculative Arts”) describes how the terms “‘art’ and ‘media’ refused to go away” and proclaims a kind of revanchist genre of “anti-media,” which is defined as “what remains if one debunks the notion of media but can’t get rid of it.” Another hinting towards a potential for resistance embedded in the stubborn object or medium that, when viewed from a particular angle or caught in a particular relational juncture, can act not so much as the dreamed-for blue flower in the landscape of technology, but rather as an “anti-aphrodisiac” or antidote for reencountering the ubiquitous, mythological and/or everyday ecologies in which said beings exist, relate and extend across – as well as resist against.

BEWERSDORF BLUE
As a brief example of a blue hour of obsolescence that touches on some of the themes of this paper, consider Kevin Bewersdorf’s digital performance piece PUREKev (2008). The plan of execution for the piece was noticeably bare-bones, conceptually humdrum, even old hat. Over the course of three years (2008-11) an automated performance would play out, in which a looping clip of over-exposed home video footage depicting a flickering firecracker would very gradually diminish, extinguishing at a provisionally imperceptible but steady rate for its visitors, gradually becoming a field of “pure” blue. This blue void, rather than the flame, seems to be the key performer here (McHugh 40), surrounding its increasingly pitiable flame, pushing it down and forcing us to scroll, and scroll, and scroll… hunting for a figure, no matter how fleeting, that might release us from this amorphous ground, the “MAXIMUM SORROW” that is Bewersdorf blue.

Bewersdorf’s PUREKev performance, like his Monuments to the INFOspirit series, contains an anamorphic-like, memento mori reminder and imprint of the dotcom crash of the digital and the Totentanz, post-crash condition of “2.0,” a recurring quality that together with his prominent use of blue is noticeable throughout Bewersdorf’s practice. One is reminded of Krauss (291-2) speaking of photography’s transition from an exciting new medium to yet another commodity that was “swallowed by kitsch,” a transition that in turn yielded a kind of faux response of “artiness” on the part of some photographic practitioners of the time, one that partly “betrays a social class under siege.” Echoing Benjamin’s classic reading of photographer Eugène Atget in “The Work of Art…” essay, Krauss points out how Atget’s photographs can be read as a kind of antidote to this “fraudulent mask of art” in the photography of the time: “Atget’s response to this artiness is to pull the plug on the portrait altogether and to produce the urban setting voided of human presence, thereby substituting, for the turn-of-the-century portrait’s unconscious mise-en-scène of class murder, an eerily emptied ‘scene of a crime.’”

In Bewersdorf’s works we witness a similar aesthetic, a pulling of the plug of the digital and even an outlining of a crime scene of sorts. Within this vacuum of the outmoded one can still sense the lingering afterglow of a pervasive, corporate INFOspirit that clearly once inflated the drama of its digitally inflected subjects while also seeming to drain them of a certain sense of vitality. Bewersdorf’s “MAXIMUM SORROW” motto, with which he brands the images and characters of his melodrama, suggests a bubble burst, a feeling of the blues or burnout that emanates in a vaguely atmospheric fashion throughout his works. It is also hard to miss the recurring use of blue throughout these works, which here seems turned almost inside out and serves in its own way as a kind of anamorphic call to the horizon or vanishing point – “a sensitive spot, a lesion, a locus of pain, a point of reversal of the whole of history” (Lacan, Ethics 140) – an abstract but notable signifier of the digital against which Bewersdorf can offset and perform a world of a banal, everyday, overlapping, almost sacrificial obsolescence.

BLUE FLOWER?
Is this the way they say the future’s meant to feel? Why blue? Why post-digital? This paper began by riffing on blue as a meme-like signifier of the digital, a readymade scaffolding and prevalent filter of the digital imaginary. Having initially gestured towards the romantic conceit of the blue flower, the question now returns: is the post-digital itself a conceptual blue flower? Indeed, can something as nebulous as “the digital” even be treated in a remotely similar manner to an object or a medium? Can it really become obsolete or post-? Such is the potential mire and haze of “fuzzy” (Cramer, “Anti-Media, Ephemera on Speculative Arts”) and seemingly converging concepts like “digital media.” At each turn, this very emulsive, ever-proliferating nature of the digital seems to both cling to and yet elude our grasp. Perhaps this is in part an issue relating to the particularly burdensome imposition that a prefix like post- puts on an already sufficiently problematic stem, reminding one of Fredric Jameson’s grapplings with the posited “total flow” of postmodernism and “how the thing blocks its own theorisation, becoming a theory in its own right” (Jameson 71). One would also do well to keep in mind the easily “po-faced” nature of any such applications of post-. The reactive, self-propagating nature that such a theoretical manoeuvre can readily get carried away by. At least the simple sounding of a speculative death knell of post- in relation to the digital, rather than positing it as any kind of definitive term, might act in a similar way to the moment of obsolescence, the suspending quality of its hyphen creating a temporary tension, a zone of uncertainty or wobble that might somewhat unsettle the stem that it still implicitly admits it cannot necessarily escape from, nor even wants to. The title of Florian Cramer’s recent talk on the matter, “Post-digital: a term that sucks but is useful,” gives some indication of these kinds of strands that come into play.

In this post-PRISM-revelations present, there is perhaps also a renewed or heightened sense of awareness and reflexivity in relation to the various ubiquitous and dominant forms of digitally informed practices today. A kind of tipping point moment, in which we are reminded, yet again, of how so many blue horizons and promises of the digital end in yet more false dawns. A time for a potential cleansing of a misguided or overused palette, one that can bring our attention to other significant shadings in the media spectrum, such as the more indiscernible, unobtrusive, uniform and unremitting “gray immanence” of “evil media” that Fuller & Goffey highlight (13-4). Or likewise, in considering the temporal and immanent qualities of media that obsolescence highlights, one can, as the likes of Hertz & Parikka have already outlined, excavate post-digital blueprints for an ethico-aesthetic DIY practice that is able to respond to the embedded post- of planned obsolescence, with its environmental saturation of obsolete technologies whose relative material permanence endows them with an extended afterlife in which they may be rediscovered, recycled, remixed, reinterpreted. Adopting “customized, trashy and folksy methodologies” that go against the grain of the still dominant “glossy, high-tech ‘Californian Ideology’” (Hertz & Parikka 427). Enacting a shift in focus from the illuminating qualities of immanent or recently occurred death to that of the never-really-dead “untimeliness” of “media undead” (Wolfgang Ernst, cited in Hertz & Parikka 429). From dusk to dawn. The sun also rises. To trace out and get hands-on with the kinds of horizons of speculation and everyday encounters that the post-digital proposal, in an interventionist modality, might nudge into relational or resistive being.

WORKS CITED
Benjamin, Walter. Selected Writings, Volume 2, Part 1, 1927-1930. tr. Rodney Livingstone et al, Harvard University Press, 1999. Print.

Benjamin, Walter. The Arcades Project. tr. Howard Eiland and Kevin McLaughlin, Harvard University Press, 2002. Print.

Benjamin, Walter. The Work of Art in the Age of Its Technological Reproducibility, and Other Writings on Media. tr. Edmund Jephcott, Rodney Livingstone, Howard Eiland, & Others, Harvard University Press, 2008. Print.

Boluk, Stephanie & LeMieux, Patrick. “Stretched Skulls: Anamorphic Games and the memento mortem mortis.” Digital Humanities Quarterly, Vol. 6, No. 2 (2012). Web. http://www.digitalhumanities.org/dhq/vol/6/2/000122/000122.html

Cocuzza, Julia. “Walter Benjamin’s ‘Outmoded.’” self-published (2011). Web. http://www.juliacocuzza.com/blog/arthistory/cocuzza_benjamin_outmoded.pdf

Cramer, Florian. “Anti-Media, Ephemera on Speculative Arts, Florian Cramer.” Institute of Network Cultures (2013). Web. http://networkcultures.org/wpmu/portal/publication/anti-media-ephemera-on-speculative-arts-florian-cramer/

Cramer, Florian. “Post-digital: a term that sucks but is useful (draft 2).” Post-digital Research. Kunsthal Aarhus. Oct. 7-9 (2013). Web. http://post-digital.projects.cavi.dk/?p=295

Finlay, Robert. “Weaving the Rainbow: Visions of Color in World History.” Journal of World History, 18.4 (2007): 383-431. Print.

Fuller, Matthew & Goffey, Andrew. Evil Media. MIT Press, 2012. Print.

Gass, William. On Being Blue – A philosophical inquiry. David R. Godine, 1976. Print.

Harpold, Terry. Ex-foliations: Reading Machines and the Upgrade Path. University of Minnesota Press, 2009. Print.

Hertz, Garnet & Parikka, Jussi. “Zombie Media: Circuit Bending Media Archaeology into an Art Method.” Leonardo, Vol. 45, No. 5 (2012): 424–430. Print.

Jameson, Fredric. Postmodernism, Or, The Cultural Logic of Late Capitalism. Duke University Press, 1991. Print.

Kember, Sarah and Zylinska, Joanna. Life after New Media – Mediation as a Vital Process. MIT Press, 2012. Print.

Krauss, Rosalind E. “Reinventing the Medium.” Critical Inquiry, Vol. 25, No. 2, “Angelus Novus”: Perspectives on Walter Benjamin (1999): 289-305. Print.

Lacan, Jacques. The Four Fundamental Concepts of Psychoanalysis, Seminars of Jacques Lacan, Book XI. tr. Alan Sheridan, W W Norton & Company, 1998. Print.

Lacan, Jacques. The Ethics of Psychoanalysis 1959-1960, Seminars of Jacques Lacan, Book VII. tr. Dennis Porter, W W Norton & Company, 1992. Print.

McHugh, Gene. Post-Internet: Notes on the Internet and Art 12.29.09>09.05.10. LINK Editions, 2011. Print.

McLuhan, Marshall. Understanding Media – The Extensions of Man. Gingko Press, 2003. Print.

Memmott, Talan. “Fauxstalgia.” Banality Based Banality blog, UnderAcademy College (2013). Web. http://bbbanality.wordpress.com/2013/03/18/fauxstalgia/

Munn, Luke. “Searching for a Technological Agency within Art.” Furtherfield blog (2013). Web. http://www.furtherfield.org/blog/luke-munn/searching-technological-agency-within-art

Shedroff, Nathan & Noessel, Christopher. Make It So – Interaction Design Lessons from Science Fiction. Rosenfeld Media, 2012. Print.

“Transmediale 2014.” Transmediale, n.d. Web. 2 Dec. 2013. http://www.transmediale.de/festival