
📦️ Digital Objecthood

Dragan Espenschied, 2021


This article aims to provide guidance on how to conceptualize digital objects for the purpose of digital art preservation. None of the techniques described below are meant to replace existing processes; rather, they can hopefully serve as a guide and framework for planning the long-term preservation of what is, essentially, software.


A “digital object” is often thought of as being identical with the physical boundaries of the media it is stored on—for instance “disks” or a “computer”—or with digital simulations of traditional media—such as a “digital file,” a “folder,” or a ZIP “archive.” However, these artifacts, usually produced by an artist in one way or another, can only be considered pieces in an assemblage of performative digital environments, in which many other artifacts, most of them not produced by an artist, need to be aligned. Described below is a process to identify object boundaries and artifacts, and their roles in the preservation process.


This is written from the perspective of Rhizome, a non-profit organization founded in 1996 and online at rhizome.org ever since, that, among other activities, collects net art and at the time of writing manages more than 2,000 works in its ArtBase collection. The works are frequently used in online and gallery space exhibitions, and are supposed to be accessible on the web at any time. How Rhizome’s preservation team treats artworks is based on the fundamental idea that digital art needs to be historicized: a collected artwork cannot exist as something perpetually new or as a mere concept that needs to be re-instantiated constantly. Instead, at the time a work enters a collection, it is regarded as a manifestation created for a certain environment, technically and culturally. Another important consideration for Rhizome is the economic feasibility of preservation activities: no digital object is ever done being preserved; maintenance is always required. The less maintenance is required for any particular artwork, the more a collection of digital art can grow, and the more that collection will be able to represent the swiftly changing field of digital art and culture—a core part of Rhizome’s mission.


This article mostly discusses the curation of software rather than artistic curation. While there is considerable overlap between the two disciplines, the focus will be on the software part, with artworks being treated rather clinically, as much as that is possible.


Performative Objects


Any object needs to be defined by its boundaries, by its differentiation from anything else that exists. This is also true for works of art.


Whenever a work of art is moved, for instance from one exhibition gallery to another, from one computer to another, or through time via preservation or restoration, its well-understood objecthood makes these activities possible and successful. A misconception of an artwork’s objecthood will make these activities impossible or cause them to fail later on.


The need for object boundaries to be defined, rather than the boundaries revealing themselves “naturally,” has caused issues for the preservation of digital art, and consequently for its collection and its usefulness for building careers for artists.


This is mainly rooted in the fact that what computers present—or rather, perform—as objects doesn’t easily translate into useful boundary definitions. It is helpful to remember that most things that appear to happen inside a computer are staged to make them accessible at all.


A digital image visible on screen is not an image but rather the result of a string of symbols interpreted as one. A digital file appearing on a computer desktop as a single unit that offers object-like manipulation can actually be distributed across several local and remote storage devices, which in turn might be abstracted and virtualized. Even the famous “Zeroes and Ones” that are supposed to be the smallest, truthful thing in a computer are merely assigned to two otherwise meaningless symbols so that binary numbers can be more easily imagined when using a computer to do math.


Only through layers of idioms and staging can something inside the computer be perceived and understood as any kind of “object.” Once the computer is turned off, the object is gone. The main consideration for the preservation of a digital object is not so much that it can be “stored” or “located” (as metaphors from archival practice suggest), but that it can reappear when the system is turned back on. This means that computer performance needs to become 1) reproducible and 2) portable—or simply put: not tied to a specific computer or class of computers.


In the context of art institutions, administrative and economic issues have to be taken into account, too: It must be possible to own and steward an artwork, and have it play well as an item within a collection. If maintenance of an artwork is too resource-intensive, preservation will ultimately be deferred, or carried out at the expense of the needs of other items in the collection. This harms the value of the collection overall.


For the purpose of this discussion, the focus will be on artworks that are purely software. Two pieces from Rhizome’s ArtBase can serve as examples: 1) Victoria Vesna, “Bodies© INCorporated,” 1996, is a mock corporate web site allowing visitors to create a 3D body for themselves, after agreeing to terms and conditions forcing them to forfeit all rights to their creation. (Connor, et al., 2019, p.64) The project makes use of VRML (Virtual Reality Modeling Language) to render interactive 3D graphics in the web browser. Today’s web audience has no means to easily run the software required to access this piece. (Figure 1) 2) I/O/D, “The Web Stalker,” 1997, is an artist-created browser that, instead of rendering web pages as the user navigates them, reveals structural diagrams and streams of code as it moves through the web mostly autonomously. (Ibid., p.74) The software doesn’t work on contemporary operating systems and due to changes in web protocols cannot connect to most regular web sites anymore. (Figure 2) Both works have a history of gallery space exhibitions, but at their core can be considered artist-created software.


Figure 1: Victoria Vesna: Bodies© INCorporated, 1996. Screenshot, 2018, Netscape Communicator 4.7 and CosmoPlayer 2.0 on Windows 98.


Figure 2: I/O/D, The Web Stalker, 1997. Screenshot 2017, Windows 98.


The Acts of Production and Transfer


Two pretty common events are often responsible for defining the boundary of a digital object in almost accidental and frequently misleading ways: 1) creating an object installs in the creator an idea of what has been done, and 2) the transfer of the object to another location—into a collection—installs in the recipient an idea about what has been received.


For instance, an artist might create a web site using a set of applications like a text editor to write code, a graphics package to prepare images, and a browser for viewing and testing. The work the artist is doing looks like drawing graphics, designing a site structure and interactions, and writing in a set of computer languages. The artifacts produced could be transferred either separately or bundled together in a project file or directory. However, these won’t make any sense without a target environment, a system or set of systems the work was created to be performed with.


Through their work the artist has only added a tiny number of artifacts on top of a rather incredible software stack consisting of an operating system, hardware drivers, default resources such as system fonts, icons, libraries of interaction patterns, file format interpreters, and so forth. The artist had no role in creating any layer of this stack and might not even pay much attention to them.


If a museum collects the work, the artist would hand over what they understand the object to be: the artifacts they created. The target environment is at risk of becoming implicit, because the work performs just fine at its new location, since at the time of transfer both sender and recipient are very likely to use similar computer systems. With the web site having been created to be accessible by a wide audience without bespoke technical requirements, the resulting lack of friction can lead to the work becoming technically decontextualized.


Even if instructions on the setup of an environment are provided, they are prone to blind spots: since future changes in common interaction patterns or in the availability of standard software and documentation cannot be predicted, the likelihood that the instructions will quickly become useless is pretty high. For instance, an analysis of more than 100 artist-provided instructions on the operation of CD-ROM artworks from the transmediale collection, all created around the turn of the century, showed that none of them contained actionable information that couldn’t be inferred from just knowing the artwork’s production date. Many pieces require operation via a mouse, a once common pointing device that has largely been replaced by trackpads and touch screens; yet none of the descriptions mentions a mouse. Instead, technical components like processor models or the amount of memory required to run the artwork are enumerated. (Figure 3, Figure 4) From today’s perspective, this information poses no limitation and offers no meaningful guidance, since computers have generally gotten so much more powerful that components aren’t even available anymore in the listed units of measurement, such as megabytes. (Espenschied, Rechert, et al., 2013)


Figure 3: Example text file with technical instructions for CD-ROM in transmediale collection. Screenshot, 2013, author's archive.


Figure 4: Example text file with technical instructions for CD-ROM in transmediale collection. Screenshot, 2013, author's archive.


Going one step further, even if information on the target environment had been provided, such as “requires Mac OS 8.5,” that information by itself doesn’t make the artwork run again. Mac OS 8.5 would need to be actually available, ready to be started up.


An artwork acquired with a misguided boundary could take a lot of effort to restage in the future. With the original target environment missing as a reference point, behavior of the software would need to be approximated from documentation or via code analysis, and reproduced for a yet unknown target environment. In the worst case the software would need to be rewritten from scratch.


In another scenario, the artist might hand over the complete computer and peripherals the work was created and tested on to a museum. The computer’s storage would then contain traces of the work process, possibly a browser history pointing to online manuals, personal messages, beta versions of the work, even deleted files. Below, of course, there would be the hardware itself, with specific components, and the complete local software stack.


On the receiving side, this could lead to a boundary definition that includes all of these components, resulting in the actual artwork becoming technically over-contextualized. Managing the information contained in such a development snapshot requires lots of resources, with little to gain from a preservation perspective, and considerable privacy implications for publicly showing or lending the work. The institution’s ability to grow its collection would be harmed because too many resources would need to be spent on the maintenance of mass-produced components that appear tightly connected with the artwork. A typical expression of that misconception is the meticulous cataloging of technical specifications and component names of standard computer parts, assuming that the artist deliberately selected or combined them, and that in the future equivalent replacement parts would be needed. However, the exact make and model of, for instance, a hard drive or a memory element in most cases hardly makes any difference for an artwork’s performance or its artistic integrity, and can be considered well within the variability that most digital artists are comfortable working with. In the same vein, listing all software present on a system might seem like a safe bet, but likewise increases the risk of the object becoming too hard to manage. For preservation purposes, it doesn’t make sense to know about every single system component.


Finally, there is a class of artifacts that might provide a core component of an artwork’s performance, but are impossible to hand over or even transport at all, because they are remote resources outside of the control of the artist or the collecting institution. Typical examples are public interfaces to databases or software services like Google image search, Google Maps, Vimeo embedded videos, a Twitter timeline, or perhaps streams from public web cameras, all providing material to be processed by the artwork.


Each of these resources located on the other end of the network is a computer system in its own right but can neither be fully examined nor copied. These resources are part of the object, but their performance can only be observed, and possibly captured in part. The object becomes blurry, because its boundary is defined to contain an artifact that is impossible to be fully captured or described.


It is easy to see that once a remote resource becomes unavailable or is changed—when it loses its technical function—the artwork might stop performing. The same can happen if a remote resource loses its cultural function. For instance, artworks that use the Twitter API to extract political messages from the platform today might not be able to find any in the future if political discourse moves to another service or topics shift so far that processing via the artwork becomes meaningless—even if the API should remain fully compatible with the artwork’s requirements.


Technical Context and Abstractions


If an object is defined by a bundle of possibly blurry artifacts in a performative environment, what is a reasonable tactic to define fitting shapes for these artifacts? It has proven productive to place whatever is within an assumed boundary of an artwork into a different environment and observe its performance. This is basically a simulation of the technical landscape changing over time. (Rechert, Falcão, Ensom, 2016) Using the examples previously discussed: in the case of a web site like Bodies© INCorporated, artifacts could be moved to a new default setup of a server, or, if handed over as a server image already, that image could be moved into a default virtual machine. The internet connection could be disabled, and different browsers be used to access the site. In the case of custom software for desktop usage like The Web Stalker, the program could be tried in different stock operating systems, with networking disabled or enabled. The errors observed through these willful re-contextualization actions will surface preservation risks and hint towards a meaningful boundary definition. For instance, Bodies© INCorporated will not display correctly unless a browser supporting VRML is used to access it, so it makes sense to build an environment based on Windows 98 with a legacy Netscape browser and the CosmoPlayer plugin installed. If the work exhibits gaps and dysfunctional parts when the environment is disconnected from the internet, it probably loads data from remote web sites—these could be encapsulated in a web archive. Step by step, the object’s boundary can be grown or shrunk.
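

As a minimal sketch of such a willful re-contextualization experiment, the following Python snippet boots a stock environment in an emulator with networking disabled, so that gaps caused by missing remote resources become observable. The emulator invocation, image name, and memory size are illustrative assumptions, not details taken from an actual Rhizome setup:

```
import subprocess

# Boot a stock legacy environment with networking switched off.
# "win98-stock.qcow2" is a hypothetical disk image of a default
# Windows 98 installation; "-nic none" disables all network devices
# in recent QEMU versions (older releases use "-net none").
subprocess.run([
    "qemu-system-i386",
    "-m", "128",                  # 128 MB RAM, period-appropriate
    "-hda", "win98-stock.qcow2",  # hypothetical base environment image
    "-nic", "none",               # observe what breaks without a network
], check=True)
```

If the work shows missing images or dead interactive elements under these conditions, it depends on remote resources that need to be brought inside the object’s boundary, for instance as a web archive.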


The distinction between mass-produced and unique items provides a further guide for using the right abstractions: Operating systems are mass-produced and provide standardized, normalized access to hardware components. Applications are mass-produced software items that package the operating system’s basic functions into high-level, more user-friendly units. Unique artifacts, as created by artists, are built on top of that layer. The construction of an object is therefore defined by how a specific setup differs from the mass-produced standard setup. (Suchodoletz, Rechert, 2013, “pyramid diagram”)


For example, an operating system like Windows XP is incredibly complex and cannot be fully described or understood, unless there is an organization powerful enough to manage that knowledge, such as the company Microsoft. Even with the stupendous amount of resources available, Microsoft has to discontinue support for older products, and obviously memory institutions won’t be able to step in and carry on developing Windows XP. However, this operating system can be collected as an object on its own and be held readily available to be performed at any time—using dedicated hardware, virtual machines, or emulators.[1] Such a base environment consists of a virtual disk image and abstracted instructions on how to make the system start up. Once this is established, any artifact that has the option of being performed with Windows XP can be described as a series of modification steps deliberately applied to that base environment’s disk, as a series of overlay disk images that each only record the difference to the previous disk image. (Valizada, Rechert, et al., 2013)


It might make sense to manage intermediate steps between the base environment and the combination with the unique artifact. (Ibid.) For instance, an environment with a legacy web browser that supports the discontinued CosmoPlayer plugin added could be used to perform many different artworks requiring that plugin, and would remove the need to install and configure this component each time a work requiring CosmoPlayer is encountered, as the sketch below illustrates.
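

The overlay mechanism can be sketched with QEMU’s qcow2 backing files, which record only the blocks that differ from the image below them. A minimal Python sketch, assuming qemu-img is available and using hypothetical image names:

```
import subprocess

def overlay(base: str, new: str) -> None:
    # Create a copy-on-write image that records only the
    # difference to its backing file.
    subprocess.run([
        "qemu-img", "create",
        "-f", "qcow2",
        "-b", base, "-F", "qcow2",
        new,
    ], check=True)

# Base environment: a default Windows 98 installation (hypothetical name).
# Intermediate step: Netscape plus the CosmoPlayer plugin, reusable for
# every artwork that needs VRML support.
overlay("win98-base.qcow2", "win98-netscape-cosmo.qcow2")

# Final step: the unique artifacts of one specific artwork.
overlay("win98-netscape-cosmo.qcow2", "bodies-incorporated.qcow2")
```

Each layer stays stable on its own; replacing, say, the browser layer does not require touching the base environment or the artwork’s unique artifacts.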


The best preservation results are achieved by balancing the summarizing of as many components as possible into a single artifact against keeping enough flexibility to change the makeup of the object in the future by only switching out artifacts. The goal should be to require only documentation on how the artifacts are connected, not exact details on what they contain. From a previous example: the Windows XP default install doesn’t need to be accompanied by information about all the tools and features it provides. Instead, its usefulness in the collection is expressed in relation to other items, by instances of the operating system providing an adequate base environment for artworks.


Handling Blurriness


To prevent artworks from breaking when remote resources disappear or change, technical records of the interactions between local and remote systems can be captured during their operation and used as a substitute for the remote system’s performance. In this scenario, actual “full” performance and artifacts that behave like documentation become mixed, and the boundary is tightened: any activities for which no data exchange has been previously captured will not be possible anymore.


In the case of The Web Stalker, the software was created to connect to arbitrary web sites. Even though this artist-made browser is supposed to largely disregard the content of web sites and instead focus on their structure, the technical landscape of the web has changed too much since 1997 for this process to work. Most of today’s web sites are served via the encrypted HTTPS protocol, which The Web Stalker cannot use. On top of that, instead of declaring their layout and structure in static HTML code that a browser can read and interpret, web sites have become more like applications delivered in the programming language JavaScript, which the browser needs to actually execute. Of course The Web Stalker is not equipped to do this. The software needs access to legacy web sites for its performance. When presenting the work in 2017, Rhizome knew about a handful of live web sites that had not been updated for at least a decade and would produce the desired results with The Web Stalker. The web addresses of these sites were presented as a list for visitors of the work to input into the software. These unmaintained sites were at risk of being either deleted or upgraded to formats and protocols inaccessible to The Web Stalker. To avoid eventually having no data to feed into the artwork, web archive captures of the remote sites were created using a crawler. A web archive capture works on a web site’s surface level: it contains a record of the requests a browser like The Web Stalker (or any other, like Netscape, Opera, or Firefox) would send to the site, and the responses it would get in return and then interpret. Such a record of machine-to-machine interactions can be used as a mock for a remote system that has become unavailable: network requests are intercepted and matched with prerecorded responses.
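

The matching of intercepted requests against prerecorded responses can be illustrated with a small Python sketch. It assumes the warcio library and a capture file called “legacy-sites.warc.gz”; both names and the hardcoded host are hypothetical, and a production replay system handles far more cases (multiple hosts, headers, fuzzy matching):

```
from http.server import BaseHTTPRequestHandler, HTTPServer
from warcio.archiveiterator import ArchiveIterator  # assumed dependency

# Build a lookup table of prerecorded responses, keyed by URL.
responses = {}
with open("legacy-sites.warc.gz", "rb") as stream:
    for record in ArchiveIterator(stream):
        if record.rec_type == "response":
            url = record.rec_headers.get_header("WARC-Target-URI")
            responses[url] = record.content_stream().read()

class ReplayHandler(BaseHTTPRequestHandler):
    """A mock of the remote web sites: requests are matched against
    the capture; anything that was never captured is gone for good."""

    def do_GET(self):
        body = responses.get("http://legacy-site.example" + self.path)
        if body is None:
            self.send_error(404, "interaction was never captured")
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("127.0.0.1", 8080), ReplayHandler).serve_forever()
```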


As a result, the object’s performance has become more deterministic: The software will not be able to access an infinite number of web sites anymore, but be restricted to a select set. Yet this reduced performance range will remain reproducible. The object still remains classified as blurry, because the software and the web archive are separate artifacts, and new web archives could be added in the future to increase the range of performance. The local software still contains all its performative potential; remote systems are abstracted. The object’s overall performance is located on a scale between full performance and documentation. (Espenschied, Rechert, 2018)


Orchestration


Once an object is meaningfully bound and divided into artifacts, the artifacts should remain stable and not be expected to change anymore. What can be expected to change is the orchestration framework required to then restage the artwork by recalling its performance and providing a bridge to the current technological and cultural environment.


For “Bodies© INCorporated,” two software environment artifacts are required: one holding a web server with the project’s site, and a second one for viewing it, with Windows 98, a Netscape browser, and the CosmoPlayer plugin. When a visitor wants to access the artwork, these two environments have to be launched and connected in a simulated network. The environments’ simulated input and output devices, such as screen, keyboard, or pointing device, must be translated to the user’s current devices. “The Web Stalker” likewise requires a user-operated software environment to be launched, but instead of a second environment acting as a server, it needs a replay mechanism for web archives in the simulated network.
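

The orchestration step for “Bodies© INCorporated” can be sketched as launching two emulated machines joined by a private, simulated network. QEMU’s socket networking back-end is one way to do this; the image names and port are hypothetical, and a production framework performs the equivalent wiring through its own abstractions:

```
import subprocess

# Environment 1: the web server holding the project's site.
server = subprocess.Popen([
    "qemu-system-i386", "-m", "128",
    "-hda", "bodies-inc-server.qcow2",          # hypothetical image
    "-netdev", "socket,id=net0,listen=:10001",  # one end of the simulated network
    "-device", "pcnet,netdev=net0",             # period-appropriate network card
])

# Environment 2: Windows 98 with Netscape and the CosmoPlayer plugin,
# the machine the visitor actually interacts with.
viewer = subprocess.Popen([
    "qemu-system-i386", "-m", "128",
    "-hda", "win98-netscape-cosmo.qcow2",       # hypothetical image
    "-netdev", "socket,id=net0,connect=127.0.0.1:10001",
    "-device", "pcnet,netdev=net0",
])

viewer.wait()
server.terminate()
```

Translating the emulated screen, keyboard, and pointing device to the visitor’s current hardware is then the framework’s job, for instance by streaming the display to a browser window.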


At Rhizome, these processes are managed using Emulation as a Service (EaaS), a software preservation framework developed as an open source project at the University of Freiburg, Germany. The same framework can be used on cloud computing infrastructure for online exhibitions, computers in gallery space (Figure 5), and on regular workstations or laptops.


Matching a Mission


The presented approach to the preservation of digital art, when thoroughly applied, might appear to challenge some assumptions about how digital art is made, what its role is, and how it should be preserved: Is focusing so much on compartmentalization and the “environment,” which is composed of pretty mundane and boring stuff such as Microsoft Windows, default fonts, and system error messages, happening at the expense of a deep understanding of the artwork? And why should digital art be preserved in an “artefactual” state anyway, when the rich tradition of New Media proves that artworks can exist in much more fluid ways?


Of course no preservation method is ideologically neutral. Many are actually designed to resemble previously existing preservation practices used for painting or time-based media; many adopt certain artistic positions from conceptual art or the performing arts; they might have been established in a time with lots of economic resources available or during austere times, optimized for sales on the art market or for free distribution.


Overall, the preserving institution’s position on the future role and value of the digital art it aims to preserve is the biggest factor defining how objects are conceptualized. As laid out in the early paragraphs of this article, computers always require fictional, staged elements to make sense to their users. This also applies to matters of digital preservation. The computer is a fertile ground for concepts that can be performed: It can perform bitstream preservation akin to a classic museum vault, it can perform migration in the style of time-based media, or ephemerality as in performance art, or source code for people who are into conceptual art, or vernacular distribution for those who are intrigued by folk art—and for each of these ideas provide metrics that can be designed and tweaked until the preservation appears successful. [2]


That’s why it is important for an institution to formulate its view on artwork longevity, its understanding of the media it deals with, and the abstractions it is using to take care of its collection. There are no right or wrong approaches, only ones that demonstrably align with an institution’s mission, capabilities, and resources, and ones that don’t.


With the ArtBase, Rhizome holds a collection of digital art and net art multiple times larger in number than that of some of the world’s leading museums, and needs to maintain it with a fraction of the resources available to larger institutions. The clear breakdown of objects into artifacts and the framework for reenacting them make it possible to solve typical preservation issues for many artworks at once: instead of fixing individual pieces when, for instance, most of the audience starts using a new version of the Chrome browser, fixes are applied on the framework level.


The sometimes highly collaborative practice of net art has also produced constellations in which parts of an artwork are distributed across different institutions or commercial platforms, or remain stewarded by the artists. Rhizome then provides only one piece of the preservation infrastructure puzzle, for instance a software environment with a specific legacy browser. The presented approach makes this possible. (Rhizome, 2019)


All of this covers mostly technical aspects. As the emulation framework translates a legacy technological context to a contemporary one, artistic curation is required to translate an artwork to a contemporary cultural context. Preservation alone doesn’t explain why an artwork is relevant today, or why it was considered relevant in the past. But showing work in legacy software environments automatically brings with it a lot of contemporaneous ideas about digital culture, offering an opportunity to discuss the historic role of older pieces, helping illustrate paths of cultural development that were abandoned later or perhaps have unexpectedly become industrialized. And providing access to an artwork in an emulator does not prevent the creation of a new version of the piece that fully exploits the latest 4K screens, VR equipment, the Google Maps API, or whatever else. It will just be possible to compare it with its previous variants.


Figure 5: “The Art Happens Here: Net Art’s Archival Poetics,” 2019. Exhibition view: New Museum, New York. Photo: Maris Hutchinson / EPW Studio. Shown via EaaS: (front facing) Mark Tribe, Alex Galloway, Martin Wattenberg, StarryNight, 1999; (back facing) Entropy8Zuper!, skinonskinonskin, 1999. https://www.newmuseum.org/exhibitions/view/the-art-happens-here-net-art-s-archival-poetics


Footnotes


[1] For a discussion of the differences between these modes of software execution, see Rosenthal, 2015.


[2] A repository can show you all fixity checks succeeding, a source code management platform can congratulate you on the number of significant code changes, pages of documentation can be counted—but none of that necessarily indicates successful preservation.


References


Dragan Espenschied, Klaus Rechert, et al. 2013. “Large-Scale Curation and Presentation of CD-ROM Art.” iPRES 2013. https://phaidra.univie.ac.at/o:378042

Dirk von Suchodoletz, Klaus Rechert. 2013. “bwFLA — Emulation as a Service.” http://eaas.uni-freiburg.de/

Isgandar Valizada, Klaus Rechert, Konrad Meier, Dennis Wehrle, Dirk von Suchodoletz, and Leander Sabel. 2013. “Cloudy Emulation — Efficient and Scaleable Emulation-based Services.” iPRES 2013. https://phaidra.univie.ac.at/o:377398

David S. H. Rosenthal. 2015. “Emulation & Virtualization as Preservation Strategies.” https://mellon.org/resources/news/articles/emulation-virtualization-preservation-strategies/

Klaus Rechert, Patrícia Falcão, Tom Ensom. 2016. “Towards a Risk Model for Emulation-based Preservation Strategies: A Case Study from the Software-based Art Domain.” iPRES 2016. https://phaidra.univie.ac.at/o:503169

Dragan Espenschied, Klaus Rechert. 2018. “Fencing Apparently Infinite Objects.” iPRES 2018. DOI: 10.17605/OSF.IO/6F2NM. https://osf.io/6f2nm/. gemini://despens.systems/fencing-apparently-infinite-objects/

Michael Connor, Aria Dean, Dragan Espenschied (Eds.). 2019. “The Art Happens Here: Net Art Anthology.” Rhizome. ISBN-13: 978-0692173084

Rhizome. 2019. “Net Art Anthology Preservation Notes.” Live document. https://anthology.rhizome.org/preservation


Notes


Edits 2023-03-13:

Light grammar edits

Added second footnote


Citation


Dragan Espenschied. 2021. “Digital Objecthood.” In: Selçuk Artut, Osman Serhat Karaman, Cemal Yılmaz (Eds), “Technological Arts Preservation.” 2021. Sabancı University Sakıp Sabancı Museum. Istanbul, Turkey. ISBN: 978-625-7329-16-3. https://www.sakipsabancimuzesi.org/en/collections-and-research/archive-and-research-area/17294. gemini://despens.systems/digital-objecthood/.


