

4 — High-Tech Luddism


The student movements of the 1960s were among the first to politicize the computer — at the time, massive mainframes that only governments, corporations, and universities could afford. Mario Savio, leader of the Berkeley Free Speech Movement, famously invoked opposition to "the machine" in his attacks on the bureaucratization and soullessness of the university and wider postwar society:


> There's a time when the operation of the machine becomes so odious, makes you so sick at heart that you can't take part! You can't even passively take part! And you've got to put your bodies upon the gears and upon the wheels, upon the levers, upon all the apparatus — and you've got to make it stop!


As historian Steven Lubar argues, Savio's poetic invocation was likely inspired by the university's information processing machine: in other words, a computer. By the 1960s, computer punch cards had become objects that represented people's contact point with bureaucracies ranging from the Census Bureau to the billing department of their local utilities. At UC Berkeley, students were required to fill out punch cards to register for classes. As such, Lubar notes, the cards were incorporated into movement actions:


> Berkeley protestors used punch cards as a metaphor, both as a symbol of the "system" — first the registration system and then bureaucratic systems more generally — and as a symbol of alienation … Punch cards were the symbol of information machines and so they became the symbolic point of attack.


Students vandalized, burned, and otherwise destroyed punch cards designed for course registration; one student punched holes in a card to spell out the word "STRIKE." As the student rebellion intensified in response to the Vietnam War, so too did actions against the computers residing on campuses. This development made perfect sense: Vietnam was, after all, the first computational war. Computers in the White House, the Pentagon, and eventually Saigon provided military planners with massive amounts of electronic data that determined the conduct of the war.


The move toward strategies rooted in quantitative data collection and automated analysis represented a radical change in military culture — one resisted by the officer corps, who viewed warmaking as more art than science. Instead, the overhaul was driven by the civilian secretary of defense, Robert McNamara, who had previously used statistical analysis to turn around the fortunes of Ford Motor Company. Accordingly, the Pentagon judged military success in terms of quantitative metrics, such as "body count."


Even beyond the ambit of strategic calculation, the fighting of the war itself became computerized and automated, in what became known as "the electronic battlefield." General William Westmoreland revealed the concept, for several years a secret project at the Pentagon, at a public meeting with defense industry figures in October 1969:


> On the battlefield of the future, enemy forces will be located, tracked and targeted almost instantaneously through the use of data links, computer assisted intelligence evaluation, and automated fire control …


> Today, machines and technology are permitting economy of manpower on the battlefield, as indeed they are in the factory. But the future offers more possibilities for economy. I am confident the American people expect this country to take full advantage of technology — to welcome and applaud the developments that will replace wherever possible the man with the machine.


As with the automation of factories during World War II, it was the Air Force, not the Army, who led the way on the electronic battlefield. Operation Igloo White knit together sensor arrays, communication networks, and aircraft to disrupt North Vietnamese movements along the Ho Chi Minh Trail. As Ian Shaw notes in his geographic study,


> Once a sensor detected a stimulus in the surrounding atmosphere — such as the sound of a passing truck, a vibration in the ground, the chemical "smell" of an NLF soldier, or even a change in light — it broadcasted a radio signal to nearby ground and air receivers, including Lockheed EC-121 planes.


These "air receivers" were, as a pamphlet produced by the anti-war group Scientists and Engineers for Social and Political Action put it, "unmanned drones," part of an experiment with the intention of the eventual replacement of human pilots. Bombers were directed to their targets by computer algorithms, and even the release of bombs was often automatic.


The automation of war, as with the automation of industry, was an important means to reassert control over rebellious rank-and-file soldiers in Vietnam. As the anti-war movement successfully spread to the Army, soldiers increasingly refused to fight, sabotaged equipment, staged disruptive protests, and even murdered their commanders. Morale was in a state of collapse. However, the replacement of ground troops with aerial bombardment, itself increasingly automated, removed insurrectionary troops from the equation, prolonging the conflict. Many anti-war groups concluded that such automation was as much a political strategy as a military one.


Much of the military computing projects' research and development, along with the data processing, relied on university computer science and engineering departments, which were thus drawn into anti-war struggles. Groups like the Union of Concerned Scientists, Science for the People, and Computer People for Peace formed to agitate within the scientific professions against collaboration with US militarism. And student movements intensified the targeting of computers, engaging in confrontations that went far beyond the playful tampering with punch cards. As the "Old Mole," a student publication based in Cambridge, Massachusetts, put it in a 1969 article, demurely headlined "Let's Smash MIT":


> MIT isn't a center for scientific and social research to serve humanity. It's a part of the US war machine. Into MIT flow over $100 million a year in Pentagon research and development funds, making it the tenth largest Defense Department R&D contractor in the country.


The mass shooting by National Guardsmen of protestors and bystanders at Kent State on May 4, 1970 ignited furor on campuses across the United States. Computers were often a target. On May 7, student demonstrators briefly occupied Syracuse University's computer center. A few days later, on the heels of a tumultuous week of campus protests, activists took over a computer lab at the University of Wisconsin, destroying a mainframe in the process. At NYU, as well, 150 protestors broke down doors and occupied the computer lab. They abandoned the occupation after two days, rigging the mainframe with improvised napalm connected to a slow-burning fuse. Two math professors managed to extinguish the fuse before the explosives went off; an assistant professor and a graduate teaching assistant were later arrested in connection with the incident. At Stanford, the Computation Center was set ablaze, though without serious damage. Months later, in early February 1971, activists again targeted the center, giving speeches and distributing flyers calling for the divestment of Stanford computer resources from Defense Department activities. One speech highlighted the tactical value of targeting the computer:


> I'm not even going to talk about a strike. It doesn't matter either way. But what, I think that it's apparent that the place that has to be hit, and has to be hit hardest and can be hit not only here but in every college campus and in every city in the country are the computer centers. Computer centers are the most vulnerable places anywhere … It could mean just an hour delay. It could mean a day delay. It could mean a week delay. It could be a month delay or a year delay. Nobody knows. It's dependent upon what's destroyed in just that power shortage. What's destroyed in core storage. What's destroyed as to the records. What could be destroyed in the tape reserve rooms by the temperature going too high. Nobody knows.


The Computer Takes Over


As the 1970s rolled on, protests fizzled out, and radical energies often flowed away from confrontation with the state and toward countercultural practices. Some of these practices took extreme anti-technology stances; others began to rehabilitate the computer as an object for personal liberation, a view that would become increasingly hegemonic in the marketing of home computers, and later, the internet. But beyond the roiling energies of the anti-war movement and their aftermaths, computers were part of another, deeper restructuring in the world of work.


Harry Braverman's classic analysis of machines and the labor process, "Labor and Monopoly Capital," concluded with a sustained examination of this transformation. Braverman saw computers as having a Taylorist effect, similar to that of the introduction of automation in factory work:


> Once computerization had been achieved, the pacing of data management became available to management as a weapon of control. The reduction of office information to standardized "bits," and their processing by computer systems and other office equipment, provides management with an automatic accounting of the size of the workload and the amount done by each operator, section or division.


As in the factory, Taylorism in the office sought to divide knowledge from execution, routinizing and deskilling the more autonomous and affective features that had characterized office work. Unlike in the factory, this division served to increase physical toil, rather than decrease it, as workers became appendages of tabulating and copying machines. A 1960 International Labour Organization report documented complaints among white collar workers of "muscular fatigue, backache, and other such ills as a result of the unaccustomed strain of operating machines." And Ida Hoos's "Automation in the Office," published in 1961, documented workers' complaints about the overwork that accompanied this change. According to one of Hoos's informants, "The categories of jobs which have disappeared are those which require skill and judgement. Those remaining are the tabulating and key punching operations, which become even simpler, less varied, and more routinized as work is geared to the computer."


Braverman's 1974 synthesis touched off a debate about the nature of what was termed the "white collar proletariat." Would the deskilling of office work transform the consciousness of what Nicos Poulantzas had identified, in his theorizing on class, as "the new petty bourgeoisie," turning docile typists and college-educated professionals into militants organizing against management? Often waged from the heights of theoretical abstraction, the debate was ultimately inconclusive. Instead, it would be a business school ethnographer who provided some of the most pertinent phenomenological observations on the computerized restructuring of work.


Shoshana Zuboff, a social scientist conducting ethnographic research in factories and workplaces in the early 1980s, was well positioned to observe the changes in industrial labor processes wrought by the implementation of computers. While not a Marxist herself, Zuboff recognized the technology as a flash point for class struggle. "The new technological infrastructure," she wrote, "becomes a battlefield of technique, with managers inventing novel ways to enhance certainty and control while employees discover new methods of self-protection and even sabotage."


Zuboff viewed the new wave of computerization as containing two intertwined qualities. As work processes were automated by computerized technologies, they followed the dictates of classical Taylorism: management used machines to reorganize parts of the labor process where workers had accumulated a modicum of control. But at the same time it was automated, work was also "informated" by computers, which produced a real-time record of the labor process in the form of data. As she observed, "The programmable controller not only tells the machine what to do — imposing information that guides operating equipment — but also tells what the machine has done — translating the production process and making it visible."
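

Zuboff's dual quality can be made concrete with a small illustration. The sketch below is not drawn from her study; it is a hypothetical Python example of a programmable controller that automates an operation while, as a byproduct, writing a data record of who did what and when, the raw material of the informated workplace. All names in it are invented.

```python
# Hypothetical sketch of Zuboff's point: the same controller that automates
# an operation also "informates" it, emitting a data record of what was done.
import json
import time


class ProgrammableController:
    """Runs machine operations and logs each one as data."""

    def __init__(self, log_path="operations.log"):
        self.log_path = log_path

    def run(self, operator_id, operation, parameters):
        # 1. Automate: tell the machine what to do.
        result = self._execute(operation, parameters)

        # 2. Informate: record what the machine (and operator) just did.
        record = {
            "timestamp": time.time(),
            "operator": operator_id,
            "operation": operation,
            "parameters": parameters,
            "result": result,
        }
        with open(self.log_path, "a") as log:
            log.write(json.dumps(record) + "\n")
        return result

    def _execute(self, operation, parameters):
        # Stand-in for the actual machine interface.
        return f"{operation} completed with {parameters}"


# The same log that guides production doubles as a management record:
controller = ProgrammableController()
controller.run(operator_id="op-17", operation="stamp_panel", parameters={"pressure": 80})
```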


In turn, the informating of the labor process had two related effects. On the one hand, it altered the texture of work itself: its embodied qualities, its entire phenomenology for the worker. Skill in labor had previously been defined by the wisdom earned through physical repetition, a kind of tacit knowledge rooted in accumulated bodily practice that was impossible for many workers to verbalize, no matter how experienced. However, with the introduction of machines, and especially computer interfaces, to replace worker skill, jobs became a set of abstract instructions that workers had to cognitively interpret and understand, rather than a set of embodied tasks.


Because the texture of these new labor processes was mental rather than physical, workers in such environments could not be managed using the old methods — ones that had depended on the discipline of their bodies, their physical movements. In order to manage workers in the informated workplace, managers needed to discipline *minds*, so that workers' values and desires better aligned with the needs of the company. "As the work that people do becomes more abstract," Zuboff noted, "the need for positive motivation and internal commitment becomes all the more crucial." How would workers internalize management prerogatives in such a way? Her perceptive conclusion was that the very same computer also provided a detailed account of each worker's behavior and the overall labor process to management, fulfilling the wildest fantasies of past generations of scientific managers. And more than a management tool, computerization transformed the workplace into a panopticon as described by Michel Foucault, where an environment of total surveillance impels the internalization of the dictates of power. Rendered as data, power becomes objective, a fact no one can argue with. As one worker told Zuboff,


> With these systems, there is no doubt. The results are the truth. They bring the truth to management. This means managers can really see what is happening, and they have to buckle down and focus on problems. It creates joint awareness. We end up working together more than fighting over what really happened.


Yet while computers restructured many of the affective flash points of workplace clashes, they could not completely eradicate conflict between workers and management. In the digital panoptic workplace, struggle took on a subversive character, what workers termed "passive resistance," centered around obfuscation, invisibility, and, above all, manipulation of the computer. With the right passwords, numbers could be fudged; even when slacking could be detected, blaming computer error became a popular and effective technique.


Processing the World


Scattered moments of passive resistance to computers like the ones Zuboff observed were the direct concern of the idiosyncratic Bay Area magazine "Processed World." Situated in the heart of the IT revolution of the early 1980s, "Processed World" sought to sharpen the ambivalence about new technology, to hone it into antagonism. According to its editors, the publication's two goals were "to serve as a contact point and forum for malcontent office workers (and wage workers in general), and to provide a creative outlet for people whose talents were blocked by what they were doing for money." Accordingly, "Processed World" featured an increasingly lively letters-to-the-editor section, along with comics, parodies of advertisements, poetry, and an overall wry and ironic tone. Its emphasis on bottom-up resistance and grassroots creativity was inspired by the Situationist International, and followed their model of widening the critique of capitalism from traditional workplace struggles to the "banality, hypocrisy, conformism, and dullness everywhere" — a critique not simply of work, but of everyday life.


Founded in 1981, when "Silicon Valley" evoked microchips instead of apps, "Processed World" was still rooted in the tribulations of the workplace, chronicling the "day-to-day experience" of the bewildering recomposition of office labor and offering early militant inquiries into the vicissitudes of IT work. As historian Steve Wright frames it, "Processed World" offered lengthy analyses of "the labor process, culture and behaviors — in other words, the class composition — of employees engaged with work with information and information technology." By the authors' own admission, class consciousness in this milieu was low, and labor organization practically nonexistent. The goal of "Processed World" was, therefore, to investigate what tendencies existed, and, when possible, to spark the initial fires of worker resistance through a mixture of irreverent humor and detailed analysis of the experience of work. By providing a forum, one of "Processed World's" most important interventions was simply to alert atomized would-be agitators of one another's existence. Said one anonymous letter writer: "I don't think I've been this grateful since I was first taught how to read!"


"Processed World's" most notorious essay hearkened back to the controversial IWW tracts of the 1930s. "Sabotage: The Ultimate Video Game," written by an office worker under the nom de plume "Gidget Digit," extols the virtues of machine breaking. "The urge to sabotage the work environment," she muses, "is probably as old as wage-labor itself, perhaps older." Digit proceeds to connect this archaic desire with the new technological apparatus of the office, its "new breakable gadgets" of computer terminals and fax machines: "Designed for control and surveillance, they often appear as the immediate source of our frustration. Damaging them is a quick way to vent anger or to gain a few extra minutes of 'downtime.'" As a grateful reader subsequently wrote to the magazine, "I will leave it to the theoreticians to argue about the dialectical nuances of sabotage. Basically, there is one overwhelming reason to do it: it makes you FEEL GOOD."


But Digit goes beyond describing such vandalism as a momentary atavism, however pleasurable it may be to the smasher. She theorizes it as one component of the continuous struggle in the office; in this sense, machine breaking is an ingredient in the roiling stew of class composition:


> Sabotage is more than an inescapable desire to bash calculators. It is neither a simple manifestation of machine-hatred nor is it a new phenomenon that has appeared only with the introduction of computer technology. Its forms are largely shaped by the setting in which they take place. The sabotage of new office technology takes place within the larger context of the modern office, a context which includes working conditions, conflict between management and workers, dramatic changes in the work process itself and, finally, relationships among clerical workers themselves.


This context was, according to Digit, part of a restructuring of work away from manufacturing to "the dazzling information sector." Even before personal computers had entered homes, Digit recognized the snake oil sold by techno-optimists about the flexible work routines of the digital future, seeing, like Zuboff, the surveillance potential of the new work technologies. "In fact," she observes, "rather than freeing clerks from the gaze of their supervisors, the management statistics programs that many new systems provide will allow the careful scrutiny of each worker's output regardless of where the work is done."


Digit follows a Bravermanian line: computers, as the latest feature of automation, fragment and reorganize work to reassert managerial control. In other words, they will decompose "the type of work cultures … that contribute to the low productivity of office workers" by undermining insubordinate practices like personal use of copiers and phones, coming in late, goofing off on the clock, and playing pranks. Digit predicts this control will extend into everyday life as information technology penetrates leisure through video games, home shopping, and cable television, providing expanded options at the expense of autonomous and creative free time. In her words, "The inhabitants of this electronic village will be allowed total autonomy within their personal 'user ID's,' but they are systematically excluded from taking part in 'programming' the 'operating system.'"


Digit's politicization of technology exposed a rift among the editors at "Processed World" regarding automation. Tom Athanasiou took a line very close to today's Full Automators: "Though automation threatened livelihoods by eliminating degrading jobs, there is nothing inherently bad about computer technology, in a different society, it could be used to improve our lives in all kinds of ways." Athanasiou went so far as to sketch a communist utopia where "people would work, study, create, travel and share their lives because they wanted to, for themselves and for others." With a nod to Chile's abortive Cybersyn economic management project, Athanasiou argued that "computers could match needs to resources and pinpoint potential surpluses and shortfalls," a logistical underpinning for a society of abundance freed of markets. Editor Maxine Holz dissented from this position. While acknowledging positive aspects of computers, she argued that "the immediate results of widespread implementation of much of modern technology are disadvantageous to workers and others directly affected. I think it is important not to lose sight of the current reality of conditions created by these tools." The role of "Processed World" was not to sketch utopias or possible futures. It was to document, and thereby coalesce, the actually existing struggles in the IT sector.


In spite of her diagnosis, in light of the widespread sabotage of office technology, of "a common desire to resist changes that are being introduced without our consent," Digit opposed Luddite destruction by groups such as the French Committee to Liquidate or Divert Computers. Instead, she argued a Situationist line of technological *detournement*, toward "the more positive aim of subverting computers." We might see Digit's response as an early endorsement of the hacker way: resistance within and through technology, rather than purely against it.


Hacker cultures have been ascribed all manner of politics, from liberal to libertarian, radical to reactionary. And to be sure, hackers have participated in political projects from all of these quarters. But the technological politics of hackers are complicated and, somewhat surprisingly, often quite Luddish. To understand why this is, we have to look at one of the earliest examples of struggles for technological control undertaken by hackers: the free software movement.


High-Tech Luddites


It might seem counterintuitive, even paradoxical, to associate some of the most enthusiastic and skilled users of technology with weavers wielding large hammers against comparatively rudimentary machines. Popular representations of hackers, whether it's a young Matthew Broderick using a computer and telephone modem to alter his biology grade in 1983's "War Games" or the Guy Fawkes-mask-wearing hordes of Anonymous, portray digital devices and technical know-how as the source of their power. And the image of the misfit cybercriminal has since been gentrified into the eccentric Silicon Valley entrepreneur who unleashes his (yes, his) technological mastery to forever alter society. Rather than smash machines, hackers embrace them. By that logic, they should be some of the least Luddish figures on the planet.


And yet, if we look at the content of hacker politics as practiced by actually existing hackers, a different picture emerges. Far from a celebration of technology, hackers are often some of its most critical users, and they regularly deploy their skills to subvert measures by corporations to rationalize and control computer user behavior. They are often Luddites to the core.


One of the earliest and most influential examples of hackers organizing Luddite resistance is the free software movement, led by maverick programmer Richard Stallman. As Stallman tells it, programmers routinely shared code in the early days, when software had to be designed from scratch. "Whenever people from another university or a company wanted to port and use a program, we gladly let them," he explains in his manifesto "Free Software, Free Society." "If you saw someone using an unfamiliar and interesting program, you could always ask to see the source code, so that you could read it, change it, or cannibalize parts of it to make a new program." Sharing and copying code became an essential practice within the nascent computer hacker culture, and one that promoted pedagogy, autonomy, and productivity.


The growth of personal computing increased the demand for software, and software companies emerged to meet it by turning software into a commodity to be bought and sold, not crafted. However, the entrepreneurs' plans to sell individual copies of software to each user collided with the widespread practice of copying and sharing code, at that point already deeply embedded in the hobbyist cultures of early computing. In 1976, one of those entrepreneurs, Bill Gates, wrote a scathing letter to the hobbyist community: "Most of you steal your software. Hardware must be paid for, but software is something to share. Who cares if the people who worked on it get paid?"


Under pressure from the software industry, US courts and Congress made computer code subject to copyright, a change that threatened the working conditions of programmers like Stallman, who regularly copied code in order to tinker with it. Suddenly, copying was a crime. In response, Stallman formulated a set of alternative software licenses, so-called copyleft, designed to protect the open sharing of source code. When a piece of software incorporates Stallman's GNU General Public License, it adopts two fundamental precepts: users are permitted to view and modify the code, and any new programs built with that code must, in turn, follow the GNU license. Propelled by a dedicated base of hackers and tinkerers, Stallman's licenses contributed to the growth of a large and successful — even commercially so — ecosystem of so-called free software.
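

In practice, a project places itself under the GPL by distributing the license text with the code and marking each source file with a notice like the one below. This is a generic illustration rather than anything from the text: the file and function names are placeholders, the notice is the standard GPLv3 wording, and the SPDX line is a common machine-readable shorthand for the same declaration.

```python
# SPDX-License-Identifier: GPL-3.0-or-later
#
# hypothetical_tool.py -- part of a hypothetical free software project.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Because of the copyleft clause, any program built from this code
# must be distributed under the same license, with source available.

def greet(name: str) -> str:
    """Trivial placeholder function standing in for the licensed code."""
    return f"Hello, {name}!"
```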


Free software is an example of a Luddite technology: an innovation in the interest of the preservation of practitioners' autonomy against the imposition of control over the labor process by capitalists. By "breaking" software copyright and challenging closed and proprietary business models connected to it, free and open-source software has helped preserve independent and craft-like working conditions for programmers for decades. In addition to launching important software projects, like the operating system Linux, the free software movement was instrumental in establishing nonproprietary coding languages as standard in the industry, which meant that skill development, rather than being controlled exclusively by large corporations, could be done through open community involvement.


Free software's successful struggles also helped to politicize an antipathy for intellectual property rights that continues to mark digital culture. Why stop at code? Hackers moved from liberating software to doing so for all forms of media content, from games to music to films. In both their technologies and their social practices, these digital pirates have often embraced older ways of doing things, rather than technological accelerationism. Even well into the BitTorrent era, the loose confederation of elite digital piracy crews labeled "The Scene" still operated via File Transfer Protocol (FTP) servers, a technology that predates even the World Wide Web. When Napster emerged, making file sharing a major concern for the culture industries, careful observers noted that its genius stemmed, not from technological wizardry, but from its throwback architecture:


> Napster is, in some ways, something of a regression to the old days of the Internet. Mass usage of the Internet has meant that servers have to be used to house information. Napster, on the other hand, relies on communication between the personal computers of the members of the Napster community.


In other words, Napster's peer-to-peer architecture more closely resembles the distributed computing of the bulletin board systems of the 1980s than today's "cloud," which tethers us to the servers of massive tech companies.


Many of the conflicts that have shaped the history of the internet — over intellectual property, over privacy and surveillance, over corporate control — are usefully conceived as struggles over subsumption, similar in structure to the struggles of the weavers of the early 1800s. Time after time, skilled computer users have mobilized to protect their established autonomous ways of crafting, exchanging, and using digital artifacts, rather than behaving as corporations such as Microsoft would have wished. In the 1990s, the web was a province of amateurs and hobbyists. Firms generated revenue from providing access to the internet, but once there, user behavior ran free through what were mostly noncommercial spaces: unofficial fan web pages, lightly regulated forums, troves of freely available games and software. This was a period of formal subsumption, where general capitalist imperatives of commodity exchange reigned, but individual user behavior was uncontrolled. As the Brazilian left communist group Humanaesfera notes, "the physical infrastructure was privately owned," but "the social content which emerged from this physical infrastructure was beyond the reach of capital."


This idyllic time came to an abrupt end when, after a period of rampant speculation, the so-called dotcom bubble burst, taking down trillions of dollars in wealth along with a host of early online business models. At the time, some commentators even mused that the internet itself was nothing but hype. But digital capitalism would not be stopped so easily. A new technique of networked accumulation emerged as surviving techies reorganized digital interfaces and infrastructures into what entrepreneur Tim O'Reilly branded as "Web 2.0." O'Reilly realized that the replication of the storefront, à la infamous bubble victim Pets.com, was a failing strategy. Instead, Web 2.0 would leverage "architectures of participation," whereby "users pursuing their own 'selfish' interests build collective value as an automatic byproduct." In other words, activity would produce data. This occurred through a restructuring of computing itself through the internet. O'Reilly identified Google as the epitome of this transformation. Rather than offer software for users to install on their personal computers, Google provided services remotely while its software ran on its own servers: "a massively scalable collection of commodity PCs running open source operating systems plus homegrown applications and utilities that no one outside the company ever gets to see," or what we now call "the cloud." In this way, Google could act as the "middleman between the user and his or her online experience," all the while collecting data from the users of its services.


O'Reilly's rosy notion of "participation" captured the imagination of techno-optimist intellectuals like Clay Shirky and Henry Jenkins, who heralded the rise of a democratic popular culture of "participatory media" online in the form of blogs, remixes, and creative fandoms. But the real story was not so much the participation as that "automatic byproduct": the user data, which could be fed back into the system, fine-tuning it. The data could be used to rationalize online behavior, extending the duration of activity within a platform and rendering it ever more productive and valuable. This was not democracy, but the transformation of the web into a distributed machine for the capitalist production of value. For critics such as media theorist Mark Andrejevic, the "architectures of participation" of Web 2.0 were better understood as "digital enclosures" for data extraction. Today, for most of us, all our online activity — from scrolling through social media to watching old music videos to shopping for socks — is tracked by dozens, sometimes hundreds, of companies, who then squeeze this activity into marketing data.


Shoshana Zuboff, the ethnographer who made such detailed observations of the rise of computation in the workplace, has moved to studying this "new logic of accumulation," what she calls "surveillance capitalism." For Zuboff, surveillance capitalism (whose paradigmatic example is, like O'Reilly's, Google) unfolds from the dynamics at work in the informatization of the workplace, which has served not only to automate labor processes but to continually generate data about them. With Google, all user behavior — every word in an email, every search, every mapped commute — becomes information that further improves the system, which is ultimately focused on selling advertising. This data is the property of Google, hoarded by the higher-ups in a company and used to extract value, with nothing delivered to those who produced the data but Google's services. This asymmetry is at the heart of what Zuboff classifies as the extractive logic of surveillance capitalism: "the absence of structural reciprocities between the firm and its populations."


Zuboff describes all this as a sharp break with prior arrangements of profit making. But surveillance capitalism is better understood as the deepening penetration of commodity exchange and work relations into everyday life. It is not simply that the relations between tech companies and the rest of us are extractive, rendering these corporations indifferent to our fates. Marxists, among others, have rarely thought it sufficient to rely on the good graces of corporations. They suggest, rather, that it is the *restructuring* of behavior in directions directly productive of commodities — data — combined with the inexorable pressures of austerity toward new varieties of "monetization," that we must confront. And rather than submit to the metaphor of a natural resource, which treats data as existing in a state of nature, we must contend with how it is produced by human activity that has been locked within technological apparatuses, governed by opaque contracts. The way users produce data thus begins to resemble capitalist labor relations.


How have hackers responded to this situation? For one, they have battled surveillance capitalism by developing technologies that enhance and protect user privacy. Since the real subsumption of user behavior relies on surveillance and tracking, these privacy applications are another kind of Luddite technology, one that attempts to return the web to its formally subsumed state of relatively autonomous creative activity. Social scientist Maxigas documents hackers' use of one such tool, a browser extension called RequestPolicy. RequestPolicy blocks so-called cross-site requests: instances, such as with Google's embedded ads or data analytics tracking, where the website you're visiting incorporates content from another site without alerting you. Because contemporary websites are rife with such third-party content, RequestPolicy effectively smashes the web, making normal commercial pages impossible to navigate. Maxigas terms it "a retrograde attempt to rewind web history: a Luddite machine that, as they say, 'breaks' the essential mechanisms of websites."
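

The logic behind such blocking is simple to sketch, though the code below is only a schematic Python illustration and not RequestPolicy's actual implementation: compare the origin of each outgoing request with the origin of the page that triggered it, and refuse anything that crosses sites unless the user has explicitly allowed it.

```python
# Schematic sketch of cross-site request blocking (not RequestPolicy's code).
# A request is allowed only if it stays within the page's own origin or the
# user has explicitly whitelisted that page-to-site pairing.
from urllib.parse import urlparse


def origin(url: str) -> str:
    """Reduce a URL to its scheme and host, e.g. 'https://example-news.com'."""
    parts = urlparse(url)
    return f"{parts.scheme}://{parts.netloc}"


def allow_request(page_url: str, request_url: str,
                  whitelist: set[tuple[str, str]]) -> bool:
    page_origin = origin(page_url)
    request_origin = origin(request_url)
    # Same-origin requests pass; third-party requests need explicit consent.
    return request_origin == page_origin or (page_origin, request_origin) in whitelist


# A news page pulling in a hypothetical ad network's tracking pixel is blocked:
print(allow_request("https://example-news.com/story",
                    "https://ads.example-tracker.com/pixel.gif",
                    whitelist=set()))  # prints False
```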


The so-called dark web, where you can buy everything from a hit of acid to a hit on your neighbor, is a perfect encapsulation of the Luddite ethos in high-tech cultures. The dark web runs on the Tor protocol, which obfuscates user activity by routing it through an anonymous distributed architecture. By shielding user activity from the gaze of tracking technologies, Tor prevents this activity from being vacuumed up as data, and thus decommodifies it. Of course, the dark web is not itself free of commodification: in fact, the purchase and sale of goods and services, often of the illegal variety, runs rampant. Thus the dark web is not a space without market exchange, but a space without surveillance capitalism. Petty commodity exchange, especially of the kind idealized by libertarian economic theories, carries on, but it does so in isolation from the mechanisms of consolidation that have affected the fate of the rest of the web.
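

For readers curious about the mechanics, the sketch below shows the usual way a program's traffic is sent through Tor: the Tor daemon runs locally and exposes a SOCKS5 proxy, by default on port 9050, and applications simply point their connections at it. The snippet assumes Tor is installed and running, that the Python requests library is available with SOCKS support, and that the .onion address is a placeholder rather than a real site.

```python
# Minimal sketch: send an HTTP request through a locally running Tor daemon
# via its default SOCKS5 proxy. Assumes Tor is running on 127.0.0.1:9050 and
# that requests is installed with SOCKS support (pip install requests[socks]).
import requests

TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h: DNS is resolved through Tor too
    "https": "socks5h://127.0.0.1:9050",
}

# Placeholder address; real .onion services are reachable only through Tor.
url = "http://example.onion/"

response = requests.get(url, proxies=TOR_PROXIES, timeout=60)
print(response.status_code)
```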


Tor sites themselves bear an uncanny resemblance to the early web of the 1990s. Pages are individually maintained and frequently go down. Old lore of the bulletin board system — hacking tutorials, drug stories, text files of the "Anarchist Cookbook" replete with dodgy recipes for homebrewed intoxicants and explosives — is everywhere. Search engines work poorly, if at all, and so navigation relies more heavily on crosslinking and word of mouth. Tor's routing through multiple proxy relays means that page loading times hearken back to dial-up speeds, thus favoring the simple and straightforward HTML web design of the era of noisy modems. With Tor, Luddite technologies lead to Luddite aesthetics.


Rather than leave technological development to entrepreneurs on the assumption that it can be taken over wholesale by leftist political formations, Richard Stallman and other Luddite technologists recognize that technology is itself a flash point of current struggles. Further, they demonstrate not only that struggles against technologies of subsumption can win, but that they can also illuminate a path of alternative technological development. Yet hacker culture is often suffused with elitism, an unfortunate side effect often found in meritocratic craft cultures. Few everyday users will have the technical understanding to use Tor; fewer still will deign to suffer through RequestPolicy's upending of the really subsumed web.


There is an urgent need to think about how Luddite technologies could reach and impact more internet users, beyond the most skilled enthusiasts, especially as more and more activity — in particular, work — moves into online spaces subject to the dynamics of surveillance capitalism.


The New Digital Automation


We are at a critical moment, one when digital technologies of automation, often referred to with buzzy vocabulary like "algorithms" and "AI," are poised to transform both work and governance — or so we are told. Drawing from the massive glut of digitized information, the "big data" produced from a generation of informated activities, these technologies promise to transform all manner of employment, impacting the careers of even highly trained professionals in fields such as law and medicine. While white-collar jobs in "knowledge work" were once the promise of a comfortable future, now technologists such as Kai-Fu Lee, former president of Google China, promise that white-collar jobs will go first. According to Lee, "The white collar jobs are easier to take because they're purely a quantitative analytical process. Reporters, traders, telemarketing, telesales, customer service, [and] analysts, they can all be replaced by a software."


Of course, the "replacement" of workers by software has happened for decades, and continues apace, often with dismal results. Government bureaucracies were some of the first places where computers promised to increase efficiency and cut costs, which is why the objects on your computer hard drive are "files" organized into "folders." Today, as state facilities respond to cost-cutting neoliberal austerity with algorithms and software packages, the results are nothing short of devastating. Political scientist Virginia Eubanks details how the incorporation of "cost-saving" software packages into public assistance offices has created what she calls the "digital poorhouse":


> The digital poorhouse deters the poor from accessing public resources; polices their labor, spending, sexuality, and parenting; tries to predict their future behavior; and punishes and criminalizes those who do not comply with its dictates. In the process, it creates ever-finer moral distinctions between the "deserving" and "undeserving" poor, categorizations that rationalize our national failure to care for one another.


Rather than revolutionize government bureaucracies, she observes, "automated decision-making in our current welfare system acts a lot like older, atavistic forms of punishment and containment. It filters and diverts. It is a gatekeeper, not a facilitator." Even well-meaning government employees succumb to a system that fragments and rationalizes their labor process. Where social workers once tracked individual cases, familiarizing themselves with their charges and gaining valuable context for judging courses of action, automated systems fragment cases into tasks to be handled, bereft of history or context. The result is, as one caseworker puts it, dehumanizing: "If I wanted to work in a factory, I would have worked in a factory."


Similar systems are proliferating throughout governments. According to law professor Frank Pasquale, the impact of automation on the legal system is potentially catastrophic: the effacement of human judgement by algorithms means the end of the rule of law. For example, the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm uses "predictive analytics" to provide sentencing guidelines for judges, a kind of risk assessment of defendants' likelihood to commit future crimes. Because the algorithm is a trade secret, its inner workings are closed off to the public: part of what Pasquale calls the looming "black box society." COMPAS essentially provides secret evidence not subject to challenge or cross-examination, completely upending due process. As demonstrated by Cathy O'Neil, a mathematician turned social critic, tools such as the predictive policing software PredPol rely on data sets produced from woefully discriminatory law enforcement and justice systems, thereby entrenching existing biases and inequalities. In favoring efficiency over other values, such as fairness, algorithmic law and order amounts to the "industrial production of *unfairness*."


But as we have seen, and as many workers have taken pains to articulate, automation never completely erases human labor. Kai-Fu Lee's prediction of total replacement of knowledge work is thus a profound exaggeration, as a report by the think tank Data & Society makes plain: "AI technologies reconfigure work practices rather than replace workers," while at the same time, "automated and AI technologies tend to mask the human labor that allows them to be fully integrated into a social context while profoundly changing the conditions and quality of labor that is at stake." Investigating grocery store self-checkout, researchers found that Luddish customers hated and avoided the technology. In response, management cut staff to make lines so unbearable that customers gave up and used the machines instead. Even then, cashiers were still required to assist and monitor transactions; rather than reduce workload, the technologies were "intensifying the work of customer service and creating new challenges." This is an example of what technology journalist Brian Merchant calls "shitty automation":


> If some enterprise solutions pitchman or government contractor can sell the top brass on the idea that a half-baked bit of automation will save it some money, the cashier, clerk, call center employee might be replaced by ill-functioning machinery, or see their hours cut to make space for it, the users will be made to suffer through garbage interfaces that waste hours of their day or make them want to hellscream into the receiver — and no one wins.


Store customers understand that self-checkouts mean tasks have been sloughed off on them, what media scholar Michael Palm classifies as "consumer labor." And so they take revenge, rebelling against the technological imposition of work. Theft is rampant at self-checkouts. Bandits share techniques on forums like Reddit: hit "Pay" to disable the bag scale and then bag more items; always punch in the code for the cheapest produce (usually bananas); when in doubt, just throw it in your bag and walk out. They also offer justification: "There is NO MORAL ISSUE with stealing from a store that forces you to use self checkout, period. THEY ARE CHARGING YOU TO WORK AT THEIR STORE."


Consumer labor in self-checkouts is an example of how, rather than abolishing work, automation merely proliferates it. By isolating tasks and redistributing them to others expected to do them for free, digital technologies contribute to overwork. Writer Craig Lambert uses the term "shadow work" to describe this common experience with digital systems. The term derives from Ivan Illich, who used it to describe the devalued but necessary activities often performed by women, from housework to shopping to commuting. For Lambert, digital technology also intensifies shadow work in the waged portion of life. When new technologies "automate" positions away, remaining workers often feel the brunt of new tasks. He describes the "job-description creep" facilitated by new software packages. Where administrative staff may have once kept track of bureaucratic matters such as employees calling off work, now "absence management" software requires workers to handle it themselves. "I am not sure why it has become my responsibility to do data entry for any time away from the office," a software developer tells Lambert. "Frankly, I have enough to do writing code. Why am I doing HR's job?"


Atul Gawande writes evocatively of the effect of digital shadow work on the medical profession. After the introduction of a new software system for tracking patients, Gawande, invoking the specter of Taylorism, describes the painful restructuring of his work, away from patients and toward more structured interactions with computers. "I've come to feel that a system that promised to increase my mastery over my work has, instead, increased my work's mastery over me," he writes. "All of us hunched over our screens, spending more time dealing with constraints on how we do our jobs and less time simply doing them." This bureaucratization-by-software leads, he argues, to escalating burnout rates in the medical profession, the prevalence of which correlates strongly with how much time one spends in front of a computer. And Gawande's specialization, surgery, is part of another technologically mediated crisis: as more daily activities revolve around typing and swiping, manual dexterity has declined, and future surgeons are losing the ability to cut and stitch patients.


As technology critic Jathan Sadowski argues, much of what is hyped as a system of autonomous machines is actually "Potemkin AI": "services that purport to be powered by sophisticated software, but actually rely on humans acting like robots." From audio transcription services disguising human workers as "advanced speech recognition software" to "autonomous" cars run by remote control, claims of advanced machine intelligence not only amount to venture capital–chasing hype, but actively obfuscate labor relations inside their firms. As writer and filmmaker Astra Taylor argues, such "fauxtomation" "reinforces the perception that work has no value if it is unpaid and acclimates us to the idea that one day we won't be needed."


While artificial intelligence is frequently likened to magic, it regularly fails at tasks simple for a human being, such as recognizing street signs — something rather important for self-driving cars. But even successful cases of AI require massive amounts of human labor backing them up. Machine learning algorithms must be "trained" through data sets where thousands of images are manually identified by human eyes. Clever tech companies have used the unpaid activity of users for years to do this: whenever you solve a reCAPTCHA, one of those image identification puzzles meant to prove you're not a bot, you are helping to train AI. The puzzle system was designed by the Google service's inventor, computer scientist Luis von Ahn, who came up with the idea through a practically Taylorist obsession with the unproductive use of time: "We're reusing wasted human cycles."
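

The training step that this human labor feeds is, in outline, very simple. The sketch below, which assumes the scikit-learn library and uses tiny made-up numbers purely for illustration, shows how each human-applied label becomes one row of supervised training data that a model then learns to imitate.

```python
# Schematic sketch of supervised training on human-labeled data (assumes
# scikit-learn is installed; the numbers below are made-up placeholders).
from sklearn.linear_model import LogisticRegression

# Feature vectors extracted from images, and the labels human workers
# attached to them: 1 = "contains a street sign", 0 = "does not".
features = [
    [0.10, 0.90], [0.20, 0.80], [0.80, 0.10],
    [0.90, 0.20], [0.15, 0.85], [0.85, 0.15],
]
human_labels = [1, 1, 0, 0, 1, 0]

model = LogisticRegression()
model.fit(features, human_labels)        # the "training" step

# The fitted model now imitates the accumulated human judgments:
print(model.predict([[0.12, 0.88]]))     # expected: [1]
```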


But free labor only goes so far in the current AI boom, and more reliable professionalized workers are needed to surmount what anthropologist Mary L. Gray and computer scientist Siddharth Suri describe as "automation's last mile." Getting AI systems to function smoothly requires astonishing amounts of "ghost work": tasks performed by human workers who are kept away from the eyes of users, and off the company books. Ghost work is "taskified" — broken down into small discrete activities, "digital piecework" that can be performed by anyone, anywhere for a tiny fee.


The labor pool for ghost work is truly global, and tech companies have been eager to exploit it. Samasource, which specializes in training AI, specifically targets the world's slum dwellers as a cheap solution for the "boring, repetitive, never-ending work" of feeding information into machine learning systems. The company's workers are poorly paid, though it justifies this through the compulsory rhetoric of humanitarianism that proliferates in Silicon Valley. Samasource's late CEO, Leila Janah, admits that employing low-wage workers from Kibera, Kenya — Africa's largest slum — is a profitable strategy. But it is, she claims, also the moral choice, so as not to upset the equilibrium of their impoverished surroundings:


> But one thing that's critical in our line of work is to not pay wages that would distort local labour markets. If we were to pay people substantially more than that, we would throw everything off. That would have a potentially negative impact on the cost of housing, the cost of food in the communities in which our workers thrive.


Janah's humanitarian efforts notwithstanding, Samasource's business model reveals the real impact of networked digital technologies on the world of work. Even in a world of resurgent nationalism and hardening borders, the internet has created a massive globalized reservoir of human labor power for companies to tap into, as much or as little as needed: the "human cloud." In this cloud, no far-flung locale need remain independent from the world's most powerful corporations, and with intense competition, you have to be quick and compliant even to snatch a gig at all. And no moment may be left unproductive: jobs can be sliced down to microtasks, paid as piecework, or "gamified" so they aren't paid at all. This potential future of work has nothing to do with expanding leisure from "full automation." Quite the contrary: in this future, work seeps into every nook and cranny of human existence via capitalist technologies, accompanied by the erosion of wages and free time.


The ghost work of the human cloud may give the impression that low-wage gig workers are alleviating the burdens of the lucky few who manage to snag a comfortable career. But computer-facilitated taskification comes for us all. The fumbling medical students provide a dramatic example of what the saturation of everyday life with digital technology has wrought: the deskilling of everyday life. Ian Bogost, a media scholar and video game designer, observes that the proliferation of automated technologies, from self-flushing toilets to autocorrecting text messages, accelerates feelings of precarity and unpredictability. This is because, rather than serve human needs, they force people to adapt to unpredictable and uncontrollable machine logic: "The more technology multiplies, the more it amplifies instability." In response, we develop arcane rituals that make the toilet flush at the right time, or muddle through another "autocorrected" message full of typos. It is not simply a romantic critique that technology separates us from the sensuality of the world (though, humorously, Bogost relishes a physical paper towel over a sensor-triggered air dryer). It is a practical one: the supposed convenience of automated everyday life is undercut by our lack of control, our confusion, and the passivity to which technology conditions us. "Like people ignorant of the plight of ants," he writes, "and like ants incapable of understanding the goals of the humans who loom over them, so technology is becoming a force that surrounds humans, that intersects with humans, that makes use of humans — but not necessarily in the service of human ends." This is precisely what philosopher Nolen Gertz describes as the "in-order-to mindset":


> Modern technologies appear to function not by helping us achieve our ends but instead by determining ends for us, by providing us with ends that we must help technologies achieve. Thus the Roomba owner must organize their home in accordance with the maneuvering needs of the Roomba, just as the smartphone owner must organize their activities in accordance with the power and data consumption needs of the smartphone. Surely we buy such devices to serve our needs but, once bought, we become so fascinated with the devices that we develop new needs, such as the need to keep the device working so that the device can keep us fascinated.


Some scholars of contemporary technologies describe them in terms of older needs — or rather, older compulsions. For instance, social psychologist Jeanette Purvis notes that Tinder, the dating platform that ranks among the most popular apps of all time, works through an interface that uses "the same reward system used in slot machines, video games and even during animal experiments where researchers train pigeons to continuously peck at a light on the wall." Users swipe through an endless supply of randomized potential mates, an infinitude that results in an incredible churn — 1.4 billion swipes a day — and an overall lower satisfaction with dates. So desperately hooked to swiping are Tinder users that competing services like Coffee Meets Bagel market themselves on providing *fewer* options. And as a kind of artistic immanent critique, some wags have started selling "The Tinda Finger," a disembodied rubber digit that spins on a motor attached to one's phone, thus automating the swiping process. "The idea is to maximize the potential for matches while you can spend your time focusing on other things": automation to spare us from the "convenience" of automation.


Breaking Machines


The Tinda Finger gestures toward a widespread popular discontent with our algorithmic, automated world, and an impulse to mock, challenge, and even damage the most palpable symbols of our dazzling digital economy. This "techlash" — one of 2018's defining words according to the "Financial Times" — has reached the upper echelons of the political and intellectual elite, who were eager for social media, hackers, and memes to shoulder the blame for atavisms like Trump and Brexit. But the high-level discontent has been simmering for a while: plaintive nostalgics like Sherry Turkle and Nicholas Carr have been popular in Silicon Valley for years. Their books about the internet making us stupid and uncommunicative sit comfortably on the shelves of tech executives who ferociously limit their children's screen time and send them to elite tech-free schools, even as they push Chromebooks and course-management software on the rest of us.


In an effort to ride the crest of this negative ferment, a number of prominent Silicon Valley entrepreneurs and designers have issued their own mea culpas for the Frankenstein's monsters they've helped to create. Some have even reached for that favorite solution of the tech bourgeoisie: founding a nonprofit. Ex-Googler Tristan Harris's Center for Humane Technology, which promises "to reverse human downgrading by inspiring a new race to the top and realigning technology with humanity," has pride of place among them. Rather than hook users for as much time as possible, activity on digital platforms should be, as the slogan goes, "time well spent." But, as Ben Tarnoff and Moira Weigel caution in a 2018 essay in the "Guardian," such endeavors leave unchallenged the massive power and wealth of companies like Facebook, and more perversely, they may even represent a new direction for their business:


> In other words, "time well spent" means Facebook can monetise more efficiently. It can prioritise the intensity of data extraction over its extensiveness. This is a wise business move, disguised as a concession to critics. Shifting to this model not only sidesteps concerns about tech addiction — it also acknowledges certain basic limits to Facebook's current growth model. There are only so many hours in the day. Facebook can't keep prioritising total time spent — it has to extract more value from less time.


In other words, this is the transition from absolute surplus value, based on the extension of time spent producing data for Facebook, to relative surplus value, where time spent on Facebook is rendered more productive. Tech humanism is not about liberating people from digital capitalism, but about extending its reach, impelling users into more profitable activity: quality time to turn into quantitative value.


Unfortunately, the romantic nostalgia for face-to-face conversations, or a retreat to humanist values and meaningful behavior, so often prescribed as a way out, lacks both the antagonism and the generalizability that will be needed if we are to break out of digital dystopias. The strategy of refusal pursued by the industrial workers of old might be a more promising technique against the depression engines of social media; as media scholar and activist Trebor Scholz notes, organized boycotts of Facebook date back to 2010. Polling research indicates massive dissatisfaction with the platform, with over 40 percent of Facebook users taking extended breaks or quitting. A final break is difficult to achieve: Pew found that overall social media use hasn't changed much, though young people are increasingly trying to escape surveillance capitalism. Marketers warn that "significant cracks are beginning to show" when it comes to Generation Z and social media. When users go professional with their social media presence, as with the phenomenon of product-endorsing "influencers," the effects are exacerbated. YouTuber burnout videos are such a widespread trend that YouTube commissioned one of its platform's creators, therapist Katie Morgan, to craft instructional mental health videos. But Morgan suffers from the same affliction she's trying to treat, the pressure to constantly perform and create: "I always feel like I should be working, or that they're counting on me."


Refusal is only one form of resistance, and more confrontational methods are in evidence. In 2013, two Canadian scientists created a hitchhiking automaton dubbed hitchBOT. HitchBOT could not move itself; rather, programmed with a rudimentary vocabulary and an LED smiley face, it relied on friendly humans to help carry it to its next destination. The robot's design, assembled from scrap heap items — "a plastic bucket for a body, pool noodle arms and legs, and matching rubber gloves and boots" — was an effort to charm: "The low-tech look of it was intended to signal approachability rather than suggesting a complex high-end gadget," explained creators Frauke Zeller and David Harris Smith. HitchBOT successfully trekked across Canada, was invited into homes and photographed at meals, and amassed a substantial social media following. Yet hitchBOT's inventors weren't simply staging a heartwarming bit of participatory art, although they boasted of their creation's status as the "robot in residence" at a number of museums. Their experiment had a serious utilitarian aim: in their words, "discovering how to optimally integrate human and robotic labor." Further, they argued that their project's success demonstrated the wide-ranging potential of robots in the workplace: "Robots aren't just an opportunity to make the office more efficient. Robots offer the chance to harness human creativity and to direct human attention." The team made plans for hitchBOT to attempt to cross the United States, starting on the East Coast and ending up at the heart of everything tech, San Francisco.


HitchBOT made it as far as Philadelphia, where it was dismembered by unknown assailants two weeks after its American sojourn began. The scientists, who quickly absolved the City of Brotherly Love of any special culpability, made arrangements for hitchBOT's remains to be returned to Canada; its head was never found. For an experiment to see whether robots could trust humans, it would seem an emphatic answer was provided on the sweltering summer streets of Philly.


If hitchBOT was a cartoonish vanguard for what "Financial Times" columnist Martin Wolf calls "the Rise of the Robots," then its assailants in Philadelphia also represent a kind of vanguard: a swift and forceful effort to sabotage the increasing proliferation of automatons in our lives. In San Francisco, security robots hired to harass the homeless have been repeatedly assaulted (with one, in a testament to the boundless creativity of the masses, coated in barbeque sauce). In Arizona, a testing ground for self-driving cars, AI-operated vehicles have met pitched resistance: residents have slashed tires, hurled rocks, flashed guns, and repeatedly attempted to run the cars off the road. For the Arizonans, the motive is self-preservation: in March 2018, a driverless Uber struck and killed a woman in Tempe as she crossed the road.


When robots enter the workplace, the antipathy is just as great. Researcher Matt Beane notes that when hospitals began to replace delivery workers with robots, workers began to sabotage the devices:


> This took more violent forms — kicking them, hitting them with a baseball bat, stabbing their "faces" with pens, shoving, and punching. But much of this sabotage was more passive — hiding the robots in the basement, moving them outside their preplanned routes, obscuring sensors, walking slowly in front of them, and most of all, minimizing usage.


Beane dismisses the workers' actions as pointless: cutesy gizmos don't represent a true technological threat to livelihoods. But the sabotage is an emergent practice of machine-breaking that could develop into more concentrated workplace struggles; Beane himself notes that the attacks occurred amid tense negotiations for fairer work and pay.


Inside the Amazon warehouses, where "full automation" is repeatedly shown to be a dream deferred, workers direct their ire toward their robotic accessories. "We're human, not robots" has become a rallying cry for more humane, and less automated, working conditions. Other Amazon workers phrase their difficulties as a struggle against machines. "You have to beat the machine," said one temp worker at an Amazon fulfillment center. "It's like a nightmare, all these machines telling you your rate is down." When workers move too slowly, accruing more than three automated warnings, they are summarily canned, with managers hiding behind the technology. "Oh, we didn't fire you, the machine fired you because you are lower than the rate," explained warehouse temp worker Faizal Dualeh.


And yet, on the floors of Amazon's logistical nodes, where industrial capitalism and surveillance capitalism seamlessly integrate into systems of domination that would make even Fred Taylor blush, workers find means of fighting back. Journalist Sam Adler-Bell documents Amazon workers' "weapons of the weak," methods they use to subvert Amazon's "regime of total surveillance and bodily control":


> The warehouse workers I encountered play games, against themselves or their coworkers. They cheat to artificially boost their productivity numbers. They pass these tricks around in coded language. They use their scanners to find erroneously underpriced items and buy them in bulk. (Some steal outright.) They play (usually harmless) pranks on overbearing managers. And almost all of them skirt safety rules to move faster.


While Adler-Bell is cautious about the efficacy of such tactics, he believes they show potential to spread: "Small acts — especially those that involve some sort of coordinated deception — may awaken a willingness to defy that eventually enables larger, more decisive acts." His conversations with Amazon workers broach the questions: What kinds of large, decisive acts might turn the tide against the machines and the massive companies behind them? How might we restore autonomy to our jobs and daily lives?
