Sven Birkerts
“Beginning in the late 1970s the world began to change—and fast.” So say John Palfrey and Urs Gasser in the introduction to their Born Digital:
Understanding the First Generation of Digital Natives. Intentionally or not, the assertion echoes Virginia Woolf’s oft-cited observation about the arrival of the modern era: “On or about December 1910 human character changed.” Both statements provoke argument, expressing as they do not just a sense of a world in transformation (the world is always in transformation), but that of a radical break. Woolf’s intuitions of Modernism have been parsed by generations of grad students, of course, but Palfrey and Gasser are staking out still unfamiliar territory. They are identifying a condition of technological critical mass that, pace Ecclesiastes, is something new under the sun, the effects of which are profoundly shaping what they call the generation of “Digital Natives”—those born after 1980.
Palfrey and Gasser begin their study with a slightly hyperbolic but nonetheless apt description of how these not yet thirty-something Natives inhabit the world. It is almost enough to list their devices and tools: iPods, smart phones, Facebook, Twitter, YouTube, and a host of applications—“apps”—still too new to have made it into the dictionary. Of course, many of the Natives’ parents are users as well—“Digital Immigrants” in Palfrey and Gasser’s terminology—but for them these are learned behaviors, elected adaptations. For Natives, the various interfaces and the interactions they enable are simply the given:
Unlike most Digital Immigrants, Digital Natives live much of their lives online, without distinguishing between the online and the offline. Instead of thinking of their digital identity and their real-space identity as separate things, they just have an identity (with representations in two, or three, or more different spaces). . . . Digital Natives don’t just experience friendship differently from their parents; they also relate to information differently.
At first Palfrey and Gasser appear to be taxonomizing a new species in the making. But then, even before concluding the introduction, they pull back into a flatly civic tone and propound a pragmatic perspective. “We are at a crossroads”, they write. “There are two possible paths before us—one in which we destroy what is great about the Internet and about how young people use it, and one in which we make smart choices and head toward a bright future in a digital age.” So much, then, for the idea of deep transformation and exploring its social, political and moral implications. As the chapter titles unfold (“Identities”, “Dossiers”, “Privacy”, “Safety”) the promise of the book’s opening declaration all but fades into a treatise on the outward logistics of accommodation. We get bland assertions like, “For the time being, education is the best way to help Digital Natives manage the information-quality problem.” Born Digital ends up feeling like a missed opportunity, or an apocalypse indefinitely postponed.
A very different sort of retracted doomsaying characterizes Jaron Lanier’s You Are Not a Gadget: A Manifesto, which is in every way a more searching and provocative book. Perhaps this is because Lanier is what we might call a “Digital Insider.” Composer, thinker, an original guru of virtual reality and reputed coiner of that term, Lanier begins his book in a “state of emergency” tone, with short bullet-like statements declaring that the much-touted digital revolution has turned against its founding ethos and now threatens to entrap rather than liberate individuals. It is, Lanier tells us, subjecting creativity and the free flow of information to the sclerosis of “lock-in” and the need for systemic uniformity. Indeed, the very idea of the individual is at stake. Of Silicon Valley, his home turf, he writes:
The first tenet of this new culture is that all of reality, including humans, is one big information system. . . . There’s nothing special about the place of humans in this scheme. Computers will soon get so big and fast and the net so rich with information that people will be obsolete, either left behind like characters in Rapture novels or subsumed into some cyber-superhuman something.
Though Lanier himself does not look fondly on such developments, he knows well the mentality from which they originate. He puts his finger on the crux of the issue, nothing less than the core crisis of the human: How will individuality survive technologies that impose uniformity? His title, You Are Not a Gadget, suggests a strenuous rebuttal to the key danger he identifies. But alas, Lanier, a bit like Palfrey and Gasser, doesn’t deliver the promised manifesto. His chapters, in themselves fascinating essays on consumerism, the need to safeguard artistic products, the growth of networking sites and more, refuse the implications of his opening claims. He turns away from the big-picture scope that launches the book, leaving key questions unaddressed and his assertion of crisis ultimately unsupported.
Lanier’s retreat is harder to abide than that of Palfrey and Gasser, for unlike the latter, he grasps the convergence of causes at work and makes clear what is at stake. He identifies the pachyderm in the room, the prospect that the digital revolution is not just the latest human adventure or the next transformation the species must adapt to, but a force created by humans that is in big ways rewriting the whole human script. Yet he cannot seem to bring himself to do more than offer it a bag of peanuts.
This is not to suggest that humans have not changed (evolved, to cite the more technical term) since their origins, the new forms making their way toward Homo sapiens defined by whatever artificial designations are in fashion. Modern but Pre-Digital Man was different in untold ways from his counterpart in the Athenian agora. Millennia of history had altered his psychological structure, mentality and even his neural reflexes. What Lanier raises but then ducks is the inevitable question: If change and adaptation have been a constant all along, whence this sudden urgency about a changing “us”? Why not see the digital revolution as just the latest wave of technology, no less a boon than steam power or electricity and hardly an occasion for a top-to-bottom reconsideration of all things human?
If Lanier sidesteps the question, we may at least thank him for raising it. Change may be constant, but the gradations are hugely variable, with degree at some point shading into kind. Consider that the transformations of the human to date have all been dictated by social shifts, inventions and responses to various natural givens: modifications of circumstance, in short. We have adapted over these long millennia to the organization of agriculture, the standardization of time, the growth of cities, the harnessing of electricity, the arrival of the automobile and airplane and mass-scale birth control, to name just a few developments. But the cyber-revolution is bringing about a different magnitude of change, one that marks a massive discontinuity. Indeed, the aforementioned Pre-Digital Man has more in common with his counterpart in the agora than he will with a Digital Native of the year 2050. By this I refer not to cultural or social references but to core phenomenological understandings. I mean perceptions of the most fundamental terms of reality: the natural givens, the pre-virtual presence of fellow humans, the premises of social relationships. We have seen more pressure applied to species transformation in the past century than in the previous several thousand years.
What is driving the change is obviously complex, and determining its scale is difficult. Crucial is the fact that up until the time of the shift, the critical post-1980 saturation that Palfrey and Gasser cite as creating the Digital Native, we mostly lived our lives with reference to an “out there”, a tactile, sensory, natural world and the habitations that we fashioned within it. Now, however, we increasingly live in formative relation to a world of signals and symbols. These are signals and symbols we have created ourselves, but they are so vivid and complete as to have the consistency of a world. The old planet still exists, of course, though increasingly we heed its reality through the lens of natural catastrophes, news of which reaches our screens through the airwaves—all of which brings me to Nicholas Carr’s The Shallows: What the Internet Is Doing to Our Brains.
The Shallows is a serious expansion of Carr’s widely cited Atlantic essay, “Is Google Making Us Stupid?” Carr began the essay by questioning his own growing difficulty with sustaining focus on longer texts. Easily distracted, impatient, he found himself unable to concentrate as he had before, and he suspected that the reason was his ever-growing daily reliance on screen reading and his use of the Internet. He was led to ask the obvious question, which is his book’s subtitle slightly modified: What is the Internet doing to our brains?
The Shallows offers the alarming answer: a great deal. Our consuming involvement with digital media is significantly altering our mental processes. And insofar as we are our mental processes (are we not?), this involvement is changing who we are. Our circuit-based ways of living are refashioning us into that yet elusive creature, the Digital Native (though that label confers a deceptively atavistic solidity on a vaporous and provisional-seeming entity).
The first half of The Shallows goes directly to neurophysiology, arguing in essence that our brains are highly plastic organs that morph their essential structure in response to what they are asked to do. And they do this morphing quite rapidly. Carr cites, for example, research done on the brains of London cabbies, pointing to a direct correlation between their daily work and an enlarged hippocampus, that being the part of the brain that “plays a key role in storing and manipulating spatial representations of a person’s surroundings.” The hypertrophy was matched by a proportionately smaller anterior hippocampus, “apparently a result of the need to accommodate the enlargement of the posterior area.” And: “Further tests indicated that the shrinking . . . might have reduced the cabbies’ aptitude for certain other memorization tasks.”
Another experiment, no less suggestive in its findings, tested brain modifications in two sets of people who were asked to learn to play a simple, single-handed melody on the piano. One group practiced the piece on a keyboard while members of the other sat in front of the keyboard and imagined playing the song. The researcher, Alvaro Pascual-Leone, “found that the people who had only imagined playing the notes exhibited precisely the same changes in their brains as those who had actually pressed the keys.” Carr theorizes his way through a finely textured analysis to the conclusion that we “become, neurologically, what we think.” Our inner exertions have the power literally to change us. It is not just that we are what we think; we are how, with what tools, we think.
If the brain changes in specific ways based on the operations it performs (Carr’s difficulty focusing on long texts, for example), then it follows that our growing engagement with the fluid, quasi-neural network that is the Internet is capable of radically modifying our cognitive make-up. If there is any truth to this, and it’s hard to think there isn’t, then we can throw all those blithe “it’s just a tool” rationalizations right out the window.
Carr’s evocatively titled chapter, “The Juggler’s Brain”, gets right to the heart of the issue:
One thing is very clear. If, knowing what we know today about the brain’s plasticity, you were to set out to invent a medium that would rewire our mental circuits as quickly and thoroughly as possible, you would probably end up designing something that looks and works a lot like the Internet.
By outlining a kind of inventory of Internet functions, Carr reveals how those functions directly influence our cognitive processes. “Our use of the Internet involves many paradoxes, but the one that promises to have the greatest long-term influence over how we think is this one: the Net seizes our attention only to scatter it.”
What are the implications here? “Just as neurons that fire together wire together”, writes Carr, “neurons that don’t fire together don’t wire together.” This is more than a catchy mantra. It’s scientific shorthand for the fact that our daily Internet immersion is bending us steadily away from our base mental orientations and aptitudes. One way to reflect on the meaning of the change is to ask what is being lost. Carr puts it this way:
The mental functions that are losing the ‘survival of the busiest’ brain cell battle are those that support calm, linear thought—the ones we use in traversing a lengthy narrative or an involved argument, the ones we draw on when we reflect on our experiences or contemplate an outward or inward phenomenon. . . . [W]e seem to be taking on the characteristics of a popular new intellectual technology.
Carr is making a gigantic—and enormously disturbing—claim, and we need to muster some calm, linear thought to contemplate it. As an editor, teacher and long-term follower of the so-called public conversation, I see the eponymous “shallows” everywhere: in a political discourse that has become jittery and insubstantial, in a public life where values and seriousness seem to have melted into thin air. Or am I just screening through my bias?
I have brooded about the rise of the Internet and the waning of the print culture for decades. I think it through this way and that—analytically, nostalgically, apocalyptically and defensively—and the question I always return to is simple as can be: What is the import of our collective love affair with keypads and screens? How is it affecting the great intangibles—our thinking, our sense of initiative, our subjective self-grounding, our formulations of private and social meaning?
I don’t think the Internet (never mind the myriad other technologies of linkage) would distress me if I had faith that it could deliver, as its loudest boosters say it does, the values and virtues encoded in the system of print. If I thought that the digital realm could foster and sustain the kind of calm, linear, reflective thinking Carr describes, I would quiet my inner Cassandra. But it is undoubtedly, as Carr suggests, a system with enormous shaping power. And if he is right about the plasticity and wiring-and-firing of our neurophysical human endowments, then we must seriously consider the possibility that we are, as users, taking on with eyes wide shut the attributes of the medium. If this is so, then it must raise questions not unlike those Paul Gauguin used as the title for his monumental triptych: “Where do we come from? What are we? Where are we going?”
A comprehensive response to the possibilities Carr raises would require many pages (or much scrolling). One angle will have to stand for the many others that could be pursued.
Shortly after Carr’s 2008 essay appeared, influential pundit and digital-booster Clay Shirky responded in a post on the Encyclopedia Britannica blog that is well worth our attention. Shirky begins by identifying common ground: “I think Carr’s premises are correct: the mechanisms of media affect the nature of thought.” But whereas Carr sees cause for concern, Shirky takes the transformation as an inevitable challenge that we should look forward to meeting. The Internet is “an explosion of abundance”, here to stay, and any harking back to pre-Internet consciousness is a pointless exercise in nostalgia. No, he contends, “the main event is trying to shape the greatest expansion of expressive capability the world has ever known.”
This sounds at first blush like an exalting summons that one would be hard-pressed to decline. But lingering on it even a little reveals its bedeviling circularity: If the mind is being significantly shaped by the medium, what kind of shaping will that mind impose upon the cataract of information? What terms, structures, principles and ideals? If the medium shapes the mind instead of the mind the medium, what happens to concepts of autonomy and freedom? Shirky’s optimism appears grounded in a short-circuiting of human volition itself.
Shirky also makes various unsettling claims in the main body of his short text that epitomize a way of thinking rapidly gaining ground. Carr’s essay, he writes, is “focused on a very particular kind of reading, literary reading, as a metonym for a whole way of life.” He adduces Carr’s use of literary references and in particular his singling out Tolstoy’s War and Peace, which Shirky claims represents “the height of literary ambition and of readerly devotion.” Now that he has found his target, he warms to the attack. Disparaging Tolstoy’s novel for being “too long, and not so interesting”, Shirky then adopts the “hey, let’s face it” tone of the late-night stand-up comic. No one, he asserts, reads the supposedly great works anymore; television killed that activity decades ago, even though so-called “litterateurs” were for a long time allowed to retain their cultural status. But now, with the Internet-driven resurgence of reading, Shirky claims, we find no resurgent interest in these “cultural icons”, which for him is proof of “the enormity of the historical shift away from literary culture.” He twists the knife again: “The threat isn’t that people will stop reading War and Peace. That day is long since past. The threat is that people will stop genuflecting to the idea of reading War and Peace.”
Shirky is edging up on his big point now, which is that a certain kind of personality—the “complex, dense and ‘cathedral-like’ structure of the highly educated and articulate personality” (he borrows the words of playwright Richard Foreman)—is all but extinct. That style of thinking no longer sorts with the Internet’s processes and functions. “On the network we have”, he continues,
the bazaar often works better than the cathedral. . . . Getting networked society right will mean producing work whose themes best resonate on the net, just as getting the printing press right meant perfecting printed forms.
Shirky is ushering to the side not only the literary personality but, with it, a way of thinking that is a good deal more than a foppish adoration of supposed great works. Cashiering the readers of Tolstoyan megaliths, he is also, by taking the part for the whole, dismissing the ideals and legacies of the Enlightenment. Instead, he is calling us to mold our thinking to the ways of the Internet, which has suddenly been enthroned as the new procedural norm of things. Every bit as chilling as Shirky’s argument is the blitheness with which he accepts—no, rejoices in—the withering away of a whole culture, centuries in the making.
If this were just Shirky’s view, it would not much matter. But it isn’t, so it does. His words reflect an attitude and a set of assumptions that are multiplying as rapidly as they are thoughtlessly held. It’s as if Shirky has appointed himself spokesman for the party of the new. His assessments jibe closely with Lanier’s characterizations of the worldview of his fellow digerati—the impatience at and anger toward what they see as the old order and the readiness to give everything over to the Internet paradigm.
Let’s take seriously for a moment the idea that an utterly unprecedented shift is underway, one that recasts not just our behaviors but also our core assumptions. Following an electronic as opposed to mechanical paradigm, this transformation is dramatically more accelerated and more deeply formative of our way of apprehending the world than any previous technological transformation. The whole business is happening right in front of our eyes, in the space of a generation, and in so total and simultaneous a way that there is no place to stand outside of it in order to calmly assess what is happening. As Shirky writes, “the main event is trying to shape the greatest expansion of expressive capability the world has ever known.” I agree, but I cannot command his poise, for I have too many uncomfortable questions. What will be our shaping tools, what skills do we hope to command, and, given that all shaping is done with some end in mind, what do we imagine we are bringing into being? What are the consequences if it turns out that we just don’t know?