It was the beginning of Junior year, and my friend Tara was helping me unpack my junk into our off-campus house. Like the good, patient individual that she is, she did not question how I had remembered to bring 27 different notebooks and forgotten my toothbrush, but all the same, there was something in my collection even she could not overlook.
“What is this?”
I turned to see her holding my well-used stovetop tea kettle, her expression that of a baffled archeologist.
“It’s a kettle, silly. For tea.”
“The house already has two electric tea kettles, of course…”
“I know, but wouldn’t it be nice to have an old-fashioned one that whistles and has that charming, old-world aesthetic?”
My eloquence was met with a look of skepticism. “This takes three times longer than an electric kettle. You barely have time to make tea in the morning as it is.”
“Ah, but the sound it makes…”
“Will wake up all your sleeping housemates.”
“Uh…”
“Plus it takes up twice the space and holds less water.”
“Well…”
“Please take it away before you try to chop up our microwave as fuel for a hearth.”
Tara and I represent the two camps of the unofficial, ongoing technology war. She loves technology — the newer and sleeker the better. I, on the other hand, am approximately seventy-four in technology years, meaning I spend most of my time with technology grousing at it, and have a tendency to beat people over the head with Fahrenheit 451 whenever I see commercials for those horrifying virtual reality goggles. If you put us together, you get a normal person.

But as much fun as we might have engaging in our polarized, often facetious arguments about technology, it can be irritating to see serious discussions on the subject become bogged down in their own ideological trenches. All too often, people either laud technology as mankind’s path to perfection or demonize it as a great evil. There are innumerable pieces written from each extreme, some of them good, some of them not, but in the midst of all the debate and shouting, we can forget that technology itself is amoral, often in much subtler ways than one might think. For example, every technological advancement means that something about the older way is necessarily lost. Often, as in the case of my obsolete, clunky tea kettle, the loss does not merit much lamenting. Sometimes the advancement may bring such advantage and excitement that we entirely forget — or perhaps don’t even realize — what was lost because of it. But ignoring these quiet changes can be risky, since they often bring about unforeseen consequences. Rather than view modern technology as a terrifying and destructive evil or as the most glorious advancement of mankind, a more practical, constructive viewpoint considers the world of small, nuanced trade-offs that surrounds each advancement in technology. By defining and examining the precise gains or losses, we can uncover the human element behind them, moving the focus from the technology itself to the elements of human nature it brings to light.
Rather than focus immediately on some of the weightier aspects of the technological debate, why not examine these ideas in the realm of a far more neutral topic, such as music? After all, music has faced immense technological change in the past 100 years. With the invention of the phonograph in 1877, songs could suddenly be recorded and particular performances could endure long past a performer’s career. This made music vastly more accessible and varied, and virtually no one, my technophobic self included, laments such a change. Nevertheless, something was lost because of it. Imagine what the experience of music must have been like before society had found the means to record it. Attending a performance was truly an event; each good performance was a unique treasure that would exist in that moment and in that moment alone. Depending on a piece’s complexity, you might only get to hear a favorite a handful of times in your life — but with the consequence that each time you heard it, it was exquisite, wondrous, delectable. There was a reason beyond mere poetic effect that Walt Whitman described music as a force which “shakes mad-sweet pangs through my belly and breast” and “wrenches such ardors from me I did not know I possess’d them.” Now, because music is so wonderfully accessible, the modern listener often finds a piece he enjoys and immediately listens to it ad nauseam, wringing out every ounce of pleasure, until the song seems annoying and dull, and is dropped from the playlist in favor of a new favorite. We listen with greater frequency, but at the expense of the extreme wonder that scarcity provided. Likewise, this consumer attitude is part of the reason we have moved from the symphony and the aria to Ed-all-my-songs-are-variations-of-each-other-Sheeran. (Nothing personal against Ed Sheeran, folks, but it’s true.)
Pop music is much simpler to create, learn, and perform, and much better suited to fickle listeners looking to burn their way through another song. All of this speaks to the human tendency to give in to instant gratification and satiate our desires. This is not some new flaw miraculously birthed by technology; rather, it is a reiteration of an age-old problem. The technology is neutral, but nevertheless, it can enable a flaw in human behavior.
A similar problem arises out of music’s incredible portability. As ambient sound becomes ever more pervasive, it is important to realize that this music is not simply filling in a void; rather, it is taking the place of silence, nature, and human voices. Once again, this is not meant in an entirely negative sense. Recorded music livens parties, brings drama to stories, and adds entertainment to the dullness of extended travel. But as much as one might love being able to turn on music in the car or slip on headphones anytime, how many important conversations have been unintentionally waylaid as a result? When we feel lonely or depressed, it is often a good consolation to put on some music to “brighten things up.” But sometimes it is during those quiet walks or still nights spent staring at our ceilings that we are forced to struggle with the things in our life that we would rather shove aside. Music might be soothing, but we should take care it doesn’t become our opiate. Though it is an extreme comparison, such examples recall an excellent scene in Aldous Huxley’s Brave New World, in which two characters take a helicopter ride on a stormy night. For both characters, the vast wildness of the natural world challenges their safe, vapid, meaningless lifestyles. But while one character welcomes this disturbance, the other is less than impressed:
“It’s horrible,” said Lenina, shrinking back from the window. She was appalled by the rushing emptiness of the night, by the black foam-flecked water heaving beneath them, by the pale face of the moon, so haggard and distracted among the hastening clouds. “Let’s turn on the radio. Quick!”
Readily-accessible music, like the other entertainments of Huxley’s pleasurable dystopia, ceases to be a simple joy, and instead becomes a means to keep one isolated in an artificial world of self.
This example of music, harmless as it is, illustrates how technological dilemmas often have less to do with technological ability and far more to do with human nature itself. But what happens when we apply these criteria of gains, losses, and human nature to something with more dire consequences? In recent decades, medical technology has improved drastically. And yet, in the shadow of its enormous benefits, it has also brought us the Pandora’s box of bioethics. If you search “bioethics topics” on your computer, prepare to confront thousands of websites, opinions, and scientific journals. Georgetown University has an entire library devoted to Bioethics Research, with approximately 85 broad categories of research, including Abortion, Behavioral Research, Gene Therapy, Organ Donation, Personhood, Stem Cell Research, and Psychotherapy, to name a few. As our ability to tinker with nature becomes more complex, so do our responsibilities and ethical dilemmas. And, just as the music example illustrated, humans have the potential to abuse any advance they might create. The same ultrasound equipment that is used to aid in the safe delivery of a baby is also used as a tool in abortion clinics around the world. In other words, the specific loss in this case might be the natural protection that arose out of mankind’s inability to interfere. Consider, for example, the practice of IVF, or in vitro fertilization, in which egg and sperm cells are combined in a laboratory, and then placed inside a woman’s uterus. This technology can be a huge blessing to couples struggling with infertility, but still raises difficult questions even in its most well-intentioned uses. WebMD, on its information page for in vitro fertilization, states that “Any embryos that you do not use in your first IVF attempt can be frozen for later use. . .
If you do not want your leftover embryos, you may donate them to another infertile couple, or you and your partner can ask the clinic to destroy the embryos. Both you and your partner must agree before the clinic will destroy or donate your embryos.” Why does the clinic need permission to donate or destroy the embryos? Because, technically speaking, they are the potential children of the in-vitro clients. This raises the question: if human life begins at conception, what is the rightful course of action to take with these leftover embryos? Is there any difference between a viable embryo in a lab, and a viable embryo in a womb? Or if life does not begin at conception, as pro-abortionists might try to argue, when does it suddenly “begin”? These are difficult questions, and they need to be considered before one plunges blithely ahead in the name of technology. Moreover, these questions are just the ones that arise from a straightforward, infertility-combating use of the technology. What happens if a fertile, healthy couple decides they would like to artificially select certain genetic characteristics for their unborn child?
If you think this sounds like science fiction, think again. On the 14th of February, The New York Times published an article entitled “Human Gene Editing Receives Science Panel’s Support,” which opens with the following sentence:
“An influential science advisory group formed by the National Academy of Sciences and the National Academy of Medicine on Tuesday lent its support to a once-unthinkable proposition: the modification of human embryos to create genetic traits that can be passed down to future generations.”
Suddenly, our ethics scenario just became very interesting. Furthermore, even though the advisory group “endorsed only alterations designed to prevent babies from acquiring genes known to cause ‘serious diseases and disability,’ and only when there is no ‘reasonable alternative’,” we are still faced with the inescapable fact that this technology, once it is properly developed, could just as easily be used in very different ways. Just as the technological advances in music have led to abuses of its accessibility or to a more consumeristic attitude, so this new research could be used to prevent debilitating genetic diseases…or it could lead to a Gattaca-like scenario of genetic selection and caste systems. Advanced medical technology can help us harness the power of nature in order to overcome the problem of infertility or engage in disease prevention, but it comes with the risk that the unborn may lose some of the protection our technological ignorance afforded them. The music example has shown us humanity’s tendency to abuse technology, and we must remain aware of this as we advance into this new world of genetic modification, taking care that it doesn’t become a Brave New World. Losing our wonder over music is lamentable, but no great tragedy. Losing our wonder over conception and human life would be a disaster of the highest degree. It is a sad truth, but the only things absolutely guaranteed protection from human abuse are those that are inaccessible to it. Though the article acknowledged that “It will probably be years before gene-editing techniques tested in animals can be shown to work in humans,” we need to consider these ethical questions in the here and now. We can neither hide from this technology, nor naively assume that vague prohibitions against enhancements will be enough to stem the darker side of human nature.
In every case, we must evaluate what exactly is being lost or traded away with each technological advancement: if those things are useless, let them fall away; if not, retain them, fight for them, even if it means placing limits on potential technological power. If we can remain aware of the human element in technology, we can avoid both a naïve belief in progress and a fear of our own shadows.
Now, if you’ll excuse me, I think I hear my tea kettle whistling.
Katie Davenport is a junior studying English and art.