We begin with a cab driver, and let us make them the best cab driver in our city. Our driver has been driving a cab for many years and has memorized all the street names and best routes. They take pride in knowing the fastest and best ways to travel from one part of the city to another, accounting for traffic at different times of day.
Now, along comes an app that can calculate the best way to get from point A to point B, taking account of real-time traffic conditions. The app is (hypothetically, now) better at navigating the city than our cab driver… but don’t tell them that!
Luciano Floridi, best known as “Google’s Philosopher,” gives us this example to illustrate the way in which our “smart technologies” challenge us at the level of identity. (See "Lessons from Luciano Floridi, the Google philosopher") Our cab driver must face off against that app, and either accept that the app is better at navigating the streets, or else enter a state of denial.
(Traditionally, this role has been reserved for the Other in the problem of intersubjectivity - for example, the Other challenges our ordering of the objective world, its being there for us. Now it seems that “smart technology” is presenting us with a similar challenge.)
Joe Gelonesi, the Philosopher’s Zone’s presenter, remarks how: “We’re not just cogs operating the machine, we are now also its product and its messenger.” Whether the technology be the world wide web, the press, the locomotive, or fire for that matter, as Heidegger has taught us, the essence of technology is "en-framing." Historically, our frames of reference have shifted, our understanding de-centered and shaken loose. In Floridi’s terms,
First came the Copernican revolution, in which the earth was found to orbit the sun; then Darwin’s theory of evolution, to make monkeys of us; then Freud’s “discovery” of the unconscious. And now we have what Luciano Floridi has named the “Fourth Revolution” in his new book of the same title: the digital revolution has brought with it the need to qualitatively analyze information.
Gathering data at a hitherto unprecedented scale has not made understanding easier but harder. In fact, it becomes easier to see what patterns one wants to see, to reinforce bias. According to Floridi, data is data is data, and it’s nothing without an interpretive framework. As he puts it: “If anything, big data is generating a need for more well thought out questions.” It is the questions that we put to data that come to matter.
The fear, of course, is that machines animated by data will get to be smarter than humans and come to replace or challenge us as a species. (Let it be noted that the fantasy side of this fantasy/fear rubric is that man will be able to reproduce himself outside of the physical process of biological reproduction.) This is named the “existential risk” by our interlocutors:
“If machines can outsmart us at some thought processes, is there an ‘existential risk’ that machines will take over completely? Could we lose our sense of who we are on a personal level?”
This is a problem only insofar as we humans think of ourselves as thinking things or thinking machines, but this is a relatively new way of thinking about what it means to be human, a product of modern philosophy and the industrial revolution. But what if artificial intelligence de-centers that belief, and we face the question: If not a thinking thing, what else? What else are we “humans” capable of becoming?
Since we tend to think in binary opposites in the West, the first and most obvious answer (to the question of what comes after thinking) is to say that the pendulum is swinging in the direction of feelings or emotions, and perhaps empathy would rise in prominence. But arguably, simply affirming the opposite of the rational/emotional divide only serves to reinforce the dichotomy, and keeps us locked into the old framework.
Technically speaking, however, the operational dichotomy is the Cartesian one of mind/body, and it is true that the exploration of embodiment and spatiality has been underway since the middle of the twentieth century. (Mind/consciousness and time are correlates of matter/body and space in Western metaphysics.) Heidegger, Merleau-Ponty, Foucault, Lefebvre, Bachelard, and the birth of "social geography" all speak to an increased interest in the study of space. Could we come to understand our human selves as embodied, spatial beings, and how would this frame our possibilities going forward? AI is body-less, an intelligence created in an immaterial, invisible realm.
But there is a third, perhaps more radical possibility looming: that the very idea that what it means "to be" (i.e., human) can be reduced to a single framework becomes untenable. Can we (yet) imagine a situation in which a common, underlying "human nature" is no more, a world in which there are different kinds of humans as well as differently embodied and disembodied forms of intelligence? Does disembodied intelligence (such as the super-computer that Google is rumored to be working on) have being, count as a being?
But Floridi's point is that this kind of challenge is a long way off, and that entertaining our fears about AI and computers taking over the world is a way of distancing ourselves from the real ways in which our smart technologies are already challenging our identities.
What do you think? We invite your comments below.