
AI in L&D: Concerned about Technology

Looking at my AI learning speedrun through a dialectical lens
Written by Coleman Numbers
Published on July 17, 2024

In my last post, I reflected on the mixed results of an attempt to do an “educational speedrun” with publicly available AI tools. I speculated towards the end of that post that my failure to learn anything meaningful stemmed from a particular philosophical relationship with technology. Specifically, I had an implicit but unwarranted trust in the native power of AI tools to achieve my ends.

I’d like to sketch out that type of relationship, which I believe is common, more fully. I believe this is necessary because, if we are to successfully implement artificial intelligence in human learning processes, we need to be clear on how we relate to AI. And that is going to require some more foundational thinking about the irrevocable pervasion of technology in everyday life.

Be ye warned, kids: if this were a Robert Rodriguez film, this is where big, flashing neon block letters would fill the screen: PHILOSOPHY GLASSES ON.

“Substackers Against the Machine”: A Digital Watchtower

In recent years, there’s been a growing movement of religiously subversive writers who resist the secular world’s blithe acceptance of technological pervasion. These writers are typified by thinkers such as the environmentalist-turned-Orthodox Christian Paul Kingsnorth, whom I discussed in a previous blog post.

Though they vary by faith, politics, nationality, and everything else, these writers might productively be labeled by the affectionate, somewhat ironic nickname given by Christian writer Meg Mittelstedt: “Substackers Against the Machine.”

“Substackers,” of course, refers to the platform where this enclave gathers. “Machine,” in Mittelstedt’s apocalyptic cast, refers to the technological Moloch to which we sacrifice our “time…money…adoration…focus…” and all else. More precisely, Mittelstedt writes:

“The ‘Machine’ is a way that we talk about the powerful system—or force—behind our technology. It’s a way to talk about the principality or power of our technology: the system it has created that is too big to fail. I don’t conflate the Machine with super AIs or transhumanism. These are just extreme examples of the influence the principality or power of the Machine can or may wield.”

Kingsnorth, Mittelstedt, and others see “the Machine,” then, not so much as a grand conspiracy among global capitalists, turnkey totalitarians, and Ray Kurzweil’s yacht club, but rather as the confluence of bureaucratic, economic, cultural, and technological forces which drive civilization to integrate ever more deeply with technology. There need not be coordinated intention behind this process for it to be dangerous, the Substackers Against the Machine (SAMs) believe. All that needs to happen is for humanity to surrender itself to Machine-worship, one web app and smart appliance at a time.

“It’s a question of what you create,” Kingsnorth explained in a recent podcast, “and what you create is what you serve.”

In other words, the Substackers fight (and write) against the very attitude that drove me, in my last post, to depend on Microsoft Copilot as the be-all and end-all of learning. My uncritical use of that technology was, at best, ill-conceived.

Before we dive further into that attitude, though, it’s worth investigating the other side of the SAMs’ apocalyptic coin: transhumanism.

Transhumanism: A Synthetic Messianism

Transhumanism is a much broader, more diffuse movement than the SAMs. It’s probably inaccurate to classify the phenomenon as one movement at all. There are Russian transhumanists, American transhumanists, libertarian transhumanists, feminist transhumanists, secular transhumanists, Buddhist transhumanists, and (to my particular delight) Mormon transhumanists, among others. And of course, there are intersections and branches and echoes of all these movements in popular culture.

With this level of diffusion, it can be difficult to pin down a precise definition of transhumanism. But the philosopher Nick Bostrom’s definition is one I see cited fairly often:

“‘Transhumanism’ holds that current human nature is improvable through the use of applied science and other rational methods, which may make it possible to increase human health-span, extend our intellectual and physical capacities, and give us increased control over our own mental states and moods. Technologies of concern include not only current ones, like genetic engineering and information technology, but also anticipated future developments such as fully immersive virtual reality, machine-phase nanotechnology, and artificial intelligence.”

The movement amounts to a rejection of the SAMs’ concern that technology will destroy, subjugate, or otherwise diminish human flourishing. Instead, technology—especially the heady, sci-fi-sounding augmentations to human nature—becomes the basis of future human flourishing.

The fundamental difference between the SAMs (and the many, many people around the world who, for various reasons, align with their perspective) and the transhumanists seems to be, then, a difference in how they position technology relative to humanity. For the SAMs, technology—and, most immediately, AI—means danger. For transhumanists, AI and other technologies are the key to growth.

My experience with Copilot—along with my recent exposure to both the SAMs and transhumanism—has gotten me thinking: what is the proper relationship we should have with our technology? If it isn’t slavish dependence, is it religious-apocalyptic suspicion?

Should we do everything we can to dismantle the Machine—bomb data centers, emigrate en masse to rural communes, outlaw thinking machines à la Dune? Or, as a member of Homo sapiens who is—compared with most other Homo sapiens who have ever lived—deeply transhuman, should I place implicit trust in technological tools that have, by all accounts, increased lifespans, lowered infant mortality rates, and already made many humans more free?

As you can probably sense, there’s part of me that’s naturally sympathetic to the transhumanist perspective. But I’m also sensitive to the concerns that the SAMs raise, and I think both views have direct ramifications for the way we incorporate AI in L&D. The problem of what technology means, and how we use it, therefore, is worth a closer look.

Concerning Technology

Luckily, someone far smarter and more German than me already spent quite a bit of time thinking through this problem.

In 1949, the philosopher Martin Heidegger[1] delivered four talks in Bremen, Germany. The second of these would become a landmark essay—“The Question Concerning Technology,” which Claremont McKenna professor of political philosophy Mark Blitz describes as Heidegger’s “[attempt] to show a way out” of uncritical immersion in a technological sphere—“a way to think about technology that is not itself beholden to technology.”

This way out consists of elucidating what Heidegger calls “the essence of technology.” Vexingly, Heidegger emphasizes that this essence is not found in the devices, methods, or even scientific concepts underlying modern technology. Instead, the essence of technology is a way of being and moving through the world. It’s an orientation that informs the way humans deal with nature.

“Technology,” Heidegger explains, “is a way of revealing.” Namely, modern technology lays out the world in a way that reveals it to be susceptible to extraction and compartmentalization by human beings. The Rhine becomes a potential for hydroelectric energy; the earth becomes so much coal and ore. And, while Heidegger doesn’t mention this example, it should be clear to us today: human creativity and language become training data for large language models.

Heidegger spends most of the essay strenuously outlining this “mode of revealing.” In the 1977 English translation by William Lovitt, the technological orientation towards the world is also called “enframing.”

Modern technology is an “enframing” because, in addition to organizing our perceptual world in a certain industrialized way, it “challenges [human beings] forth, to reveal the actual, in the mode of ordering, as standing-reserve.” (Standing-reserve, here, is Heidegger’s name for nature reduced to a stock of resources awaiting extraction.) This enframing is the motivating force behind all of our technological activities—it is the process by which humans begin to think of themselves as nodes in a vast machine designed to process and subdivide the planet.

We can see why Heidegger might be read here as deeply sympathetic to the SAMs. In fact, later in the essay, he explicitly calls the essence of modern technology a “danger.” Again, the reasons for this are nuanced, but in effect Heidegger worries that, as we become more deeply enmeshed with this “enframing” process, we will lose sight of a certain essential humanness, a core identity that opens us up to other ways of viewing the world. We will, philosophically if not materially, be fused to the essence of technology. Sound familiar?

This is, in some ways, the dangerous attitude that propelled me to approach my educational speedrun the way I did. I saw Copilot—an LLM—as already so much an extension of my own cognition. I took it for granted that I could draw on the aggregated wealth of human language and harvest it for my own intellectual ends. I was disappointed as a result. I succumbed, in a small way, to the danger that Heidegger points out.

That said, there’s also a decidedly transhumanist bent to Heidegger’s view of technology: “The essence of technology must harbor in itself the growth of the saving power” (emphasis added). Heidegger isn’t precise about what this “saving power” is—but he is explicit about how the exploitative, extractive, even coercive essence of modern technology is involved:

“It is precisely in this extreme danger [of technology] that the innermost indestructible belongingness of man…may come to light, provided that we, for our part, begin to pay heed to the essence of technology.”

In other words, recognizing the true nature of technology—an extractive, systematizing world orientation—helps us awaken to the contrasting possibilities native to humanity. We are endowed with powers and resources and modes of revealing outside the purview granted by technology. We can see the world in ways colored by our instinct for beauty, compassion, and love. Crucially, for Heidegger, the “saving power” in recognizing technology’s essence leads us straight back to art, to poetry, to humanistic endeavors. The revelations of the aesthetic sphere can equip us to view technology clearly—but we only achieve this clarity if we engage with and confront the essence of technology head on.

Moving Forward

Advancements in AI are not slowing down, and neither will its implementation in L&D or any other field. This is why I see the project of the SAMs as ultimately myopic and unproductive.

At the same time, we don’t need to look far to see the consequences of an unreflective relationship with technology. A couple of weeks ago, the Los Angeles school board voted to ban cell phones for the entirety of the school day, and U.S. Surgeon General Vivek Murthy floated warning labels on social media platforms aimed at the parents of teens. Collectively, we’re waking up to the dangers of unfettered technology access—namely, technology’s unfettered access to our minds and our lives. But a radical, transhumanist-style embrace of technology isn’t helpful, either.

Instead, we need to attend to the essence of technology. How does the use of chatbots in our learning systems, for example, shape the perceptions of our learners? Does remote, on-demand training impede or enhance employees’ sense of belonging? What changes about how employees approach their work when every part of their job is fragmented across one or more CMSs?

Additionally, there is ample room to bring more humanistic thought and strategy into the workplace. Some of the phase shifts in compliance and HR practice, like Diversity, Equity, and Inclusion policies, emerged just as much out of academic critical theory as from immanent social challenges. Attending to these ideas actively—not merely absorbing them through cultural osmosis—can be the difference between a middling organization and a successful one.

And, of course, this synthesis must extend to the personal. What assumptions about consciousness, intelligence, and creativity do you import into your interactions with AI? How does instantaneous generation of visual, textual, and audio content influence the way you think about your own generative process? What does a relationship with a functionally omniscient personal assistant do to your relationships with limited, but much more complex and interesting, human beings?

All these questions, and more, await us. As Heidegger writes: “Yet the more questioningly we ponder the essence of technology, the more mysterious the essence of art becomes.”

[1] It’s worth mentioning that Heidegger was, in fact, a member of the Nazi Party. We’ll treat the ideas here separately from the man who expounded them.
