Recently, I’ve been thinking about what it means to be free. This is for the predictable reasons: the United States’ Independence Day was at the beginning of this month and the country is hurtling towards a fraught presidential election.
But I’ve also been thinking a lot about what freedom means with respect to artificial intelligence. When social media algorithms monopolize attention, when e-commerce algorithms overdetermine shopping patterns, when suites of generative AI tools reduce creative cognitive tasks to a cleverly-worded prompt, what does it mean to be a conscious human agent? How do I negotiate my choices in a world where synthetic decision-making systems, whether they be Instagram’s explore function or Claude 3’s helpful suggestions about possible blog posts, seem to constrain those choices to curated, pre-realized options?
And, in a bigger sense, how do I preserve my humanity in an environment that’s inhumanely systematized?
These aren’t new questions, but doubtless they’ve become more germane in the age of AI. And, for me, they’ve been reiterated by a wonderful recent book, I, Human, by Tomas Chamorro-Premuzic. Chamorro-Premuzic (or “Dr. Tomas” as he refers to himself on his website) is a researcher and prolific communicator in business psychology and a professor at Columbia University and University College London. His focus is talent management and people analytics as well as how AI affects human performance.
In I, Human, Dr. Tomas distills insights from psychological and behavioral science literature—as well as plenty of data about how AI is shaping us at work, home, and everywhere else—into an eminently readable, illuminating profile of homo sapiens in the unfolding era of machine intelligence.
Dr. Tomas, while cautiously optimistic about AI, spends most of his pages highlighting how we’ve relinquished control to algorithmic systems, from the ways AI-driven social media platforms sap us of useful attention to how an ecosystem of instantaneous content makes us more narcissistic, more prejudiced, and more biased.
With this blog’s recent focus on the intertwinement between technology and humanity, I figured it would be helpful to turn to a psychologist for how to address some of these clear and present dangers. Dr. Tomas’ recommendation?
“We must demand that AI plays a bigger, more impactful role than it has played so far. If we can use technology to increase self-awareness and provide us with better [feedback]—including the things we may not like so much about ourselves—and highlight a gap between the person we are and the person we’d like to be, then there is clearly an opportunity to turn AI into a self-improvement tool and partner.”[1]
With this in mind, I wanted to share a few key insights from Dr. Tomas’ book about how we can mitigate the negative psychological effects of AI-inundation. These are broad interventions applicable to every domain of life, to be sure, but their ramifications for learning in the workplace should be clear.
At this point, it’s a truism that the digital age is a distracted age. But Dr. Tomas shares some figures that alarmed and awakened me to the predicament modern humans are in—especially those of us who make our living by sitting in front of screens:
“Knowledge workers…waste an estimated 25 percent of their time dealing with digital distractions,” a deficit that, per the Economist, has cost the US up to $650 billion a year.[2]
Dr. Tomas’ fix is less about spurning smartphones altogether, though, and more about seeking out that which has always drawn the most intent and productive human focus: meaning. This necessarily entails spending a certain amount of time away from the “endless stream of great TV,” to quote the Arctic Monkeys, but this is an asceticism with a promise: that in the quiet moments we carve out for ourselves, we’ll find beauty that entices us to return, again and again, to silence rather than to LinkedIn doomscrolling. And this, as a happy accident, will mean greater focus and more productivity.
Dr. Tomas identifies patience—and, by extension, self-control more generally—as another major casualty of the AI age. Digital content has evolved to hijack our natural impulses. As a consequence, we’re flooded with micro-decisions (what to watch, what to listen to, which posts to like or comment on) that tax our daily reserve of impulse control, which in turn makes us less likely to “inspect, analyze, or vet” information in a way that promotes real learning and resistance to false or misleading data.[3]
To fill this daily reserve of patience and self-control, we need to care for the bodily system that undergirds our cognitive functions. Unsurprisingly, this means more sleep and more exercise. Both, Dr. Tomas explains, are linked to greater self-control.[4]
The digital revolution has come about largely because we’ve built machine intelligences that in increasing measure resemble human cognition. The surprising but related outcome of that revolution is that we have come, through repeated exposure, to resemble our machines. And with that resemblance comes the flattening of human experience.
“Sometimes,” Dr. Tomas writes, “it seems as if we’re all actors, performing the same role and reciting the same lines, night after night. When we work and live in a digital world, more and more deprived of proper analogue experiences, we are forced to remain constantly in [the same] role: we browse, click, and react; we forward, classify, and ignore. In the process, we risk ignoring life as it once was, simultaneously simpler and richer, slower and faster, serendipitous yet certain.”[5]
Dr. Tomas suggests that we conduct our lives so that they are less predictable to statistical analysis. This requires…well, creativity. It might mean taking up new hobbies, or exploring one’s city or town to find new bars, bookstores, or other haunts that don’t show up on our regular routes. It might mean using algorithmic services in unpredictable ways, making choices that run counter to the model a company has built of you. The upshot of all this unpredictability is exposure to new ideas, experiences, and people—just the material needed to jumpstart creative processes in your work and personal life.
In the learning context, specifically, unpredictability might mean going to unexpected sources to illustrate points, or using unorthodox methods to help learners achieve a certain competency. Are you trying to explain the importance of weekly in-person meetings in onboarding training for a hybrid position? Consider drawing on the poetry of 19th-century American poet Walt Whitman, whose interest in embodiment and the potency of the human body as an agent for connection and progress set him apart as a radical artistic voice.
Or maybe you’re trying to help a team understand the fundamentals of the much-vaunted “agile workflow.” Might the fundamentals of parkour—another type of agile process—be a helpful athletic analogy for what a team needs to accomplish?
Maybe this all sounds a bit silly to you. But maybe that’s the point: the economic and technological processes that dictate the conventions of modern professionalism are making us more stale, more predictable, less susceptible to the directed chaos that breeds innovation. Perhaps it’s time to reinject some of that into the learning experience.
I’ll leave you with those three takeaways from Dr. Tomas, dear reader. The rest of the book is phenomenal, and a pretty even-handed look at how we interact with AI for better and for worse. If you’re at all interested in what it looks like to be a mindful, humanistic, but nevertheless engaged citizen of the digital age, then I, Human is a must-read.
[1] I, Human, 152.
[2] Ibid., 39.
[3] Ibid., 54.
[4] Ibid., 58.
[5] Ibid., 112.