
Educational Speedrun with AI

Or "How I Failed to Learn with AI"
Written by Coleman Numbers
Published on June 24, 2024

This blog post was originally titled “Doing an Educational Speedrun with AI.” I have since appended a subtitle to that title: “How I Failed to Learn with AI.”

The premise of my failed—which is to say, informative—quasi-experiment rested on the often-touted idea that generative AI has the power to radically augment and boost human productivity. There are many policy papers and think-pieces that explore this idea. And, consequently, there’s been a growing tide of skepticism towards this proposition.

Productivity optimists like the ones who wrote the policy paper linked above argue that, despite the obvious inequalities and displacement that might result from AI integration, the technology also has the potential to be “human-complementary”: to create new productive tasks and free up people’s brain space for “expert work.” These positive outcomes are usually predicated on broad policy recommendations being followed.

This is a seductive vision of the future, and one that I generally find plausible, but I’ve noticed one thing about these papers: they often lack specifics. Probably by design, these relatively conservative studies keep themselves from speculating about the nitty-gritty when it comes to how, exactly, generative AI is to become a valuable tool in onboarding, training, and upskilling. This vagueness has also, admittedly, been a feature of my own writing on the subject.

Which isn’t to say holistic thinking isn’t valuable. But I’ve become more and more curious about the how of AI—how do you prompt LLM-based systems effectively? What should the role of third-party LLMs be in an organization’s training and learning apparatus?

To that end, I set up a little test for myself—the aforementioned “educational speedrun.” The test is pretty straightforward. I was partially inspired by Ethan Mollick’s blog post about how he put together a product launch in under 60 seconds. But my own experiment was slightly different, both because I wanted to focus on learning and because I’m not the prompt wizard that Mollick is.

Methods

For my test, I wanted to see how much I could learn about a random topic within an hour. The thrust of this test was less about understanding AI’s integration in a particular training modality, like an e-learning course or an organized seminar, and more about the tool’s general capacity to augment human learning. To keep things realistic, I tried to simulate working conditions at an actual organization: I mainly used the Enterprise version of Microsoft Copilot, provided by my organization (in this case, my university), as well as the free version of ChatGPT.

I didn’t spend much time worrying about the vicissitudes of prompt engineering because, after all, I don’t think the average worker, at this stage, is going to be very well-trained in the arcana of LLM prompts.

To make the experience as “pure” as possible, I asked Microsoft Copilot to give me 10 random subjects that I could learn about. While #1—sharks—was extremely appealing, I decided to go with #10—money management. I reasoned that, as much as sharks enthrall my 10-year-old brain, my 20-something brain would be far more engaged by a topic more immediately relevant to my day-to-day concerns (sorry, sharks).

I spent the next hour querying Microsoft Copilot about money management. What I came away with was a 7-item collection of recommendations, from daily budgeting to retirement planning. This document, collated by ChatGPT (which was a little better at retaining specifics than Copilot itself), was built from 10 or so prompts and responses with Copilot.

Initially, Copilot gave me a bulleted list of seven money management recommendations; I then asked it to elaborate on each point. In return, Copilot further subdivided each point into a more detailed bulleted list, with citations leading back to blog posts and articles from credible sites like Investopedia and Forbes. Here’s Copilot’s elaborated response on the topic of emergency funds, for example:

"What is an Emergency Fund?

  • An emergency fund is a dedicated cash reserve set aside for unplanned expenses or financial emergencies.
  • Examples include car repairs, medical bills, home repairs, or sudden loss of income.
  1. Why Do You Need It?
    • Without savings, even minor financial shocks can set you back.
    • Research shows that individuals with less savings struggle to recover from emergencies and may rely on credit cards or loans, leading to debt.
  2. How Much Do You Need?
    • Aim for three to six months' worth of living expenses.
    • Start with small goals, like saving $5 a day, and gradually increase your savings².
  3. How to Build It?
    • Create a savings habit: Consistently put money away.
    • Set up direct deposits or automatic transfers.
    • Save unexpected income (e.g., tax refunds).
    • Keep saving even after reaching your goal² ⁴.

Remember, even a small emergency fund provides financial security and peace of mind! 😊🌟

Source: Conversation with Copilot, 6/14/2024
(1) Emergency Fund: What It Is And How To Start One | Bankrate. https://www.bankrate.com/banking/savings/starting-an-emergency-fund/
(2) How to Build an Emergency Fund - The Wall Street Journal. https://www.wsj.com/buyside/personal-finance/how-to-build-an-emergency-fund-01673022469
(3) An essential guide to building an emergency fund. https://www.consumerfinance.gov/an-essential-guide-to-building-an-emergency-fund/
(4) Emergency Fund - Investopedia. https://www.investopedia.com/terms/e/emergency_fund.asp"
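Copilot’s sizing rule, for what it’s worth, reduces to simple arithmetic. Here’s a minimal sketch in Python of that three-to-six-months guidance (the $2,500 monthly budget is my own hypothetical; none of this code comes from Copilot’s output):

```python
import math

def emergency_fund_range(monthly_expenses: float) -> tuple[float, float]:
    """Return the (low, high) target range: three to six months of expenses."""
    return 3 * monthly_expenses, 6 * monthly_expenses

def days_to_reach(target: float, dollars_per_day: float = 5.0) -> int:
    """Days of saving at a fixed daily rate needed to hit a target."""
    return math.ceil(target / dollars_per_day)

low, high = emergency_fund_range(2_500)  # hypothetical $2,500/month in expenses
print(f"Target range: ${low:,.0f} to ${high:,.0f}")  # $7,500 to $15,000
print(f"Days at $5/day to reach the low end: {days_to_reach(low)}")  # 1,500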

After running through the same procedure for all 7 items, I asked ChatGPT to put this information all together in a single document (this, admittedly, happened after the hour of “speedrunning” had elapsed).

Results

While in theory Copilot gave me a wealth of helpful, concise information on the topic, in practice I don’t know if I learned very much about money management from the exercise. At least, I don’t know if I retained much.

This was largely because of environment and context. As interested as I was in learning about money management, I didn’t have any immediate way to apply this information. I think Copilot’s preferred format—bulleted lists leading back to links—made it easy for me to skate over the actual content of its findings altogether. For most of the hour, I spent time perusing the various articles that Copilot had referenced. These were helpful, but again, without an actionable thrust to the activity, I wasn’t sure what to do with this information.

I was encouraged by the various resources that Copilot led me to. I think each of the articles it referenced was reliable and informative, and each probably could’ve led me to perform some money management action given more time.

From this experience with the tool, I’ve learned that Copilot is more useful as a search function than as an instructor or tutor. Which makes Copilot somewhat redundant, since I’m not sure my experience would’ve been significantly different had I just identified a few key words and fed them to Google or Bing. That said, Copilot did offer a helpful if probably reductive framework for understanding the problem of money management—a framework I wouldn’t have thought to ask for if I’d been using a traditional search engine.

There were other factors at play that interfered with meaningful learning, too—like the horde of teenage summer campers that swarmed through the university’s student center where I worked remotely, or the fact that I was in the middle of a religious fast that made my head dizzy.

The failure of this learning experiment, then, was more methodological than technical. Which, on its face, makes this whole endeavor sound kind of silly and useless. But, for me, this was an extremely helpful lesson: the context in which AI is deployed for learning matters.

Insights

Based on my (very subjective, nonrepeatable) test, I’d like to offer some characteristics of a meaningful application of AI in learning:

  • Operational: In whatever learning context generative AI is deployed, it should be used with a specific action or outcome in mind. What should the learner be able to do better because of a session with AI?
  • Focused: The leading LLMs command a base of knowledge broad enough to encompass most human expertise. Prompts and queries to AI should be designed to bring out a specific type of expertise. Otherwise, users might only encounter unhelpful generalities (see the sketch after this list for one way to frame such a prompt).
  • Motivated: In addition to being centered around actions and outcomes, AI deployment should account for relevance to a specific learner’s desires and needs. Can the learner themselves make it clear to the AI what they most care about? Can they connect with the way AI presents content on an emotional level?
  • Frontier-based: As aforementioned prompt-wizard Ethan Mollick has often noted, to get the most out of AI models, learners need to be working with the leading edge of the technology. This is not something I did in my experiment, and the detail, nuance, and helpfulness of the AI generations noticeably suffered for it.
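
To make the first three characteristics concrete, here’s a minimal sketch of what an operational, focused, motivated prompt might look like if you were driving an LLM programmatically. I used Copilot’s chat interface in my experiment, not an API; the OpenAI Python SDK usage, model name, and prompt text below are all my own hypothetical choices:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Operational: name the outcome up front. Focused: pin the model to a
# specific kind of expertise. Motivated: carry the learner's own goals in.
system_prompt = (
    "You are a personal finance coach. By the end of this session, the "
    "learner should be able to draft a monthly budget and open an "
    "emergency fund. Keep advice concrete and task-oriented."
)
user_prompt = (
    "I'm a grad student with irregular income and about $200 a month to "
    "spare. Walk me through setting up an emergency fund this week."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
)
print(response.choices[0].message.content)
```

The same framing works pasted into a chat window, too: name the outcome, the expertise, and the learner’s situation before asking for anything.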

Conclusion

If you’re even a mildly resistant reader, you probably noticed that three of the four characteristics I just pointed out are…well…obvious priorities for working any technology into a learning process. I think this, in itself, is significant.

The fundamental principles of learning don’t change when AI enters the fray. Scale may change, adaptability may change, analytics may change—but if ye have not good pedagogy, ye are nothing. I entered this experiment under the ill-formed assumption that AI would magically augment my brain—would be magically “human-complementary” by virtue of mere interaction with it. This was a deeply mistaken assumption, and I think it derives from a very ingrained, very concerning philosophical approach. But I’ll have to write about that elsewhere.

For now, don’t repeat my mistake—recognize, perhaps, that effective learning can’t be a speedrun at all. If we can start there, by going back to fundamentals, then we’ll be in good shape.
