
What Teaching AI Agents Taught Me About Teaching Humans

The surprising parallels between teaching a machine and designing learning experiences reveal the fundamentals we've been forgetting about human development.



Over the last couple of months, I've been deep in the process of setting up and experimenting with AI agents. Not coding them (that's a bit outside of my skill set), but teaching them. Designing instructions. Defining tasks. Clarifying rules. Giving context. Testing results. Iterating.


And something interesting happened along the way.


The process of teaching an AI agent to perform a task forced me to think more carefully about the onboarding programmes, training workshops, and academic curricula I'd built over my career.


Humans learn skills in the same way as machines.


Not because humans are machines, but because the discipline required to train AI forces us back to the fundamentals of skill acquisition. Ironically, those fundamentals are often missing in modern education.


Let me explain...



Start With the Task, Not the Skill


When you configure an AI agent, you don't begin by listing competencies. You begin with a task: What, precisely, does this agent need to accomplish?

This sounds obvious. It isn't. Most learning design starts and often ends with skills. We build competency frameworks, define learning objectives, and map curricula to behaviours. But skills exist in service of tasks, and tasks exist in service of outcomes. We frequently mix up the order.


A skill is not the same as a capability. Capability is the ability to perform a task in a real context, and it requires far more than skill alone. When I set up an AI agent, I quickly discovered that technical ability was only one part of the puzzle. The agent also needed:

 

• the right resources: data, tools, and systems it could actually access

• authority: permissions to take certain actions

• guardrails: boundaries defining what it should not do

• context: background knowledge about the environment it was operating in

• feedback loops: ways to know whether it was succeeding

 

Strip any of these away, and even a highly 'skilled' agent fails. The same is true of people. We send employees to training courses that develop skills in isolation, then wonder why performance doesn't change. The missing ingredient is usually not more training. It's one of the five elements above.
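For readers who like to see structure made concrete, the capability checklist above can be sketched as a small, entirely hypothetical configuration object (the field names and example values are mine, not from any real agent framework):

```python
from dataclasses import dataclass

@dataclass
class AgentSetup:
    """A hypothetical sketch of the elements that make a 'skilled' agent capable."""
    skills: list      # what the agent can technically do
    resources: list   # data, tools, and systems it can actually access
    authority: list   # permissions to take certain actions
    guardrails: list  # boundaries defining what it should not do
    context: str      # background knowledge about its environment
    feedback: list    # ways of knowing whether it is succeeding

    def is_deployable(self) -> bool:
        # Capability requires every element; skill alone is not enough.
        return all([self.skills, self.resources, self.authority,
                    self.guardrails, self.context, self.feedback])

agent = AgentSetup(
    skills=["draft meeting summaries"],
    resources=["read access to the shared calendar"],
    authority=["post drafts to a review channel"],
    guardrails=["never send anything externally without sign-off"],
    context="a UK-based B2B sales team",
    feedback=["weekly human review of outputs"],
)
print(agent.is_deployable())  # strip any one element and this becomes False
```

The point of the sketch is the `is_deployable` check: emptying any single field, not just `skills`, makes the whole setup fail.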

 

The Lesson for Learning Design

Before you design any learning experience, map the full performance ecosystem: What resources does the learner need? What authority do they need to act? What guardrails keep them and others safe? What context do they need to understand? Without these, skill development is necessary but not sufficient.

 

Write Clear Instructions. Don't Assume.


If you've ever written a prompt for an AI and been baffled by a nonsensical output, you've experienced what every learner feels when instructions are ambiguous. AI makes this visible in a way that human learners often don't. A person will nod along, not wanting to appear confused, and attempt the task anyway. The AI just produces something wrong.


When writing agent instructions, I learned to ask myself: have I defined what 'done' looks like? Have I named the format of the output, the constraints that apply, the sequence that matters? Have I accounted for edge cases?

These are questions every trainer and manager should be asking before any task is delegated or any learning experience is launched. Research consistently supports this: studies on worked examples and goal-free problem solving (Sweller, 1988; Kirschner et al., 2006) suggest that reducing ambiguity in early instruction significantly accelerates learning. Assumptions are the enemy of effective teaching.

The uncomfortable truth is that vague instructions often reflect unclear thinking on the part of the instructor.

 

If you can't write down instructions precisely, you may not understand the task well enough to teach it.

 

Assess Before You Teach


One of the most useful things you can do when teaching a capable AI agent is to ask it what it already knows. You can literally write: 'Before I give you instructions, tell me what you already understand about this domain.' The response shapes everything that follows. You can skip what's already known and focus on what isn't.

We've known the human equivalent of this for decades. Ausubel's famous assertion ("find out what the learner already knows and teach them accordingly") was made in 1968. Constructivist learning theory, diagnostic assessment, and the science of prior knowledge activation all reinforce the same point. Yet in practice, most training still begins at page one, regardless of who is in the room.


AI agents force you to think about this because a mismatch between assumed baseline and actual baseline is immediately costly in time, tokens, and outputs. The cost of this mismatch with human learners is less visible but equally real: disengaged participants, wasted time, and the patronising effect of over-explaining what people already know. Just a few weeks ago, I found myself in an embarrassingly patronising mode when "explaining" to execs of a tech company what a Business Requirements Definition Document was!


Adaptive assessment doesn't require sophisticated technology. It can be as simple as asking a team at the start of a workshop what they already know, using a pre-read to surface baseline knowledge, or offering different entry points to a learning pathway. The principle is the same: close the gap between assumed and actual starting points.
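As a toy illustration of that principle (the thresholds and pathway names here are invented for the sketch, not a prescription), routing learners from a short baseline check might look like:

```python
def entry_point(baseline_score: int) -> str:
    """Map a short pre-check score (0-10) to a pathway entry point.

    The thresholds are illustrative only; the principle is simply that
    the starting point should follow the evidence, not the assumption.
    """
    if baseline_score >= 8:
        return "advanced: skip the fundamentals, start with applied cases"
    if baseline_score >= 4:
        return "core: a brief recap, then new material"
    return "foundation: start from first principles"
```

The same logic works with no technology at all: a show of hands or a two-minute pre-read question can feed exactly this kind of routing.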

 

Practical Implication

Before every learning experience, build in a prior-knowledge check. Make it psychologically safe, not evaluative. Frame it as 'help me make this useful for you' rather than 'prove what you know'. The information you gather will transform the quality of what follows.

 

Guardrails Are Part Of Learning


One of the most counterintuitive things about AI agent design is how much time you spend on constraints. What should the agent not do? Where should it stop and check back? What boundaries protect the user, the organisation, and the agent itself from runaway action?


These may come across as limitations on capability, but they're what make capability trustworthy and deployable. An agent with no guardrails isn't more powerful; it's more dangerous.


The same is true in human development. Clear boundaries reduce cognitive load because they eliminate an entire category of decisions. They create the conditions for confident action. Yet many managers and educators treat guardrails as bureaucratic overhead: policies to be minimised rather than scaffolds to be designed. The result is learners who are technically capable but unsure when, whether, and how to act.

 

The Analogy and Its Limits


It would be remiss not to acknowledge where this analogy breaks down, and where it falls short as a guide to human development.


Human learners are not AI agents. People bring emotions, motivations, social identities, and prior experiences that no prompt can fully account for. Intrinsic motivation, belonging, and purpose are not configurable parameters. The relationship between a teacher and learner is not the relationship between a developer and a model.

There is also a risk in the analogy itself: that it tempts us to treat people as systems to be optimised rather than individuals to be developed. The best learning relationships are not transactional. They involve curiosity, improvisation, and genuine human connection.


The value of this AI teaching analogy is that it strips teaching back to its structural basics (task, instruction, resources, authority, context, feedback), leaving the humanising warmth to be layered back on top.


Teaching AI agents well is, in many ways, an exercise in going back to basics. And it turns out the basics are harder, more important, and more neglected than most of us realised.


The machines, it seems, have something to teach us after all.

 

 

References & Further Reading

Ausubel, D.P. (1968). Educational Psychology: A Cognitive View. Holt, Rinehart & Winston.

Edmondson, A.C. (1999). Psychological Safety and Learning Behavior in Work Teams. Administrative Science Quarterly, 44(2), 350–383.

Kirschner, P.A., Sweller, J., & Clark, R.E. (2006). Why Minimal Guidance During Instruction Does Not Work. Educational Psychologist, 41(2), 75–86.

Sweller, J. (1988). Cognitive Load During Problem Solving: Effects on Learning. Cognitive Science, 12(2), 257–285.

 
 
 
