Learning Organizations

Here are my notes on building better learning organizations, from Work Rules! by Laszlo Bock, Google’s former head of People Operations.

Learning Organizations

From the moment we are born, human beings are designed to learn. But we rarely think about how to learn most effectively.

As a pragmatic matter, you can accelerate the rate of learning in your organization or team by breaking skills down into smaller components and providing prompt, specific feedback. Too many organizations try to teach skills that are too broad, too quickly. And measuring the results of training, rather than how much people liked it, will tell you very clearly (over time!) if what you’re doing is working.

But we don’t just want to learn. We also delight in teaching. Every parent teaches, and every child learns. And if you’re a parent, you appreciate that often the child is the teacher, and you are the one who learns.

The best way to learn is to teach. To teach well, you really have to think about your content: you need mastery of your subject and an elegant way to convey it to someone else. But there’s a deeper reason to have employee-teachers. Giving employees the opportunity to teach gives them purpose. Even if they don’t find meaning in their regular jobs, passing on knowledge can supply it.

A learning organization starts with a recognition that all of us want to grow and to help others grow. Yet in many organizations, employees are taught and professionals do the teaching. Why not let people do both?


You learn best when you learn least

I have been struggling to find ways to motivate my team to work on their learning goals for the year. This is some food for thought.

The average US employee receives 31 hours of training every year, roughly half an hour each week. Most of that money and time is wasted, not because the training is necessarily bad, but because there’s no measure of what is actually learned or which behaviors change as a result. Most corporate learning is insufficiently targeted, delivered by the wrong people, and measured incorrectly.

Conventional wisdom holds that it takes ten thousand hours of effort to become an expert. However, it’s not about how much time you spend learning, but rather how you spend that time. People who attain mastery of a field, whether they are violinists, surgeons, or athletes, approach learning differently than the rest of us. They shard their activities into tiny actions, like hitting the same golf shot in the rain for hours, and repeat them relentlessly. Each time, they observe what happens, make minor, almost imperceptible, adjustments, and improve. This is deliberate practice: intentional repetition of similar, small tasks with immediate feedback, correction, and experimentation. Simple practice, without feedback and experimentation, is insufficient.

At McKinsey’s Engagement Leadership Workshop, one of the skills we were taught was how to respond to a furious client. First, the instructors gave us the principles (don’t panic, give them time to vent their emotions, etc.), then we role-played the situation, and then we discussed it. Afterward, they gave us a videotape of the role play so we could see exactly what we had done. And we repeated the process again and again. It was a very labor-intensive way of providing the training, but it worked.

Building this kind of repetition and focus into training might seem costly, but it’s not. Most organizations measure training based on the time spent, not on the behaviors changed. It’s a better investment to deliver less content and have people retain it than to deliver more hours of “learning” that is quickly forgotten.

Unless your job is changing rapidly, it’s difficult to keep learning and stay motivated when the road stretching ahead of you looks exactly like the road behind you. You can keep your team members’ learning from shutting down with a very simple but practical habit.

The author gives the example below to illustrate this habit.

In the minutes before every client meeting, my mentor, Frank, used to ask me questions: “What are your goals for this meeting?” “How do you think each client will respond?” “How do you plan to introduce a difficult topic?” We’d conduct the meeting, and on the drive back to our office he would again ask questions that forced me to learn: “How did your approach work out?” “What did you learn?” “What do you want to try differently next time?” I would also ask questions. I shared responsibility with him for ensuring I was improving.

Every meeting ended with immediate feedback and a plan for what to continue doing or change the next time. I’m no longer a consultant, but I often go through Frank’s exercise before and after meetings that my team has with other Googlers. It’s an almost magical way to continuously improve your team’s performance, and it takes just a few minutes and no preparation. It also trains your people to experiment on themselves: asking questions, trying new approaches, observing what happens, and trying again.


Measuring Learning Programs

I have yet to see a training program that engages attendees through all the levels described here. Brilliant, yet obvious?

It’s easy to measure how training funds and time are spent, but far rarer and more difficult to measure the effect of the training. This convenient hand-waving lets HR departments say that people are learning without proving that they are.

Donald Kirkpatrick’s model prescribes four levels of measurement for learning programs: reaction, learning, behavior, and results.

Level One — Reaction — asks the student for her reaction to the training. A professor at Stanford once told me the secret to high student evaluations: tell lots of jokes and lots of stories. He went on to explain that it’s a constant trade-off between being engaging and imparting knowledge. Stories tap into a human hunger for narrative, rooted in wisdom that’s passed from generation to generation through myths and folklore. They are an essential part of effective teaching. But how students feel about your class tells you nothing about whether they have learned anything. Moreover, the students themselves are often unqualified to judge the quality of the course.

Level Two — Learning — assesses the change in the student’s knowledge or attitude, typically through a test or survey at the end of the program. Now we are looking at the effect of the class in an objective way. The drawback is that it’s hard to retain newly acquired lessons over time. Worse, if the environment you are returning to is unchanged, the new knowledge will be extinguished.

Level Three — Behavior — asks to what extent the participants changed their behavior as a result of the training. A few very clever notions are embedded in this simple concept. Assessing behavior change requires waiting for some time after the learning experience, ensuring lessons have been integrated into long-term memory rather than hastily memorized for tomorrow’s exam and then forgotten. It also relies on sustained external validation: the ideal way to assess behavioral change is not just to ask the student, but to ask the team around them. Seeking external perspectives both provides a more comprehensive view of the student’s behavior and subtly encourages them to assess their own performance more objectively.

Level Four — Results — looks at the actual results of the training program. Do you sell more? Are you a better leader? Is the code you write more elegant?

Your training programs may work, or they may not. The only way to know for sure is to run a program with one group and compare its results against those of a group that didn’t receive the training.
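
As a rough sketch of what that comparison could look like: suppose you tracked a concrete outcome, say monthly sales, for a trained group and an untrained control group. The numbers below are invented for illustration, and a real evaluation would randomize group assignment and compute a proper p-value rather than eyeballing a t-statistic.

```python
# Minimal sketch: did training move a measurable result (Level Four)?
# All data here is invented for illustration; in practice, randomly
# assign people to the trained and control groups before the program.
from math import sqrt
from statistics import mean, stdev

trained = [105, 118, 112, 121, 109, 115]  # e.g., monthly sales after training
control = [101, 104, 99, 108, 103, 100]   # same metric, no training

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    var_a, var_b = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(var_a / len(a) + var_b / len(b))

print(f"trained mean: {mean(trained):.1f}, control mean: {mean(control):.1f}")
print(f"Welch t: {welch_t(trained, control):.2f}")
# A large |t| (roughly above 2 for samples this small) suggests the
# difference is unlikely to be chance alone.
```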