This past Thursday I was lucky enough to present at the NASAGA conference in Baltimore. I’ve been involved with NASAGA since 2004, when I was introduced to this wonderful group of trainers, teachers, and designers. I’d really suggest attending next year’s conference in Seattle if you want an infusion of creativity and inspiration.
This presentation came from seeing the disconnect between activities, learning objectives, and evaluation in much of the training that I see. I want to see better matches between what we are teaching and how we are evaluating learners’ performance.
While these are presented as 7 discrete steps, in practice a designer will likely need to revisit earlier stages and make adjustments along the way.
Determine the Goal
This is the big picture goal. Examples include “Students will develop their reading skills” or “The team will build stronger relationships.” These fall along a continuum: they could be goals for 6-year-olds or 60-year-olds.
Identify Required Prior Knowledge/Analyze the Learners
First the designer must determine where on that continuum the learner lies. Letter-name recognition or college-level comprehension? Are these sales associates just beginning their careers, or are these executives selling at the enterprise level? There are tons of cultural, cognitive, linguistic, and developmental factors to be aware of as you design effective learning activities. So who is our audience? And what do they know?
Identify Realistic Objectives for the Session Length and Audience
This is where the designer decides what specific things our participants will be learning or doing. Objectives are performance-based and verb-oriented. Some examples of objectives are:
- Recall and state the 50 state capitals of the USA, or recall and state the features of a Ford F-150 pickup truck
- Compare and contrast the features of two competing products
- Use proper placement and form while running
- Create a personal goal statement
- Improve confidence in ability to perform
Choose Performance Benchmarks
How do we “grade” our learners? Too often I see multiple-choice scores being the only way we measure learning, when in actuality we really want to know that learners can perform tasks in the workplace, or that they are building foundational skills. We do need numbers, qualitative rubrics, or rated rubrics. I’m enough of a pragmatist to know that we need reportable outcomes for funders, whether they’re our state governments, donors, or our managers. Here are some ways that we can describe performance benchmarks:
- Correctly spell 8 out of 10 words.
- List at least 5 differences between 2 products
- Match customer attributes to the best available product
- Run a mile in 10 minutes or less
- Create one software routine that can correctly manage user input (see the sketch after this list)
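To make that last benchmark concrete, here is a minimal sketch, in Python (an assumption on my part, since the benchmark names no language), of the kind of routine a learner might write to meet it. A rater could check it against an agreed list of inputs (empty string, “3”, “abc”, “100”) much the way the spelling benchmark checks 8 out of 10 words.

```python
def read_quantity(raw: str, minimum: int = 1, maximum: int = 99) -> int:
    """Parse a user-entered quantity, rejecting non-numeric or out-of-range input.

    "Correctly manage user input" is judged here as: never crash, never
    silently accept bad data, and always explain why input was rejected.
    """
    cleaned = raw.strip()
    if not cleaned:
        raise ValueError("A quantity is required.")
    try:
        value = int(cleaned)
    except ValueError:
        raise ValueError(f"{cleaned!r} is not a whole number.")
    if not minimum <= value <= maximum:
        raise ValueError(f"Quantity must be between {minimum} and {maximum}.")
    return value
```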
Scale Story and Time to Create a Metaphor
The most effective training takes an authentic, real-life performance task and scales it to the “classroom,” so that the class performance relates as closely as possible to what the learner will need to do out in the world. The prime example of this kind of emulation is a simulation: military video games, or NASAGA member Chuck Petranek’s drinking game. While a video game can give us amazingly immersive graphics, a simulation can be as simple as having a chip represent an alcoholic beverage. Sometimes metaphors become belabored and unwieldy, and the focus shifts to sustaining the metaphor rather than the relationships it is meant to represent. We want stories that are simple and support the learning objectives. It can certainly be tempting to build overly complex stories, but scaling back and focusing on the learning objectives keeps attention on the important stuff, not on building elaborate fantasy worlds.
Develop Participant Activities
Activities can be individual or team-based; face-to-face, online, or blended; and can include games. We have whole arsenals available to us. We can do a better job of convincing our stakeholders to use a variety of activities when we can clearly connect them to learning outcomes.
Select Debrief Techniques
How many times has evaluation at the end of a course devolved into a certificate of completion for the student and a survey for the instructor? That’s not enough. One of the ways that gamification can really be effective is in providing a more compelling feedback loop: badges can be great motivators and can track varying levels of mastery, and points boards can be used effectively (and carefully) to foster competition. BUT, this is only the beginning. If I had my druthers, we would see more integration of training and feedback into the workplace or school setting. People often see training as separate from their workplace performance, and really, it would be good to tie annual reviews, reflection, and all the rest into our training.
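As a rough illustration of that kind of feedback loop, here is a minimal sketch of a badge tracker. The tier names and thresholds are hypothetical placeholders of mine, not any real system; the point is only that badges can encode varying levels of mastery per skill.

```python
from dataclasses import dataclass, field

# Hypothetical mastery tiers; a real program would define its own.
BADGE_TIERS = [
    ("Novice", 1),        # first successful performance
    ("Practitioner", 5),  # consistent, repeated performance
    ("Mentor", 15),       # enough repetitions to coach others
]

@dataclass
class LearnerRecord:
    name: str
    completions: dict[str, int] = field(default_factory=dict)

    def record_success(self, skill: str) -> list[str]:
        """Log one successful performance and return any badges earned just now."""
        count = self.completions.get(skill, 0) + 1
        self.completions[skill] = count
        return [badge for badge, needed in BADGE_TIERS if count == needed]
```

So `record_success("objection-handling")` would return `["Novice"]` on a learner’s first logged success and nothing on the second through fourth, giving the instructor a mastery trail to report instead of a single end-of-course survey.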