The world of work has changed, and learning and development (L&D) leaders are being called upon to future-proof their organizations.
This rapid progression and added influence puts pressure on L&D leaders to further demonstrate ROI, reigniting conversations around the best L&D metrics to measure.
In 2023, L&D pros will be encouraged to overcome the attribution challenge and get closer to people analytics to measure impact.
Below, we’ll take a look at how senior L&D leaders are currently measuring success and how those metrics might evolve in tandem with the industry.
Historically, the metrics L&D leaders use are based on completion and satisfaction rates.
The majority of corporate learning is carried out via an LMS (learning management system), and progress is measured in the moment, in a snapshot of time, which isn’t representative of an employee’s learning journey.
Typically, L&D leaders have relied on metrics such as:
The fundamental challenge with focusing solely on these metrics is that they don't map to how employees apply new information in their day-to-day activities.
Conversations with thousands of L&D leaders reveal a clear disconnect between how learning is measured and the work behavior that actually changes.
One of the key reasons organizations invest in learning and development programs is to upskill their employees to improve efficiency and effectiveness. And the spotlight on this has grown ever brighter in recent years.
To achieve those outcomes, the employee needs to apply new information frequently to build familiarity and confidence. It becomes difficult to track the ‘information application’ journey if L&D metrics are based on capturing one-off, disconnected feedback.
Many L&D leaders are aware of the Kirkpatrick model below. The challenge with how L&D is currently being measured is people are trying to leap from reaction to results.
The completion-based metrics mentioned above do not explore behavioral change outside the context of a classroom. Meanwhile, the upper stages of the model occur within the workplace.
Employees themselves also commonly leap from reaction to results without understanding the relevant behavior change.
💡 For example, Jessica wants to become a manager. She recognizes the need for training on how to manage people, searches for managerial courses, and completes four modules across a three-week period. Her certificate of completion is then presented to her manager as evidence of her competency in managing people.
The challenge here is the training has been completed in a vacuum, with minimal touchpoints to measure how the course information has changed Jessica’s actions to validate her readiness for promotion.
Learning leaders can present a much stronger case for L&D ROI to the c-suite if attribution is mapped linearly across each of the four stages: reaction, retention, behavior and results.
Most L&D heads will concur that this is easier said than done. So why is it any different today?
In recent years, there’s been a shift in how L&D is packaged and delivered. The concept of “learning in the flow of work” focuses on intertwining learning with how a person goes about their work day, with the intention of delivering continuous learning in an organic way.
This continuous learning style has opened up new tactics for effectively measuring behavior change and L&D success. More touch points for feedback means more opportunities to track incremental behavioral change.
Let’s take a look at the L&D metrics enabled by an agile learning culture:
How does the employee gauge their confidence level to do a specific task?
If you’re collecting this data one hour after an employee has completed the course, it’s useless. The person hasn’t had a chance to apply their newfound knowledge to their job.
But if you’re measuring this daily, or every other day during a three-week course, collecting qualitative data intermittently over that time period, you can start to build a clear picture of how that person is actively applying information learned on the course.
New eLearning methodologies and an agile learning culture make this possible. If you’re meeting employees where they already are, such as SMS, WhatsApp, Slack, or Microsoft Teams, it’s easier to collect regular feedback via short surveys and quizzes.
Here’s a brief example of how measuring confidence lift might look:
💡 Sarah took a three-week sales techniques course on social selling. On day five she received an automated prompt via Slack asking how confident she felt about a specific technique taught on the course. She felt slightly more confident in her ability to generate leads via social listening.
By day ten, she was feeling more confident and applying the techniques more frequently, listing some of the actions she had taken in her qualitative response. Sarah applied the information throughout and was taking action while she was still on the course.
One-click confidence lift in Arist courses
Learning teams can monitor confidence lift at scale directly within messaging apps with Arist's new confidence lift survey block.
From within the course builder, simply add a new block to your existing lessons and select "Confidence Lift Survey."
Confidence lift survey questions will be delivered after the first and last lesson of your course. You can edit the prompts and responses to prompts for both, or use default language.
As learners progress through your course, confidence lift is automatically surfaced in analytics, allowing a near-real-time view of new skill application.
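Under the hood, confidence lift is simply the change between a learner's first-lesson and last-lesson survey responses. Here is a minimal sketch, using invented sample data rather than Arist's actual export format:

```python
# Hypothetical 0-10 confidence ratings collected after the first and
# last lessons of a course (sample data, not a real export).
pre_scores = {"sarah": 4, "john": 6, "amir": 5}   # after first lesson
post_scores = {"sarah": 8, "john": 7, "amir": 9}  # after last lesson

# Confidence lift per learner: last-lesson rating minus first-lesson rating.
lift = {name: post_scores[name] - pre_scores[name] for name in pre_scores}

# Average lift across the cohort gives a single course-level signal.
average_lift = sum(lift.values()) / len(lift)

print(lift)          # {'sarah': 4, 'john': 1, 'amir': 4}
print(average_lift)  # 3.0
```

A positive average lift across a cohort is the signal that confidence, not just completion, moved during the course.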
How often are you applying the information you have learned on the course? Can you give examples?
If learning occurs in isolation from work-based actions, the attribution disconnect arises. It’s important to know how often and in what scenarios the knowledge is applied, as leaders will be able to see a clear correlation between the frequency of application and key L&D KPIs such as retention and internal progression.
One-click frequency of application in Arist courses
Learning teams can monitor frequency of application at scale directly within messaging apps with Arist's new custom rating question block.
When editing an individual lesson, simply add a new question of the type "Custom Rating Question".
Phrase your prompt to ask how often a new skill or technique is being applied.
Note that the following default text will follow any prompt you provide: "Please reply with a number between 0 and 10."
Unlike confidence lift surveys, which by default are placed after the first and last lessons of a course, custom rating questions can be placed anywhere throughout the course, and can thus be used to track progress across lessons or poll on the progression of multiple skills.
To explore your learner progress related to frequency of application (or any other custom rating question), simply head to analytics, select your course, and "export course responses to CSV".
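Once exported, the CSV can be analyzed with a short script. Here is a hypothetical sketch: the column names and inline sample data are illustrative, and the actual export format may differ:

```python
import csv
import io
from collections import defaultdict

# Illustrative stand-in for an exported responses CSV: one row per
# learner per rating question, with the lesson it appeared in.
export = """learner,lesson,rating
sarah,1,3
sarah,5,6
john,1,2
john,5,7
"""

# Group the 0-10 application ratings by lesson.
ratings_by_lesson = defaultdict(list)
for row in csv.DictReader(io.StringIO(export)):
    ratings_by_lesson[int(row["lesson"])].append(int(row["rating"]))

# A rising average across lessons suggests learners are applying the
# skill more frequently as the course progresses.
for lesson in sorted(ratings_by_lesson):
    avg = sum(ratings_by_lesson[lesson]) / len(ratings_by_lesson[lesson])
    print(f"lesson {lesson}: average application rating {avg:.1f}")
```

In this sample, the average rating climbs from 2.5 in lesson 1 to 6.5 in lesson 5, the kind of incremental trend a one-off post-course survey would miss.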
How are you putting steps in place to use the knowledge gained? What do you want to achieve with this knowledge?
This is a direct analysis of behavior change. Evidence of how well the learning changed the employee's behavior will be visible in how they intend to modify their actions.
Learning leaders could take a random sampling of 50 people out of 500 who took the course, and analyze how their goals developed at the start of the course, compared to the end.
💡 For example, John went on a two-week soft skills course on goal setting. He was asked on day one what his goals were. They were heavily outcome-based:
On day five he was asked again how he thinks about goals based on the knowledge gained on the course:
On day ten, John was asked what his goals are now:
From analyzing the qualitative application of knowledge, we can see a clear shift in John’s mindset based on the information learned on the course. We can then monitor John’s productivity and strategic acumen to understand the effectiveness of the course.
Going further, L&D departments can stack metrics to make attribution even clearer.
💡 Sample 100 employees who took the course and identify the top 50, who displayed the highest level of qualitative knowledge application. Of those 50, measure the percentage of employee retention against 50 other employees who did not take the course. Or, of those 50 high learners, measure how many were either promoted internally or actively engaged in new responsibilities or tasks outside of their regular duties.
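The cohort comparison above reduces to simple arithmetic. Here is a minimal sketch with invented sample data (small cohorts for readability):

```python
# Hypothetical cohorts: True means the employee is still with the company.
top_learners = {"ana": True, "ben": True, "cy": False, "di": True}
control_group = {"ed": True, "fay": False, "gus": False, "hal": True}

def retention_rate(cohort):
    """Percentage of a cohort still employed."""
    return 100 * sum(cohort.values()) / len(cohort)

# Stacking the metrics: compare retention among the highest-application
# learners against employees who did not take the course.
print(retention_rate(top_learners))   # 75.0
print(retention_rate(control_group))  # 50.0
```

A gap between the two rates does not prove causation on its own, but paired with the behavior-change evidence above it makes the attribution case far stronger.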
How much do your learners enjoy your learning? Such questions may not tie directly to business impact, but they help learning teams discern what learners find valuable. Motivation is one of the three primary building blocks in the Fogg Behavior Model, and understanding which learning is motivating, at what time, and for whom is an important intermediary step in supporting impact.
Net promoter scores are often built around a prompt such as "How likely is it that you would recommend our company/product/service to a friend or colleague?"
Responses are grouped into three segments:
Individuals who reply with a passive response are excluded for reporting purposes, and your score is calculated by subtracting the percentage of detractors from the percentage of promoters.
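The calculation itself is straightforward arithmetic. Here is a minimal sketch using the standard NPS cutoffs (9-10 promoters, 7-8 passives, 0-6 detractors):

```python
def net_promoter_score(responses):
    """NPS: percentage of promoters minus percentage of detractors.

    Responses are 0-10 ratings; passives (7-8) count toward the total
    but contribute to neither side of the subtraction.
    """
    total = len(responses)
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / total

# Example: 6 promoters, 2 passives, 2 detractors out of 10 responses.
scores = [10, 9, 9, 10, 9, 9, 8, 7, 5, 3]
print(net_promoter_score(scores))  # 40.0
```

The score ranges from -100 (all detractors) to +100 (all promoters), which is why LMS scores can sit deep in negative territory.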
It's worth noting that there are a range of factors that come into play for net promoter scores, including the platform used for delivery, the topic and relevance of programming, the curriculum content, and the time of delivery. With that said, Arist microlearning courses have an average net promoter score of 82/100. Meanwhile, LMS net promoter scores tend to range between -57 and 9. The average for all SaaS products is 41.
With more continuous course delivery paired with regular net promoter score polling, learning teams can experiment by adjusting the elements above and gauging changes to the net promoter score.
Thinking back to the Kirkpatrick model, if you’ve attributed metrics to the reaction, retention and behavior stages, it’s much easier to accredit behavior change to business results.
L&D leaders can make a simple correlation between people rated the highest in terms of behavior change, and people who had the highest internal progression or career jump during that same period.
For example, let’s say Paul is a manager who achieved the highest level of behavior change on a “career conversations” course. His organization can now ask these measurement questions:
Reduced turnover is another important metric at the business results stage of the Kirkpatrick model. However, like the internal mobility metric, it’s important not to measure this without the context of the knowledge retention and behavior stages of the pyramid.
For example, Paul works a 40-hour week, and has one hour per week for learning. He completed four videos on the LMS library, but that shouldn’t be isolated as a contributing factor to him staying with the company.
Jumping from the reaction stage to the results stage without the context of knowledge retention and behavior change isn’t an effective measurement of employee retention. Employee retention could be attributed to a myriad of factors, including culture, benefits, etc.
It’s better to measure reduced turnover overall, and then among those who engaged in the learning and demonstrated a higher rate of behavioral change.
This provides a better solution to some of the attribution challenges that currently exist.
There is a tendency in L&D to inundate people with resources while leaving them short on guidance.
Employees tend to use the LMS as a place to stack skills in isolation, separate from how they do their jobs. This creates a disconnect between skills and the behavioral change required to apply the learning to daily activities.
The key is understanding the goals of the training, where the employee wants to get to, and for managers to curate the right content and create a guided learning plan.
Combine this with a continuous learning model and L&D will be able to accurately measure attribution and ROI using some of the L&D metrics above.
The reason these metrics are so valuable is that measurement has become continuous. You’re measuring in a way that matches how people learn: iteratively, over time.