Calibrations help your team deliver consistency in its Quality Management evaluation process. Not only do they measure and identify areas that can be improved, but they also create alignment within your company, ensuring that common goals are being met.

Here is a quick summary of the process:

  1. A customer interaction is selected for evaluation.

  2. An expert (also referred to as the standard) is selected. All participants in the calibration will have their scoring compared to that of the standard.

  3. Multiple participants are identified. These participants can be members of your quality team, executives from throughout your contact center or even your own agents.

  4. You may choose to score fail all and/or fail section questions.

Note: By default, the option to score fail all and/or fail section questions is disabled. In calibrations, if a participant scores a fail all question differently from the expert, the score is calculated as “0”. If a participant scores a fail section question differently from the expert, the total number of questions in that section is subtracted from the final calibrated score (see the sketch after this list).

Learn more about fail questions and sections.

  5. Team members are invited to participate.

  6. Scoring results and comments are compared and shared.

Tip: Plan what you will discuss regarding the calibration results by focusing on the areas with the highest level of deviation (or difference) from the standard. This ensures a targeted discussion that brings everyone closer to scoring with the same methodology.

  7. Outliers are identified, and changes to scorecards or training methods are made accordingly.
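
If it helps to see the fail-question arithmetic from the note above spelled out, here is a minimal sketch in Python. Everything in it — the QuestionResult type, its field names, and the calibrated_score helper — is an assumption made for illustration, not Playvox’s actual data model:

```python
# A hypothetical model of the fail-question penalties; names and
# structures are illustrative assumptions, not Playvox internals.
from dataclasses import dataclass

@dataclass
class QuestionResult:
    section: str           # section the question belongs to
    answer: str            # the answer the scorer selected
    is_fail_all: bool      # a mismatch on this question zeroes the score
    is_fail_section: bool  # a mismatch costs the whole section

def calibrated_score(participant, expert, base_score, section_sizes):
    """Apply fail-question penalties to a participant's base score.

    participant / expert: lists of QuestionResult in matching order.
    base_score: the participant's score before any penalties.
    section_sizes: {section name: number of questions in that section}.
    """
    score = base_score
    for theirs, standard in zip(participant, expert):
        if theirs.answer == standard.answer:
            continue              # agrees with the standard: no penalty
        if theirs.is_fail_all:
            return 0              # any fail all mismatch -> score of 0
        if theirs.is_fail_section:
            # a fail section mismatch subtracts that section's total
            # question count from the final calibrated score
            score -= section_sizes[theirs.section]
    return max(score, 0)
```

For example, if a section has 10 questions, a single fail section mismatch would turn a base score of 85 into 75, while a single fail all mismatch would turn any score into 0.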

Creating a Calibration

Below you will find three ways to start a calibration. All three direct you to the same Start calibration window.

We’ll get into the specifics of that window later in this article. For now, let’s check out how to get there.

1. From the Interactions Dashboard.

Select a customer interaction and click Calibrate at the bottom right.

2. From the Evaluations Dashboard.

Set your filters for a specific agent or quality analyst, then click the Interaction ID to select a customer interaction that has already been evaluated.

Click the Actions button and select Start calibration from the drop-down menu.

3. From the Calibrations Dashboard.

Open Quality > Calibrations. You will see a list of historical calibrations, if any have been conducted. To add a new calibration, click Start calibration in the upper right corner of the screen.

Get Ready, Get Set...Calibrate!

As we mentioned earlier, all three ways of starting a calibration will direct you to the same window. All you have to do is fill in the remaining fields!

Note: Starting a calibration from an existing interaction or evaluation will auto-populate the interaction identification number. If the interaction resides on a platform that is not integrated with Playvox, the identification number will have to be manually typed into the field. Also, remember the Score fail questions option is disabled by default.

After you’ve filled in the necessary information, click on Start calibration in the bottom right corner.

This will create your calibration record for reporting. It will also send an email notification to each participant inviting them to conduct the evaluation as part of a calibration event.

Reviewing the Results

When you run a calibration, you compare an expert against specific analysts on your team using the same scorecard for the same customer interaction. Playvox consolidates the results of each of these evaluations for comparison and review. As the calibration initiator, you’ll have a historical record of calibrations, the status of each evaluation, and the results.

You’ll also be able to apply filter parameters to easily find calibrations by:

  • Scorecard

  • Expert

  • Analyst

  • Status

  • Date Range

  • Category

  • Calibration # (note: searching by calibration number ignores all other filters)

Calibration managers can:

  • View the progress of analysts participating in the calibration

  • View the overall average score by section

  • View average score by each user per section

  • Quickly identify a participant's fail questions

  • Drill into each evaluation for the status of participants and completions

  • Delete a participant if necessary to finalize the calibration averages for all participants (note: this is only possible while the calibration is still “pending”)

  • Review and compare the scoring data of participants as they complete their evaluations

Remember: reviewing helps you identify outlier evaluations and lets evaluators compare their own assessments against those of the experts.
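
If you want to see what “deviation from the standard” looks like as arithmetic, here is a small, purely illustrative sketch: the participant names, scores, and the 10-point outlier threshold are all invented for the example (Playvox surfaces this comparison for you in the dashboard).

```python
# Rank participants by how far their score deviates from the expert
# (the "standard") and flag outliers. All numbers are invented.
expert_score = 85.0
participant_scores = {"Ana": 83.0, "Ben": 62.0, "Caro": 88.0}

# Absolute difference between each participant and the standard.
deviations = {
    name: abs(score - expert_score)
    for name, score in participant_scores.items()
}

# Flag anyone who deviates by more than a threshold (10 points here,
# chosen arbitrarily for the example).
OUTLIER_THRESHOLD = 10.0
outliers = [name for name, d in deviations.items() if d > OUTLIER_THRESHOLD]

# Largest deviations first — these are the evaluations to discuss first.
for name in sorted(deviations, key=deviations.get, reverse=True):
    flag = "  <- discuss first" if name in outliers else ""
    print(f"{name}: deviation {deviations[name]:.1f}{flag}")
# Ben: deviation 23.0  <- discuss first
# Caro: deviation 3.0
# Ana: deviation 2.0
```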

Calibration Categories

Get even more organized with Calibration Categories! Head on over to Quality under the Settings icon.

Click the Calibration tab. From here, you’ll be able to view all previously created categories, as well as edit or delete them if necessary. To create a new category, simply click the green New button and fill in a title and description. Hit Save when finished.

Calibrating is key to Quality Assurance. The more you calibrate, the more alignment you’ll achieve within your company AND your team. Happy Calibrating!

Other articles you may find interesting:

Understanding Scorecards

Creating Scorecards: Getting Started

Creating Scorecards: Adding Questions and Scoring Factors

Managing Scorecards

Scorecard Rules

Bonus Sections in Scorecards

Best Practices: Creating Scorecards

Calibrations Reports
