The process is simple:

  1. Select a customer interaction to be evaluated 

  2. Select an expert against whom other evaluations will be compared

  3. Identify multiple quality evaluators for a calibration 

  4. Choose whether you'd like to score fail all and fail section questions in the calibration.

Note: This option is disabled by default.

What does this option mean? 

Fail all: if a participant scores a fail all question differently from the expert, then that participant's score will be calculated as 0.

Fail section: if a participant scores a fail section question differently from the expert, then the total number of questions in that section will be subtracted from the final calibration score.

Learn more about fail questions and sections.

  5. Invite everyone to participate in a calibration

  6. Compare and share scoring results and comments

  7. Identify outliers and train on delta responses
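The fail all and fail section rules described above can be sketched in code. This is an illustrative sketch only, not Playvox's actual implementation: the question fields, answer format, and the assumption that each matching answer is worth one point are hypothetical.

```python
# Hypothetical sketch of the fail-question scoring rules described above.
# Data shapes and the one-point-per-match assumption are illustrative,
# not Playvox's actual implementation.

def calibration_score(questions, participant, expert):
    """Score a participant's evaluation against the expert's.

    questions: list of dicts with keys 'id', 'section', and 'type'
               ('normal', 'fail_all', or 'fail_section')
    participant, expert: dicts mapping question id -> answer
    """
    # Fail all: any mismatch on a fail-all question zeroes the score.
    for q in questions:
        if q["type"] == "fail_all" and participant[q["id"]] != expert[q["id"]]:
            return 0

    # Assumed baseline: one point per answer that matches the expert's.
    score = sum(1 for q in questions if participant[q["id"]] == expert[q["id"]])

    # Fail section: a mismatch on a fail-section question subtracts the
    # total number of questions in that section from the final score.
    for q in questions:
        if q["type"] == "fail_section" and participant[q["id"]] != expert[q["id"]]:
            section_size = sum(1 for s in questions if s["section"] == q["section"])
            score -= section_size

    return max(score, 0)
```

For example, with a three-question scorecard where question 2 is a fail section question, a participant who mismatches only question 2 loses the whole section, not just that one point.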

Let's get started:

Let's review the process for creating a calibration event, then we'll explain how to review the results. 

There are three ways to start a calibration. The process is the same for each starting point:

1. From the Interactions Dashboard. 

Select a customer interaction and click the blue Start Calibrating button at the bottom right.

2. From the Evaluations Dashboard. 

Set your filters for a specific agent or quality analyst, then click the View button to open a customer interaction that has already been evaluated. Click the blue Actions button and select Start calibration.

3. From the Calibrations Dashboard. 

Open Calibrations by clicking the Quality menu and then selecting the Calibrations submenu. You will see a list of any previously conducted calibrations. To add a new calibration, click the green Start calibration button in the upper right corner of the screen.

Setting up a Calibration:

All three ways of starting a calibration will direct you to the same window.

Fill in the remaining fields.

Note: Starting a calibration from an existing interaction or evaluation will auto-populate the interaction identification number. If the interaction resides on a platform that is not integrated with PlayVox, the identification number must be typed into the field manually.

*Remember the Score fail questions option is disabled by default.*

Complete the information and click the Start calibration button in the bottom right corner. This creates a calibration record for reporting.

It also sends an email notification to each participant inviting them to conduct the evaluation as part of a calibration event.

Reviewing a Calibration's Results 

Each participant will open an evaluation for the same customer interaction and agent and be asked to complete it. PlayVox will then consolidate the results of the multiple evaluations for comparison and review.

Whoever initiates the calibration will have a historical record of calibrations, the status of each evaluation, and the results.

They will also be able to apply filter parameters similar to those available for evaluations, making it easy to find calibrations by:

  • Scorecards

  • Teams

  • Experts

  • Analysts

  • Status

  • Date Range

  • Categories

Calibration managers can:

  • view the progress of analysts participating in the calibration

  • view the overall average score by section

  • view average score by each user per section

  • quickly identify a participant's fail questions

  • drill into each evaluation for the status of participants and completions

  • delete a participant if necessary to finalize the calibration averages for all participants

  • review / compare scoring data of participants as they complete their evaluations.

The goal of this analysis is to identify outlier evaluations and to let evaluators compare their own assessments against those of experts.

Quality analysts or evaluators can review their scoring data compared to peers participating in the calibration study.  

Managing Calibration Categories

What do we mean by calibration categories?

Categories are a way to organize and segment your calibration sessions, e.g., Junior Analysts' Calibrations, Senior Analysts' Calibrations, etc.

To create Calibration Categories, go to the platform's settings icon ⚙ and select Quality. Once there, you'll see the Calibrations Categories tab. After you click on it, you'll be able to see and manage all previously created categories. To create a new category, simply click the green ➕ icon and fill in the required information.

Now go on and try Calibrations! You'll see what a positive impact they have on your team's performance.

Last Edited: 03.15.2019
