The process is simple:

  1. Select a customer interaction to be evaluated 
  2. Select an expert against whom other evaluations will be compared
  3. Identify multiple quality evaluators for a calibration 
  4. Choose whether you’d like to score fail all and fail section questions in the calibration

Note: This option is disabled by default.

What does this option mean? 

Fail all: if a participant scores a fail all question differently from the expert, then the score will be calculated as 0.

Fail section: if a participant scores a fail section question differently from the expert, then the total number of questions in that section will be subtracted from the final calibration score.

Learn more about fail questions and sections.
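
To make the arithmetic concrete, here is a minimal sketch of how these two options could affect a participant's score. It assumes a simple percentage-of-agreement score; the function, field names, and formula are illustrative assumptions, not PlayVox's exact calculation.

```python
# Illustrative only: a simplified sketch of how fail all / fail section
# scoring could affect a calibration result. The names and formula are
# assumptions for this example, not PlayVox's internal calculation.

def calibration_score(matches, total_questions, failed_all=False,
                      failed_section_size=0):
    """Return a percentage agreement score against the expert evaluation.

    matches             -- questions the participant scored the same as the expert
    total_questions     -- total questions in the scorecard
    failed_all          -- participant scored a "fail all" question differently
    failed_section_size -- number of questions in a section where the participant
                           scored a "fail section" question differently (0 if none)
    """
    if failed_all:
        return 0.0  # fail all: the whole calibration score drops to 0
    # fail section: subtract that section's question count from the agreement total
    adjusted = max(matches - failed_section_size, 0)
    return round(100.0 * adjusted / total_questions, 1)


# Example: 18 of 20 answers match the expert, but one "fail section"
# question in a 5-question section was scored differently.
print(calibration_score(18, 20, failed_section_size=5))  # 65.0
# The same scorecard with a "fail all" mismatch scores 0.
print(calibration_score(18, 20, failed_all=True))        # 0.0
```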

  5. Invite everyone to participate in a calibration
  6. Compare and share scoring results and comments
  7. Identify outliers and train on delta responses

Let's get started:

Let's review the process for creating a calibration event and then explain how to review the results. There are three ways to start a calibration, and the process is the same for each starting point:

From the Interactions Dashboard. 

Select a customer interaction and click the blue Start Calibrating button at the bottom right.

From the Evaluations Dashboard. 

Set your filters for a specific agent or quality analyst. Click the View button to select a customer interaction that has already been evaluated, click the blue Actions button, and select Start calibration.

From the Calibrations Dashboard. 

Open Calibrations by clicking the Quality menu and then the Calibrations submenu. You will see a list of any calibrations conducted previously. To add a new calibration, click the green Create a new Calibration "+" button in the upper right corner of the screen.

Start a Calibration:

There is one simple form to start a calibration. Some information will be populated automatically depending on where you start the calibration. Fill in the remaining fields.

Note: Starting a calibration from an existing interaction or evaluation will auto-populate the interaction identification number. If the interaction resides on a platform that is not integrated with PlayVox, the identification number must be typed into the field manually.

*Remember: the Score fail questions option is disabled by default.*

Complete the information and click the Start calibration button in the bottom right corner. This will create a calibration record for reporting and send an email notification to each participant inviting them to conduct the evaluation as part of a calibration event.

Reviewing Results of a Calibration

Each participant will open an evaluation for the same customer interaction and agent and will be asked to complete it. PlayVox will then consolidate the results of the multiple evaluations for comparison and review.

Whoever initiates the calibration will have a historical record of calibrations, the status of each evaluation, and the results.

Calibration managers can:

  • view the progress of analysts participating in the calibration
  • view the overall average score by section
  • view the average score per section for each user
  • quickly identify a participant's fail questions
  • drill into each evaluation for the status of participants and completions
  • delete a participant if necessary to finalize the calibration averages for all participants
  • review and compare participants' scoring data as they complete their evaluations

The goal of this analysis is to identify outlier evaluations and to let evaluators compare their own assessments against those of experts.
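
For example, a manager looking at the consolidated results might flag any participant whose score differs from the expert's by more than some threshold. The snippet below is an illustrative sketch of that idea; the scores, threshold, and output are made-up assumptions for the example, not data exported from PlayVox.

```python
# Illustrative only: one way to spot outlier evaluations in a finished
# calibration. The data and the 10-point threshold are assumptions for
# this example, not a PlayVox feature or formula.

EXPERT_SCORE = 85.0
participant_scores = {
    "analyst_a": 84.0,
    "analyst_b": 90.0,
    "analyst_c": 62.0,   # far from the expert -- a likely outlier
}

OUTLIER_THRESHOLD = 10.0  # points of difference from the expert

for name, score in participant_scores.items():
    delta = score - EXPERT_SCORE
    flag = "OUTLIER" if abs(delta) > OUTLIER_THRESHOLD else "ok"
    print(f"{name}: score={score}, delta={delta:+.1f} ({flag})")
```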

Quality analysts or evaluators can review their scoring data compared to peers participating in the calibration study.   
