Agents aren’t the only ones who should be evaluated in the QA field. According to DMG Consulting, “Quality assurance analysts should be evaluated on a consistent basis to monitor their performance and ensure that they are meeting their goals by completing the appropriate number of QA reviews and coaching sessions on time each month. It’s also essential to ensure that the QA evaluations are being completed professionally, in compliance with the guidelines established by the department, and that they are calibrated to ensure fairness.”
That’s why Playvox gives you the option to Evaluate the Analyst. We’ll show you how to do that right here.
Creating a Workload
If you’re reading this and it sounds familiar, it probably is! The beginning of this process is the same as if you were Evaluating the Agent.
1. Go to Quality > Workloads.
You will land on the General View page, which lists your workloads as you create them. On this list, you'll see each workload's name, its status, when it was created and by whom, its last assignment date, and its assignment status.
Under Actions, you may run reports on, edit, or delete a workload.
Note: Under the reports icon, you may also reassign a workload. Check out this article to read more!
To create a workload, make sure the Analyst mode is Off as shown in the image above. This special feature is for Admins who are part of a workload as an analyst and would like to easily jump from one view to the other.
Note: Once you've turned the option On or Off, the platform will save that setting even if you log out from it. Also, the mode you are in determines the permissions or restrictions that apply.
2. Click the New workload button in the upper right.
3. Select Evaluate the Analysts.
Evaluate the Analysts
From here, you’ll go through a 3-step process that includes Setup, Sampling, and Analyst’s Workload.
1. Name your Workload.
2. Determine your schedule for the workload by clicking Edit.
A new window will appear where you can specify the following:
Send Start Date and Time
Frequency of the workload (daily, weekly, or monthly)
In Pull Data From, specify the period from which you want to pull the data.
Choose whether to pull data from the last X days, or check the second box to restrict the data pull to a specific start and end date.
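To make the two Pull Data From modes concrete, here is a minimal sketch of how the data-pull window could be computed. The function name and parameters are illustrative assumptions for this article, not Playvox's API.

```python
from datetime import date, timedelta

def pull_window(run_date, last_x_days=None, start=None, end=None):
    """Return the (start, end) date range a workload pulls data from.

    Hypothetical model: either pull the last X days relative to the
    run date, or use a fixed start/end range.
    """
    if last_x_days is not None:
        return run_date - timedelta(days=last_x_days), run_date
    return start, end

# Pull the last 7 days relative to a daily run on 2024-03-10:
window = pull_window(date(2024, 3, 10), last_x_days=7)
# window == (date(2024, 3, 3), date(2024, 3, 10))
```

Note how a "last X days" window slides forward with each run, while a fixed start/end range always pulls the same period.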
3. Add Filter Criteria. Perhaps the evaluation field is still too wide, even with the limited dates you've chosen to Pull Data From in your Schedule. That's why Playvox gives you the option to filter the evaluations that come into your workload.
You may add the following filters:
Scorecard - Select one or more scorecards
Team - Select one or more Teams
Note: Though Teams tend to be associated with specific evaluations (and therefore specific agents), this option would be beneficial if you always have a certain analyst that evaluates a specific team’s (or teams’) work.
E.g. Aanya is an Analyst. She always evaluates agents from Team Chameleon. In this case, it may be a good idea to create a workload filtering by Team Chameleon, knowing that Aanya (the analyst) is the “regular” evaluator.
QA score - Choose evaluations that are “greater than” or “less than” a specific percentage.
Note: Depending on your company’s preferences, you may choose to enable all filters (Scorecard, Team and QA Score), some filters (e.g. Team and QA Score), one filter, or none at all. Simply select what you’d like from the dropdown menu.
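Conceptually, each filter you leave unset is simply skipped, and an evaluation must match every filter you do enable. The sketch below models that behavior; the function and field names are assumptions made for illustration, not Playvox's data model.

```python
def apply_filters(evaluations, scorecards=None, teams=None,
                  qa_above=None, qa_below=None):
    """Keep only evaluations matching every enabled filter.

    A filter left as None is disabled, mirroring how each Playvox
    filter (Scorecard, Team, QA score) is optional.
    """
    result = []
    for ev in evaluations:
        if scorecards is not None and ev["scorecard"] not in scorecards:
            continue
        if teams is not None and ev["team"] not in teams:
            continue
        if qa_above is not None and not ev["qa_score"] > qa_above:
            continue
        if qa_below is not None and not ev["qa_score"] < qa_below:
            continue
        result.append(ev)
    return result

evals = [
    {"scorecard": "Chat", "team": "Chameleon", "qa_score": 92},
    {"scorecard": "Chat", "team": "Falcon", "qa_score": 70},
    {"scorecard": "Email", "team": "Chameleon", "qa_score": 88},
]
# Only Team Chameleon evaluations scoring below 90%:
filtered = apply_filters(evals, teams={"Chameleon"}, qa_below=90)
# filtered == [{"scorecard": "Email", "team": "Chameleon", "qa_score": 88}]
```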
1. Build your sample.
Select a sampling method: By Analyst, or Percentage or Exact Number.
If you choose to sample By Analyst:
Start typing the name of the Original Analyst(s) in the search field provided. Playvox will automatically generate results as you type.
Once you have selected your analyst(s) to evaluate, you may adjust the number of evaluations to be pulled in the random sampling.
If you choose to sample by Percentage or Exact Number:
Choose Quantity or Percentage.
Then, specify the exact number or percentage of Total Evaluations you would like to be pulled.
2. Click Next after building your sample.
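The sampling options above can be sketched as a single random-sampling step. This is a simplified model under assumptions of our own (Playvox's internal selection logic isn't documented here); the function name and parameters are illustrative.

```python
import random

def build_sample(evaluations, by_analyst=None, per_analyst=None,
                 quantity=None, percentage=None, seed=None):
    """Randomly sample evaluations for a workload (hypothetical model).

    - By Analyst: pull up to `per_analyst` evaluations for each
      Original Analyst named in `by_analyst`.
    - Percentage or Exact Number: pull `percentage`% of the total,
      or exactly `quantity` evaluations.
    """
    rng = random.Random(seed)
    if by_analyst is not None:
        sample = []
        for analyst in by_analyst:
            pool = [e for e in evaluations if e["analyst"] == analyst]
            sample += rng.sample(pool, min(per_analyst, len(pool)))
        return sample
    if percentage is not None:
        quantity = round(len(evaluations) * percentage / 100)
    return rng.sample(evaluations, min(quantity, len(evaluations)))

evals = [{"analyst": "Aanya", "id": i} for i in range(10)]
sample = build_sample(evals, percentage=30, seed=1)
# len(sample) == 3  (30% of 10 evaluations, chosen at random)
```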
Much like a Quality Calibration, Evaluating the Analyst helps bring consistency into your workloads. It is in this part of the process that you’ll be able to select the Expert Analyst(s) that you would like to evaluate against the Original Analyst. Each Expert Analyst selected will evaluate against the same agent and scorecard as the Original Analyst.
Note: The Expert Analyst will not be able to see which Original Analyst is associated with each workload, but the Agent’s name will be visible.
1. Select the Expert Analyst(s) from the search field/dropdown menu.
2. Assign the percentage of workload you’d like each Expert Analyst to evaluate.
The number of evaluations per analyst will be in the right column.
Note: If you built your sampling based on percentages, the following message will appear instead of a number under the evaluations per analyst column: "Because the sample is pulled by percentage and not a set number, the number of evaluations may change each time the workload runs."
3. Click Publish.
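The percentage split in step 2 has to resolve to whole evaluations. One way to model that is the largest-remainder method sketched below; Playvox's exact rounding rule isn't documented here, so treat this as an assumption for illustration.

```python
def distribute(total_evaluations, shares):
    """Split a workload among Expert Analysts by percentage.

    `shares` maps analyst name -> percentage. Leftover evaluations
    from rounding go to the analysts with the largest fractional
    remainders (largest-remainder method; a hypothetical rule).
    """
    exact = {a: total_evaluations * p / 100 for a, p in shares.items()}
    counts = {a: int(x) for a, x in exact.items()}
    leftover = total_evaluations - sum(counts.values())
    for a in sorted(exact, key=lambda a: exact[a] - counts[a], reverse=True):
        if leftover == 0:
            break
        counts[a] += 1
        leftover -= 1
    return counts

# 25 evaluations split 50/30/20:
counts = distribute(25, {"Ana": 50, "Ben": 30, "Caro": 20})
# counts == {"Ana": 13, "Ben": 7, "Caro": 5}
```

Whatever rounding rule is used, the per-analyst counts always sum to the total sampled, which is why the right column can show a concrete number when the sample size is fixed.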
When published, the workload will automatically begin to select and distribute evaluations to Expert Analysts based on the workload parameters that you set. Notifications will be sent to Expert Analysts in the same way they are sent to Analysts evaluating an agent interaction.
Remember: Evaluations are selected randomly in order to fairly balance Expert Analyst workloads and avoid differences in workload difficulty. A workload resets automatically day-over-day, week-over-week, or month-over-month, depending on your schedule settings. If edits are made to a workload, interaction assignments will be reflected at the start of the next scheduled processing time.
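The day-over-day, week-over-week, and month-over-month resets amount to computing the next processing time from the schedule frequency. A minimal sketch, assuming a simplified monthly rule that rolls to the first of the following month (the real calendar handling may differ):

```python
from datetime import date, timedelta

def next_reset(last_run, frequency):
    """Date of the next workload reset for a daily/weekly/monthly schedule.

    Hypothetical helper; 'monthly' here is simplified to the first
    day of the following month.
    """
    if frequency == "daily":
        return last_run + timedelta(days=1)
    if frequency == "weekly":
        return last_run + timedelta(weeks=1)
    if frequency == "monthly":
        year = last_run.year + (last_run.month == 12)
        month = 1 if last_run.month == 12 else last_run.month + 1
        return last_run.replace(year=year, month=month, day=1)
    raise ValueError(frequency)

# A daily workload last run on 2024-03-10 resets on 2024-03-11:
reset = next_reset(date(2024, 3, 10), "daily")
```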
An Expert Analyst may turn on the Analyst Mode to see Active or Previous Workloads, Progress, Type, and Creation Date if - and only if - they have access to create and participate in workloads. Clicking a workload that has been started will bring them to the Expert Analyst’s progress page where they may continue the evaluation process. Managers may click a workload from the list to view reports and Expert Analyst progress at any time.
For Expert Analysts that only participate in (and do not create) workloads, the default page will be a list of items to complete (as shown below).
Important: Each time an Expert Analyst finishes an evaluation, it will be stored under the Calibrations tab, not the Evaluations tab.