What Is the "2. Result by Evaluation by Agent" Report in Quality in wolkvox Manager?

Written by Jhon Bairon Figueroa

Updated on April 9th, 2026


Table of Contents

  • Introduction
  • Report Information

Introduction

The "2. Result by Evaluation by Agent" report, available in the 'Reports' > 'Quality' section of wolkvox Manager, allows you to view the details of each evaluation performed on agents during the selected period. Unlike the consolidated report by agent, this report shows each individual evaluation and also presents a TOTAL row per agent, with a summary of their results.

This report is useful for reviewing in greater detail how each interaction was scored, which quality matrix was used for evaluation, the result regarding critical errors, the percentage score obtained, the channel through which the interaction occurred, and the classification of the conversation.

Report Information

The columns in this report include the following information:

  • CONN_ID: Identification number of the evaluated conversation or interaction.
  • AGENT_ID: Agent's extension number within the wolkvox system.
  • AGENT_NAME: Name of the evaluated agent.
  • COMMENTS: Comments associated with the evaluation. This column shows observations recorded during the scoring process.
  • PRECISION_UNIT_CRITICAL_ERROR: Result of the evaluation regarding unit or business critical errors configured in the quality matrix. In an individual row, this indicator reflects whether the evaluation had this type of critical error. It can show values such as:
    • 0.00%: The evaluation had a unit critical error.
    • 100.00%: The evaluation did not have a unit critical error.
    • In the TOTAL row, this field shows the agent's consolidated result for the evaluations included in the report.
  • PRECISION_OPPORTUNITY_CRITICAL_ERROR: Average percentage of the agent's compliance with the critical attributes evaluated in the quality matrix. This metric is interpreted at the level of opportunities, or critical attributes reviewed, and reflects what proportion of these critical criteria the agent fulfilled without error. In practice, it can be read as follows:
    • 100.00%: The agent did not make errors in the evaluated critical attributes.
    • 0.00%: The agent failed in all considered critical attributes.
    • An intermediate value: The agent partially met the evaluated critical attributes.
  • ACCURACY: Percentage score obtained in the evaluation. In an individual row, it corresponds to the final result of that specific evaluation. In the TOTAL row, it shows the agent's consolidated average score within the queried period.
  • DATE: Date and time when the evaluation was performed.
  • SURVEY: Name of the quality matrix with which the agent's interaction was evaluated.
  • COD_ACT: Activity code with which the result of the conversation was classified. This column can show:
    • TIMEOUTACW: Indicates that the agent did not classify the interaction within the allowed time. This applies when a time limit for classification is configured.
    • TIMEOUTCHAT: Indicates that the customer did not respond again, and therefore it was not possible to classify the conversation with an activity code.
    • The code configured in the operation: It can be numeric or alphanumeric, depending on how the activity codes are parameterized.
  • CHANNEL: Channel through which the evaluated interaction originated. This column can show the following values:
    • chat-facebook: Chat interaction from Facebook.
    • chat-instagram: Chat interaction from Instagram.
    • chat-sms: Chat interaction from SMS.
    • chat-web: Chat interaction from the web widget.
    • chat-whatsapp: Chat interaction from WhatsApp.
    • voice: Indicates that the interaction corresponds to a call.
  • FEEDBACK: Feedback recorded for the evaluation, either by a human quality analyst or by AutoQAi.
  • SKILL_ID: Identification number of the agent queue associated with the evaluated interaction.
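To make the two critical-error columns concrete, here is a minimal sketch of how their values could be derived from a single evaluation's attribute results. The function names and data shapes are assumptions for illustration only; they are not part of the wolkvox product or its API.

```python
# Hypothetical sketch of the two critical-error precision columns.
# All names here are illustrative, not wolkvox identifiers.

def precision_unit_critical_error(had_unit_critical_error: bool) -> float:
    """0.00% if the evaluation had a unit critical error, else 100.00%."""
    return 0.0 if had_unit_critical_error else 100.0

def precision_opportunity_critical_error(critical_results: list) -> float:
    """Share of evaluated critical attributes fulfilled without error.

    `critical_results` holds one boolean per critical attribute reviewed:
    True = fulfilled without error, False = failed.
    """
    if not critical_results:
        return 100.0  # assumption: no critical attributes were evaluated
    passed = sum(critical_results)
    return 100.0 * passed / len(critical_results)

# Example: the agent met 3 of 4 critical attributes -> 75.00%
print(f"{precision_opportunity_critical_error([True, True, True, False]):.2f}%")
# -> 75.00%
```

With this reading, 100.00% means no critical errors at all, 0.00% means every considered critical attribute failed, and intermediate values mean partial compliance, matching the value descriptions above.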

Important Consideration About the TOTAL Row

Within the report, evaluations are grouped by agent, and at the end of each group, a TOTAL row is presented. This row summarizes the results of all the evaluations of that agent included in the queried date range. Therefore, in fields such as PRECISION_UNIT_CRITICAL_ERROR, PRECISION_OPPORTUNITY_CRITICAL_ERROR, and ACCURACY, the TOTAL row shows a consolidated value and not the result of a single evaluation.
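The consolidation described above can be sketched as follows. This assumes the TOTAL row is a simple average of the agent's evaluations in the queried range; the actual weighting used by wolkvox may differ, and the sample data is invented.

```python
# Illustrative sketch: how a TOTAL row could consolidate an agent's
# evaluations. Sample data and simple averaging are assumptions;
# dictionary keys mirror the report's column names.
from statistics import mean

evaluations = [  # two evaluations for the same agent
    {"ACCURACY": 90.0, "PRECISION_UNIT_CRITICAL_ERROR": 100.0},
    {"ACCURACY": 70.0, "PRECISION_UNIT_CRITICAL_ERROR": 0.0},
]

# The TOTAL row averages each percentage column across the evaluations.
total_row = {
    col: round(mean(e[col] for e in evaluations), 2)
    for col in ("ACCURACY", "PRECISION_UNIT_CRITICAL_ERROR")
}
print(total_row)  # {'ACCURACY': 80.0, 'PRECISION_UNIT_CRITICAL_ERROR': 50.0}
```

This also shows why a TOTAL row can display an intermediate value (here 50.00%) in PRECISION_UNIT_CRITICAL_ERROR even though each individual row is always 0.00% or 100.00%.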

This report is especially useful for auditing evaluations one by one, comparing the details of quality results by agent, and understanding how the final consolidated result is built from each individual evaluation.


