Monitoring AI Metrics
This section in the Monitoring Center Home provides at-a-glance metrics on AI operations, particularly those from AI Agents, Ask Ada, and AI Workers. Key metrics are displayed, with links to detailed dashboards for deeper analysis. Upon clicking the AI Metrics button, a detailed AI Metrics Dashboard window opens.

AI Agents Used
This widget displays the total number of unique AI agents that have been utilized within the specified timeframe. Each AI agent is counted only once, regardless of how many times it was used during this period. This metric provides insight into the diversity of AI resources employed over time.
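The counting rule above (each agent counted once per timeframe, regardless of invocations) can be sketched as follows. This is an illustrative example only, not the product's internal query; the `usage_log` structure and `agents_used` function are hypothetical.

```python
# Illustrative sketch: counting unique AI agents used in a timeframe.
# Each agent counts once, no matter how many times it was invoked.
from datetime import datetime

# Hypothetical usage log of (agent_id, timestamp) pairs.
usage_log = [
    ("invoice-agent", datetime(2025, 11, 1, 9, 0)),
    ("invoice-agent", datetime(2025, 11, 2, 14, 30)),  # same agent, counted once
    ("triage-agent",  datetime(2025, 11, 3, 10, 15)),
]

def agents_used(log, start, end):
    """Number of distinct agents with at least one use in [start, end]."""
    return len({agent for agent, ts in log if start <= ts <= end})

print(agents_used(usage_log, datetime(2025, 11, 1), datetime(2025, 11, 30)))  # 2
```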
Ask Ada Chats
A chat is defined as the entire session of interactions that a user has with the Ask Ada assistant before the conversation context is lost. The conversation maintains its context as long as the user continues the interaction without leaving the assistant or the application. During a single chat session, a user may ask multiple questions and receive multiple answers, all within the same continuous interaction. When the end user exits the Ask Ada interface or the application and then returns later, the previous context is considered lost. Upon returning, any new interactions begin a new chat session, which is counted separately in the chart. This widget displays the total number of chat interactions handled by Ask Ada within the specified period. It calculates this metric by filtering distinct chat sessions, providing an overview of user engagement.
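The session rule above (a chat ends when the user leaves the assistant or the application; the next interaction starts a new, separately counted chat) can be made concrete with a small sketch. The event log format and `count_chats` function are hypothetical, used only to illustrate the counting logic.

```python
# Illustrative sketch: grouping Ask Ada interactions into chat sessions.
# A chat stays open while the user keeps interacting; leaving the assistant
# or the application loses the context, so the next interaction opens a
# new chat that is counted separately.
events = [
    {"user": "ana", "left_after": False},  # question 1, chat 1
    {"user": "ana", "left_after": False},  # question 2, still chat 1
    {"user": "ana", "left_after": True},   # user exits: context lost
    {"user": "ana", "left_after": False},  # returns later: chat 2
]

def count_chats(events):
    chats = 0
    in_chat = False
    for e in events:
        if not in_chat:
            chats += 1        # first interaction after context loss opens a chat
            in_chat = True
        if e["left_after"]:
            in_chat = False   # leaving ends the session
    return chats

print(count_chats(events))  # 2
```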
% Accuracy Score of Workers
This metric reflects the reliability of tasks powered by AI Workers within each process. It measures how accurately AI Workers complete Form-based tasks by analyzing the extent of human corrections made after the AI Worker has finished its work. The accuracy score is calculated based on the number of manual edits applied to Form fields post-AI Worker completion. Fewer human corrections indicate higher accuracy, while frequent interventions suggest areas where the AI Worker may need further training. This score provides valuable insights into the performance and training effectiveness of AI Workers. It helps you pinpoint opportunities for optimization, improve automation quality, and build trust in AI-Worker driven processes. For detailed information check the AI Metrics Dashboard - AI Workers Accuracy documentation.
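The documentation states that the score is based on the number of manual edits applied to Form fields after the AI Worker finishes, but does not publish the exact formula. One plausible reading, shown here purely to make the idea concrete (the `accuracy_score` function and its parameters are assumptions, not the product's actual calculation), is the share of AI-filled fields left untouched by humans:

```python
# Hypothetical sketch of an accuracy score driven by post-completion edits:
# fewer human corrections to Form fields means a higher score.
def accuracy_score(total_fields, edited_fields):
    """Assumed formula: percentage of AI-filled fields not corrected."""
    if total_fields == 0:
        return 0.0
    return 100.0 * (total_fields - edited_fields) / total_fields

print(accuracy_score(total_fields=20, edited_fields=3))  # 85.0
```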
% Enabled Workers
This widget shows the percentage of AI Workers enabled for all tasks in the selected environment. A higher percentage indicates broader adoption of AI capabilities, reflecting increased usage of intelligent automation to support task execution. For detailed information check the AI Metrics Dashboard - AI Workers distribution documentation.
% Successful Autonomous Workers
This chart shows the percentage of successful completions by autonomous AI Workers. It reflects how many of these workers are completing tasks satisfactorily. A higher percentage indicates better performance and reliability of autonomous AI execution. For detailed information check the AI Metrics Dashboard - Autonomous Execution Rate documentation.
Distribution of Worker Types
This section shows the percentage distribution across the three types of AI Workers (Autonomous, with Autonomy Rule, and Supervised). For detailed information check the AI Metrics Dashboard - AI Workers distribution documentation.
Workers' Feedback
This section shows the percentage distribution of AI Worker feedback, indicating whether it was positive, negative, or no feedback was received. It reflects user ratings of AI performance, helping you assess how well the AI Workers are supporting task execution. For detailed information check the AI Metrics Dashboard - AI Worker Feedback documentation.
To access AI Metrics Trends and Forecast (AI Operations Per Hour), go to the AI Metrics details section.
Last Updated 11/19/2025 11:02:11 AM