Using Bizagi Diagnostics

 

Overview

Bizagi Diagnostics is a toolkit that provides monitoring options for Automation Server operations in a Test or Production environment. Its dashboard uses data aggregation to guide you to performance hotspots.

The following section illustrates how to use Bizagi Diagnostics once it has been set up (as described in Setting up Bizagi Diagnostics).

 

Using Bizagi Diagnostics

Get started by opening a browser and navigating to the URL of the installed Bizagi Diagnostics web application.

 

http://[your_server]:3000/login

 

This loads the Bizagi Diagnostics web application's overview page (this may take a minute).

Use admin as both the username and the password on the login page.

After the first login, Grafana prompts you to change the default password for the admin user; you can set it to admin again if you prefer.
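
If you want to script a quick availability check before opening the browser, the following is a rough Python sketch, assuming the default Grafana health endpoint on port 3000 with no reverse proxy in front of it (the server name is a placeholder):

# Hypothetical availability check; adjust BASE_URL to your own server.
import json
import urllib.request

BASE_URL = "http://your_server:3000"

def diagnostics_is_up(base_url):
    # Grafana exposes /api/health; a healthy instance reports its database as "ok".
    with urllib.request.urlopen(base_url + "/api/health", timeout=10) as response:
        health = json.loads(response.read().decode("utf-8"))
    return health.get("database") == "ok"

print("Bizagi Diagnostics reachable:", diagnostics_is_up(BASE_URL))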

 

Diagnostics_start_new

 

Once you log in, click the Home option located at the top left corner.

 

Diagnostics_app01

 

In the view that opens, select the dashboard named Discover.

 

Diagnostics_app02

 

This is the starting point where the information to diagnose your environment is presented.

 

How to diagnose performance issues with Bizagi Diagnostics

It is important to emphasize that Bizagi Diagnostics is mainly used to find performance issues in your environment. The stored traces contain information from the time you enable Bizagi Diagnostics in your Scheduler service onward.

 

The following is a brief explanation of how to find issues based on the data displayed by Bizagi Diagnostics. Keep in mind that diagnosing performance issues is not a step-by-step recipe; you must adjust the filters and navigate through the information displayed.

 

What you need to do

1. Define an ideal performance time

The first step is to define the threshold at which performance starts to be affected. For example, database transactions should not take more than one minute.

 

2. Locate the time frame you want to diagnose

Without using any filter, locate the time frame in which you find performance peaks in the Stacked Time graph.

Not every peak indicates a performance issue: the graph displays any relatively high value as a peak, and the scale of the vertical axis is dynamically adjusted to the values displayed. A point can therefore appear as a peak even though it is below the ideal performance time.
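
To reason about this outside the graph, here is a minimal Python sketch with hypothetical sample values; the threshold is the ideal performance time defined in step 1:

# Hypothetical samples read from the Stacked Time graph: (time label, total seconds).
IDEAL_PERFORMANCE_TIME = 3.0  # seconds, as defined in step 1

samples = [
    ("10:00", 1.2),
    ("10:05", 2.8),   # may look like a peak once the axis rescales, yet is below the threshold
    ("10:10", 7.9),
    ("10:15", 0.9),
    ("10:20", 4.6),
]

# Compare every sample against the threshold instead of trusting the visual peak.
worth_diagnosing = [(label, seconds) for label, seconds in samples if seconds > IDEAL_PERFORMANCE_TIME]
print("Time frames worth diagnosing:", worth_diagnosing)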

 

Diagnostics_usage01

 

Diagnostics_usage02

 

Once you have found a time frame to diagnose, narrow the graph down by drawing a rectangle over the peak. This action zooms the horizontal axis in on your selection.

 

Repeat this action until the values become more readable.

 

Diagnostics_usage03

 

The graph shows the selected metric for the following values:

TimeBA: time it took to execute the Bizagi Application

TimeDB: time it took to execute the Database queries

TimeExternal: time it took to execute the external calls

 

Note:

By default, the metric displayed is sum. You can change it by clicking the graph title and selecting Edit; then go to the Metrics tab and select the desired metric.
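
As an illustration of what the sum metric means, here is a rough Python sketch with hypothetical trace records, each carrying the three time values described above, compared against max for a single time bucket:

# Hypothetical trace records belonging to one time bucket of the graph.
traces = [
    {"TimeBA": 2.1, "TimeDB": 0.4, "TimeExternal": 0.0},
    {"TimeBA": 1.7, "TimeDB": 1.2, "TimeExternal": 0.3},
]

for field in ("TimeBA", "TimeDB", "TimeExternal"):
    values = [trace[field] for trace in traces]
    # sum stacks every trace in the bucket; max would highlight only the single worst trace.
    print(field, "sum =", round(sum(values), 1), "max =", max(values))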

 

In our example, we have defined that our ideal performance time is three seconds, and the graph shows that all the values are higher than three seconds.

 

Diagnostics_usage04

 

This is the time frame where we are going to perform the analysis.

 

3. Aggregate data through the dashboard filters

Once you have identified the peaks where the ideal performance time is exceeded, it is time to find the root cause of these problems.

 

First, change the values of the filters, starting with the Stack by control. When you select a value in this control, the detailed data is updated. To see the available values for this control, refer to Bizagi Diagnostics Interface explained.

 

In our example, we group by process by selecting the ProcessName value.

 

Diagnostics_usage05

 

Once this value is chosen, the Detailed Time shows that the maximum time of the OMultra process is almost eight seconds, which is greater than that of the other three processes. This tells us that the performance issues are related to the OMultra process.
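
The dashboard performs this aggregation for you; as a mental model, the following Python sketch (hypothetical records and values) shows the kind of grouping that singles out the slowest process:

# Hypothetical trace records; field names mirror the dashboard values described above.
from collections import defaultdict

traces = [
    {"ProcessName": "OMultra",  "TimeBA": 7.4, "TimeDB": 0.5, "TimeExternal": 0.0},
    {"ProcessName": "Invoices", "TimeBA": 1.1, "TimeDB": 0.6, "TimeExternal": 0.2},
    {"ProcessName": "OMultra",  "TimeBA": 5.9, "TimeDB": 0.8, "TimeExternal": 0.1},
]

max_time_by_process = defaultdict(float)
for trace in traces:
    total = trace["TimeBA"] + trace["TimeDB"] + trace["TimeExternal"]
    max_time_by_process[trace["ProcessName"]] = max(max_time_by_process[trace["ProcessName"]], total)

# The process with the highest maximum time is the first candidate to investigate.
slowest = max(max_time_by_process.items(), key=lambda item: item[1])
print("Slowest process:", slowest)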

 

Diagnostics_usage06

 

Now we can filter the results by that process by selecting it in the ProcessName filter at the top.

 

Diagnostics_usage07

 

Once the filter is selected, all the graphs are refreshed. If the graphs are empty, the selected process did not present any issues during the time frame.

 

Diagnostics_usage08

 

To find out which operation is taking the most time, change the grouping to Operation.

 

Diagnostics_usage09

 

In the Detailed Time, we can see that the process takes the longest during the Next operation.
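
This filter-then-group reasoning can also be sketched in Python with hypothetical records, combining the ProcessName filter with the Operation grouping, as the dashboard does:

# Hypothetical trace records for the filtered time frame.
from collections import defaultdict

traces = [
    {"ProcessName": "OMultra",  "Operation": "Next", "TimeBA": 6.8},
    {"ProcessName": "OMultra",  "Operation": "Save", "TimeBA": 0.9},
    {"ProcessName": "Invoices", "Operation": "Next", "TimeBA": 0.7},
]

time_by_operation = defaultdict(float)
for trace in traces:
    if trace["ProcessName"] != "OMultra":  # the ProcessName filter selected at the top
        continue
    time_by_operation[trace["Operation"]] += trace["TimeBA"]

# The operation accumulating the most time (here, Next) is the one to drill into.
print(max(time_by_operation.items(), key=lambda item: item[1]))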

 

Diagnostics_usage10

 

We will group by task name to find the task that presents the performance problems.

 

Diagnostics_usage11

 

Looking at the Detailed Time, we can see that Enable timer and Auto: Get 360 File are the activities with the most processing time.

 

Diagnostics_usage12

 

The longest time appears in the TimeBA value, which means that both activities take a long time to complete due to performance issues in a custom rule.

 

The solution is to go to the activity and tune the rule to achieve the expected performance.

 

Additional tips

The following tips help you identify the root cause of performance issues:

If the TimeDB value takes the longest time, group the results by the Sql variable. This shows, in the SQL queries table of the detailed set, the queries that take the most time to process. If the performance delay is related to the Bizagi database and not to any external database, please contact the support team.
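
As a rough illustration of that grouping, here is a minimal Python sketch; the query text and field names are hypothetical:

# Hypothetical trace records; Sql holds the statement text, TimeDB the database time in seconds.
from collections import defaultdict

traces = [
    {"Sql": "SELECT ... FROM CaseData WHERE ...", "TimeDB": 4.2},
    {"Sql": "UPDATE Attachments SET ...",         "TimeDB": 0.3},
    {"Sql": "SELECT ... FROM CaseData WHERE ...", "TimeDB": 3.8},
]

time_by_query = defaultdict(float)
for trace in traces:
    time_by_query[trace["Sql"]] += trace["TimeDB"]

# Queries accumulating the most database time surface first, as in the SQL queries table.
for sql, total in sorted(time_by_query.items(), key=lambda item: item[1], reverse=True):
    print(round(total, 1), "s :", sql)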

 

If the TimeExternal value takes the longest time, performance is most likely being affected by an external call, such as invoking an external service or using a connector.

 

If your architecture is clustered, you can identify the thread that takes the longest time by its ID and determine which node needs to be tuned. This information is displayed in the Top threads graph.