An important task for project maintenance and administration of a Bizagi system is performing regular database maintenance, to keep the system working at an appropriate performance level.
Each database engine (SQL Server, Oracle) provides the necessary tools for such maintenance, including ways to proactively monitor, run diagnostics (e.g., profiler tools) and perform tuning tasks.
For more information, refer to System administration.
Monitoring and tuning guidelines
We recommend that the DBA:
1. Perform continuous monitoring over the database performance
The database engine itself provides the necessary tools and log files so that you can monitor, run diagnostics, and interpret results in order to perform tuning tasks.
Through pro-active monitoring you can anticipate undesirable situations such as table scans, database locks or delays, and detect situations which may require database tuning.
For instance, a table scan may suggest that queries/statistics are not tuned, and that maintenance of indexes is needed (adding new ones, redefining existing ones).
Relevant aspects to monitor, whether to review your configuration or the characteristics of the underlying hardware/infrastructure, include potential bottlenecks arising from lack of memory, waits in disk reads and writes, or high network traffic.
If the DBA detects that the database engine is not performing adequately (not executing queries in a timely fashion), the database server can be scaled up at any time (or scaled out when using an active-active cluster scheme such as Oracle RAC).
The following metrics are recommended:
PhysicalDisk(_Total)\Avg. Disk sec/Read
PhysicalDisk(_Total)\Avg. Disk sec/Write
Average disk read and write latency. It should not be greater than 20 ms for hard drives or 5 ms for SSD drives.
Processor(_Total)\% Processor Time
Indicates the percentage of processor use; the ideal range is between 30% and 80%. Values above this range indicate that more processing resources are required, which can lead to waiting times that slow down the operation of the application.
Memory\Available MBytes
Indicates the memory available on the database server. A low value indicates that the database does not have enough memory and starts performing operations against virtual memory on disk, which slows the database down.
Paging File(_Total)\% Usage
This indicator is linked to the available memory and the maximum memory allocated to SQL Server, and indicates whether data is being processed on disk instead of in memory.
System\Processor Queue Length
Indicates whether processes have to wait for CPU resources. A value greater than 0 indicates that the processors or cores are not sufficient for the requested operations.
Network interface\Bytes total/sec
Monitors the network interfaces. If utilization stays above 80%, applications may experience network traffic issues.
SQLServer:Access Methods\Forwarded Records/sec
Indicates how fragmented the tables without a clustered index (heap tables) are. When fragmentation is high, forwarding pointers are created that trigger additional I/O operations, which can slow down the operation.
SQLServer:Access Methods\Page Splits/sec
Indicates whether tables are being constantly fragmented. A high value indicates frequent page splits and constant movement of rows, an operation that could be optimized.
SQLServer:Buffer Manager\Buffer cache hit ratio
Measures cache hits during data processing. A high number indicates that memory is being used effectively and that I/O operations are reduced.
SQLServer:Buffer Manager\Page life expectancy
Monitors the lifetime of pages in the cache, that is, whether cached data is released very quickly because other data needs to be loaded. If this indicator is less than 300, there may be queries processing large amounts of data, and more memory may be required.
SQLServer:SQL Statistics\SQL Compilations/sec
SQLServer:SQL Statistics\SQL Re-Compilations/sec
These counters increase when SQL Server has to compile or recompile query plans because a cached plan is no longer valid or no cached plan exists for the query; this increases the processor consumption of the database.
SQLServer:General Statistics\Processes blocked
Measures the level of blocking occurring in the database. If this counter tends to increase with respect to a baseline, it indicates many escalations from row locks to page locks, which can reach table locks.
SQLServer:SQL Statistics\Batch Requests/sec
Measures the level of transactions being executed in the database. This counter should stay within the normal range of daily operation; if variations appear, investigate whether occasional processes generate that load.
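As a complement to Performance Monitor, several of the SQL Server counters above can also be read from T-SQL through the sys.dm_os_performance_counters dynamic management view. A minimal sketch (requires the VIEW SERVER STATE permission):

```sql
-- Read current values for a few of the counters listed above.
-- Note: per-second counters (e.g. Batch Requests/sec) expose a cumulative
-- value here; sample twice and divide the delta by the elapsed seconds.
SELECT RTRIM([object_name]) AS [object_name],
       RTRIM(counter_name)  AS counter_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Page life expectancy',
                       N'Batch Requests/sec',
                       N'Processes blocked');
```

This is convenient for scripted baselines when you cannot attach Performance Monitor to the database server interactively.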
2. Carry out database tuning regularly and according to best practices
For processes expected to produce a large number of cases or process a large number of activities per day, we recommend database tuning at least once a week.
Best practices include performing tuning during non-working hours, and planning ahead (keeping in mind that such tasks can take some significant time), as well as following recommendations provided by the database engine vendor.
•Verifying the database integrity.
•Updating database statistics.
•Reorganizing and maintaining indexes (recreate highly fragmented indexes or reorganize them according to the order of the queried columns when using compound indexes).
•Shrinking the database when appropriate.
•Monitoring database filegroups so that their configuration (size, increment, maximum size, drive used, etc) is most appropriate to the way they grow.
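In SQL Server, the tasks above map onto standard maintenance commands. A minimal sketch, in which the database, table, and index names are placeholders for your own objects:

```sql
-- Verify database integrity.
DBCC CHECKDB ('MyBizagiDB') WITH NO_INFOMSGS;

-- Update statistics across the database.
EXEC MyBizagiDB.sys.sp_updatestats;

-- Reorganize a lightly fragmented index (commonly 5-30% fragmentation)...
ALTER INDEX IX_Workitem_Example ON dbo.Workitem REORGANIZE;

-- ...or rebuild a highly fragmented one (commonly above 30%).
ALTER INDEX IX_Workitem_Example ON dbo.Workitem REBUILD;
```

The fragmentation thresholds shown in the comments are the commonly cited rules of thumb; adjust them to your own baseline measurements.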
According to the hardware characteristics of your database nodes (i.e., number of hard disks, processors or cores), you can configure the data files and logs for enhanced performance by scaling them out.
For instance, you could benefit from parallel I/O operations when using multiple drives, with dedicated drives separating data files and filegroups (i.e., placing log files in their own volume, placing tempdb in its own volume, or placing a data file holding the most heavily used transactional tables in its own volume, the one with the best speed).
Regarding tempdb, we suggest that you configure it using multiple data files and filegroups, pre-sized equally to avoid auto-growth (using one data file per CPU).
When pre-sizing, consider: disabling auto-growth for data files, having the data and log files use no more than 90% of the available disk space, having the log file twice the size of a single data file, and setting auto-growth for the log file to a specific size.
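The tempdb layout described above can be applied in SQL Server with ALTER DATABASE. A sketch assuming four CPU cores and a dedicated T: volume; the file sizes, paths, and logical file names other than the defaults are illustrative:

```sql
-- Pre-size the primary tempdb data file and disable auto-growth.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 4GB, FILEGROWTH = 0);

-- Add one equally sized data file per additional CPU core.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf',
          SIZE = 4GB, FILEGROWTH = 0);

-- Log file: twice the size of a single data file,
-- with auto-growth set to a fixed amount rather than a percentage.
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 8GB, FILEGROWTH = 512MB);
```

A restart of the SQL Server instance is needed for tempdb file changes to take full effect.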
Within Bizagi's model, pay close attention to the growth behavior of the following tables:
•Those related to storage of daily work: Workitem, Workitemcl, Wfcase, and Wfcasecl.
•Additional tables, according to the characteristics of your project: Asynchwiretry and Asynchworkitem.
•Those storing logs: AttribLog, EntityLog, Wistatelog, Transitionlog, Factlog, Casestatelog, Authlog, Attribcharlog, Assignationlog, Joblog (especially if your project involves job execution in a frequent manner), Alarmjobrecipientlog (especially if your project involves alarms in a frequent manner), Wfcaseabortreason (especially if in your project cases are manually cancelled frequently) and Reassignlog (especially if in your project, cases are manually reassigned frequently).
Note that tables storing logs may be partitioned into a separate filegroup.
Include in your tuning plans any additional business entities which are expected to store large volumes of information and to grow significantly.
According to what you observe, you may choose to review the fragmentation of information (filegroups in SQL Server or tablespaces in Oracle).
Carry out database tuning right after Bizagi executes maintenance actions (e.g., the maintenance tasks done by the Scheduler, or options delivered in additional tools such as Archiving).
3. Follow Bizagi guidelines on indexes creation
When creating or reorganizing indexes:
•You can create non-clustered indexes.
•You can create indexes on tables which are definitions of your project (i.e., business entities which are not actually part of Bizagi's meta model).
•As a general recommendation for databases, it is not useful to create indexes on columns which have a small set of possible values (e.g., boolean or bit type columns).
•Do not create indexes in an automatic manner (without prior analysis).
This means not relying on the un-assisted index creation tasks of software such as Database Engine Tuning Advisor (SQL Server), because the DBA should evaluate such indexes adequately.
Apart from a cost-benefit analysis, you can evaluate the use of indexes within the query execution plan and based on usage statistics (consider index seeks, index scans, index lookups and index updates).
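In SQL Server, those usage statistics can be read from the sys.dm_db_index_usage_stats view (requires VIEW SERVER STATE; counters reset when the instance restarts). A sketch for the current database:

```sql
SELECT OBJECT_NAME(s.[object_id]) AS table_name,
       i.name                     AS index_name,
       s.user_seeks,
       s.user_scans,
       s.user_lookups,
       s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.[object_id] = s.[object_id]
 AND i.index_id    = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY s.user_updates DESC;
-- Indexes with many updates but few seeks/scans/lookups
-- cost writes without serving reads: candidates for review.
```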
4. Ensure adequate concurrency settings when using SQL Server
Make sure you have the required management for concurrency enabled in Bizagi's database.
Verify that your Bizagi database uses snapshot isolation. This means:
•Allow Snapshot isolation: True (ALLOW_SNAPSHOT_ISOLATION ON)
•Is Read-committed snapshot isolation on: True (READ_COMMITTED_SNAPSHOT ON)
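These two settings can be checked and enabled with T-SQL; the database name is a placeholder, and enabling READ_COMMITTED_SNAPSHOT requires exclusive access to the database, so apply it in a maintenance window:

```sql
-- Check the current settings.
SELECT name, snapshot_isolation_state_desc, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = N'MyBizagiDB';

-- Enable them if needed.
ALTER DATABASE [MyBizagiDB] SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE [MyBizagiDB] SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
```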
When using these snapshot isolation levels, it becomes even more important to boost the resources and enforce best practices for the tempdb database (i.e., using a separate volume that guarantees high-speed I/O operations), since row versions are stored in tempdb.
These guidelines provide a starting point as well as useful recommendations for monitoring tasks, database tuning and diagnostics.
However, these guidelines do not cover every task a DBA needs to carry out regarding database maintenance (the information provided is illustrative and not exhaustive).
You also need to carry out additional database tuning tasks, both those encouraged by the database engine vendor, and those you identify based on use of specialized tools and reviews of logs and traces.
5. Make sure the system user is enabled
The domain\admon account is the system user employed internally by Bizagi and created by default.
Do not disable this user since it is needed to perform automatic tasks related to timers or scheduled jobs. We recommend that you check that this user is enabled in the database.
We strongly recommend that you edit this user's setting so that it does not have rights to access administration options at the Work Portal.
For this, you can exclude this user from those authorized to manage your Bizagi system (i.e., this user should not be able to manage users or modify values in Bizagi such as parameter entities, abort cases, etc).
6. Ensure adequate database access rights
Make sure that the credentials used for both the Work Portal and Scheduler instances have adequate access rights, as instructed and configured during initial setup.
The sample image below displays those rights needed when using SQL Server.
The server roles specified for this login must include: public.
The login must include the following user mapping items for that specific database: db_datareader, db_datawriter, public, rlBA_SQL_BizagiWebApp and rlBA_SQL_ExecuteBizagiSPs.
The db_datareader, db_datawriter, public, rlBA_SQL_BizagiWebApp and rlBA_SQL_ExecuteBizagiSPs user mappings apply for your project's database. This means that you need to apply these mappings once your Bizagi production environment database has been created.
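Membership in these database roles can be verified from inside the project database, connected as the Bizagi login (IS_ROLEMEMBER returns 1 when the current user belongs to the role):

```sql
SELECT IS_ROLEMEMBER('db_datareader')             AS db_datareader,
       IS_ROLEMEMBER('db_datawriter')             AS db_datawriter,
       IS_ROLEMEMBER('rlBA_SQL_BizagiWebApp')     AS rlBA_SQL_BizagiWebApp,
       IS_ROLEMEMBER('rlBA_SQL_ExecuteBizagiSPs') AS rlBA_SQL_ExecuteBizagiSPs;
```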
Double-check that this login is not locked out:
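Besides inspecting the login properties in SQL Server Management Studio, the lockout state of a SQL login can be checked with the LOGINPROPERTY function; the login name below is a placeholder:

```sql
SELECT LOGINPROPERTY(N'BizagiLogin', 'IsLocked')    AS is_locked,
       LOGINPROPERTY(N'BizagiLogin', 'LockoutTime') AS lockout_time;
```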
Updating a project
When you are updating a project, in addition to the access rights mentioned in the previous section, you need the following:
Grant the user privileges to back up and create databases:
GRANT BACKUP DATABASE TO UserName
GRANT CREATE DATABASE TO UserName
Open the Securables section, select the Effective tab and make sure that you have the following permission:
VIEW ANY DATABASE
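If the permission is missing, it can be granted at server level in the same way as the backup and create privileges above:

```sql
GRANT VIEW ANY DATABASE TO UserName;
```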
Troubleshooting and diagnostics
When you need to troubleshoot and diagnose an issue occurring at the database, rely on the specialized tools of your database engine, such as the Profiler tool for SQL Server.
For recommended guidelines on its use, refer to Profiler recommendations.