Database maintenance

An important task in the maintenance and administration of a Bizagi project is ongoing database maintenance, which keeps the system performing and working properly.

Note that each database engine (SQL Server, Oracle) provides its own tools for such maintenance, including facilities for proactive monitoring, diagnostics (e.g., profiler tools), and tuning.




Monitoring and tuning guidelines

We recommend that a DBA:


1. Perform continuous monitoring of database performance.

Note that the database engine itself provides the tools needed to monitor, run diagnostics, and interpret results for tuning (in addition to the relevant log files).

Through proactive monitoring you can anticipate undesirable situations such as table scans, database locks, or delays, and detect aspects that may require database tuning.

For instance, a table scan suggests that queries or statistics are not tuned and that index maintenance is needed (adding new indexes or redefining existing ones).

Relevant aspects to monitor, whether to review your configuration or the characteristics of the underlying hardware and infrastructure, include potential bottlenecks caused by lack of memory, by waits on disk reads and writes, or by high network traffic.
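For instance, on SQL Server you can query the engine's dynamic management views to spot the dominant wait types and to find indexes that are mostly scanned rather than sought. The following is a sketch; the object names returned depend on your own database:

```sql
-- Top wait types accumulated since the last restart
-- (points to memory, I/O, or network pressure).
SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT LIKE N'SLEEP%'   -- exclude some benign idle waits
ORDER BY wait_time_ms DESC;

-- Indexes scanned far more often than they are sought: tuning candidates.
SELECT OBJECT_NAME(s.object_id) AS table_name, i.name AS index_name,
       s.user_scans, s.user_seeks
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY s.user_scans DESC;
```

Oracle offers equivalent views (e.g., V$SYSTEM_EVENT) for the same purpose.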



If the DBA detects that the database engine is not performing adequately (queries are not executing in a timely fashion), recall that you can scale up the database server at any time (or scale out when using an active-active cluster scheme such as Oracle RAC).


2. Carry out database tuning regularly and according to best practices.

For processes expected to produce a large number of cases or to process a large number of activities per day, database tuning is recommended at least weekly.

Best practices include performing tuning during non-working hours and planning accordingly (keeping in mind that such tasks can take significant time), as well as following further recommendations from the database engine vendor.


Aspects to tune mainly include:

- Verifying database integrity.

- Updating database statistics.

- Reorganizing and maintaining indexes (recreating highly fragmented indexes, or reorganizing them according to the order of the queried columns when using compound indexes).

- Shrinking the database.

- Monitoring database filegroups so that their configuration (size, increment, maximum size, drive used, etc.) remains adequate as they grow.

According to the hardware characteristics of your database nodes (i.e., number of hard disks, processors, or cores), you may configure the data files and logs for enhanced performance (by scaling them out).

For instance, you could benefit from parallel I/O operations when multiple drives are available, by using dedicated drives to separate data files and filegroups (i.e., placing log files on their own volume, placing tempdb on its own volume, or placing the data file holding the most heavily used transactional tables on its own, fastest volume).

Regarding tempdb, it is suggested to configure it with multiple data files and filegroups, pre-sized equally to avoid auto-growth (using one data file per CPU).

When pre-sizing, consider recommendations such as: disabling auto-growth for data files, having the data and log files use no more than 90% of the available disk space, having the log file twice the size of a single data file, and setting auto-growth for the log file to a specific size.
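On SQL Server, the integrity, statistics, and index tasks above can be sketched as follows. This is illustrative only; the database and table names are placeholders for your own project's, and thresholds should follow your vendor's guidance:

```sql
-- 1. Verify database integrity.
DBCC CHECKDB (N'BizagiProject') WITH NO_INFOMSGS;

-- 2. Update database statistics.
EXEC sp_updatestats;

-- 3. Find fragmented indexes...
SELECT OBJECT_NAME(ips.object_id) AS table_name, i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10;

-- ...then reorganize moderately fragmented ones (roughly 10-30%)...
ALTER INDEX ALL ON dbo.Workitem REORGANIZE;

-- ...or rebuild heavily fragmented ones (above roughly 30%).
ALTER INDEX ALL ON dbo.Workitem REBUILD;
```

As noted above, run such tasks during non-working hours.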


Within Bizagi's model, pay close attention to the growth behavior of the following tables:

- Those related to the storage of daily work: Workitem, Workitemcl, Wfcase, and Wfcasecl.

- Additional ones, according to the characteristics of your project: Asynchwiretry and Asynchworkitem.

- Those storing logs: AttribLog, EntityLog, Wistatelog, Transitionlog, Factlog, Casestatelog, Authlog, Attribcharlog, Assignationlog, Joblog (especially if your project executes jobs frequently), Alarmjobrecipientlog (especially if your project uses alarms frequently), Wfcaseabortreason (especially if cases in your project are frequently aborted manually) and Reassignlog (especially if cases in your project are frequently reassigned manually).

Note that tables storing logs may also be partitioned into a separate filegroup.
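On SQL Server, a quick way to review the current size and row count of these tables is shown below (a sketch; extend the table list with your own entities):

```sql
SELECT t.name AS table_name,
       -- count rows only from the heap or clustered index, not every index
       SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count,
       SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables AS t ON t.object_id = ps.object_id
WHERE t.name IN (N'Workitem', N'Wfcase', N'Wistatelog', N'Joblog')
GROUP BY t.name
ORDER BY reserved_mb DESC;
```

Capturing this output periodically lets you chart each table's growth over time.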


Other than these, include in your tuning plans any additional business entities that are expected to store large volumes of information and grow significantly.

According to what you observe, you may also review how the information is distributed (filegroups in SQL Server or tablespaces in Oracle).



Recall that you should carry out database tuning right after Bizagi executes maintenance actions (e.g., the maintenance tasks performed by the Scheduler, or further options delivered in additional tools such as Archiving).


3. Follow Bizagi guidelines on index creation.

When creating or reorganizing indexes, consider the following aspects:

- You may create non-clustered indexes.

- You may create indexes on tables that are definitions specific to your project (i.e., business entities that are not actually part of Bizagi's meta model).

- As a general database recommendation, it is not useful to create indexes on columns that have a small set of possible values (e.g., boolean or bit columns).

- Do not create indexes automatically (without prior analysis). This means not relying on the unassisted index-creation tasks of tools such as the Database Engine Tuning Advisor (SQL Server), because the DBA should evaluate such indexes adequately. Apart from a cost-benefit analysis, you may evaluate an index through the query execution plan and usage statistics (e.g., considering index seeks, index scans, index lookups, and index updates).
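As an illustration on SQL Server, you can first review how existing indexes are actually used, and only then create a new one. The business entity and columns below are hypothetical examples, not part of Bizagi's model:

```sql
-- Review usage statistics of existing indexes before adding new ones.
SELECT OBJECT_NAME(s.object_id) AS table_name, i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID();

-- A non-clustered index on a project-specific business entity,
-- using selective columns (not boolean/bit ones).
CREATE NONCLUSTERED INDEX IX_CustomerRequest_Customer
    ON dbo.CustomerRequest (idCustomer, CreationDate);
```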


4. Ensure adequate concurrency settings when using SQL Server.

Monitor and verify that the required concurrency management is enabled in Bizagi's database.

This means verifying that your Bizagi database uses snapshot isolation; specifically, having:

Allow Snapshot isolation: True (ALLOW_SNAPSHOT_ISOLATION ON)

Is Read-committed snapshot isolation on: True (READ_COMMITTED_SNAPSHOT ON)

Note that when using these snapshot isolation levels, it becomes even more important to boost the resources of, and enforce best practices for, the tempdb database (i.e., using a separate volume that guarantees high-speed I/O operations).
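You can verify both settings, and enable them if needed, as follows. The database name is a placeholder; note that enabling READ_COMMITTED_SNAPSHOT requires no other active connections, hence the rollback clause:

```sql
-- Verify the current isolation settings of the Bizagi database.
SELECT name, snapshot_isolation_state_desc, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = N'BizagiProject';

-- Enable them if they are off.
ALTER DATABASE [BizagiProject] SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE [BizagiProject] SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
```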



These guidelines provide a starting point as well as useful recommendations for monitoring tasks, database tuning and diagnostics.

However, these guidelines do not cover every task a DBA needs to carry out regarding database maintenance (the information provided here is illustrative, not exhaustive).


You will also need to carry out additional database tuning tasks: regular ones encouraged by the database engine vendor, and those identified through specialized tools (from what is observed in logs and traces).



5. Ensure the system user is enabled.

Recall that domain\admon is the system user employed internally by Bizagi (created by default).

You must not disable this user, since it is needed to perform automatic tasks such as timers or scheduled jobs; it is recommended that you check that this user remains enabled in the database.

However, it is strongly recommended to edit this user's settings so that it does not have rights to access administration options in the Work portal.

To do this, exclude this user from those authorized to manage your Bizagi system (i.e., this user should not be able to manage users, modify values in Bizagi such as parameter entities, abort cases, etc.).



Basic recommendations when running diagnostics (profiler)

Whenever you carry out tasks that involve diagnosing a specific issue and need to use a profiler, consider these recommendations:


1. Do not execute the profiler tools from within the database server.


2. When applicable, run a profiler trace on the database at times when the system is not extremely busy.

Depending on the issue, it may also be useful to run the profiler at busy hours, but first consider how to avoid affecting the system's performance, and plan accordingly (taking the necessary precautions).


3. Make sure you filter the profiler appropriately.


4. Capture information from the profiler by enabling only the relevant (strictly needed) events.

For instance, for SQL Server you may consider RPC:Completed and SQL:BatchCompleted (under the TSQL event category).


5. Capture information from the profiler by enabling only the relevant (strictly needed) columns of information.

Analyze which aspects are key to indicate database performance in your project.

For instance, for SQL Server you may want to consider: CPU usage, number of read and write operations, duration, or start and end timestamps.


6. Following the prior guideline, you may also use filters to narrow the traces down to what is really important.

For instance, you may want to record only transactions whose duration exceeds a given threshold (e.g., 3 seconds), or only transactions affecting a very large number of records.
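On recent SQL Server versions, the same filtered capture can be defined with an Extended Events session instead of the classic Profiler. This is a sketch: the session name and file path are hypothetical, and the duration filter is expressed in microseconds (so 3 seconds = 3000000):

```sql
CREATE EVENT SESSION [BizagiSlowStatements] ON SERVER
ADD EVENT sqlserver.rpc_completed (
    ACTION (sqlserver.client_app_name, sqlserver.database_name)
    WHERE duration > 3000000),
ADD EVENT sqlserver.sql_batch_completed (
    ACTION (sqlserver.client_app_name, sqlserver.database_name)
    WHERE duration > 3000000)
ADD TARGET package0.event_file (SET filename = N'C:\Traces\BizagiSlowStatements.xel');

ALTER EVENT SESSION [BizagiSlowStatements] ON SERVER STATE = START;
```

Because it records only the filtered events to a file target, this approach is lighter on the server than an interactive Profiler session.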



Additional notes

Keep in mind that in addition to database tuning and monitoring, you should also monitor other elements of your system architecture that may interfere with the proper working of your database (such as the domain configuration used by your database cluster setup, or the state of the network between your database servers and other elements of your architecture, such as the Bizagi server or the SAN).

For more information refer to Monitoring.



Keep in mind that it is not recommended to let antivirus software scan the physical database files.

Doing so may generate locks and delays for applications persisting information into the database.