Bizagi Artificial Intelligence



With a subscription to Bizagi Cloud, you are entitled to use the different Cloud applications provided by Bizagi.

Cloud applications provided by Bizagi allow you to go beyond process execution and work with your data: exploring fields such as artificial intelligence (predictive analysis), integrating reporting tools of your choice, or creating portals centered on a richer user experience.




Artificial Intelligence for Bizagi Cloud is an application that lets you explore machine learning capabilities directly in your Bizagi processes, by having processes rely on a predictive analysis service built upon the reliable data you hold in a Bizagi Dataset.

Through predictive analysis, you can train models and carry out experiments that rely on renowned machine learning algorithms to determine a given outcome based on stored data (with a given degree of certainty).

You may then easily connect your Bizagi processes to such models so that the processes present a prediction once certain data has been input (either as a default value or as a suggestion to the end user).




You can configure all of this without necessarily being an expert data scientist.


Machine learning algorithms

Employed algorithms include:

Decision tree (C4.5)

Decision tree (ID3)

Linear SVM Classifier

Multiple Linear Regression

Logistic Regression


Bizagi Artificial Intelligence chooses the best algorithm for your specific use case statement, though you may explicitly set which algorithm you want to employ.

For instance, when predicting attributes/variables of categorical type, the Logistic Regression algorithm is often employed.

Similarly, Multiple Linear Regression is commonly applied to continuous attributes/variables.
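To make the distinction concrete, here is a minimal, self-contained Python sketch (not Bizagi's implementation; the data and weights are invented for illustration) of why the two algorithm families suit different variable types: linear regression fits a continuous outcome directly, while logistic regression squashes a linear score through a sigmoid to yield a class probability.

```python
import math

# Toy training data (hypothetical): insurance amount (in $1000s) vs. days to approve.
amounts = [10, 20, 30, 40, 50]
days = [2.1, 3.9, 6.2, 8.1, 9.8]

# Linear regression in closed form (continuous outcome):
# slope = cov(x, y) / var(x); intercept = mean(y) - slope * mean(x)
n = len(amounts)
mx = sum(amounts) / n
my = sum(days) / n
slope = sum((x - mx) * (y - my) for x, y in zip(amounts, days)) \
    / sum((x - mx) ** 2 for x in amounts)
intercept = my - slope * mx

def predict_days(amount):
    """Continuous prediction, as a linear-regression model would produce."""
    return intercept + slope * amount

# Logistic regression turns a linear score into a probability (categorical outcome).
# The weight and bias here are hand-picked for illustration, not trained.
def needs_extra_documents(credit_amount, weight=0.004, bias=-4.0):
    """Returns True when P(extra documentation required) > 0.5."""
    score = weight * credit_amount + bias
    probability = 1.0 / (1.0 + math.exp(-score))
    return probability > 0.5

print(predict_days(25))          # continuous estimate between the training points
print(needs_extra_documents(2000))  # large loan amount pushes the probability up
```

In Bizagi you do not implement these algorithms yourself; the sketch only shows why a categorical target calls for a classifier and a continuous target for a regressor.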


Examples of use case statements could be:

For a credit application process where a customer requests a loan:

Based on credit amount, customer type, and credit type, we may want to predict beforehand whether or not the process requires additional documentation (i.e., predicting a categorical variable).

For a vehicle insurance underwriting process where an insurance policy is to be issued:

Based on insurance amount, customer type, and estimated price, we may want to predict beforehand an estimated duration for approving the insurance underwriting (i.e., predicting a continuous variable).

For a help desk support process where a customer reports a ticket:

Based on severity, operating system, and ticket type, we may want to predict beforehand whether the ticket is likely to be solved by first-level support, or whether it needs to be escalated to second-level or third-level support (i.e., predicting a categorical variable).


Before you start

Artificial Intelligence relies on Bizagi Datasets as its data provider.

In other words, before moving on, ensure you are already familiar with Bizagi Datasets; it is a prerequisite that you have already created a Dataset for the Artificial Intelligence application to use.

For more information about Datasets, refer to Bizagi datasets.


Getting started with Bizagi Artificial Intelligence

The following steps summarize, at a high level, how you work with Artificial Intelligence:

1. Create an AI model and experiment and define its data source.

At this point, you connect it to a given Dataset that already holds data.

2. Explore the results of different experiments, and settle on one whose results are satisfactory for your specific use case statement.

At this point you evaluate possible combinations between input parameters and algorithms.

Once the experiment is satisfactory and final, you publish it.

3. Connect your Bizagi processes to consume the predictive analysis service, so that each new case can make use of a preliminary prediction for an attribute/variable.

You configure the invocation by means of a RESTful service and that's it.
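As an illustration of what such a RESTful invocation could look like from client code, here is a hedged Python sketch. The endpoint URL, JSON field names, and response shape below are hypothetical: the actual URL and contract come from your published experiment in Bizagi, so adjust accordingly.

```python
import json
import urllib.request

# Hypothetical endpoint -- replace with the URL of your published experiment.
ENDPOINT = "https://your-environment.bizagi.com/ai/api/experiments/helpdesk/predict"

def build_prediction_request(severity, operating_system, ticket_type):
    """Package the case data as the JSON body of a POST to the AI service.

    The input field names are assumptions for this example.
    """
    payload = {
        "inputs": {
            "severity": severity,
            "operatingSystem": operating_system,
            "ticketType": ticket_type,
        }
    }
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def extract_prediction(response_body):
    """Pull the predicted value (and its certainty) out of a JSON response."""
    doc = json.loads(response_body)
    return doc["prediction"], doc.get("confidence")

req = build_prediction_request("High", "Windows", "Incident")
# To actually invoke the service (requires a reachable endpoint and credentials):
#   with urllib.request.urlopen(req) as resp:
#       level, confidence = extract_prediction(resp.read())
```

The predicted value returned this way is what the process would set as a default value or suggest to the end user.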


Important recommendations

1. You should always create models and experiments based on real data (e.g., by using a Dataset of the production environment).

2. Publish an experiment only when it is satisfactory.

Publishing an experiment gives you one service endpoint for testing and another for real use (the processes' production environment).

For the service endpoint used in testing, you may define fixed rules so that your AI service returns predictable outputs and you can easily exercise all of the different process paths.

3. You should ensure that you constantly train the model.

By default, new data being continuously stored in the Dataset is not considered by an experiment until you explicitly train the model and re-evaluate.

Once you do this, make sure you re-publish the experiment so that the same service endpoint used by the processes reflects any accuracy/standard error changes.

Note that this means you may update an existing experiment that has been published and is in current use, provided you take proper precautions.

4. Whenever an experiment in current use changes its input or output definition, you will need to create a new experiment.

This is strongly recommended as a best practice in terms of stability of the ongoing processes.
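The fixed rules mentioned in recommendation 2 can also be emulated locally while you build and test process paths. Here is a hypothetical Python sketch (the rule table, field names, and outcomes are invented, and this is a local stand-in rather than Bizagi's testing endpoint): each rule maps an input condition to a canned prediction, so every path in the process can be exercised deterministically.

```python
# Hypothetical fixed rules: (condition, canned outcome) pairs evaluated in order.
FIXED_RULES = [
    (lambda inputs: inputs["severity"] == "High", "escalate-to-second-level"),
    (lambda inputs: inputs["ticketType"] == "Question", "solve-at-first-level"),
]
DEFAULT_PREDICTION = "solve-at-first-level"

def predict_for_testing(inputs):
    """Return the first matching fixed rule's outcome, or a default.

    Deterministic by construction, so each process path can be tested reliably.
    """
    for condition, outcome in FIXED_RULES:
        if condition(inputs):
            return outcome
    return DEFAULT_PREDICTION

print(predict_for_testing({"severity": "High", "ticketType": "Incident"}))
```

Once all paths behave as expected against the predictable outputs, you would switch the process to the production service endpoint.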



Further information

To learn how to get started with Artificial Intelligence, refer to Creating AI models and experiments.