Rule Chains
The rule chains module allows you to define units of business logic (called nodes) that are executed sequentially, as in a chain, with each node able to build on the results of the nodes that ran before it. It's a powerful tool that enhances your GWE experience by allowing you to integrate your workflows with 3rd party apps and even internal ARCHIBUS apps.
Defining Rule Chains
To define a new rule chain or edit existing ones you must go to the Define Rule Chains view.
Smart Services / Workflow Engine / Define Rule Chains
You should see the following view.
The module comes with a few sample rule chains to give you an idea of the capabilities of the tool.
Define Rule Chains
To create a new rule chain click on the Add Rule button.
To edit an existing one, click the Edit row action.
In both cases you should see a popup like this:
Rule Chain Form
Rule Chain Fields
- Rule Chain Name: The name of the rule chain. Keep it short and descriptive.
- Log Level: By default set to Errors Only, but it can also be set to Debug and None modes. It dictates what will be logged when your nodes are executed. As you might expect, when set to Errors Only, the log will contain only details about executions that failed. When set to Debug, everything will be logged (this is useful when developing the rule chain). The None option will not log anything.
- Run On Startup: By default set to No, can be switched to Yes if you want this rule chain to automatically run at server startup.
- On Error: Dictates how the rule chain will behave once it encounters an error. There are two possible modes: by default it's set to Break, which stops the execution immediately. It can also be changed to Continue if you want it to continue the execution in spite of the errors.
Importing / Exporting Rule Chains
You can share rule chains you have created with others by exporting them to the json format.
To do that simply check the boxes of the rule chains you want to export and click the Export button.
This will generate a file named [your rule chain's name]-export.json if you selected a single row, or routes-export.json if you checked multiple rows. If you open the json file you will see it contains the base configuration of the rule chain as well as the configuration for each of its nodes.
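As a rough illustration only, the exported file could look something like the sketch below; the property names here are assumptions and may differ from what your export actually contains:
{
  "name": "My Rule Chain",
  "log_level": "Errors Only",
  "run_on_startup": "No",
  "on_error": "Break",
  "nodes": [
    {
      "type": "Database Read",
      "order": 1,
      "source": "cf",
      "body": "position = 'Network Manager'"
    }
  ]
}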
To re-import them, click on the Import button and select the exported json file(s).
Importing rule chains with the same name will overwrite existing ones.
Defining Nodes
Clicking on any rule chain will open two panels to the right side.
In the top one you can define the nodes and decide their order of execution and in the bottom panel you can see execution logs.
You have the option to manually run or copy the selected rule chain.
To add a new node, click on the Add Node
button.
To edit an existing one, click the Edit
row action.
Rule Chain Node form
Node Fields
There are a few common fields that you can find on all node types:
- Type: As the name suggests this represents the type of the node. It can be one of the 5 node types we support: Database Read, Database Write, Database Delete, Run Workflow Rule or REST Client.
- Split Result: Whether to split the result of this node into multiple subresults that are processed separately or treat the result of this node's execution as a whole. This is designed for cases when the result of this node is an array of items. Read more about this in the Node Splitting section.
- Is Active: An active node is executed while an inactive one is skipped when you run the rule chain. Useful for testing purposes.
- Order: Takes an integer that represents the order of execution. For new nodes leave it empty and it will be automatically generated on save.
Tip: You can also change the node's execution order with the arrow up and arrow down buttons of the Rule Chain Nodes panel.
Node Types
Database Read
This node type executes SELECT queries on the database.
Let’s analyze the example in the screenshot above.
This configuration has the following fields:
- Source: The name of the table from which you want to read. In this case, it is the cf table.
- Body: In this field you can either compose the whole SQL query manually or write only the restriction (what comes after the WHERE keyword). In this case we crafted only the restriction. Behind the scenes this would translate to:
SELECT cf_id, position, rate_hourly FROM cf WHERE position = 'Network Manager';
You can also parametrize the restriction with binding expressions.
With Binding Expression
For a more dynamic approach you could write the restriction like this:
position = '${position}'
In this scenario you would need to pass the position in the data model when you run the rule chain.
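For example, the initial data model passed when running the rule chain might look like this (the value is illustrative):
{
  "position": "Network Manager"
}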
When executed it will produce a result like this if multiple records are found:
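An illustrative sketch of a multi-record result, using the table.field key style that the binding expressions later on this page rely on (the values are made up):
[
  {
    "cf.cf_id": "NM-001",
    "cf.position": "Network Manager",
    "cf.rate_hourly": 40
  },
  {
    "cf.cf_id": "NM-002",
    "cf.position": "Network Manager",
    "cf.rate_hourly": 45
  }
]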
If a single result is found it will look like this:
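Again as a sketch with made-up values, a single-record result is just the object itself rather than an array:
{
  "cf.cf_id": "NM-001",
  "cf.position": "Network Manager",
  "cf.rate_hourly": 40
}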
The Node Parameters panel is where we declare what fields we want to select from the table and their data type.
Parameters, just like nodes, can be made inactive. In the case of a Database Read node, an inactive parameter will be ignored and the field will not be selected.
Database Write
This node type executes INSERT / UPDATE queries on the database.
To update a record add an SQL condition in the body to uniquely identify the record you want to change.
Updating Values
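As a minimal sketch, the Body restriction for an update could look like this (the cf_id value is hypothetical):
cf_id = 'NM-001'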
Alternatively you can pass the primary key restriction via the node parameters like so:
Restriction via Node Parameters
If a record is not found either by the body restriction or the node parameters restriction, it will be created instead.
When a record is created, the response will be an object containing only the primary key values of the newly created record.
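For example, creating a new cf record would produce a response shaped like this sketch (the key value is illustrative):
{
  "cf.cf_id": "NM-003"
}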
Database Delete
This node type executes DELETE statements on the database.
You can remove one or more records from the database by writing a restriction in the body.
e.g. Using the cf_ids generated by the previous nodes we can compose a restriction that works whether the previous node returned a single record or a list of records:
cf_id IN (${(node1?is_sequence)?then(node1?map(it -> "'" + it["cf.cf_id"] + "'")?join(", "), "'" + node1["cf.cf_id"] + "'")})
Tip: The template system is powered by Apache FreeMarker.
Read the documentation to find out more about its syntax and what you can do with it.
There is also a useful tool for testing your expressions: Online FreeMarker Template Tester.
Run Workflow Rule
This node type calls ARCHIBUS workflow rules with the required parameters.
There are two input types which you can use:
- The Default input type which takes a list of arguments enclosed by square brackets, like a json array. They must match the order and type of the Java method's arguments. Leave the parameters text box empty if your method does not require any arguments.
- The JSON input type in case your method expects an org.json.JSONObject. An illustration of both input types follows right after this list.
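As an illustration, assuming a hypothetical workflow rule method with the signature run(String requestId, Integer priority), the Default parameters box could contain:
["300", 2]
while the JSON input for a hypothetical method expecting an org.json.JSONObject could contain:
{
  "requestId": "300",
  "priority": 2
}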
Note: The node parameter input_type is auto-generated based on your selected input type.
Do not remove it. If you accidentally removed it, it will be recreated once you hit the Save button of the form.
REST Client
This node type makes REST API calls to external services.
It supports 4 HTTP methods: GET, POST, PUT and DELETE.
Authorization
Unlike the other node types, the REST Client supports a few authorization options.
To enable authorization change the value of the Authorization field from No to Yes.
This will reveal the Authorization section.
In it you can configure 3 authorization types.
The Basic Auth type is a standard user & password authorization.
Besides the username and password you can configure two other things:
- Auth Data Location: Controls where the authorization is sent. In Headers mode the authorization will be added to the headers of the request. In Body mode the authorization will be added to the body of the request. In Query Params mode the authorization will be encoded in the URL.
- Auth Key Name: Represents the name of the header / body parameter / query parameter that holds the value of the encoded user and password.
e.g. For Username = user, Password = pass, Auth Data Location = Headers and Auth Key Name = Authorization, the request sent will have the following header (the value is the Base64 encoding of user:pass):
{
"Authorization": "Basic dXNlcjpwYXNz"
}
SSL Certificate
Some services require a client certificate for authentication (e.g. certSIGN).
For these services you need to store the certificate file you received from your provider somewhere on the server and then provide a relative path to the file along with the password.
e.g.
Proxy
You may find yourself in a situation where your ARCHIBUS server is hosted on premises and any external requests go through a proxy. In such a scenario we support 2 different approaches to using a proxy server, based on the configuration source.
The Direct approach assumes you know the proxy configuration and can fill the connection details yourself.
e.g.
Switch the Enabled field to Yes to use the proxy and to No to attempt a direct connection instead.
Optionally you can provide a username and password for proxy authentication, in addition to the proxy server and port.
Authorization node parameters are auto-generated when you change and save the Authorization section.
The same goes for the HTTP Method.
e.g. Creating a new workplace cleaning request with GWE using its REST API
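A rough sketch of the request body for such a call, using the wf_requests fields that appear later on this page; the endpoint and exact payload shape depend on your GWE deployment and are assumptions here:
{
  "wf_requests.wf_type": "WORKPLACE CLEANING",
  "wf_requests.status": "REQUESTED"
}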
Manipulating Documents
To read documents from the database we have a special function: checkoutFile.
The function takes 3 arguments: the model containing the primary key to uniquely identify the record, the name of the table and the name of the document field.
e.g. Using the document from field doc1 of the wf_requests table:
${checkoutFile(node1, "wf_requests", "doc1")}
Where node1 should contain the primary key values like:
{
"wf_requests.request_id": 123
}
We can send this document via REST API to an endpoint:
Notice in the screenshot above the parameter is of type Document.
Parameter Type
Tip: You must always make sure your parameters match their data types.
When sending documents via REST API, the data will be encoded in base64 and added to the request as multipart/form-data.
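As a simplified sketch of what goes over the wire (the part name file and the boundary are hypothetical), the outgoing request would contain a form-data part roughly like:
Content-Type: multipart/form-data; boundary=----gwe-boundary

------gwe-boundary
Content-Disposition: form-data; name="file"

<base64-encoded document content>
------gwe-boundary--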
Node Splitting
Node splitting helps you take an array of items and process each item individually instead of processing the whole array.
e.g. Suppose you want to send multiple documents for electronic signing.
For this you can create a Database Read node that returns document fields from your table, with a restriction that matches more than one document.
In this scenario you want to send each document separately and in order to do that, you split the result of the first node by setting the field Split Result? to Yes and saving.
This will make the Rule Chain execute the next nodes for each record resulted from the split node’s execution.
e.g. Getting the document would look like this:
${node1["my_table.my_doc_field"]}
If set to No, the rule chain will treat the results of the node as a single result and proceed to execute the next nodes.
You can still access the results but you will have to refer to them via an array index now.
e.g. Getting the document of the first result would look like this:
${node1[0]["my_table.my_doc_field"]}
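To make the difference concrete, here is an illustrative sketch of the same node1 result in both modes (the field values are made up). With Split Result? set to No, the next nodes see the whole array:
[
  { "my_table.my_doc_field": "contract-1.pdf" },
  { "my_table.my_doc_field": "contract-2.pdf" }
]
With Split Result? set to Yes, the next nodes run once per record and each run sees a single object:
{ "my_table.my_doc_field": "contract-1.pdf" }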
Running Rule Chains Manually
You must have at least one active node to run a rule chain.
To run a rule chain, once you select it on the left side grid, click on the Run rule chain button which will open a popup like this:
Run Rule Chain form
In this form you can populate an initial data model that can be used in the nodes via binding expressions.
You will see this default example of a data model.
{"wf_requests.request_id": "300"}
You may change it however you like as long as it’s a valid JSON or you may remove it entirely if you don’t need an initial data model for your rule chain.
In the default example we're using a variable name that contains a dot, wf_requests.request_id, but FreeMarker (the engine that interprets the binding expressions) will interpret this as an object called wf_requests that contains an attribute called request_id. However that is not the case, and if you were to use the binding variable like this: ${wf_requests.request_id}, it would result in an error.
Error
RouteException: The following has evaluated to null or missing:
==> wf_requests [in template "" at line 1, column 16]
----
Tip: If the failing expression is known to legally refer to something that's sometimes null or missing, either specify a default value like myOptionalVar!myDefault, or use <#if myOptionalVar??>when-present<#else>when-missing</#if>. (These only cover the last step of the expression; to cover the whole expression, use parenthesis: (myOptionalVar.foo)!myDefault, (myOptionalVar.foo)??
----
----
FTL stack trace ("~" means nesting-related):
- Failed at: ${wf_requests.request_id} [in template "" at line 1, column 14]
----
To avoid this pitfall you can access variables that contain a dot in their names in one of two ways:
- Using our node aliases. The initial data is always designated as node0.
e.g.
${node0["wf_requests.request_id"]}
- Using FreeMarker's built-in special variable to access the entire data model directly.
e.g.
${.data_model["wf_requests.request_id"]}
Copying Rule Chains
If you need multiple versions of your rule chain with slight variations in the nodes, you have the option to clone a rule chain. To do that, click the Copy rule chain button.
It should create a new rule chain on the left side with the following name: [original rule chain's name]-Copy.
Attempting to copy the original rule chain again while a copy with the unchanged name (ending in -Copy) is already present will result in an error.
e.g.
The same thing will happen when you try to create a new rule chain and give it the name of an already existing one.
Inspecting Logs
Rule chains have a logging component that helps you develop the nodes and test them to make sure they work properly.
You can see the execution logs by selecting any rule chain from the left side panel.
e.g.
You have a few actions available:
- Clear: Will remove all the execution logs of this rule chain.
- Refresh: Will refresh the grid.
- XLS: Will export the data from the grid to an .xlsx file.
Tip: Clicking on the rule chain will show all the logs of that rule chain while clicking on a particular node will only show the execution logs for that node.
Each line in this grid represents an execution of one of the nodes.
By default the rows are sorted descending by their date and time of creation, making it so that the latest execution is the first in the grid.
Log Fields
There are a few fields in the grid:
- HTTP Method: This is only relevant when you're using a REST Client type of node. It's the HTTP method that you set in the node's configuration. We support one of the four: GET, POST, PUT and DELETE.
- Endpoint: Also used for the REST Client nodes. It saves the clean version (all binding expressions evaluated) of the endpoint you set in the node's configuration.
- Request: This holds the clean version of the payload. Depending on what you are trying to send and to which target, the request can look different.
e.g. When trying to read a record from the database the body will contain the clean version of the restriction used to find that record:
{
"headers": {},
"body": "request_id = 45"
}
e.g. When sending a request to a 3rd party service via REST this field will contain the clean version of the request body and also the headers of the request:
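An illustrative sketch of such a request entry (the header and body values are made up):
{
  "headers": {
    "Content-Type": "application/json",
    "Authorization": "Basic dXNlcjpwYXNz"
  },
  "body": "{\"wf_requests.request_id\": 45}"
}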
- Response: The result of the execution. It can look different depending on the node type.
e.g. When reading certain fields from a record in the database, the response will look like this:
{
"wf_requests.status": "REQUESTED",
"wf_requests.wf_type": "WORKPLACE CLEANING",
"wf_requests.request_id": 45
}
e.g. When receiving a response from a 3rd party service it might look like this:
[
{
"id": 1,
"name": "Leanne Graham",
"username": "Bret",
"email": "[email protected]",
"address": {
"street": "Kulas Light",
"suite": "Apt. 556",
"city": "Gwenborough",
"zipcode": "92998-3874",
"geo": {
"lat": "-37.3159",
"lng": "81.1496"
}
}
}
]
Tip: Clicking on the Request or Response value of any row of the Rule Chain Log panel will open a popup with the values formatted in text area fields.
Useful when either field is long and hard to read directly from the grid.
e.g.
Calling the free Cat Facts API
- Date Created: The date the log record was created.
- Time Created: The time the log record was created.
- Status: The status of the execution. It can be either Success or Error.