Overview

As you develop more sophisticated playbooks, the ability to debug them easily becomes exceedingly important. FortiSOAR™ provides the Execution History to make it easier for you to see the results of your executed playbooks and to debug them.

Use the Executed Playbook Logs icon (Executed Playbook Logs icon) that appears in the top-right corner of the FortiSOAR™ screen to view the logs and results of your executed playbooks as soon as you log on to FortiSOAR™. You can also use the executed playbook logs to debug your playbooks.

Note

FortiSOAR™ implements Playbook RBAC, which means that you can view logs of only those playbooks that you (or your team) own. For more information, see the Playbooks Overview chapter.

The Execution History provides the following details:

  • Playbooks are organized by parent-child relationship.
  • Playbooks have a console in which you can see debug messages with more detail.
  • The playbook designer includes a playbook execution history option.
  • Playbooks can be filtered by playbook name or record IRI, user, status, or last run.
  • The Playbook Execution History contains details of the playbook result, including information about the environment and the playbook steps: which steps completed, which are awaiting some action, which failed, and which were skipped.

The Executed Playbook Logs do not display the Trace information from error messages; this reduces clutter in the error details screen and lets you directly view the exact error, enhancing the readability of the Executed Playbook Logs. The Trace information is still present in the product logs.

FortiSOAR™ also provides enhanced error messages that are precise and detailed, making it easier for you to debug playbook issues. For information about common playbook error messages and how to debug them, see the Debugging common playbook and connector errors article present on the Fortinet Support Site. You must log on to the support site to view this information.

You can access the playbook execution history as follows:

  • Clicking the Executed Playbook Logs icon (Executed Playbook Logs icon) in the upper right corner of the FortiSOAR™ screen.
    You have the option of purging executed playbook logs from the Executed Playbooks Log dialog. For more information, see Purging Executed Playbook Logs.
  • Clicking Tools > Execution History in the playbook designer to view the execution history associated with that particular playbook.
    Execution History option in the Playbook Designer
  • Clicking the Executed Playbook Logs icon in the detail view of a record such as an alert record to view the playbooks that have been executed on that particular record in a flowchart format. This makes it easier for users to view the flow of playbooks, especially useful for viewing the parallel execution paths in playbooks.
    Execution History option in a record

Playbook Execution History

Click the Executed Playbook Logs icon in the upper-right corner of FortiSOAR™ to view the logs and results of your executed playbook. Clicking the Executed Playbook Logs icon displays the Executed Playbook Logs dialog as shown in the following image:

Executed Playbook Logs Dialog

FortiSOAR™ 6.0.0 has enhanced the Executed Playbook Logs to display the executed playbooks in the flowchart format, as is displayed in the playbook designer. This makes it easier for users to view the flow of playbooks, especially useful for viewing the parallel execution paths in playbooks.

You can click the down arrow in the Choose Playbook field to display a list of the most recently run playbooks in reverse chronological order, with the playbook that was executed last displayed first. All playbooks are displayed, with 10 playbooks per page. Click a playbook in the list to display it in the flowchart format and to see the details of the playbook result, the environment, and the playbook steps, including which steps completed, failed, are awaiting action, or were skipped.

The Executed Playbook Logs dialog also displays a count of the total playbooks executed, the date and time at which each playbook was executed, and the time taken to execute the playbook.

You can toggle the ENV button to toggle between the environment in which the playbook was executed and the steps of the playbook. You can also copy the environment, error, and step details to the clipboard by clicking the Copy 'ENV' to Clipboard or Copy 'OUTPUT' to Clipboard button.

You can also open the playbook directly in the playbook designer from the Executed Playbook Logs dialog by clicking the Edit Playbook button that appears in the right section of the dialog.

Click the filter icon to filter logs associated with a playbook using the following options:

  • Playbook Name: In the Search by Playbook Name or Record IRI field, filter the logs associated with a particular playbook, based on the playbook name or the record IRI associated with the playbook.
    Example of filtering logs using the Record IRI: /alerts/bd4bf0a6-b023-4bd7-a182-f6938fa37ada.
  • Run By: From the Run By drop-down list, filter the logs based on the user who ran the playbook.
  • Last Run: From the Last Run drop-down list, filter the logs based on when the playbook was executed. You can choose from relative time range options such as Last 15 mins, Last hour, Last 24 hours, Last 7 days, Last 15 days, and Last year.
  • Status: From the Status drop-down list, filter the logs based on the status of the playbook execution. You can choose from the following options: Incipient, Active, Awaiting, Paused, Failed, Finished, or Finished with error.

You can click the Refresh icon appearing alongside the Edit Playbook button to refresh the listings displayed in the dialog.

To purge Executed Playbook Logs, click the Settings icon on the top-right of the Executed Playbook Logs dialog and select the Purge Logs option. For more information, see Purging Executed Playbook Logs.

To terminate playbooks that are in the Active, Incipient, or Awaiting state, click the Terminate button. To terminate all running instances of a particular type, click the Settings icon and select the Terminate Running Instances option. For more information, see Terminating playbooks.

Environment

Click Env to view the complete environmental context in which the playbook was executed, including the input-output and computed variables across all steps in the playbook.

Executed Playbook History: Step Output Tab


Executed Playbook History: ENV

Playbook Steps

The Playbook Steps section lists all the steps that were part of the playbook and displays the status of each step using icons. The icons indicate whether the step was completed (green tick), skipped (grey no symbol), awaiting some action (orange pause symbol) or failed (red cross).

For example, if a playbook is awaiting some action, such as waiting for approvals from a person or team who are specified as approvers, then the state of such playbooks is displayed as Awaiting.

Executed Playbook Logs Dialog - Awaiting Playbooks

The status of the playbook displays as Awaiting until the action for which the playbook execution halted is completed, after which the playbook moves ahead with the workflow as per the specified sequence.

Click a playbook step to view its details; you will see the following tabs associated with the playbook step: Input, Pending Inputs (if the playbook is in the awaiting state), Output (if the playbook finishes) or Error (if the playbook fails), and Config.

Input Tab

The Input tab displays data, input_args, and evaluated_args. In the case of the first step of the playbook, such as the Start step, data displays the trigger information for the playbook. input_args displays the input, in the Jinja format, that the user entered for this step. evaluated_args displays how the user input was evaluated by the playbook once the step is executed.

Executed Playbook History: Playbook Steps - Input Tab

Pending Inputs Tab

If a playbook is in an "Awaiting" state, i.e., it requires some input or decision from users to continue with its workflow, then the Pending Inputs tab is displayed:

Executed Playbook Logs - Manual Inputs > Pending Inputs Tab

Once the user provides the required inputs and submits their action, the playbook continues its execution as per the defined workflow.

Output or Error Tab

If the playbook step finishes, then the Output tab displays the result/output of the playbook step.

Executed Playbook History: Playbook Steps - Output Tab

If the playbook step fails, then the Error tab displays the error message for that step. Click the step that has the error (the step with a red cross icon) to view the error message, making it easier for you to identify and debug the cause of the playbook failure.

Executed Playbook History: Playbook Steps - Error Tab

FortiSOAR™ has enhanced error messages by making them more precise and thereby making it easier for you to debug the issues. Also, the Trace information has been removed from the executed playbook log to reduce the clutter in the error details screen and directly display the exact error. The Trace information will be present in the product logs located at:

  • For Playbook runtime issues: /var/log/cyops/cyops-workflow/celeryd.log
  • For connector issues in cases where playbooks have connectors: /var/log/cyops/cyops-integrations/connectors.log

For information about the common playbook error messages and how to debug them, see the Debugging common playbook and connector errors article present on the support site.

FortiSOAR™ also provides you with the option to resume the same running instance of a failed playbook from the step at which the playbook failed, by clicking the Rerun From Last Failed Step button. This is useful in cases where the connector is not configured or network issues cause the playbook to fail, since you can resume the same running instance of the playbook once you have configured the connector or resolved the network issues. However, if you change something in the playbook steps, then that run is a rerun of the playbook and not a resume or retry.

Users who have Execute and Read permissions on the Playbooks module can rerun their own playbook instances. Administrative users who have Read permissions on the Security module and Execute and Read permissions on the Playbooks module can rerun their own playbooks and also playbooks belonging to users of the same team.

Notes:

  • If you have upgraded your FortiSOAR™ system, then you can resume only those playbooks that were run after the upgrade.
  • If you have a playbook that failed before you upgraded your FortiSOAR™ system, and post-upgrade you try to resume the execution of that playbook, then that playbook fails to resume its execution.

To resume the running instance of a failed playbook, do the following:

  1. Open the Executed Playbook Logs dialog.
  2. Click the failed playbook that you want to resume, and then click the Rerun From Last Failed Step button.
    FortiSOAR™ displays the Retry triggered successfully! message and the failed playbook resumes from the failed step:
    Playbook that has been resumed with the Retry triggered successfully! message
    A playbook that has been rerun will display Retriggered as shown in the following image:
    A retriggered playbook

Config Tab

The Config tab displays the step variable details entered by the user for the particular step and also includes information about whether other variables, such as ignore_errors, MockOutputUsed, or the when condition, have been used (true/false) in the playbook step.

Playbooks are organized by parent-child relationship. The parent playbook displays a link that lists the number of child playbook(s) associated with the parent playbook. Clicking the link displays the execution history for the child playbook(s).

The Executed Playbook Logs display the execution history of child playbooks, i.e., you can search for a child playbook in the Executed Playbook Logs and the search results will display the child playbook and its execution history. You can also use the Load Env JSON feature in the Jinja Editor, making debugging of child playbooks easier.

If the parent playbook has a number of child playbooks, you can also search for child playbooks by clicking the search icon that is present beside the child playbook link and then entering the name of the playbook in the Search by Playbook Name field. You can also filter the child playbooks by their running status, such as Incipient, Active, Awaiting, etc., by selecting the status from the All Status drop-down list.

For example, in the following image, the Alert > Notify Updation (System) playbook has 1 child playbook: Alert > Notify Creation (Email). You can click the Alert > Notify Creation (Email) playbook to view its execution history:

Executed Playbook History: Child Playbook History

Purging Executed Playbook Logs

You can purge Executed Playbook Logs by clicking the Settings icon on the top-right of the Executed Playbook Logs dialog, and then selecting the Purge Logs option. Purging executed playbook logs permanently deletes old playbook history logs that you do not require and frees up space on your FortiSOAR™ instance. You can also schedule purging, on a global level, for both audit logs and executed playbook logs. For information on scheduling the purging of audit logs and executed playbook logs, see the Scheduling purging of audit logs and executed playbook logs topic in the System Configuration chapter of the "Administration Guide."

To purge Executed Playbook Logs, you must be assigned a role that has a minimum of Read permission on the Security module and Delete permissions on the Playbooks module.

To purge Executed Playbook Logs, click the Settings icon and select the Purge Logs option, which displays the Purge Playbook Execution Logs dialog:

Purge Playbook Execution Logs Dialog

In the Purge All logs before field, select the date and time (using the calendar widget) before which you want to clear all the executed playbook logs. For example, if you want to clear all executed playbook logs created before December 01, 2019, 9:00 AM, then select this date and time using the calendar widget.

Purge Playbook Execution Logs Dialog - Date and Time Specified

Select the Exclude Awaiting Playbooks checkbox (selected by default) to exclude the playbooks that are in the "Awaiting" state from the purging process.

To purge the logs, click the Purge Logs button, which displays a warning as shown in the following image:

Purge Playbook Execution Logs Dialog - Warning

Click the I Have Read the warning - Purge Logs button to continue the purging process.

Filtering playbook logs by tags

You can filter playbook execution logs by tags or keywords that you have added in your playbooks.

A user who has a role with a minimum of Update permission on the Security module can save tags, which are then applied as a default filter for playbook execution logs for all other users. A user who does not have such a role can add a tag to filter playbook execution logs and view the filtered playbook execution logs, but cannot save that filter.

Click the Settings icon on the top-right of the Executed Playbook Logs dialog to view tags that have been added by default to filter the playbook execution logs. You will see a message such as 1 Tags Excluded, which means that playbook logs with one specific tag are being excluded by default.

Executed Playbooks Log Settings: Filter Logs by Tags

You can either click the 1 Tags Excluded link or the Filter Logs By Tags option to open the Filter Logs by Tags popup as shown in the following image:

Filter Logs by Tags - Default options

To filter playbook logs based on tags, add a comma-separated list of tags in the Tags field.

In the Mode section, choose Exclude to exclude playbook logs with the specified tags. You will observe that the #system tag is already added as a tag in the Exclude mode, which means that any playbook with the #system tag will be excluded from the playbook logs. To include only those playbook logs with the specified tags, click Only Include. For example, if you want to view only the logs of phishing playbooks, i.e., logs of playbooks that have the #phishing tag, click Only Include and type #phishing in the Tags field. You must also remove the #system tag from the Only Include mode, since otherwise playbook logs with both the #phishing and #system tags will be included.

Important

You can specify a comma-separated list to Include all tags or Exclude all tags. You cannot have a mix of Include and Exclude tags.

Filters apply from the time you set the filter, i.e., if you add a #phishing tag to the Exclude list at 16/05/2019 17:00 hours, then the filter applies only from this time. Historical logs, i.e., logs from before 16/05/2019 17:00 hours, will continue to be displayed in the Executed Playbooks Logs.

An example of excluding playbook logs by tag follows:
If you have added tags such as #dataIngestion in your playbooks, then you can filter out the data ingestion logs by clicking Exclude and typing #dataIngestion in the Tags field. If an administrator with Update rights on the Security module wants this filter to be visible to all users, the administrator can save this filter as a default for all users by selecting the Set as default filter for all users checkbox and then clicking the Save & Apply Filter button. If you do not have the appropriate rights, you can apply the filter only for yourself by clicking the Apply Filter button, and then view the filtered playbook execution logs.

Filter Logs by Tags with multiple tags

This applies the filter and displays text such as 2 Tags Excluded on the top-right corner of the Executed Playbook Logs dialog. Now, the Executed Playbook Logs will not display logs for any system playbook or for any data ingestion playbook.

Filtered Playbooks view in the Executed Playbook Logs

Users (without administrative rights) can remove filters by clicking Settings > Filter Logs by Tags or by clicking the <number of Tags included> link to display the Filter Logs By Tags dialog, and then clicking Clear All Tags to remove the added tags and add their own tags. However, these changes are applicable only while that instance of the log window is open. If the page refreshes or the window reloads, the tags specified by the administrator are applied again.

Terminating playbooks

You can terminate playbooks that are in the Active, Incipient, or Awaiting state. Users who have Read and Delete permissions on the Playbooks module can terminate running instances of their own playbooks. Administrators who have Read permissions on the Security module and Delete permissions on the Playbooks module can terminate running instances of any playbook.

To terminate a running playbook instance, open the Executed Playbook Logs dialog, click the instance that you want to terminate, and then click Terminate as shown in the following image:

Executed Playbooks Logs - Terminate Button

Once you click Terminate, the Terminate Execution dialog is displayed in which you can choose to either terminate only the particular running instance, by clicking Terminate Current Instance Only or terminate all running instances, by clicking Terminate All Running Instances.

Terminate Playbook options

If you click Terminate Current Instance Only, then the state of that playbook changes to Terminated:

Executed Playbook Logs - Terminated Playbook

You can also choose to terminate the running instances of all playbooks that are in the Active, Incipient, or Awaiting state.

To terminate the running instances of all playbooks based on the status of the playbooks, do the following:

  1. Click the Settings icon on the top-right of the Executed Playbook Logs dialog.
  2. Select the Terminate Running Instances option, which displays the Terminate Running Instances dialog.
  3. In the Terminate Running Instances dialog, select the status (Active, Incipient, or Awaiting) whose running instances of Playbooks you want to terminate, and click Terminate.
    Terminate Running Instances dialog
    You can rerun the playbook from the step at which it was terminated by clicking the Rerun Pending Steps button on the terminated playbook.

Setting up auto-cleanup of workflow execution history

The workflow execution history is extensively persisted in the database for debugging and validating the input and output of playbooks at each step. A very large execution history, however, causes overhead in terms of extra disk space consumption, an increase in the time required for upgrading FortiSOAR™, etc. Therefore, it is highly recommended to set up an auto-cleanup of the workflow execution history using a weekly cron schedule.

To delete the workflow run history keeping the last 'X' entries, ssh to your FortiSOAR™ appliance as root and run the following command:

# /opt/cyops-workflow/.env/bin/python /opt/cyops-workflow/sealab/manage.py cleandb --keep X
For example, to delete all workflow run history, apart from the last 1000 entries, use the following command:
# /opt/cyops-workflow/.env/bin/python /opt/cyops-workflow/sealab/manage.py cleandb --keep 1000

To set up a weekly scheduled deletion of workflow history, add a cron expression entry for the above command in the /etc/crontab file to schedule the workflow execution history cleanup as per your requirements. The command to edit cron jobs is crontab -e.

For example, the entry in the /etc/crontab file that would schedule a workflow execution history cleanup for every Saturday night, deleting all workflow run history apart from the last 1000 entries, would be as follows:
0 0 * * SAT root /opt/cyops-workflow/.env/bin/python /opt/cyops-workflow/sealab/manage.py cleandb --keep 1000

Note that running the above command deletes the workflow entries but does not release the disk space back to the OS; the space remains reserved for the Postgres process. This is the desired behavior, and no further action is required if the execution history cleanup is scheduled, because the Postgres process needs the freed-up disk space to store further workflows. If, however, you also wish to reclaim disk space for backup, restore, or other activities, you additionally need to run a "full vacuum" on the database. To do so, ssh to your FortiSOAR™ appliance as root and run the following commands:

psql -U cyberpgsql sealab
psql (10.3)
Type "help" for help.

sealab=# vacuum full;
VACUUM
sealab=# \q

Known Issue: If you do not schedule the workflow execution cleanup and you delete a very large set of entries in one go, the db cleanup command might fail because it cannot load the large set of entries into memory. In this case, you have to run the command in batches.
For example:
# /opt/cyops-workflow/.env/bin/python /opt/cyops-workflow/sealab/manage.py cleandb --keep 100000
# /opt/cyops-workflow/.env/bin/python /opt/cyops-workflow/sealab/manage.py cleandb --keep 90000
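The batching approach above can be sketched as a small loop that steps the --keep value down gradually. In this sketch an echo command stands in for the real cleandb invocation so the loop is runnable anywhere; on a live system, replace CLEANDB with the actual /opt/cyops-workflow/.env/bin/python /opt/cyops-workflow/sealab/manage.py cleandb command, and pick --keep values that suit your history size (the values below are illustrative).

```shell
# Hypothetical batch-cleanup sketch: each iteration deletes entries beyond
# the last $keep, so no single run loads a huge set into memory.
CLEANDB="echo cleandb"   # stand-in; use the real manage.py cleandb command on a live system
for keep in 100000 90000 50000 1000; do
  $CLEANDB --keep "$keep"
done
```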

Optimizing Playbooks

Earlier, playbook steps that were looped (using the Loop option in the playbook step) could be run only sequentially. Now, you have the option to loop playbook steps in parallel as well. For more information, see the Loop topic in the Triggers & Steps chapter.

You can tune the thread pool size and other settings for parallel execution using the following settings:

THREAD_POOL_WORKER (in /opt/cyops-workflow/sealab/sealab/config.ini)
Description: The thread pool size used for parallel execution. The THREAD_POOL_WORKER variable is used to optimize parallel execution and enhance performance. You can reduce the thread pool size from the default value if:
1. The number of cores on your FortiSOAR™ instance is lower than the default recommended.
2. The task to be executed in the loop step is synchronous in nature and thread context switching would be an overhead.
Default value: 8

SYNC_DELAY_LIMIT (in /opt/cyops-workflow/sealab/sealab/config.ini)
Description: If the delay specified in the playbook step is higher than this threshold, then the loop step is decoupled from the main playbook and run asynchronously. For example, if you set SYNC_DELAY_LIMIT to 60, a 60-second check is added, and after 60 seconds the playbooks run in parallel. This works together with your playbook soft limit time, i.e., the CELERYD_TASK_SOFT_TIME_LIMIT parameter. The time set in the CELERYD_TASK_SOFT_TIME_LIMIT parameter must be greater than the time set in the SYNC_DELAY_LIMIT parameter.
Default value: 60

CELERYD_TASK_SOFT_TIME_LIMIT (in /opt/cyops-workflow/sealab/sealab/config.ini)
Description: Changes the soft time limit for playbooks. The soft time limit value is set in milliseconds (ms).
Default value: 3600

CELERYD_TASK_TIME_LIMIT (in /opt/cyops-workflow/sealab/sealab/config.ini)
Description: Changes the time limit for playbooks. The time limit value is set in milliseconds (ms). Note: This value should always be higher than CELERYD_TASK_SOFT_TIME_LIMIT. For more details, see the Celery 4.3.0 documentation > User Guide: task_soft_time_limit section.
Default value: 3700

CELERYD_OPTS (in /etc/celery/celeryd.conf)
Description: Optimizes the parallel running of threads in Celery so that your overall playbook execution time is reduced. By default, the workflow engine spawns a separate process for each running workflow. If the tasks in the workflow are asynchronous and short-lived, thread-based workers can be enabled. For more details, see the Celery 4.3.0 documentation > User Guide > Concurrency: Concurrency with Eventlet section.
Default value: CELERYD_OPTS="-P eventlet -c 30"

These optimizations also help in scaling your playbooks by resolving bottlenecks that slow down playbook execution and by resolving internal timeout issues.

From version 5.1.0 onwards, parallel branch execution of playbooks is supported. Parallel branch execution optimizes playbook execution through the ability to execute two or more independent paths in parallel.

You can enable or disable parallel execution by changing the value (true/false) of the PARALLEL_PATH variable in the [Application] section in the /opt/cyops-workflow/sealab/sealab/config.ini file. By default, a fresh install of version 5.1.0 will have the PARALLEL_PATH variable set as true.

Troubleshooting Playbooks

Filters in running playbooks do not work after you upgrade your system in case of pre-upgrade log records

You can apply filters on running playbooks using the Executed Playbook Logs. These filters will apply to log records that are created post-upgrade and will not apply to log records that were created pre-upgrade.

For log records that were created before the upgrade to version 4.11 or later, use the playbook detail API:
GET: https://<FortiSOAR_HOSTNAME/IP>/api/wf/api/workflows/<playbook id>/?format=json

To get the playbook id, use the playbook list API:
GET: https://<FortiSOAR_HOSTNAME/IP>/api/wf/api/workflows/?depth=2&limit=30&ordering=-modified

Playbooks are failing, or you are getting a No Permission error

Resolution

Playbooks fail when the Playbook appliance does not have appropriate permissions. Playbook is the default appliance in FortiSOAR™ that gets included in a new team.

If you cannot access records, such as alerts, ensure that you are part of the team, or part of a sibling or child team, that can access the records, and that you have appropriate permissions on the module whose records you need to access or update. Only users with CRUD access to the Appliances module can update the Playbook assignment. For more information on teams and roles, see the Security Management chapter in the "Administration Guide."

Playbook fails after the ingestion is triggered

There are many reasons for a playbook failure; for example, a required field is null in the target module record, or there are problems with the Playbook Appliance keys.

Resolution

Investigate the reason for failure using the Playbook Execution History tab (earlier known as Running Playbooks) on the Playbook Administration page. Review the step in which the failure is generated and the result of the step, which should contain an appropriate error message with details. If you have identified the error but cannot troubleshoot it, contact Fortinet support for further assistance.

When you are using a system playbook that sends an email, for example, when an alert is escalated to an incident and an Incident Lead is assigned, the system playbook sends an email to the specified Incident Lead. The email that is sent to the Incident Lead contains a link to the incident that uses the default hostname, which might not be the correct hostname for your FortiSOAR™ instance.

Resolution

To ensure that the correct hostname is displayed in the email, you must update the appropriate hostname as per your FortiSOAR™ instance, in the Playbook Designer as follows:

  1. Open the Playbook Designer.
  2. Click Tools > Global Variables to display a list of global variables.
  3. Click the Edit icon for the cyops_hostname global variable, and in the Field Value field add the appropriate hostname value.
  4. Click Submit.
    The system playbook will now send emails containing the updated hostname link.

Important

In the system playbook (or any playbook) that is sending an email, ensure that you have used the cyops_hostname global variable in the Send Email step.

Purging executed playbook logs issues

If you are facing issues while purging executed playbook logs, such as the purge activity taking a long time or the purging activity appearing to be halted, check whether the Soft time limit (600s) exceeded for workflow.task.clean_workflow_task[<taskid>] error is present in the /var/log/cyops/cyops-workflow/celeryd.log file. The Soft time limit error might occur if the amount of playbook logs to be purged is very large.
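A quick way to check for this error is to grep the worker log. In this sketch a sample log line stands in for the real file so the command is runnable anywhere; on a live system, point grep at /var/log/cyops/cyops-workflow/celeryd.log instead (the task id shown is hypothetical).

```shell
# Sample log line standing in for /var/log/cyops/cyops-workflow/celeryd.log
sample='Soft time limit (600s) exceeded for workflow.task.clean_workflow_task[3f2a9c]'
# Count occurrences of the soft-time-limit error; a non-zero count indicates
# that the purge task is being cut short.
printf '%s\n' "$sample" | grep -c 'Soft time limit'
```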

Resolution

Increase the value set for the LOG_PURGE_CHUNK_SIZE parameter in the /opt/cyops-workflow/sealab/sealab/config.ini file.

By default, the LOG_PURGE_CHUNK_SIZE parameter is set to 1000.

Important: If your environment has been upgraded to version 4.12.1, then you will have to add the LOG_PURGE_CHUNK_SIZE parameter, along with its required value, to the [application] section, in the /opt/cyops-workflow/sealab/sealab/config.ini file.
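Adding the parameter under the [application] section can be sketched as follows. A temporary file stands in for /opt/cyops-workflow/sealab/sealab/config.ini so the snippet is runnable anywhere; the value 2000 is illustrative (the point is to raise it above the 1000 default), and you should back up the real file before editing it.

```shell
cfg=$(mktemp)                                    # stand-in for the real config.ini
printf '[application]\nexisting_key = 1\n' > "$cfg"
# Insert LOG_PURGE_CHUNK_SIZE just below the [application] header, but only
# if the parameter is not already present.
grep -q '^LOG_PURGE_CHUNK_SIZE' "$cfg" || \
  sed -i '/^\[application\]/a LOG_PURGE_CHUNK_SIZE = 2000' "$cfg"
cat "$cfg"
rm -f "$cfg"
```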

Redis failure on your FortiSOAR™ instance due to high rate of playbooks execution

You can experience a Redis failure on your FortiSOAR™ instance when the rate of playbook execution is very high. For example, if 100 playbooks complete their execution in FortiSOAR™ in 10 seconds, and this rate of execution continues for several hours, then you might face a Redis error as shown in the following image:

Redis Failure on your FortiSOAR™ instance

This issue occurs because the Redis cache gets full due to the high rate of playbooks execution.

Resolution

  1. Flush the Redis database using the # redis-cli -n 11 flushdb command
  2. Set the value of the CELERY_TASK_RESULT_EXPIRES parameter to 1200 (this value is in seconds) in the celeryd section of the config.ini file located at /opt/cyops-workflow/sealab/sealab/ as follows:
    CELERY_TASK_RESULT_EXPIRES:1200
    Ensure that the value of the CELERY_TASK_RESULT_EXPIRES parameter is greater than the standard soft-limit time for playbook execution, which is 600 seconds.
  3. Restart the uwsgi, celerybeatd, and celeryd services.

Note: If you have a high rate of playbook execution, it is recommended that you set the value of the CELERY_TASK_RESULT_EXPIRES parameter to 1200 before the playbook execution starts, to avoid Redis failure.

Playbooks fail with the "Too many connections to database" error when using the "parallel" option for a loop step in playbooks

Playbooks can fail with the Too many connections to database error when you have selected Parallel in a loop step to execute the playbook step in parallel.

Resolution

To resolve this issue, reduce the number of parallel threads by changing the value of the THREAD_POOL_WORKER variable. The THREAD_POOL_WORKER variable is present in the /opt/cyops-workflow/sealab/sealab/config.ini file, and its value is set to 8 by default.
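Lowering the value can be sketched as a one-line edit. A temporary file stands in for /opt/cyops-workflow/sealab/sealab/config.ini so the snippet is runnable anywhere; 4 is an illustrative value, not a recommendation, and the relevant services typically need a restart for the change to take effect.

```shell
cfg=$(mktemp)                                    # stand-in for the real config.ini
printf 'THREAD_POOL_WORKER = 8\n' > "$cfg"       # the default value
# Reduce the thread pool size to lower the number of parallel DB connections.
sed -i 's/^THREAD_POOL_WORKER *=.*/THREAD_POOL_WORKER = 4/' "$cfg"
cat "$cfg"
rm -f "$cfg"
```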

Frequently Asked Questions

Q: Is there a way to force variables set in a referenced playbook to carry over into the parent playbook? I would rather not put a group of steps I need in the parent if I can avoid it, as I am using the child playbook as an action itself; also, would it duplicate the functions?

A: In general, variables set in child playbooks do not carry over to the parent playbook. The one exception is that the Reference a Playbook step will return (in vars.result) the return value of the last executed step in the child playbook. For instance, if the last step in the child playbook is Find Record, then the Reference a Playbook step will populate vars.result with the records that have been found using the Find Record step.

If you want to define the playbook's result as a combination of results of previous steps or sub-steps, you can use the Set Variable step at the end of the playbook and define variables that contain the data that you require to be returned.

Q: How do I convert Epoch time returned by a SIEM to a datetime format?

A: If you have a playbook with a connector step that connects to a SIEM, such as ArcSight or QRadar, and the SIEM returns the result in Epoch time (milliseconds), then you can convert the Epoch time to the datetime format using the following command:
# arrow.get(1531937147932/1000).to('<Required Timezone>').strftime('%Y-%m-%d %H:%M:%S %Z%z')
or
# arrow.get(1531937147932/1000).to('<Required Timezone>').format('YYYY-MM-DD HH:mm:ss ZZ')
For example:
# arrow.get(1531937147932/1000).to('EST').strftime('%Y-%m-%d %H:%M:%S %Z%z')
will return the following output:
2018-07-18 14:05:47 EDT-0400
For more examples on dates and times used in Python, see http://arrow.readthedocs.io/en/latest/.
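If you prefer a quick shell-side check of the same conversion, GNU date can do it as well (this is an alternative sketch using GNU coreutils, not part of the Jinja syntax above): strip the milliseconds and format the timestamp in UTC.

```shell
epoch_ms=1531937147932
# Drop the millisecond part and format the timestamp in UTC with GNU date.
date -u -d "@$((epoch_ms / 1000))" '+%Y-%m-%d %H:%M:%S %Z'
```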

Q: How do I change the timeout limit for playbooks?

A: To change the time limit or soft time limit for playbooks, edit the CELERYD_TASK_TIME_LIMIT and CELERYD_TASK_SOFT_TIME_LIMIT parameters in the /opt/cyops-workflow/sealab/sealab/config.ini file. By default, these parameters are set in milliseconds (ms), as follows:
CELERYD_TASK_TIME_LIMIT = 3700
CELERYD_TASK_SOFT_TIME_LIMIT = 3600
Once you have made the change, you must restart all the FortiSOAR™ services by using csadm and running the following command as a root user:
# csadm services --restart