Parallel Execution Settings
Configure concurrent workflow processing for Lifecycle Management
Endpoint
GET /api/private/lifecycle_management/parallel_execution_settings
PUT /api/private/lifecycle_management/parallel_execution_settings
Description
Configure how Lifecycle Management processes workflows concurrently. These settings control the maximum number of simultaneous jobs, workflows, and access requests, as well as how paused tasks affect the running queue.
Environment variable limits apply. Parallel execution settings configured through this API are subject to environment-level hard limits. If the environment limit for a setting is lower than the API-configured value, the environment limit takes precedence. To increase environment limits, contact Veza Support or your Customer Success representative. See Understanding the details response for how to check current limits.
Use these endpoints to:
View current parallel execution configuration
Adjust concurrency limits for workflow processing
Configure max_paused_slots to prevent safety limit bypass from paused tasks
Get parallel execution settings
Retrieve the current parallel execution configuration.
curl -X GET "https://your-tenant.vezacloud.com/api/private/lifecycle_management/parallel_execution_settings" \
  -H "Authorization: Bearer YOUR_API_TOKEN"

Use the :details suffix to get additional information about the current settings, including environment-level limits:
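For example, a details request looks like the base GET request with the :details suffix appended (the tenant URL and token are placeholders):

```shell
curl -X GET "https://your-tenant.vezacloud.com/api/private/lifecycle_management/parallel_execution_settings:details" \
  -H "Authorization: Bearer YOUR_API_TOKEN"
```
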
Understanding the details response
The :details endpoint returns three values for each setting (jobs, workflows, access_requests):

current_value: The effective runtime limit, calculated as the minimum of env_value and settings_value
env_value: The hard maximum set by environment variables. Only Veza Support or the Customer Success team can change this value
settings_value: The value configured through this API. It takes effect only up to the env_value ceiling
The effective limit formula is: current_value = min(env_value, settings_value)
If the :details response shows that env_value is lower than your desired settings_value, the API-configured value will not take full effect. Contact Veza Support or your Customer Success representative to request an increase to the environment limits.
Recommended workflow: Before updating settings, call the :details endpoint to check env_value limits. If increases are needed, work with Veza Support before making API changes.
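The effective-limit formula can be sketched in a few lines of shell (the values below are hypothetical, chosen only to show the env ceiling winning):

```shell
# current_value = min(env_value, settings_value)
env_value=10        # hard ceiling from environment variables (example value)
settings_value=25   # value configured through the API (example value)
current_value=$(( settings_value < env_value ? settings_value : env_value ))
echo "$current_value"   # prints 10: the environment ceiling takes precedence
```

Here the API-configured value of 25 is silently capped at 10, which is exactly the situation the :details check is meant to surface before you update settings.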
Update parallel execution settings
Modify the parallel execution configuration. Updated values take effect only up to the environment-level hard limits (see Understanding the details response).
Request body
jobs (integer, default 1): Maximum number of concurrent policy jobs that can run simultaneously
workflows (integer, default 1): Maximum number of concurrent workflows per job
access_requests (integer, default 1): Maximum number of concurrent access request workflows
max_paused_slots (integer, default 0): Maximum number of paused tasks that can free up running slots
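A full update request using these fields might look like the following (the tenant URL, token, and the specific values are placeholders, not recommendations):

```shell
curl -X PUT "https://your-tenant.vezacloud.com/api/private/lifecycle_management/parallel_execution_settings" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "jobs": 2,
    "workflows": 5,
    "access_requests": 5,
    "max_paused_slots": 10
  }'
```
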
When to adjust these settings
By default, Lifecycle Management processes one task at a time. These settings should only be changed when there are many LCM tasks to process and sequential execution would take too long.
Common scenarios where increasing parallelism may be appropriate:
Large-scale provisioning or deprovisioning operations across many users
Environments with many approval-based workflows where tasks frequently pause
Time-sensitive access review remediation affecting many entitlements
Increase parallelism conservatively. Start with small increments and monitor for the risks described below before increasing further. Misconfigured parallelism can cause disruption during provisioning.
Higher concurrency increases the rate of API calls to downstream integrations. This can trigger rate limits, causing provisioning tasks to fail
Safety limit checks occur when tasks start. If multiple tasks are in-flight when the safety limit is reached, those tasks will not be blocked, allowing the limit to be exceeded
Concurrent execution can make task history more difficult to follow and troubleshoot when issues arise
Understanding max_paused_slots
The max_paused_slots setting controls how paused tasks (waiting for approval, delay actions, etc.) affect the running queue.
When parallel workflow processing is enabled, a paused task frees up a slot in the running queue, allowing another task to start. However, safety limit checks only occur when tasks start—not when they resume from pause. This creates a vulnerability where the safety limit can be significantly exceeded:
Task starts and passes the safety limit check
Task pauses (waiting for approval, delay, etc.)
Pausing frees a slot, allowing a new task to start
Steps 1-3 repeat, creating many paused tasks
When paused tasks resume, they run without re-checking the safety limit
The max_paused_slots setting mitigates this:
First N paused tasks: Decrement the running count (freeing slots for new tasks)
Additional paused tasks: Remain counted as "running" (no new tasks can start)
This limits how many paused tasks can bypass the safety limit while still allowing some parallelism for workflows with pause actions.
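The slot accounting described above can be sketched as a toy model (this is an illustration of the rule only, not Veza's implementation; all variable names are invented):

```shell
max_paused_slots=2    # first 2 paused tasks may free their running slot
running=5             # tasks currently counted toward the concurrency limit
freed=0               # paused tasks that have already freed a slot

pause_task() {
  if [ "$freed" -lt "$max_paused_slots" ]; then
    freed=$((freed + 1))
    running=$((running - 1))   # slot freed: a new task may start
  fi                           # beyond the cap, the task stays counted as running
}

pause_task; pause_task; pause_task   # three tasks pause
echo "$running"                      # prints 3: only the first two pauses freed slots
```

With max_paused_slots=2, the third pause does not decrement the running count, so it continues to hold a slot and no additional task can start in its place.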
Configuration recommendations
0 (default): Paused tasks never free slots. Safest configuration, but lowest throughput for workflows with pauses
10-50: Moderate parallelism. A good balance for most environments
100+: High throughput for environments with many approval workflows, but a higher risk of safety limit overshoot
