Configure how user attributes from a source of identity are synchronized for target user accounts
Attribute synchronization ensures that identity attributes in target systems remain up to date with the corresponding attributes in the source of truth. Veza Lifecycle Management provides configuration at two levels to control how and when attributes are synchronized.
At the action level, there are two distinct options to govern provisioning and user update processes:
Create new users - When enabled, the action will create new user accounts that don't exist in the target system
Update active users - When enabled, the action can update existing user accounts with attribute changes from the source of truth
At the attribute level, there are two explicit choices that define how and when attribute values are applied to user accounts:
Set for new users only - The attribute value is set only when creating new user accounts
Set for new and existing users - The attribute value is set for new accounts and updated for existing accounts when changes are detected
You may not want to enable "Set for new and existing users" for attributes like user principal name, which may change due to marital status or legal name corrections but shouldn't be automatically updated in all systems.
Both levels must be properly configured for an attribute to be continuously synchronized. For example, to keep an employee's department updated:
Enable Update active users on the Sync Identity action
Select Set for new and existing users for the department attribute
Set for new and existing users (continuously sync attributes that change during employment):
First Name, Surname
Department
Title
Manager
Cost Center
Set for new users only (preserve stable identifiers):
Active Directory sAMAccountName
AD Distinguished Name (DN)
AD User Principal Name (UPN)
AD Email
Email Addresses (for Email Write-Back action)
This configuration ensures that dynamic attributes remain up to date while preserving stable identifiers.
Computed properties for advanced workflow triggering and conditional transformations in Lifecycle Management
System attributes are computed properties that Lifecycle Management automatically generates during identity processing. These attributes enable advanced automation scenarios by providing runtime information about identity changes and transformation results.
All system attributes follow the sys_attr__ prefix convention and cannot be manually set or modified.
sys_attr__is_mover
A persistent boolean attribute that indicates whether an identity has undergone changes to monitored properties.
Type: Boolean
Persistence: Stored with identity record
Available in: Workflow triggers, conditions, and transformers
Configuration: Define monitored properties in the policy configuration:
{
  "mover_properties": ["department", "manager_id", "title", "location"]
}
Workflow Trigger Example:
sys_attr__is_mover eq true
Combined Condition Example:
sys_attr__is_mover eq true and department eq "Engineering" and is_active eq true
The attribute is automatically set to true when any property in mover_properties changes during identity update. It is cleared when the identity is unchanged in an extraction cycle, and excluded from change detection to prevent recursive updates.
System attribute names are case-sensitive and must be lowercase in all expressions.
sys_attr__would_be_value
A transient attribute that provides a preview of the transformation result during conditional evaluation.
Type: String
Persistence: Transient (exists only during IF statement evaluation)
Available in: Conditional transformers only
Usage Example - Conditional Domain Addition:
IF sys_attr__would_be_value co "@"
{email | LOWER}
ELSE
{email | LOWER}@company.com
The above transformer will check if the transformed email already contains "@", preserve existing email addresses, and add domain only when needed.
sys_attr__would_be_value_len
A transient attribute that provides the character length of the transformation result during conditional evaluation.
Type: Number
Persistence: Transient (exists only during IF statement evaluation)
Available in: Conditional transformers only
Usage Example - Progressive Username Truncation:
IF sys_attr__would_be_value_len le 30
{first_name | LOWER}.{last_name | LOWER | NEXT_NUMBER, 2, 3}
ELSE IF sys_attr__would_be_value_len le 20
{first_name | LOWER | FIRST_N, 10}.{last_name | LOWER | NEXT_NUMBER, 2, 3}
ELSE
{first_name | LOWER | FIRST_N, 1}.{last_name | LOWER | FIRST_N, 1 | NEXT_NUMBER, 2, 3}
For "Leonevenkataramanathan Foster":
First check (≤30 chars): leonevenkataramanathan.foster (30 chars - passes first condition)
If >30 chars, second check (≤20 chars): leonevenkataramana.foster (25 chars - fails second condition)
If >20 chars, fallback: l.f (3 chars - always succeeds)
Alternatives with NEXT_NUMBER: l.f2, l.f3, l.f4
NEXT_NUMBER
Preview attributes work with the NEXT_NUMBER transformer for generating unique alternatives:
IF sys_attr__would_be_value_len le 15
{username | NEXT_NUMBER, 2, 5}
ELSE IF sys_attr__would_be_value_len le 15
{username | FIRST_N, 13 | NEXT_NUMBER, 2, 5}
This evaluates the base value length before applying numbering, ensuring the final result (including numbers) meets constraints.
Only one NEXT_NUMBER transformer is allowed per conditional branch.
The sys_attr__is_mover attribute supports additional trigger properties for fine-grained control:
{
  "trigger_properties": ["department", "location"],
  "trigger_string": "sys_attr__is_mover eq true and is_active eq true"
}
This workflow triggers only when:
The identity is marked as a mover (department or location changed)
The identity is active
At least one of the trigger_properties has changed since last extraction
Performance Notes
Mover Detection: Comparison occurs for all properties in mover_properties during each extraction
Preview Evaluation: Each IF branch with preview attributes requires transformation execution
Caching: Preview values are calculated once per condition branch and reused
- Complete SCIM filter syntax for workflow conditions
- Complete list of transformation functions
- Attribute transformation concepts and examples
Use lookup tables to transform identity attributes for target systems
You can use Lookup transformers to convert identity attributes from a source system into appropriate values for target systems based on CSV reference tables. This is particularly useful when mapping values between systems that use different naming conventions, codes, or formats for the same conceptual data.
For example, you might need to transform a "Location" attribute from Workday (which might be stored as location codes like "MN001") into corresponding values for country, country code, or city names in a target system.
Use Table Lookup Transformers when:
You need to map source attribute values to different values in target systems
You have standardized reference data that must be consistent across applications
You need to extract different pieces of information from a single attribute value
Geographic Information:
Transform location codes to country, region, city, or timezone information
Map office codes to physical addresses or facility types
Organizational Data:
Convert department codes to department names or business units
Map cost centers to budget codes or accounting categories
System-Specific Configurations:
Transform job titles to role designations in target systems
Convert skill codes to certification requirements or training needs
The Table Lookup Transformer references CSV-based mappings between source and destination values. When synchronizing user attributes, Veza:
Takes the source attribute value
Looks up this value in the specified lookup table
Returns the corresponding value from the designated return column
Lookup tables are CSV files with columns that map values from a source of identity to destination values. Each row represents a mapping entry. The first row must contain the column headers.
For example, a location mapping table might look like:
location_code,state_code,state,city
MN001,MN,Minnesota,Minneapolis
CA001,CA,California,Los Angeles
TX001,TX,Texas,Houston
TX002,TX,Texas,Austin
To create a new lookup table:
Navigate to the Lookup Tables tab within your policy configuration
Click Edit mode to enable policy changes
Click Add New to create a new lookup table
Drag a CSV file or click Browse to upload your reference data
Review the automatically detected column names
Click Save to store the lookup table
From the Lookup Tables tab, you can:
Edit table descriptions or upload a new CSV
Delete tables that are no longer needed
To use a Table Lookup Transformer in a common or action-synced attribute:
In Destination Attribute, choose the attribute on the target entity that will be updated
In Formatter, choose the source attribute to transform
In Pipeline Functions, specify the lookup table name, the column to match against, and the column containing values to return.
The full syntax for using lookup table transformers is:
{<value> | LOOKUP <table_name>, <column_name>, <return_column_name>}
Where:
<value> is the source attribute to transform (e.g., {location})
<table_name> is the name of the lookup table to use
<column_name> is the column to match the source value against
<return_column_name> is the column containing the value to return
Assuming a user has "location": "IL001" and a lookup table named locationTable structured as shown earlier:
{location} | LOOKUP locationTable, location_code, city returns "Chicago"
{location} | LOOKUP locationTable, location_code, state returns "Illinois"
{location} | LOOKUP locationTable, location_code, state_code returns "IL"
You can combine lookup transformations with other transformation functions in a pipeline:
{location | LOOKUP locationTable, location_code, state_code | LOWER}
This would look up the state_code corresponding to the location value and convert it to lowercase.
When a lookup value is not found in the table, the transformation will fail for that specific attribute.
For full coverage, ensure your lookup table includes entries for all possible source values that may be encountered during provisioning.
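If some source values may legitimately be missing, the LOOKUP function also accepts an optional default value (see the LOOKUP reference later in this guide), which avoids a failed transformation. A minimal sketch using the locationTable example above, where "Unknown" is an arbitrary placeholder default:
{location | LOOKUP, "locationTable", "location_code", "city", "Unknown"}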
To ensure robust provisioning workflows, it's important to include all expected values in your lookup table, validate source data before implementing lookup transformations, and test transformations with representative data sets.
Troubleshooting:
Value not found in lookup table: Add the missing mapping to the lookup table with the correct source value
Incorrect column name referenced: Check the column names in your lookup table (they are case-sensitive)
Unexpected transformation results: Verify the lookup table content and ensure the correct columns are specified
Lookup tables are immutable and automatically deleted when no longer referenced by any policy version
Multiple policy versions can reference the same lookup table (e.g., an active version and a draft version)
Lookup tables are defined at the policy level and can be referenced by any transformer within the policy
Standardize Naming: To use a lookup-based transformer, you will reference the table by file name. Apply consistent conventions for both the table and columns.
Document Mappings: Add descriptions for each lookup table to explain its purpose
Validate Data: Ensure lookup tables are complete and accurate before using them in transformers. Consider how lookup tables will be maintained over time, especially for values expected to change.
How source system properties become Veza attributes
When connecting to integrated systems (see Veza Integrations), Veza ingests properties from the source systems (e.g., Workday, Okta, Active Directory) and normalizes them into standardized attributes that appear when configuring Workflow trigger conditions, configuring Actions, and in Identities views.
While these standardized attributes are intended to ensure consistent naming across different systems, it is important to understand that some attributes may appear differently than their original names in the source system.
You can retrieve the original attribute names for enabled Lifecycle Management integrations using the API.
Veza normalizes all property names for consistency:
The following normalization rules typically apply:
Source properties are converted to lowercase
Any spaces and hyphens become underscores
Special characters are removed
For example:
Employee ID becomes employee_id (spaces become underscores)
BusinessTitle becomes business_title (CamelCase becomes snake_case)
Cost-Center becomes cost_center (special characters removed)
Department Code becomes customprop_department_code (custom fields are prefixed)
The following sections include some examples of how Veza handles attributes from common integrations.
Veza recognizes and standardizes many common attributes across source systems:
employee_id (string): Employee identifier, e.g., E-98765
email (string): Primary email
department (string): Department name, e.g., Engineering
title (string): Job title, e.g., Senior Engineer
business_title (string): Business position, e.g., Senior Engineer
manager (string): Manager reference
managers (list): List of managers, e.g., [John Smith]
is_active (boolean): Active status, e.g., true
hire_date (date): Employment start date, e.g., 2024-01-15
cost_center (string): Financial allocation, e.g., CC-1000
Veza will make conversions to some attribute names from the source integration. For example, sAMAccountName in Microsoft Active Directory is shown as account_name for Active Directory Users in Veza Access Graph.
Workday → Veza
Worker ID → workday_id (unique worker identifier)
Employee ID → employee_id (employee number)
Business Title → business_title (job position)
Cost Center → cost_center (financial allocation)
Employee Type → employee_types (list, e.g., Full Time)
Manager → managers (list of manager names)
Okta → Veza
login → login (username)
email → email (primary email)
status → status (ACTIVE, SUSPENDED, etc.)
department → department (department name)
manager → manager (manager's email/ID)
title → title (job title)
Active Directory → Veza
sAMAccountName → account_name (pre-Windows 2000 login)
distinguishedName → distinguished_name (full LDAP path)
userPrincipalName → user_principal_name (user@domain format)
memberOf → member_of (list of group DNs)
Some integrations support custom property extraction for organization-specific fields from custom reports or extended schemas:
Always prefixed with customprop_
Automatically discovered during extraction once enabled
Follow standard normalization rules (lowercase, underscores)
Examples:
customprop_department_code - Custom department identifier
customprop_employeeou - Organizational unit
customprop_region - Geographic region
customprop_project_code - Project allocation
Some entity attributes are computed by Veza, and not derived from source data:
sys_attr__is_mover - Identity has changed monitored properties
sys_attr__would_be_value - Preview value in conditional transformers
sys_attr__would_be_value_len - Preview value length in conditional transformers
See the System Attributes documentation for details.
When configuring a Workflow trigger condition or an action that syncs attributes, you can choose from available attributes using a dropdown menu.
Primary Source - Attributes from the main identity source appear without prefixes:
workday_id
employee_id
business_title
hire_date
email
customprop_department_code
Secondary Sources - Attributes from additional sources are prefixed with the entity type:
OktaUser.login
OktaUser.department
AzureADUser.job_title
ActiveDirectoryUser.distinguished_name
Lifecycle Management uses two different expression syntaxes depending on the context:
In Workflow Conditions (SCIM Filter Syntax):
Trigger conditions use SCIM filter syntax to evaluate boolean expressions, for example:
employee_types co "Full Time" and department eq "Engineering"
See the SCIM filter syntax reference for complete documentation.
In Transformers (Formatter/Pipeline Syntax):
Attribute transformers use curly braces and pipes to produce output values, for example:
{first_name}.{last_name}@{customprop_domain}.com
See the attribute transformation reference for complete documentation.
Important: These syntaxes cannot be interchanged. Use SCIM filter syntax only in condition fields, and formatter syntax only in attribute mapping fields.
With Secondary Sources (in Conditions):
OktaUser.status eq "ACTIVE" and WorkdayWorker.is_active eq true
- Computed attributes for advanced scenarios
- Modifying and combining attribute values
- Using attributes in workflow conditions
Custom fields are identified with a customprop_ prefix
System-computed fields are identified with the sys_attr__ prefix
Configure how user attributes from a source of identity are transformed for target user accounts
When creating workflows in Lifecycle Management policies to create, sync, or deprovision identities, you will use attribute transformers to specify how user attributes for target accounts should be structured.
The target attributes to create or update are typically mapped and, optionally, transformed from user metadata in the source of identity, such as an identity provider, HR system, or CSV upload. Attributes can be synchronized once or kept continuously in sync as changes occur throughout the user's employment lifecycle.
Source of Identity (SOI): The system holding authoritative user data, the "source of truth". Examples: HR systems (Workday, BambooHR), identity providers (Azure AD, Okta), CSV uploads.
Target Application: The system where user accounts are created or updated using SOI data. Examples: Active Directory, Okta, Google Workspace, SaaS applications.
Attribute transformers are also available when mapping columns in CSV Upload integrations. This enables you to combine columns, reformat dates, standardize case, and apply other transformations during CSV import—without requiring Lifecycle Management workflows.
For example, attribute mapping and transformation can be used across Joiner, Mover, and Leaver scenarios:
Joiner: Set new Azure AD User Principal Name to {source username}@your-email-domain.com. This is an example of mapping multiple attributes and performing a transformation. More specifically, you use the attribute transformer to generate an email address for new joiners. Use the source username (user_principal_name) from the source of identity (Azure AD UPN) for the first part (user attribute), while your-email-domain.com is used for the last part (target attribute).
Mover: Always update a user’s “Manager” and “Department” attributes in Okta to match the user’s manager and department in Workday, a source of identity, whenever a department change or other employee mobility event occurs. This is an example of attribute mapping with continuous synchronization.
Leaver: Move a user’s Active Directory account to an Organizational Unit (OU) reserved for terminated accounts.
When synchronizing a user’s attributes, Veza may apply many transformations to convert the source attribute values into a more suitable format intended for the target application as a user account attribute.
For example, a transformer might remove the domain from an email address, replace special characters, or convert a string with uppercase letters to lowercase letters.
See the Transformation Functions reference for detailed information.
Common transformers define one or more rules to apply when synchronizing the attributes of a target identity. Use them at the Policy level where you want to create or update attributes using the same conventions across multiple sync or deprovision actions. When you need to configure a one-time individual action in a workflow, such as a specific attribute, then you use the transformer at the Action level.
At the Policy level, you configure a transformer with basic details, including how to source the value of each attribute:
Assign a name and description to the transformer, and specify the data source to which it applies.
Entity Type: Choose the target entity type in the destination system.
Click Add Attribute. The Destination Attribute dropdown will list available attributes for the chosen entity type.
Destination Attribute: Choose the attribute that Veza will create or update for the target entity.
Formatter: Choose how the destination attribute should be formatted. Specify the value, a {source_attribute}, or apply Transformation Functions.
Pipeline Functions: Combines a series of attribute formatters with the pipe ( | ) character that runs the value of an attribute in sequential order, where the output of one formatter becomes the input of the following formatter, thus the name, pipeline. See Pipeline Functions for more examples.
Continuous Sync: Enabling this option ensures that the attribute is always synced, while applying any defined transformations. By default, attributes will not be synced if the target identity already exists.
After creating a common transformer, you can select it when editing a workflow action. You can edit or delete common transformers on the Edit Policy > Common Transformers tab. Remember that “Sync Identity” and “De-Provision Identity” actions can have action-level transformers override common transformers. If the same destination attribute is defined in both, the action-level transformer will take precedence.
Formatters specify the actual value of the attribute to synchronize. The target attribute can be set to a specific value, synchronized with a source attribute, transformed using a function, or a combination of these. Common combinations include:
{first_name}.{last_name}@domain.com
{first_name}_{last_name}@domain.com
{last_name}@domain.com
{firstname_initial}{last_name}@domain.com
{firstname_initial}-{last_name}@domain.com
{firstname_initial}{middlename_initial}{last_name}@domain.com
{last_name}-{firstname_initial}@domain.com
Some formatters should enable continuous synchronization for the attribute, while others should not. For example, the value of "Created By" should be immutable once a user account is provisioned. Other attributes that represent a state or status should be synchronized throughout the user's or account's lifecycle.
To create a destination attribute with a fixed value, enter the desired value when configuring the formatter. For setting the creator attribute:
Destination Attribute: created_by, Formatter: "Veza", Continuous Sync: Disabled
For activating a re-hired employee:
Destination Attribute: isActive, Formatter: true, Continuous Sync: Enabled
To set empty values (common for de-provisioning flows):
Destination Attribute: manager_id, Formatter: " ", Continuous Sync: Enabled
Destination Attribute: isActive, Formatter: false, Continuous Sync: Enabled
Target attributes can be updated based on attributes belonging to the source of identity. To reference the value of a source entity attribute, use the format {<source_attribute_name>}.
Examples:
Destination Attribute: first_name, Formatter: {first_name}, Continuous Sync: Enabled
Destination Attribute: last_name, Formatter: {last_name}, Continuous Sync: Enabled
Early Access: Alias Definitions is currently in early access. Contact your Veza representative to enable this feature for your tenant.
By default, when you reference an attribute such as {department} in a formatter, the system searches through your configured identity sources in order (primary first, then secondary sources). This works well for simple policies with a single source of identity.
Aliases become useful when:
Multiple integrations share the same entity type (e.g., two Active Directory instances, or when your source of identity and sync target use the same entity type)
A policy involves multiple integrations that have attributes with the same name
You need to explicitly control which system's attribute value is used
You want to compare values between source and target systems while defining LCM Workflow Conditions
You need to detect movers based on changes in a specific system
Alias Naming: The system automatically adds a $ prefix to all alias names. For example, if you create an alias named workday in the UI, it becomes $workday when used in formatters and conditions.
Example: HR System to Directory Sync
A policy syncs identities from an HR system (Workday) to a directory (Active Directory). Both have a department attribute. Without aliases, {department} resolves from whichever system appears first in the search order. With configured aliases hr and directory (which become $hr and $directory), you can explicitly reference {$hr.department} to ensure you're using the authoritative HR value.
Aliases can be used in:
Attribute formatters
Workflow trigger conditions
Action conditions
Mover property definitions
Test formatters (for validation before deployment)
Aliases provide shorthand references to specific integrations and entity types, making transformers and conditions more readable in complex policies.
To configure aliases, open a Lifecycle Management policy, click Edit, and navigate to the Alias Definitions tab. Each alias requires:
Alias Name: Must start with $ followed by at least one lowercase letter, number, or underscore. Additional $ characters can appear after the first character. The $ prefix is added automatically if omitted. Valid examples: $workday, $hr_system, $ad$corp. Invalid examples: $, $AD (uppercase), $$double (multiple leading $).
Integration: The data source containing the entity type.
Entity Type: The entity type to reference (e.g., WorkdayWorker, OktaUser).
Validation Rules: Alias names allow only lowercase alphanumeric characters, underscores, and $. They must start with exactly one $ (not zero, not multiple), and cannot be $target (reserved for action-level attribute references).
Testing and Validation: Aliases work in test formatters, allowing you to validate your formatter expressions before deployment. Additionally, alias-resolved attribute values appear in dry run results, so you can preview exactly which values will be synced.
Aliases can resolve attributes from two contexts:
Input: Values from the source system before any sync action runs (the authoritative data)
Output: Values currently in the target system (what's already provisioned)
By default, the system checks output first, then falls back to input. Use suffixes to explicitly control resolution:
(none): Output, then input. Example: {$workday.department}. Use for general attribute access.
$in: Source only. Example: {$workday$in.department}. Use to get the authoritative source value.
$out: Target only. Example: {$workday$out.department}. Use to get the current target value.
When to Use Suffixes: Use $in when you need the authoritative value from the source system (e.g., HR system). Use $out when you need to compare against what's currently provisioned in the target. Without a suffix, the system checks the target first—useful when you want the most recent value regardless of source.
Use these operators in IF conditions to compare attribute values:
eq: Equals (boolean, string, number, timestamp)
ne: Not equals (boolean, string, number, timestamp)
co: Contains (string, string list)
sw: Starts with (string)
ew: Ends with (string)
lt: Less than (number, timestamp)
le: Less than or equals (number, timestamp)
gt: Greater than (number, timestamp)
ge: Greater than or equals (number, timestamp)
Combine conditions with and, or, or negate with not.
String List Attributes: For multi-value attributes (string lists), only the co (contains) operator is supported. Use it to check if the list includes a specific value, for example: IF $workday.roles co "Manager".
In attribute formatters:
{$workday.first_name | LOWER}.{$workday.last_name | LOWER}@company.com
In LCM Workflow Conditions (comparing source and target values):
IF $workday$in.department ne $ad$out.department
{$workday$in.department}
ELSE
{$ad$out.department}
Multiple integrations with the same entity type:
For organizations with multiple Active Directory domains (e.g., corporate employees and contractors), you can create distinct aliases to control which AD integration is used:
Configure alias corp_ad → Integration: AD_Corporate, Entity Type: ActiveDirectoryUser
Configure alias contractor_ad → Integration: AD_Contractors, Entity Type: ActiveDirectoryUser
Then explicitly reference the correct domain in your formatters:
{$corp_ad.department}
This ensures you're using the department attribute from the corporate AD integration rather than the contractor AD integration, even though both use the same ActiveDirectoryUser entity type.
Based on the user metadata available from your source of identity (SOI), you may need to convert a full email address to a valid username, standardize a date, or generate a unique identifier for users provisioned by Veza. Suppose an attribute value needs to be altered to be compatible with the target system. In that case, you can transform the value of a source attribute or apply a range of other functions to generate the target value.
Formatter expressions use the following syntax: {<source_attribute_name> | <FUNCTION_NAME>,<param1>,<param2>}
For example:
username: {email | REMOVE_DOMAIN} removes the domain from the email to create the username ("jsmith" is the output derived from the source email address)
user_id: {id | UPPER} converts the ID to uppercase ("JSMITH" is the output derived from the user ID "jsmith")
Refer to the Transformation Functions reference page for complete documentation of all supported functions, parameters, and usage examples. The reference includes:
String case: UPPER, LOWER, TITLE_CASE, SENTENCE_CASE, LOWER_CAMEL_CASE, UPPER_CAMEL_CASE, LOWER_SNAKE_CASE, UPPER_SNAKE_CASE (standardize naming conventions)
String manipulation: TRIM, TRIM_CHARS, REMOVE_CHARS, REMOVE_WHITESPACE, REPLACE_ALL, APPEND, PREPEND (clean and format string data)
Substring: FIRST_N, LAST_N, SUB_STRING, SPLIT (extract portions of values)
Padding: LEFT_PAD, RIGHT_PAD, ZERO_PAD (create fixed-width identifiers)
Date/time: DATE_FORMAT, DATE_ADJUST, DATE_ADJUST_DAY, ASSUME_TIME_ZONE, UTC_TO_TIME_ZONE, NOW (convert and manipulate dates)
Character encoding: ASCII, REMOVE_DIACRITICS (handle international characters)
Lookup: LOOKUP, FROM_ENTITY_ATTRIBUTE, FROM_MANY_ENTITIES_ATTRIBUTE (cross-reference data from tables or entities)
Generation: NEXT_NUMBER, UUID_GENERATOR, RANDOM_INTEGER, RANDOM_STRING_GENERATOR, RANDOM_ALPHANUMERIC_GENERATOR, RANDOM_NUMBER_GENERATOR (create unique values)
Domain: REMOVE_DOMAIN (extract usernames from email addresses)
Formatting: COUNTRY_CODE_ISO3166, LANGUAGE_RFC5646, PHONE_NUMBER_E164 (standardize to international formats)
Conditional logic: IF statements with comparison operators
Contact Veza if you require additional transformations for your use case.
You can pipeline multiple transformation functions together, separated by a vertical bar (|). Each will apply in sequence, allowing for complex attribute formatters that use the output of one function as the input of another. For example:
UPPER: john.doe becomes JOHN.DOE
{email | SPLIT("@") | INDEX(0)}: returns the part of the email before the @, such as john.doe
{start_date | DATE_FORMAT("2006-01-02")}: 2025-01-15T10:30:00Z becomes 2025-01-15
{name | LOWER | REPLACE_ALL(" ", ".")}: John Smith becomes john.smith
{name | UPPER}
If name = Smith, the result is SMITH.
{first_name | SUB_STRING,0,1 | LOWER}.{last_name | LOWER}
If first_name = John and last_name = Smith, the result is j.smith.
{email | REMOVE_DOMAIN}
If email = [email protected], the result is john.smith.
{email | REPLACE_ALL, " ", "."}
If email = john [email protected], the result is [email protected].
{location | LOOKUP locationTable, location_code, city}
If location = IL001, the result is Chicago (using a lookup table named locationTable).
{start_date | DATE_FORMAT, "01/02/2006" | UPPER}
If start_date = 2023-03-15, the result is 03/15/2023 (DATE_FORMAT doesn't typically need UPPER, but shows pipeline capability).
{hire_date | DATE_FORMAT, "Jan 2, 2006" | REPLACE_ALL, " ", "_"}
If hire_date = 2023-03-15, the result is Mar_15,_2023.
{office_code | TRIM_CHARS_LEFT, ".0" | TRIM_CHARS_RIGHT, ".USCA"}
If office_code = 000.8675309.USCA, the result is 8675309.
{username | REMOVE_CHARS, ".-_" | TRIM | UPPER}
If username = "–john.doe_–", the result is JOHNDOE.
{employee_id | REMOVE_CHARS, "#" | TRIM_CHARS, "0" | LEFT_PAD, 6, "0"}
If employee_id = "##001234##", the result is 001234.
{department | REMOVE_WHITESPACE | LOWER | REPLACE_ALL, "&", "and"}
If department = "Sales & Marketing", the result is salesandmarketing.
TEST{| RANDOM_INTEGER, 1000, 9999}
Generates test IDs like TEST4827, TEST8391 (see RANDOM_INTEGER for details).
Before deploying transformers in production policies, you can validate formatter expressions directly in the Veza UI. This allows you to verify that your transformation logic produces the expected output without affecting live data.
When adding or editing a transformer in a policy, look for the Test Formatter button next to the transformer field. Clicking it opens a test dialog:
Enter your transformer expression in the Attribute Transformer field
Click the Test Formatter button to open the test dialog
The dialog shows input fields for each attribute referenced in your expression (e.g., {first_name}, {email})
Enter sample values for each attribute
Click Test Formatter in the dialog to evaluate the expression
View the result to verify the transformation produces expected output
Click Save to close the dialog, or Cancel to discard changes
The test dialog is available wherever transformers are configured, including:
Action synced attributes
Unique identifiers
Common transformers
Date formatters
For complex pipelines, test incrementally:
Test the first function alone to verify it handles the input correctly
Add each subsequent pipe and verify intermediate results
Validate the complete pipeline produces the final expected value
This step-by-step approach helps isolate issues when a transformation doesn't produce the expected output.
The test interface uses sample data you provide. Ensure your test values accurately represent the source attribute data types and formats you'll encounter in production.
Use inline testing during transformer development, then validate the complete policy with a dry run before deploying to production.
Inline Testing: Validating a single transformer expression
Dry Run: Testing how transformers work with real entity data and verifying complete policy workflow execution
As part of implementing Lifecycle Management (LCM) processes with Veza, you should create sets of common transformers to define how values such as username, login, or ID are sourced for each LCM Policy. These transformers can then be reused across all identity sync and deprovision policy workflows.
Create common transformers to consistently form attributes for specific entity types, and reuse them to avoid errors and save time when creating actions for that entity type. The order of common transformers matters when multiple transformers set the same destination attribute. Drag-and-drop to reorder common transformers and control precedence.
For example, defining a common synced attribute to describe how to format Azure AD account names {username}@evergreentrucks.com enables reuse across multiple workflow actions. You can also define synced attributes at the action level when they are used only once within a policy, such as setting the primary group DN and OU of de-provisioned identities to a group reserved for terminated accounts.
Common Transformer Examples:
ADAccountTransformer (ActiveDirectoryUser):
account_name: {display_full_name} (Continuous Sync: No) - Basic account name
distinguished_name: CN={first_name} {last_name},OU={department},OU={location},DC=company,DC=local (Continuous Sync: Yes) - Full AD path
user_principal_name: {first_name | SUB_STRING,0,1 | LOWER}.{last_name | LOWER}@company.com, fallback {first_name}{last_name}@company.com (Continuous Sync: Yes) - Email address
OktaAccountTransformer (OktaUser):
login: {first_name | SUB_STRING,0,1 | LOWER}.{last_name | LOWER}@company.com, fallback {first_name}{last_name}@company.com (Continuous Sync: Yes) - Email address
username_prefix: {first_name | SUB_STRING,0,1 | LOWER}.{last_name | LOWER}
AzureADTransformer (AzureADUser):
principal_name: {first_name}{last_name} (Continuous Sync: No) - Primary identifier
mail_nickname: {first_name | SUB_STRING,0,1 | LOWER}{last_name | LOWER}
display_name: {first_name} {last_name} (Continuous Sync: Yes) - Display name
GoogleAccountTransformer (GoogleWorkspaceUser):
Primary email: {first_name}{last_name}@company.com (Continuous Sync: No)
email_addresses: {username}@company.com (Continuous Sync: No) - Email list
recovery_email: {personal_email} (Continuous Sync: Yes) - Backup email
ContractorTransformer (ActiveDirectoryUser):
account_name: c-{username} (Continuous Sync: No) - Contractor prefix
distinguished_name: CN={first_name} {last_name},OU=Contractors,OU={department},DC=company,DC=local (Continuous Sync: Yes) - Contractor OU
description: Contractor - {vendor_company} - Start Date: {start_date} (Continuous Sync: Yes) - Metadata
RegionalEmailTransformer (ExchangeUser):
email_address: {username}@{region}.company.com (Continuous Sync: No) - Regional email
alias: {first_name}.{last_name}@{region}.company.com (Continuous Sync: Yes) - Regional alias
The $target attribute transformer function is used when a value consists of one or more attributes that require an operation(s), making it too complex to transform, but it needs to be reused.
Important: The $target function can only be used within the same Action.
For example, an email address consists of a username part followed by a domain. By using the $target function, you reuse only one attribute, username, which is computed in the same action, rather than repeating the first-name and last-name logic when building the email address.
Example:
Destination Attribute: username, Formatter: {firstname}{lastname}
Destination Attribute: (the email attribute), Formatter: {$target.username}@sample.com
The Custom Attribute Transformer function allows you to define a custom transformer that acts as an alias for applying one or more transformer functions.
For example, you can define a custom function named $CLEAN, which is used as {first_name | $CLEAN}. This function can consist of a series of transformer functions such as | ASCII | LOWER | REMOVE_CHAR |.
To define a custom attribute transformer, use the following guidelines:
Policy Version Definitions
Custom functions must be defined as part of the policy version.
These definitions are structured similarly to hard-coded definitions and are returned in the same format, allowing the Veza UI to handle them without modification.
The API for updating and retrieving a policy version must also support these custom function definitions.
Naming Convention: Custom functions must be in ALL CAPS and prefixed with a $ to avoid conflicts with built-in functions.
Custom Attribute Transformer Limitations
The following custom definitions are not supported:
Transformer functions with included transformer parameters
Nested transformer functions
Transformer functions with parameters
As part of the Identity Sync action, you can append values to multi-value Active Directory attributes without replacing existing values. This ensures that existing attribute values are preserved when adding new ones.
This feature is specific to Active Directory and is not available for other integrations.
Supported Multi-Value Attributes:
Active Directory supports appending for the following multi-value attributes:
organizationalStatus, departmentNumber, employeeType
servicePrincipalName, proxyAddresses
member, memberOf, roleOccupant, url, wWWHomePage
otherTelephone, otherMobile, otherIpPhone, otherFacsimileTelephoneNumber, otherHomePhone, otherPager, otherMailbox
And additional multi-value attributes including: objectClass, postalAddress, postOfficeBox, seeAlso, userCertificate, userSMIMECertificate, userPKCS12, securityIdentifierHistory, altSecurityIdentities, businessCategory, carLicense, homePostalAddress
Syntax:
Use the >> prefix before the array to append values:
>>[value1, value2, value3]
Appending syntax supports two array formats:
With quotes (JSON format): >>["Active", "Permanent"]
Without quotes (simple format): >>[Active, Permanent]
When you use this syntax:
New values are added to the end of the existing attribute values
Duplicate values are automatically removed
The order of existing values is preserved
For example, if an Active Directory user has:
organizationalStatus: ["Active", "Employee"]
And you apply the transformer:
>>[Employee, Contractor, Temporary]
The resulting value is:
organizationalStatus: ["Active", "Employee", "Contractor", "Temporary"]
Note that "Employee" was already present and not duplicated.
Setting vs. Appending:
To Replace existing values: Use [value1, value2] (without >>)
To Append to existing values: Use >>[value1, value2] (with >>)
Additional Notes:
The append prefix (>>) only works for multi-value attributes. It is ignored for single-value attributes
If the attribute has no existing values, the values are simply set (no difference from non-append behavior)
Both the appending syntax and the standard array syntax support arrays with or without quotes around values
Configure fallback formatters for uniquely identifying attributes during identity synchronization
Fallback formatters can help resolve conflicts when provisioning identities with unique attributes. This is particularly useful when automated provisioning requires unique identifiers, but the standard generated values are already in use.
When provisioning new identities through Lifecycle Management, unique attributes like usernames, login IDs, or email addresses must not conflict with existing values. Fallback formatters provide an automated way to generate alternative values when conflicts arise, ensuring provisioning can proceed without manual intervention.
You can configure fallback formatters when configuring a Sync Identities action to ensure new users can be onboarded efficiently, regardless of naming conflicts.
The most common use case for fallback formatters is handling username conflicts. For example:
Your organization uses a standard username format of first initial + last name (e.g., jsmith for John Smith).
When multiple employees have similar names, this can lead to conflicts:
John Smith already has jsmith
Jane Smith already has jsmith1
James Smith already has jsmith2
When Jennifer Smith joins, the fallback formatter automatically assigns jsmith3, maintaining your naming convention while ensuring uniqueness.
Fallback formatters can be configured as part of the "Sync Identities" action within a Lifecycle Management workflow:
Edit or create a Lifecycle Management policy
Edit the workflow containing the Sync Identities action
In the Sync Identities action configuration, click Add Fallback
Close the action sidebar and save your changes to the policy
Several transformers can be used for implementing fallback formatters depending on your specific use case.
A typical approach is to use the NEXT_NUMBER transformer, which is specifically designed to generate sequential numerical alternatives when naming conflicts occur.
The NEXT_NUMBER transformer:
Generates a set of sequential integers as strings
Takes two parameters: BeginInteger (starting number) and Length (how many numbers to generate)
Is unique among transformers in that it returns multiple values, making it ideal for fallback scenarios
In addition to NEXT_NUMBER, other transformers can be valuable for creating fallback formatters:
Using Random Alphanumeric for Unique Usernames:
{first_initial}{last_name}{RANDOM_ALPHANUMERIC_GENERATOR(4)}
This could generate usernames like jsmith8f3d instead of sequential jsmith1, jsmith2, etc.
Using UUID for Guaranteed Uniqueness:
{first_initial}{last_name}-{UUID_GENERATOR() | SUB_STRING,0,8}
This would append the first 8 characters of a UUID, creating identifiers like jsmith-a7f3e9c2.
When configuring a fallback formatter with the NEXT_NUMBER transformer:
Select the attribute that requires uniqueness (e.g., username, email)
Configure the primary pattern (e.g., {first_initial}{last_name})
Add a fallback using the NEXT_NUMBER transformer to generate sequential alternatives:
{first_initial}{last_name}{NEXT_NUMBER(1, 10)}
This will generate up to 10 alternatives: jsmith1, jsmith2, ... jsmith10
Here are some commonly used fallback patterns:
Primary {first_initial}{last_name}, fallback {first_initial}{last_name}{NEXT_NUMBER(1, 10)}: jsmith, jsmith1, jsmith2, etc.
Primary {first_name}.{last_name}, fallback {first_name}.{last_name}{NEXT_NUMBER(1, 10)}: john.smith, john.smith1, john.smith2
Primary {username}@domain.com, fallback {username}{NEXT_NUMBER(1, 10)}@domain.com
Primary {first_name}{last_initial}, fallback {first_name}{last_initial}{NEXT_NUMBER(1, 10)}: johns, johns1, johns2
When Lifecycle Management attempts to provision a new identity with a unique attribute value that already exists:
The system first tries the primary format (e.g., jsmith)
If a conflict is detected, it automatically tries the first alternative using the NEXT_NUMBER transformer (e.g., jsmith1)
If that value also exists, it tries the next alternative (e.g., jsmith2)
This process continues until either a unique value is found or all alternatives from the NEXT_NUMBER range are exhausted (in which case an error is reported).
This automated conflict resolution ensures provisioning can proceed without manual intervention, even when your standard naming conventions result in conflicts.
Reference guide for supported transformation functions and parameters for attribute transformers
This page includes a comprehensive list of all supported transformer functions and parameters. Some commonly used transformation functions include:
Replacing a character with a different one
Removing domains from email addresses
Transforming to upper, lower, title, sentence, camel, or snake case
Using a substring from the original value
See the Attribute Transformers documentation for more information about updating or creating attributes in downstream systems based on changes in your source of identity.
When configuring attribute transformers in Lifecycle Management policies, you work with three key fields:
A Destination Attribute is a target-system attribute that Lifecycle Management creates or updates (e.g., email, username, first_name).
Lifecycle Management uses the Destination Attribute when you build a transformer or Sync/Deprovision action. You select a Destination Attribute (from the target entity type), then supply a Formatter (a fixed value, a {source_attribute}, or a transformation pipeline). Destination Attributes can be configured for continuous sync, used by action-level or common transformers, and referenced with $target inside the same action.
A Formatter defines the exact value or expression used to populate a target attribute (Destination Attribute) when provisioning or syncing identities.
Formatters can be a fixed literal, a reference to a source-of-identity attribute (e.g., {first_name}), or an expression that applies transformation functions and pipelines (e.g., {email | REMOVE_DOMAIN | LOWER}).
They're configured on transformers used by Sync/Deprovision actions and control continuous sync behavior, fallbacks, and uniqueness.
The base transformation expression that defines how to construct the attribute value. This field can contain:
Source attribute references: {first_name}, {email}
Static text and source attributes combined: {first_name}.{last_name}@company.com
Static values only: "Veza"
A Pipeline Function is a chained sequence of attribute formatters that run in order using the pipe (|) operator.
Pipeline Functions take the output of one formatter as the input to the next, letting you build multi-step transformations (for triggers, transformers, and formatter fields). They are used in workflow trigger conditions and transformer attributes to normalize, format, or compute values (for example: {department | TRIM | UPPER} or {hire_date | DATE_FORMAT, "2006-01-02"}).
Optional transformation functions applied after the Formatter. Multiple functions are chained using the pipe (|) character, with each function's output becoming the next function's input. Examples:
| LOWER
| ASCII | REMOVE_WHITESPACE
| TRIM_CHARS, "." | UPPER
At runtime, Veza combines the {Formatter}{Pipeline Functions} fields to create the complete transformation.
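As an illustration of how the two fields combine at runtime, here is a sketch using assumed attribute names (first_name, last_name) and an assumed domain, mirroring the APPEND example later on this page:
Destination Attribute: email
Formatter: {first_name}.{last_name}
Pipeline Functions: | LOWER | APPEND, "@company.com"
Result: if first_name = John and last_name = Smith, the output is john.smith@company.com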
Each transformer below includes three types of examples:
Basic Usage: Shows the transformer used alone with a simple source attribute
In a Pipeline: Demonstrates chaining multiple transformers together
Example: Provides practical context from Lifecycle Management scenarios
Copy-paste these examples directly into your attribute transformer configuration, adjusting attribute names to match your source of identity.
Removes non-printable characters and replaces non-ASCII characters with their closest ASCII equivalents. Particularly useful for Active Directory sAMAccountName and other legacy systems with strict character requirements.
Note: The ASCII transformer operates on the base ASCII character set, not the extended set.
Interprets the incoming time string as if it were in the specified time zone, then converts it to a UTC time. (example: if the input is "1/2/2025 11pm" and the defined time zone is America/Los_Angeles the function will treat "1/2/2025 11pm" as local time in Los Angeles and output the corresponding UTC time "1/3/2025 7am")
Transforms language to RFC 5646 format.
RFC 5646 defines "Tags for Identifying Languages." It does not contain a fixed, exhaustive list of language codes within the RFC itself. Instead, it specifies the structure and rules for constructing language tags, which are then built using codes from various external standards and registries.
Interprets the incoming time string as if it were in UTC and then converts it to the specified time zone. (Example: if the input is "1/2/2025 11pm" and the specified time zone is America/Los_Angeles, the function will treat "1/2/2025 11pm" as UTC and output the corresponding America/Los_Angeles time.)
Basic Usage
Destination Attribute: email
Formatter: {username}
Pipeline Functions: | APPEND, "@company.com"
Result: If username = john.smith, output is [email protected]
In a Pipeline
Destination Attribute: display_name
Formatter: {first_name}
Pipeline Functions: | APPEND, " " | APPEND, "{last_name}"
Result: If first_name = John and last_name = Smith, output is John Smith
Example
Destination Attribute: user_principal_name
Formatter: {first_name}.{last_name}
Pipeline Functions: | LOWER | APPEND, "@contoso.com"
Use case: Generate standardized email addresses for Azure AD user provisioning
None (no parameters required)
Basic Usage
Destination Attribute: username
Formatter: {first_name}
Pipeline Functions: | ASCII
Result: If first_name = Łukasz, output is Lukasz
In a Pipeline
Destination Attribute: login_name
Formatter: {first_name}.{last_name}
Pipeline Functions: | ASCII | LOWER | REMOVE_WHITESPACE
Result: If first_name = José María, last_name = García, output is josemaria.garcia
Example
Destination Attribute: sAMAccountName
Formatter: {first_name}
Pipeline Functions: | ASCII | SUB_STRING, 0, 1 | LOWER
Use case: Generate Active Directory account names that comply with character restrictions while handling international names (converts "Łukasz" to "l")
Parameters: Time Zone (STRING), Format (STRING, optional)
Usage Example
Input:
{activation_date | ASSUME_TIME_ZONE, "America/Los_Angeles"}
{activation_date | ASSUME_TIME_ZONE, "America/Los_Angeles", "RFC3339"}
{activation_date | ASSUME_TIME_ZONE, "-07:00"}
{activation_date | ASSUME_TIME_ZONE, "-07:00", "RFC3339"}
Format (STRING, optional): [alpha2, alpha3, numeric], defaults to alpha2
Usage Example
Input:
{"US" | COUNTRY_CODE_ISO3166, "alpha3"}
Output:
USA
Hours (INTEGER, required): Number of hours to add (use negative values to subtract)
Days (INTEGER, optional): Number of days to add
Months (INTEGER, optional): Number of months to add
Years (INTEGER, optional): Number of years to add
Usage Example
Input:
{activation_date | DATE_ADJUST, +1, 2, 3, -1}
Adjusts the date by adding 1 hour, 2 days, 3 months, and subtracting 1 year.
{activation_date | DATE_ADJUST, +1, 2, 3, -1, "RFC3339"}
{activation_date | DATE_ADJUST, +1, 2, 3, -1, "2006-01-02T15:04:05Z07:00"}
Example
If the input date is 2021-01-01 00:00:00 and you apply DATE_ADJUST, +1, 2, 3, -1, the output is 2020-04-03 01:00:00 (added 1 hour, 2 days, 3 months, subtracted 1 year).
Days (INTEGER, required): Number of days to add (use negative values to subtract)
Format (STRING, optional): Output format (defaults to auto-detection)
Usage Example
Input:
{activation_date | DATE_ADJUST_DAY, +1}
Adds 1 day to the activation date.
{activation_date | DATE_ADJUST_DAY, +1, "RFC3339"}
{activation_date | DATE_ADJUST_DAY, +1, "2006-01-02T15:04:05Z07:00"}
Example
If the input date is 2021-01-01 00:00:00 and you apply DATE_ADJUST_DAY, +1, the output is 2021-01-02 00:00:00.
Go Time Layout Syntax: Unlike most date formatting systems that use patterns like YYYY-MM-DD, Go uses a reference date: Mon Jan 2 15:04:05 MST 2006. Each component of this specific date represents a format element. This can be unintuitive at first, but provides unambiguous formatting.
Understanding Go Date Format
The reference date Mon Jan 2 15:04:05 MST 2006 breaks down as:
Year: 2006 (4-digit year) or 06 (2-digit year)
Month: 01 (2-digit month, 01-12) or 1 (month without leading zero)
Common Format Patterns
ISO 8601 / RFC3339: 2006-01-02T15:04:05Z07:00 (example output: 2023-03-15T14:30:25-07:00)
US date: 01/02/2006 (example output: 03/15/2023)
European date: 02/01/2006 (example output: 15/03/2023)
LDAP/AD format: 20060102150405Z (example output: 20230315143025Z)
Named Format Aliases
Instead of Go layout strings, you can use these named aliases (case-insensitive):
dateonly: 2006-01-02 (example output: 2023-03-15)
timeonly: 15:04:05 (example output: 14:30:25)
datetime: 2006-01-02 15:04:05 (example output: 2023-03-15 14:30:25)
kitchen: 3:04PM (example output: 2:30PM)
The win32 format outputs the Windows FILETIME format used by Active Directory for attributes like accountExpires. This represents 100-nanosecond intervals since January 1, 1601 UTC.
Basic Usage
Destination Attribute: hire_date
Formatter: {start_date}
Pipeline Functions: | DATE_FORMAT, "01/02/2006"
Result: If start_date = 2023-03-15, output is 03/15/2023
In a Pipeline
Destination Attribute: formatted_date
Formatter: {hire_date}
Pipeline Functions: | DATE_FORMAT, "Jan 2, 2006" | REPLACE_ALL, " ", "_"
Result: If hire_date = 2023-03-15, output is Mar_15,_2023
Using Named Aliases
Destination Attribute: formatted_date
Formatter: {hire_date}
Pipeline Functions: | DATE_FORMAT, "rfc3339"
Result: Outputs ISO 8601 format like 2023-03-15T00:00:00Z
LDAP Z Time Format
Destination Attribute: accountExpires
Formatter: {termination_date}
Pipeline Functions: | DATE_FORMAT, "20060102150405Z"
Use case: Convert dates to LDAP Z time format (outputs UTC format like 20230315143025Z)
Active Directory FILETIME Format
Destination Attribute: accountExpires
Formatter: {termination_date}
Pipeline Functions: | DATE_FORMAT, "win32"
Use case: Convert dates to Windows FILETIME format for AD account expiration (outputs values like 133234218250000000)
Parsing Non-Standard Input Dates
When your source data uses a non-standard date format, provide both the output format AND input format:
First parameter: desired output format
Second parameter: format of the input data
For example, to convert 03-15-2023 (MM-DD-YYYY) to 2023-03-15 (ISO format):
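The exact expression for this conversion is not shown above; a minimal sketch, assuming a source attribute named start_date and the parameter order described (output format first, input format second):
{start_date | DATE_FORMAT, "2006-01-02", "01-02-2006"}
With an input of 03-15-2023, this would produce 2023-03-15.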
Basic Usage
Destination Attribute: initial
Formatter: {first_name}
Pipeline Functions: | FIRST_N, 1
Result: If first_name = John, output is J
In a Pipeline
Destination Attribute: username
Formatter: {first_name}.{last_name}
Pipeline Functions: | FIRST_N, 1 | LOWER
Result: If first_name = John, output is j
Example
Destination Attribute: username
Formatter: {first_name}
Pipeline Functions: | FIRST_N, 1 | LOWER
Use case: Create abbreviated usernames in the format "j.smith" by combining first initial with last name
EntityType (STRING, required): The type of graph entity to search (e.g., Employee, OktaUser, ActiveDirectoryUser)
SourceAttribute (STRING, required): The attribute on the entity to match against the input value
TargetAttribute (STRING, required): The attribute to return from the matched entity. Use id or type for built-in entity properties.
DefaultValue (STRING, optional): The value to return when no matching entity is found
How It Works
The input value (before the |) is used as the search term
The transformer finds an entity of type EntityType where SourceAttribute equals the input value
It returns the TargetAttribute value from that entity
If no entity is found and DefaultValue is provided, the default is returned; otherwise an error occurs
Special Behaviors
Empty input value: Returns empty string "" (no error)
Input wrapped in brackets [value]: Brackets are automatically stripped before lookup
TargetAttribute is id: Returns the entity's unique graph ID
TargetAttribute is type: Returns the entity's type name
No entity found, no default: Returns error with details about the failed lookup
Target attribute missing on entity: Returns error (unless default provided)
When used in sync workflows, this transformer checks previously computed values from the current job before querying the graph cache. This optimization prevents redundant lookups during batch operations.
Usage Examples
Example 1: Get manager's name from employee ID
Input: 12345 (an employee ID)
Finds: An Employee entity where employee_id = 12345
Returns: The manager_name attribute from that employee (e.g., Jane Smith)
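The expression for this example is not shown; a sketch, assuming the source attribute holding the employee ID is named employee_id and following the parameter order documented above:
{employee_id | FROM_ENTITY_ATTRIBUTE, "Employee", "employee_id", "manager_name"}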
Example 2: Get department from email with a default value
Input: [email protected]
Finds: An OktaUser entity where email = [email protected]
Returns: The department attribute, or Unknown if no user is found
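A corresponding sketch, assuming the source attribute is named email and "Unknown" is supplied as the optional default value:
{email | FROM_ENTITY_ATTRIBUTE, "OktaUser", "email", "department", "Unknown"}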
Example 3: Chain with other transformers
Takes the employee ID from Workday
Looks up the employee's cost center
Converts the result to uppercase
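A sketch of such a chain, assuming the Workday employee ID attribute is named employee_id:
{employee_id | FROM_ENTITY_ATTRIBUTE, "Employee", "employee_id", "cost_center" | UPPER}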
Example 4: Get entity's graph ID
Looks up an OktaUser by login
Returns the entity's unique graph ID (useful for subsequent lookups)
EntityType (STRING, required): The type of graph entity to search (e.g., Employee, OktaUser)
SourceAttribute (STRING, required): The attribute on entities to match against the input value
TargetAttribute (STRING, required): The attribute to collect from each matched entity (use id or type for built-in entity properties)
How It Works
The input value (before the |) is used as the search term
The transformer finds ALL entities of type EntityType where SourceAttribute equals the input value
It collects the TargetAttribute value from each matched entity
Results are joined using the separator (comma by default)
If no entities are found, returns an empty string
Special Behaviors
Empty input value
Returns empty string "" (no error)
Input wrapped in brackets [value]
Brackets are automatically stripped before lookup
No entities found
Returns empty string "" (no error)
Target attribute missing on some entities
Those entities are skipped (no error logged)
TargetAttribute is id
Returns the entity's unique graph ID
TargetAttribute is type
Returns the entity's type name
Result ordering
Results appear in graph discovery order (not guaranteed to be consistent)
Unlike FROM_ENTITY_ATTRIBUTE, this transformer does not error when entities are missing the target attribute—it silently skips them. Verify your results include all expected values.
Usage Examples
Example 1: Get all group names for a user
{jdoe@example.com | FROM_MANY_ENTITIES_ATTRIBUTE, "OktaGroup", "member_email", "name"}
Input: jdoe@example.com
Finds: All OktaGroup entities where member_email = jdoe@example.com
Returns: Engineering,Sales,All-Employees (comma-separated group names)
Example 2: Custom separator for multi-value attributes
{12345 | FROM_MANY_ENTITIES_ATTRIBUTE, "Application", "owner_id", "app_name", ";"}
Input: 12345 (an owner ID)
Finds: All Application entities owned by this user
Returns: Slack;Salesforce;Jira (semicolon-separated)
Example 3: Get all entity IDs
{Engineering | FROM_MANY_ENTITIES_ATTRIBUTE, "Employee", "department", "id"}
Finds all employees in the Engineering department
Returns their graph IDs as a comma-separated list
Example 4: Get entity types
{jdoe@example.com | FROM_MANY_ENTITIES_ATTRIBUTE, "Identity", "email", "type"}
Looks up all identity nodes with the given email
Returns their type names (e.g., OktaUser,ActiveDirectoryUser,WorkdayWorker)
Parameter Format
None (no parameters required)
Usage Example
Input:
{"Spanish" | LANGUAGE_RFC5646}
Output:
es
Usage Example
Input:
{"helloworld" | LAST_N, 5}
Output:
world
Basic Usage
Destination Attribute: employee_id
Formatter: {id}
Pipeline Functions: | LEFT_PAD, 5, "0"
Result: If id = 123, output is 00123
In a Pipeline
Destination Attribute: formatted_code
Formatter: {cost_center}
Pipeline Functions: | TRIM_CHARS, "0" | LEFT_PAD, 6, "0"
Result: If cost_center = 001234, output is 001234 (the leading zeros are trimmed first, then the value is re-padded to six digits)
Example
Destination Attribute: employee_id
Formatter: {employee_id}
Pipeline Functions: | REMOVE_CHARS, "#" | TRIM_CHARS, "0" | LEFT_PAD, 6, "0"
Use case: Standardize employee IDs to 6-digit format (converts "##001234##" to "001234")
TableName
STRING
Yes
Name or ID of the lookup table
ColumnName
STRING
Yes
Column to search for the input value
ReturnColumnName
STRING
Yes
Column whose value to return
DefaultValue
STRING
No
Value to return if lookup fails
How It Works
The transformer first tries to match TableName against configured lookup table names
If no name match is found, TableName is treated as a table ID
The input value is searched in ColumnName
If found, the corresponding ReturnColumnName value is returned
If not found and DefaultValue is provided, the default is returned
If not found and no default, an error is returned
Table name matching is case-sensitive. Ensure the table name in your transformer exactly matches the lookup table name defined in your policy, including capitalization.
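For illustration, assuming the policy defines a lookup table named locationTable, the first reference below resolves by name, while the second (different capitalization) would not match by name and would instead be treated as a table ID:
| LOOKUP, "locationTable", "location_code", "city"
| LOOKUP, "LocationTable", "location_code", "city"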
Special Behaviors
Table name not found
Falls back to treating the parameter as a table ID
Value not found in table, default provided
Returns the default value
Value not found in table, no default
Returns error with lookup details
Other lookup errors
Returns error with full context
Basic Usage
Destination Attribute: city
Formatter: {location_code}
Pipeline Functions: | LOOKUP, "locationTable", "location_code", "city"
Result: If location_code = IL001 and locationTable contains that code, output is Chicago
With Default Value
Destination Attribute: region
Formatter: {office_code}
Pipeline Functions: | LOOKUP, "regionTable", "code", "region_name", "Unknown Region"
Result: Returns mapped region name, or Unknown Region if code not in table
In a Pipeline
Destination Attribute: office_email_domain
Formatter: {office_code}
Pipeline Functions: | LOOKUP, "officeTable", "code", "domain" | LOWER
Result: Looks up domain from table, then converts to lowercase
Example
Destination Attribute: office_location
Formatter: {location}
Pipeline Functions: | LOOKUP, "locationTable", "location_code", "city"
Use case: Convert abbreviated location codes (like "IL001", "CA002") to full city names for user profiles, maintaining consistency across different source systems
Basic Usage
Destination Attribute: username
Formatter: {first_name}
Pipeline Functions: | LOWER
Result: If first_name = JOHN, output is john
In a Pipeline
Destination Attribute: email
Formatter: {first_name}.{last_name}@company.com
Pipeline Functions: | LOWER
Result: If first_name = John, last_name = Smith, output is john.smith@company.com
Example
Destination Attribute: login
Formatter: {first_name}
Pipeline Functions: | SUB_STRING, 0, 1 | LOWER
Use case: Create standardized Okta login names in format "j.smith" by extracting first initial and converting to lowercase
Usage Example
Input:
{"Hello World" | LOWER_SNAKE_CASE}
Output:
hello_world
Basic Usage
Destination Attribute: identifier
Formatter: {field_name}
Pipeline Functions: | LOWER_CAMEL_CASE
Result: If field_name = hello world, output is helloWorld
In a Pipeline
Destination Attribute: api_field
Formatter: {attribute_name}
Pipeline Functions: | TRIM | LOWER_CAMEL_CASE
Result: If attribute_name = "User Display Name ", output is userDisplayName
Example
Destination Attribute: json_property
Formatter: {column_name}
Pipeline Functions: | LOWER_CAMEL_CASE
Use case: Convert database column names or display names to JSON property names following JavaScript/TypeScript conventions
Basic Usage
Destination Attribute: username
Formatter: {first_name}.{last_name}
Pipeline Functions: | LOWER | NEXT_NUMBER, 2, 3
Result: Generates john.smith, john.smith2, john.smith3, john.smith4 as fallback options
In a Pipeline
Destination Attribute: email
Formatter: {first_name}{last_name}
Pipeline Functions: | LOWER | NEXT_NUMBER, 2, 5 | APPEND, "@company.com"
Result: If first_name = John, last_name = Smith, creates email alternatives johnsmith@company.com, johnsmith2@company.com, etc.
Example
Destination Attribute: user_principal_name
Formatter: (see conditional example below)
Pipeline Functions: N/A (used within IF statement)
Use case: Intelligent username generation with length-based fallbacks:
IF sys_attr__would_be_value_len le 20
{first_name | LOWER}.{last_name | LOWER | NEXT_NUMBER, 2, 3}
ELSE IF sys_attr__would_be_value_len le 30
{first_name | LOWER}.{last_name | LOWER | FIRST_N, 1 | NEXT_NUMBER, 2, 3}
ELSE
{first_name | LOWER | FIRST_N, 1}.{last_name | LOWER | FIRST_N, 1 | NEXT_NUMBER, 2, 3}
This handles both name length constraints and uniqueness conflicts automatically.
Parameter Format
Integer (NUMBER, required), Length (NUMBER, required)
Usage Example
Input:
{"foobar" | NEXT_NUMBER, 1, 12, 4}
Output:
foob foo1 foo2 foo3 foo4 foo5 foo6 foo7 foo8 foo9 fo10 fo11 fo12
Usage Example
Input:
{NOW}
{| NOW, "RFC3339"}
{NOW, "RFC3339"}
{NOW, "2006-01-02T15:04:05Z07:00"}
Usage Example
Input:
{"+1-800-555-1212" | PHONE_NUMBER_E164}
Output:
+18005551212
Basic Usage
Destination Attribute: location_code
Formatter: {city_code}
Pipeline Functions: | PREPEND, "CORP_"
Result: If city_code = NYC, output is CORP_NYC
In a Pipeline
Destination Attribute: contractor_username
Formatter: {username}
Pipeline Functions: | PREPEND, "c-" | LOWER
Result: If username = JSmith, output is c-jsmith
Example
Destination Attribute: account_name
Formatter: {username}
Pipeline Functions: | PREPEND, "c-"
Use case: Identify contractor accounts by prefixing their usernames (converts "jsmith" to "c-jsmith" to distinguish from employee accounts)
Usage Example
Input:
{| RANDOM_ALPHANUMERIC_GENERATOR, 8}
Output:
a1B2c3D4
Note: This transformer generates an alphanumeric string with eight characters.
Basic Usage
Destination Attribute: test_id
Formatter: TEST
Pipeline Functions: | RANDOM_INTEGER, 1000, 9999
Result: Output is TEST followed by random number like TEST4827
In a Pipeline
Destination Attribute: temp_username
Formatter: user
Pipeline Functions: | RANDOM_INTEGER, 1, 100 | APPEND, "@temp.local"
Result: Output like user42@temp.local
Example
Destination Attribute: temporary_id
Formatter: TEST
Pipeline Functions: | RANDOM_INTEGER, 1000, 9999
Use case: Generate unique test identifiers for sandbox environments (produces values like "TEST4827", "TEST8391")
Usage Example
Input:
{| RANDOM_NUMBER_GENERATOR, 4}
Output:
4829
Note: This transformer generates a random numeric string with four characters.
Usage Example
Input:
{| RANDOM_STRING_GENERATOR, 6}
Output:
uFkLxw
Note: This transformer generates a random alpha string with six characters.
Basic Usage
Destination Attribute: username
Formatter: {email}
Pipeline Functions: | REMOVE_CHARS, "@."
Result: If email = john.doe@example.com, output is johndoeexamplecom
In a Pipeline
Destination Attribute: phone
Formatter: {phone_number}
Pipeline Functions: | REMOVE_CHARS, "()- "
Result: If phone_number = (123) 456-7890, output is 1234567890
Example
Destination Attribute: user_id
Formatter: {email}
Pipeline Functions: | REMOVE_CHARS, "-"
Use case: Create clean user IDs from email addresses by removing hyphens (converts "john-doe@example.com" to "johndoe@example.com")
Usage Example
Input:
{"José" | REMOVE_DIACRITICS}
Output:
Jose
Basic Usage
Destination Attribute: username
Formatter: {email}
Pipeline Functions: | REMOVE_DOMAIN
Result: If email = john.smith@company.com, output is john.smith
In a Pipeline
Destination Attribute: login_name
Formatter: {email}
Pipeline Functions: | REMOVE_DOMAIN | REPLACE_ALL, ".", "_"
Result: If email = john.smith@company.com, output is john_smith
Example
Destination Attribute: username
Formatter: {email}
Pipeline Functions: | REMOVE_DOMAIN
Use case: Extract usernames for target systems that don't use email-style logins (converts "jsmith@company.com" to "jsmith")
Basic Usage
Destination Attribute: username
Formatter: {display_name}
Pipeline Functions: | REMOVE_WHITESPACE
Result: If display_name = John A. Doe, output is JohnA.Doe
In a Pipeline
Destination Attribute: tag
Formatter: {department}
Pipeline Functions: | REMOVE_WHITESPACE | LOWER
Result: If department = Human Resources, output is humanresources
Example
Destination Attribute: cost_center_code
Formatter: {cost_center}
Pipeline Functions: | REMOVE_WHITESPACE
Use case: Ensure cost center codes have no embedded spaces for system integration (converts "CC 12345" to "CC12345")
Basic Usage
Destination Attribute: username
Formatter: {display_name}
Pipeline Functions: | REPLACE_ALL, " ", "_"
Result: If display_name = John Smith, output is John_Smith
In a Pipeline
Destination Attribute: identifier
Formatter: {employee_id}
Pipeline Functions: | REPLACE_ALL, "-", "" | TRIM
Result: If employee_id = EMP-12345, output is EMP12345
Example
Destination Attribute: email
Formatter: {email}
Pipeline Functions: | REPLACE_ALL, " ", "."
Use case: Fix malformed email addresses with spaces (converts "john smith@company.com" to "john.smith@company.com")
Pad (CHARACTER, optional): Default is space
Usage Example
Input:
{"123" | RIGHT_PAD, 5, "0"}
Output:
12300
Basic Usage
Destination Attribute: description
Formatter: {notes}
Pipeline Functions: | SENTENCE_CASE
Result: If notes = THE QUICK BROWN FOX, output is The quick brown fox
In a Pipeline
Destination Attribute: formatted_notes
Formatter: {comment}
Pipeline Functions: | TRIM | SENTENCE_CASE
Result: If comment = "IMPORTANT MESSAGE HERE ", output is Important message here
Example
Destination Attribute: job_description
Formatter: {job_title}
Pipeline Functions: | SENTENCE_CASE
Use case: Normalize job descriptions from all-caps source data to sentence case for cleaner display
Usage Example
Input:
{"[email protected]" | SPLIT, "@", 0}
Output:
first.last
Note: This transformer returns the results where the index starts at zero (0).
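Using a different index selects a different segment; for example (illustrative), index 1 would return the domain portion:
{"first.last@example.com" | SPLIT, "@", 1}
Output:
example.com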
Basic Usage
Destination Attribute: initial
Formatter: {first_name}
Pipeline Functions: | SUB_STRING, 0, 1
Result: If first_name = John, output is J
In a Pipeline
Destination Attribute: short_id
Formatter: {employee_id}
Pipeline Functions: | SUB_STRING, 3, 4 | UPPER
Result: If employee_id = EMP12345, output is 1234
Example
Destination Attribute: username
Formatter: {first_name}
Pipeline Functions: | SUB_STRING, 0, 1 | LOWER
Use case: Generate usernames like "j.smith" by extracting first initial and combining with last name
Basic Usage
Destination Attribute: display_name
Formatter: {full_name}
Pipeline Functions: | TITLE_CASE
Result: If full_name = john doe, output is John Doe
In a Pipeline
Destination Attribute: formatted_name
Formatter: {name}
Pipeline Functions: | TRIM | TITLE_CASE
Result: If name = "JANE SMITH ", output is Jane Smith
Example
Destination Attribute: display_name
Formatter: {username}
Pipeline Functions: | TITLE_CASE
Use case: Format dot-separated usernames for display (converts "john.doe" to "John.Doe")
Basic Usage
Destination Attribute: username
Formatter: {display_name}
Pipeline Functions: | TRIM
Result: If display_name = " John Doe ", output is John Doe
In a Pipeline
Destination Attribute: email
Formatter: {email_address}
Pipeline Functions: | TRIM | LOWER
Result: If email_address = " John.Smith@Company.com ", output is john.smith@company.com
Example
Destination Attribute: display_name
Formatter: {display_name}
Pipeline Functions: | TRIM
Use case: Clean up imported user data that may have padding whitespace from CSV files or database fields
Basic Usage
Destination Attribute: employee_id
Formatter: {id_number}
Pipeline Functions: | TRIM_CHARS, "0."
Result: If id_number = 000.123.000, output is 123
In a Pipeline
Destination Attribute: clean_code
Formatter: {code}
Pipeline Functions: | TRIM_CHARS, "-_" | UPPER
Result: If code = ---ABC123___, output is ABC123
Example
Destination Attribute: office_code
Formatter: {office_code}
Pipeline Functions: | TRIM_CHARS, ".0" | TRIM_CHARS_RIGHT, ".USCA"
Use case: Clean up office codes with variable padding (converts "000.8675309.USCA" to "8675309")
Basic Usage
Destination Attribute: cost_center
Formatter: {cost_center_code}
Pipeline Functions: | TRIM_CHARS_LEFT, "0"
Result: If cost_center_code = 00012345, output is 12345
In a Pipeline
Destination Attribute: identifier
Formatter: {raw_id}
Pipeline Functions: | TRIM_CHARS_LEFT, "x" | UPPER
Result: If raw_id = xxxABC123, output is ABC123
Example
Destination Attribute: cost_center
Formatter: {cost_center}
Pipeline Functions: | TRIM_CHARS_LEFT, "0"
Use case: Remove leading zeros from cost center codes while preserving trailing zeros (converts "00012345" to "12345")
Basic Usage
Destination Attribute: office_code
Formatter: {raw_office_code}
Pipeline Functions: | TRIM_CHARS_RIGHT, "0"
Result: If raw_office_code = ABC12300, output is ABC123
In a Pipeline
Destination Attribute: clean_code
Formatter: {code}
Pipeline Functions: | TRIM_CHARS_RIGHT, "temp" | UPPER
Result: If code = ABC123temp, output is ABC123
Example
Destination Attribute: office_code
Formatter: {office_code}
Pipeline Functions: | TRIM_CHARS_RIGHT, "0"
Use case: Remove trailing zeros from office codes while preserving leading zeros (converts "ABC12300" to "ABC123")
Basic Usage
Destination Attribute: department_code
Formatter: {department}
Pipeline Functions: | UPPER
Result: If department = sales, output is SALES
In a Pipeline
Destination Attribute: user_id
Formatter: {username}
Pipeline Functions: | REMOVE_WHITESPACE | UPPER
Result: If username = john smith, output is JOHNSMITH
Example
Destination Attribute: last_name_normalized
Formatter: {name}
Pipeline Functions: | UPPER
Use case: Standardize employee last names for matching across systems (converts "Smith" to "SMITH")
Usage Example
Input:
{"hello world" | UPPER_CAMEL_CASE}
Output:
HelloWorld
Usage Example
Input:
{"hello world" | UPPER_SNAKE_CASE}
Output:
HELLO_WORLD
Accepts either an IANA time zone name (for example, America/Los_Angeles) or a UTC offset (for example, -07:00).
Parameter Format
Time Zone (STRING, required), Format (STRING, optional)
Usage Example
Input:
{activation_date | UTC_TO_TIME_ZONE, "America/Los_Angeles"}
{activation_date | UTC_TO_TIME_ZONE, "America/Los_Angeles", "RFC3339"}
{activation_date | UTC_TO_TIME_ZONE, "-07:00"}
{activation_date | UTC_TO_TIME_ZONE, "-07:00", "RFC3339"}
Parameter Format
None (no parameters required)
Usage Example
Input:
{| UUID_GENERATOR}
Output:
123e4567-e89b-12d3-a456-426614174000
Basic Usage
Destination Attribute: employee_id
Formatter: {id}
Pipeline Functions: | ZERO_PAD, 6
Result: If id = 1234, output is 001234
Example
Destination Attribute: badge_number
Formatter: {badge_id}
Pipeline Functions: | ZERO_PAD, 6
Use case: Standardize badge numbers to 6-digit format for access control systems (converts "1234" to "001234", leaves "12345678" unchanged, and passes non-numeric values like "admin" through unchanged)
Years
INTEGER
No
Number of years to add
Format
STRING
No
Output format (defaults to auto-detection)
Component | Layout token | Description
Month | 1 | 1 or 2-digit month (1-12)
Month | Jan | 3-letter abbreviation
Month | January | Full month name
Day | 02 | 2-digit day (01-31)
Day | 2 | 1 or 2-digit day (1-31)
Day | _2 | Space-padded day
Hour | 15 | 24-hour format (00-23)
Hour | 03 or 3 | 12-hour format (01-12 or 1-12)
Minute | 04 | Minutes (00-59)
Second | 05 | Seconds (00-59)
AM/PM | PM | Uppercase AM/PM
AM/PM | pm | Lowercase am/pm
Timezone | MST | Timezone abbreviation
Timezone | -0700 | Numeric offset
Timezone | Z0700 | Z for UTC, offset otherwise
20230315143025Z
Description | Layout | Example output
Human readable | Jan 2, 2006 | Mar 15, 2023
Full datetime | January 2, 2006 3:04 PM | March 15, 2023 2:30 PM
Date only | 2006-01-02 | 2023-03-15
Time only | 15:04:05 | 14:30:25
2:30PM
Named format | Layout | Example output
rfc822 | 02 Jan 06 15:04 MST | 15 Mar 23 14:30 PDT
rfc822z | 02 Jan 06 15:04 -0700 | 15 Mar 23 14:30 -0700
rfc850 | Monday, 02-Jan-06 15:04:05 MST | Wednesday, 15-Mar-23 14:30:25 PDT
rfc1123 | Mon, 02 Jan 2006 15:04:05 MST | Wed, 15 Mar 2023 14:30:25 PDT
rfc1123z | Mon, 02 Jan 2006 15:04:05 -0700 | Wed, 15 Mar 2023 14:30:25 -0700
rfc3339 | 2006-01-02T15:04:05Z07:00 | 2023-03-15T14:30:25-07:00
rfc3339nano | 2006-01-02T15:04:05.999999999Z07:00 | 2023-03-15T14:30:25.123456789-07:00
ansic | Mon Jan _2 15:04:05 2006 | Wed Mar 15 14:30:25 2023
unixdate | Mon Jan _2 15:04:05 MST 2006 | Wed Mar 15 14:30:25 PDT 2023
rubydate | Mon Jan 02 15:04:05 -0700 2006 | Wed Mar 15 14:30:25 -0700 2023
stamp | Jan _2 15:04:05 | Mar 15 14:30:25
stampmilli | Jan _2 15:04:05.000 | Mar 15 14:30:25.123
stampmicro | Jan _2 15:04:05.000000 | Mar 15 14:30:25.123456
stampnano | Jan _2 15:04:05.000000000 | Mar 15 14:30:25.123456789
layout | 01/02 03:04:05PM '06 -0700 | 03/15 02:30:25PM '23 -0700
win32 | Active Directory FILETIME | 133234218250000000
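If these named layouts are accepted wherever a format string is expected, as the win32 example for accountExpires suggests, a named layout can stand in for an explicit pattern; for instance (illustrative):
{hire_date | DATE_FORMAT, "rfc3339"}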