# Insight Point (Helm Chart)

An [Insight Point](/4yItIzMvkpAvMVFAamTf/integrations/connectivity/insight-point.md) is a lightweight agent that runs in your environment to securely connect Veza to internal data sources, relay webhooks, route Lifecycle Management actions, and support OAA custom integrations. Veza provides a Helm chart to deploy and manage an Insight Point on Kubernetes. Once deployed, the Insight Point can serve any integration that requires private network connectivity, including the [Kubernetes integration](/4yItIzMvkpAvMVFAamTf/integrations/connectivity/insight-point.md) for cluster RBAC discovery.

## Configuration Options

The Insight Point Helm chart accepts the following configuration parameters via `--set` flags. Typically only `key` is required.

| Parameter                   | Description                                                             | Default                                    | Example                                                           |
| --------------------------- | ----------------------------------------------------------------------- | ------------------------------------------ | ----------------------------------------------------------------- |
| `key`                       | Insight Point Registration key for connecting to Veza                   | `""`                                       | `--set key=abc123`                                                |
| `image`                     | Container image to deploy                                               | `public.ecr.aws/veza/insight_point:latest` | `--set image=my-registry/insight_point:v1`                        |
| `addr`                      | Address for Veza API connection, overriding the one provided by the key | `""`                                       | `--set addr=customer.vezacloud.com`                               |
| `skipVerify`                | Disable TLS certificate validation                                      | `false`                                    | `--set skipVerify=true`                                           |
| `authority`                 | Overrides the request authority for certificate validation              | `""`                                       | `--set authority=veza.example.com`                                |
| `caBundle`                  | Custom CA certificate bundle in PEM format (inline)                     | `""`                                       | See [Custom CA Certificates](#custom-ca-certificate-bundle)       |
| `caBundleConfigMapRef`      | Reference to existing ConfigMap containing CA bundle                    | `""`                                       | `--set caBundleConfigMapRef=custom-ca-bundle`                     |
| `env`                       | List of additional environment variables to inject into the pod         | `[]`                                       | See [Custom environment variables](#custom-environment-variables) |
| `replicaCount`              | Number of Insight Point replicas for high availability                  | `3`                                        | `--set replicaCount=1`                                            |
| `createClusterRole`         | Create ClusterRole for Kubernetes integration RBAC                      | `true`                                     | `--set createClusterRole=false`                                   |
| `roleName`                  | Name of the ClusterRole created when `createClusterRole` is enabled     | `veza-insight-point`                       | `--set roleName=custom-veza-role`                                 |
| `enableSecrets`             | Enable Kubernetes Secrets extraction via ClusterRole permissions        | `false`                                    | `--set enableSecrets=true`                                        |
| `nodeSelector`              | Constrain pods to nodes with specific labels                            | `{}`                                       | `--set nodeSelector.disktype=ssd`                                 |
| `tolerations`               | Allow pods to schedule on tainted nodes                                 | `[]`                                       | See [Scheduling](#scheduling-constraints)                         |
| `topologySpreadConstraints` | Control pod distribution across topology domains                        | `[]`                                       | See [Scheduling](#scheduling-constraints)                         |

* `key` is your unique Insight Point registration key, generated in the Veza UI.
  * Create a key in Veza: **Integrations** > **Insight Points** > **Create**
  * Store this value securely, as it cannot be recovered if lost
* `skipVerify` (environment variable `TLS_INSECURE_SKIP_VERIFY`) disables TLS certificate validation and should only be set to `true` for testing or troubleshooting.
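Several of these options can be combined in a single values file instead of repeated `--set` flags. A sketch with illustrative values only:

```yaml
# Illustrative values file; substitute your own settings.
key: "abc123"              # registration key from the Veza UI
replicaCount: 3            # default; lower to 1 for basic deployments
createClusterRole: true    # create RBAC resources for Kubernetes discovery
enableSecrets: false       # leave false unless Secrets extraction is needed
nodeSelector:
  disktype: ssd
```

Pass the file with `--values <file>.yaml` on `helm install` or `helm upgrade`.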

### Custom CA Certificate Bundle

If your Insight Point needs to trust custom Certificate Authorities (for example, when connecting through a corporate proxy with SSL inspection, or when the control plane uses certificates signed by a private CA), you can provide a custom CA bundle.

The CA bundle should be in PEM format and can contain multiple certificates.

**Important:** Do not use `skipVerify=true` in production. Instead, add your custom CA certificates using this feature. The `skipVerify` option should only be used for testing and development.

#### Option 1: Inline CA Bundle

Create a file `ca-values.yaml` with the certificate contents:

```yaml
caBundle: |
  -----BEGIN CERTIFICATE-----
  MIIDXTCCAkWgAwIBAgIJAKZ...
  -----END CERTIFICATE-----
  -----BEGIN CERTIFICATE-----
  MIIDXTCCAkWgAwIBAgIJAKZ...
  -----END CERTIFICATE-----
```
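Indenting a multi-certificate bundle by hand is error-prone. A small sketch that generates `ca-values.yaml` from an existing PEM file (the default path shown is only an example; point `CA_FILE` at your own bundle):

```shell
# Wrap a PEM bundle into the ca-values.yaml format expected by the chart.
# CA_FILE defaults to the Debian/Ubuntu system bundle path as an example.
CA_FILE="${CA_FILE:-/etc/ssl/certs/ca-certificates.crt}"
{
  echo "caBundle: |"
  sed 's/^/  /' "$CA_FILE"   # indent every line by two spaces for YAML
} > ca-values.yaml
```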

Install or upgrade with:

```bash
helm install veza-insight-point oci://public.ecr.aws/veza/helm-chart/insight-point \
  --version <VERSION> \
  --namespace veza \
  --create-namespace \
  --set key=<your-registration-key> \
  --values ca-values.yaml
```

#### Option 2: Reference to existing ConfigMap

To use an existing ConfigMap containing your CA bundle:

```bash
# Create ConfigMap from your CA certificate file
kubectl create configmap custom-ca-bundle \
  --from-file=ca-certificates.crt=/path/to/your/ca-bundle.crt \
  -n veza

# Reference in helm install
helm install veza-insight-point oci://public.ecr.aws/veza/helm-chart/insight-point \
  --version <VERSION> \
  --namespace veza \
  --create-namespace \
  --set key=<your-registration-key> \
  --set caBundleConfigMapRef=custom-ca-bundle
```

**Note:** The custom CA bundle is mounted to `/etc/ssl/certs/ca-certificates.crt` inside the container, which is the standard location for Go applications. This will replace the default system CA bundle, so ensure your custom bundle includes any default certificates you need to trust.

#### Proxy Configuration

When using an HTTPS inspection proxy:

* Set `addr` to your proxy's address if different from the Veza endpoint. This value overrides the connection address derived from the registration key.
* Ensure your proxy can connect to your Veza deployment.
* `authority` specifies the domain name to use for TLS certificate validation and is only required when `addr` points to a proxy instead of directly to Veza. Must be a specific domain (wildcards not supported).
* Use the custom CA bundle feature (above) to trust your proxy's CA certificate.
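Putting the proxy-related options together, a sketch of a values file for an HTTPS inspection proxy (hostnames and the ConfigMap name are illustrative):

```yaml
addr: proxy.internal.example.com      # proxy address the Insight Point connects to
authority: customer.vezacloud.com     # domain used for TLS certificate validation
caBundleConfigMapRef: proxy-ca-bundle # ConfigMap containing the proxy's CA
```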

### Custom environment variables

You can inject additional environment variables into the Insight Point pod using the `env` parameter. This is required by integrations that read configuration from the pod environment, such as Active Directory with [Kerberos Token Authentication](/4yItIzMvkpAvMVFAamTf/integrations/integrations/active-directory.md#kerberos-token-authentication).

In your `values.yaml`:

```yaml
env:
  - name: KRB5_CONFIG
    value: /tmp/krb5.conf
  - name: KRB5CCNAME
    value: /tmp/krb5cc_go
  - name: LDAP_CERTIFICATE
    value: /tmp/ldap_cert.pem
```

Or using `--set` flags:

```bash
helm upgrade veza-insight-point oci://public.ecr.aws/veza/helm-chart/insight-point \
  --version <VERSION> \
  --namespace veza \
  --reuse-values \
  --set env[0].name=KRB5_CONFIG \
  --set env[0].value=/tmp/krb5.conf \
  --set env[1].name=KRB5CCNAME \
  --set env[1].value=/tmp/krb5cc_go
```

### Placing credential files on the pod

Some integrations require credential files to be present at specific paths on the Insight Point pod. The Helm chart does not support arbitrary volume mounts. The `/tmp` directory is available as an ephemeral volume inside the container and can be used to place credential files.

Use `kubectl cp` to copy files to a running pod:

```bash
# Find the pod name
kubectl get pods -n veza -l app=veza-insight-point

# Copy the Kerberos configuration and LDAP certificate
kubectl cp krb5.conf veza/<pod-name>:/tmp/krb5.conf
kubectl cp ldap_cert.pem veza/<pod-name>:/tmp/ldap_cert.pem
```

{% hint style="info" %}
The Insight Point container uses a minimal base image without a shell or additional binaries. Tools like `kinit` are not available inside the container. Generate Kerberos credential caches on an external host and copy the resulting cache file to the pod using `kubectl cp`.
{% endhint %}

{% hint style="warning" %}
Files in `/tmp` are stored in an ephemeral volume and are lost if the pod restarts. Re-copy credential files after any pod restart. Plan a process to renew the credential cache before tickets expire — the default lifetime is 5 days, subject to the maximum configured on your domain controller.
{% endhint %}

### Configuring Tags

Tags are custom key-value labels that help organize and categorize your Insight Point instances. For an overview of tags, their use cases, and requirements, see [Tags](/4yItIzMvkpAvMVFAamTf/integrations/connectivity/insight-point.md#tags) in the main Insight Point documentation.

#### Using values.yaml File

Create or edit your `values.yaml`:

```yaml
# Custom tags
tags:
  environment: production
  datacenter: us-west-1
  team: platform-engineering
  owner: ops-team@company.com
```

Then install:

```bash
helm install veza-insight-point oci://public.ecr.aws/veza/helm-chart/insight-point \
  --version <VERSION> \
  -f values.yaml \
  --namespace veza
```

#### Using Command-Line Flags

```bash
helm install veza-insight-point oci://public.ecr.aws/veza/helm-chart/insight-point \
  --version <VERSION> \
  --set key=<your-registration-key> \
  --set tags.environment=production \
  --set tags.datacenter=us-west-1 \
  --set tags.team=platform-engineering \
  --namespace veza
```

#### Updating Tags on Existing Deployment

```bash
helm upgrade veza-insight-point oci://public.ecr.aws/veza/helm-chart/insight-point \
  --version <VERSION> \
  --reuse-values \
  --set tags.environment=production \
  --set tags.new_key=new_value \
  --namespace veza
```

### High Availability Configuration

The Insight Point Helm chart supports high availability (HA) deployment to ensure continuous operation and resilience against node failures or pod disruptions. By default, the chart deploys three replicas of the Insight Point. You can customize the HA settings based on your requirements.

#### Replica Count

For high availability, deploy multiple Insight Point replicas:

* **Single Instance**: Use `replicaCount: 1` for basic deployments
* **High Availability**: Use `replicaCount: 2` or higher for production environments
* **Recommended**: `replicaCount: 3` (default) provides good balance of availability and resource usage

#### Pod Anti-Affinity

When running multiple replicas, configure pod anti-affinity to distribute pods across nodes or availability zones:

| Parameter                     | Description               | Values                                                                    | Default                  |
| ----------------------------- | ------------------------- | ------------------------------------------------------------------------- | ------------------------ |
| `podAntiAffinity.type`        | Anti-affinity enforcement | `soft` (preferred) or `hard` (required)                                   | `soft`                   |
| `podAntiAffinity.topologyKey` | Distribution topology     | `kubernetes.io/hostname` (nodes) or `topology.kubernetes.io/zone` (zones) | `kubernetes.io/hostname` |

* **Soft Anti-Affinity**: Kubernetes will try to place pods on different nodes/zones but will allow co-location if necessary
* **Hard Anti-Affinity**: Kubernetes will never place pods on the same node/zone, which may prevent scheduling if insufficient resources
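These options map to a values file as follows. A sketch that spreads replicas across availability zones with hard enforcement:

```yaml
replicaCount: 3
podAntiAffinity:
  type: hard                                # never co-locate replicas
  topologyKey: topology.kubernetes.io/zone  # spread across availability zones
```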

#### Pod Disruption Budget

Control the number of pods that can be disrupted simultaneously during maintenance:

| Parameter                          | Description                             | Default |
| ---------------------------------- | --------------------------------------- | ------- |
| `podDisruptionBudget.enabled`      | Enable PodDisruptionBudget              | `true`  |
| `podDisruptionBudget.minAvailable` | Minimum pods that must remain available | `1`     |

The PodDisruptionBudget ensures that at least one Insight Point remains available during cluster updates, node maintenance, or voluntary pod evictions.
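For example, with three replicas you could raise `minAvailable` so that only one pod can be voluntarily evicted at a time (a sketch):

```yaml
replicaCount: 3
podDisruptionBudget:
  enabled: true
  minAvailable: 2   # with 3 replicas, at most one voluntary disruption at a time
```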

### Webhook Relay Configuration

The webhook relay service allows the Insight Point to forward webhook requests to destinations in your private network. For an overview of webhook relay, when to use it, security considerations, and supported host formats, see [Webhook Relay](/4yItIzMvkpAvMVFAamTf/integrations/connectivity/insight-point.md#webhook-relay) in the main Insight Point documentation.

#### Configuration options

The following parameters configure webhook relay behavior:

| Parameter                   | Description                                                                                                                                                                 | Default | Example                                                                                                                                                 |
| --------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `webhookRelay.enabled`      | Enable the webhook relay service                                                                                                                                            | `false` | `--set webhookRelay.enabled=true`                                                                                                                       |
| `webhookRelay.allowedHosts` | A list of allowed destinations (supports multiple formats as documented in [Webhook Relay](/4yItIzMvkpAvMVFAamTf/integrations/connectivity/insight-point.md#webhook-relay)) | `""`    | `--set 'webhookRelay.allowedHosts[0]=172.17.0.0/24' --set 'webhookRelay.allowedHosts[1]=172.16.0.*' --set 'webhookRelay.allowedHosts[2]=*.example.com'` |

#### Configuration via Command Line

Configure webhook relay when installing or upgrading the Insight Point:

```bash
helm install veza-insight-point oci://public.ecr.aws/veza/helm-chart/insight-point \
  --version <VERSION> \
  --namespace <NAMESPACE> \
  --set key=<KEY> \
  --set webhookRelay.enabled=true \
  --set webhookRelay.allowedHosts[0]="webhook.site" \
  --set webhookRelay.allowedHosts[1]="*.example.com" \
  --set webhookRelay.allowedHosts[2]="172.17.0.0/24"
```

Or when upgrading an existing deployment:

```bash
helm upgrade veza-insight-point oci://public.ecr.aws/veza/helm-chart/insight-point \
  --version <VERSION> \
  --namespace <NAMESPACE> \
  --set webhookRelay.enabled=true \
  --set webhookRelay.allowedHosts[0]="webhook.site" \
  --set webhookRelay.allowedHosts[1]="*.example.com"
```

#### Configuration via values.yaml

Create or edit a `values.yaml` file with webhook relay configuration:

```yaml
key: "<your-insight-point-key>"

webhookRelay:
  enabled: true
  allowedHosts:
    - "webhook.site"
    - "*.example.com"        # Wildcard domain
    - "172.17.0.100"         # IP address
    - "10.0.0.0/8"           # CIDR range
    - "172.16.*"             # Wildcard IP
```

Then install or upgrade with the values file:

```bash
helm install veza-insight-point oci://public.ecr.aws/veza/helm-chart/insight-point \
  --version <VERSION> \
  --namespace <NAMESPACE> \
  --values values.yaml
```

#### Verifying Webhook Relay Configuration

To verify webhook relay is configured correctly:

1. Check the Helm values:

   ```bash
   helm get values veza-insight-point -n <NAMESPACE>
   ```
2. Check the pod environment variables:

   ```bash
   kubectl get pods -n <NAMESPACE> -l app=veza-insight-point -o jsonpath='{.items[0].spec.containers[0].env}' | jq
   ```

If webhook relay is enabled but not working:

* Verify the allowed hosts are in the correct format
* Check that the destination is included in the allowed hosts list
* Review the Insight Point logs for validation or connection errors:

  ```bash
  kubectl logs -l app=veza-insight-point -n <NAMESPACE>
  ```
* Ensure the destination is actually reachable from the Insight Point's network

### RBAC and Kubernetes Permissions

By default, the Helm chart creates a ClusterRole, ServiceAccount, and ClusterRoleBinding to enable the Insight Point to discover Kubernetes RBAC entities. The ClusterRole grants the following permissions:

| Resource              | API Group                   | Verbs  | Purpose                                    |
| --------------------- | --------------------------- | ------ | ------------------------------------------ |
| `namespaces`          | `""`                        | `list` | Discover cluster namespaces                |
| `configmaps`          | `""`                        | `get`  | Read configuration                         |
| `clusterroles`        | `rbac.authorization.k8s.io` | `list` | Discover RBAC roles                        |
| `clusterrolebindings` | `rbac.authorization.k8s.io` | `list` | Discover role assignments                  |
| `roles`               | `rbac.authorization.k8s.io` | `list` | Discover namespace-scoped roles            |
| `rolebindings`        | `rbac.authorization.k8s.io` | `list` | Discover namespace-scoped role assignments |

When `enableSecrets=true` is set, the ClusterRole additionally grants `list` access to `secrets` across all namespaces.

**Disabling RBAC resource creation:** If your organization manages RBAC separately, set `createClusterRole=false`. When disabled, the chart skips all three RBAC resources — the ClusterRole, ClusterRoleBinding, and ServiceAccount — and the pod runs as the namespace's default ServiceAccount. Before deploying, create equivalent RBAC resources bound to the ServiceAccount the pod will use so the Insight Point retains the permissions listed above.
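The permissions in the table above translate into manifests like the following sketch. The binding targets the `default` ServiceAccount in the `veza` namespace, which the pod uses when `createClusterRole=false` (names are illustrative; add a `secrets` `list` rule only if you need Secrets extraction):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: veza-insight-point-custom
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["list"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"]
    verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: veza-insight-point-custom
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: veza-insight-point-custom
subjects:
  - kind: ServiceAccount
    name: default
    namespace: veza
```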

### Secrets Vault Configuration

The Insight Point can retrieve credentials from external secrets vault providers (such as Azure Key Vault) instead of requiring credentials to be passed directly. This is configured using the `secretsVaultsConfig` parameter or a reference to an existing Kubernetes secret.

#### Inline configuration

```yaml
secretsVaultsConfig:
  vaults:
    - name: example-vault
      vault_provider: azure_key_vault
      auth_type: client_secret
      auth_config:
        vault_uri: https://my-vault.vault.azure.net
        tenant_id: <tenant-id>
        client_id: <client-id>
        client_secret: <client-secret>
```

#### Reference to existing secret

If the secrets vault configuration already exists as a Kubernetes secret:

```bash
helm install veza-insight-point oci://public.ecr.aws/veza/helm-chart/insight-point \
  --version <VERSION> \
  --namespace veza \
  --create-namespace \
  --set key=<your-registration-key> \
  --set secretsVaultsConfigSecretRef=my-vault-config
```

| Parameter                      | Description                                                  | Default |
| ------------------------------ | ------------------------------------------------------------ | ------- |
| `secretsVaultsConfig`          | Inline YAML configuration for secrets vault providers        | `{}`    |
| `secretsVaultsConfigSecretRef` | Reference to an existing Kubernetes secret with vault config | `""`    |

Only one of `secretsVaultsConfig` or `secretsVaultsConfigSecretRef` can be set.

### Scheduling Constraints

The Helm chart supports standard Kubernetes scheduling parameters for controlling pod placement.

#### Node Selector

Constrain Insight Point pods to nodes with specific labels:

```yaml
nodeSelector:
  disktype: ssd
  node-role.kubernetes.io/compute: "true"
```

#### Tolerations

Allow pods to schedule on tainted nodes:

```yaml
tolerations:
  - key: dedicated
    operator: Equal
    value: insight-point
    effect: NoSchedule
```

#### Topology Spread Constraints

Control how pods are distributed across topology domains:

```yaml
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: insight-point
```

### Resource Allocation

The Helm chart sets fixed resource requests and limits of **2 CPU cores** and **4 GB RAM** per pod. These values match the [system requirements](/4yItIzMvkpAvMVFAamTf/integrations/connectivity/insight-point.md#system-requirements) and cannot be overridden via Helm values. If your workload requires different resource allocation, contact Veza support.

### Requirements

A Kubernetes Helm chart is a package format used to define, install, and upgrade applications in Kubernetes. Helm is often referred to as a package manager for Kubernetes. To install the chart, you will need:

* **System Resources**: Ensure your Kubernetes cluster has sufficient resources to meet the [Insight Point system requirements](/4yItIzMvkpAvMVFAamTf/integrations/connectivity/insight-point.md#system-requirements) (minimum: 2 CPU cores, 4 GB RAM per Insight Point pod).
* **Insight Point Key**: You will need an Insight Point registration key. To create one, go to **Integrations** > **Insight Points** > **Create** in Veza.
* **Insight Point Version**: Note the most recent Insight Point version from [Veza's OCI repository](https://gallery.ecr.aws/veza/helm-chart/insight-point).
* **Access to the Kubernetes Cluster**: Ensure you have the necessary permissions and access credentials to interact with the target Kubernetes cluster.
* **Helm Installed**: Ensure Helm version `3.8` or greater is installed on your local machine. You can install Helm by following the official documentation: [Helm Installation](https://helm.sh/docs/intro/install/).
* **Registry Access**: Your organization's security policies must allow installing charts from the Veza public ECR registry (`public.ecr.aws/veza`).

### Install Insight Point (Helm Chart)

1. **Customize Values and Install the Insight Point**:

   Use the `helm install` command to install the Insight Point into the Kubernetes cluster. Replace `<NAME>`, `<VERSION>`, `<NAMESPACE>`, and `<KEY>` with your specific values:

   ```shell
   helm install <NAME> oci://public.ecr.aws/veza/helm-chart/insight-point \
     --version <VERSION> \
     --namespace <NAMESPACE> \
     --create-namespace \
     --set key=<KEY>
   ```

   * `--namespace <NAMESPACE>`: required if installing the Insight Point into a different namespace than the default.
   * `--create-namespace`: required if the namespace does not exist yet.
   * `--set enableSecrets=true`: optional; set to enable Kubernetes Secrets extraction (secrets are not extracted by default).

   A Veza Insight Point registration key must be provided; specify it with the `--set key=<KEY>` option when installing the chart.

   Example:

   ```shell
   helm install veza-insight-point oci://public.ecr.aws/veza/helm-chart/insight-point \
     --version <VERSION> \
     --namespace veza \
     --create-namespace \
     --set enableSecrets=true \
     --set key=<YOUR_KEY>
   ```
2. **Verify Installation**:

   Verify the status of the installation by running:

   ```shell
   helm list -n <NAMESPACE>
   ```

   This command returns a list of Helm releases, including the Insight Point you just installed. Ensure the status is `deployed`.
3. **Get Insight Point Logs**:

   If the Insight Point fails to initialize or can't connect to Veza, review the container logs for details:

   ```shell
   kubectl logs -l app=veza-insight-point -n <NAMESPACE>
   ```
4. **Upgrade and Maintain**:

   Over time, you may need to upgrade the Insight Point to newer versions or adjust its configuration. Use the `helm upgrade` command to make these changes.

   **Standard upgrade**:

   ```shell
   helm upgrade <NAME> oci://public.ecr.aws/veza/helm-chart/insight-point --version <VERSION> --namespace <NAMESPACE>
   ```

   Note that newer versions can introduce breaking changes (e.g., replacing Kubernetes resources with others), which can cause a brief unavailability of the Insight Point.
5. **Uninstall the Insight Point**:

   If you need to uninstall the Insight Point, you can do so using the `helm uninstall` command:

   ```shell
   helm uninstall <NAME> --namespace <NAMESPACE>
   ```

