This document is for:

Deployment: Invicti Platform on-premises

Kubernetes cluster requirements for the DAST Helm chart

To ensure a reliable installation and smooth operation, your Kubernetes cluster should meet the following prerequisites and recommendations. These requirements help ensure that all components of the Helm chart—such as KEDA, storage backends, load balancing, and infrastructure services—operate correctly and scale as intended.

1. No existing KEDA installation

The Helm chart bundles and deploys its own supported version of KEDA (including its operator and CRDs). Because KEDA CRDs are cluster-wide resources, only one KEDA installation can run in a cluster.

If another version of KEDA is already installed, resource conflicts will occur.

Action:

  • Uninstall or deactivate any existing KEDA installation before deploying the chart, or
  • Install this chart in a separate cluster where KEDA is not present.
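
You can check whether KEDA is already present before installing the chart. The commands below are a minimal sketch; the release and namespace names are placeholders that depend on how KEDA was originally installed:

  # Any output here indicates existing KEDA CRDs in the cluster
  kubectl get crd | grep keda.sh

  # Look for a running KEDA operator in any namespace
  kubectl get pods -A | grep -i keda

  # If KEDA was installed with Helm, find and remove that release first
  helm list -A | grep -i keda
  helm uninstall <release-name> -n <namespace>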

2. Node resources and autoscaling capacity

The DAST scanner scales using KEDA, which may create many concurrent job pods. Your cluster must be able to handle this workload.

Requirements:

  • Sufficient CPU and memory across nodes to run scanning jobs and supporting components.
  • Ability to schedule up to N concurrent scanner pods, where N is the maximum expected scale.
  • In cloud environments, enable the Kubernetes Cluster Autoscaler.

This ensures there is always enough capacity available to run the required number of scanner jobs and prevents them from being stuck in a pending state due to insufficient nodes.

Recommendation: Plan cluster size (node count and machine type) based on expected scanning frequency, concurrency, and workload bursts.
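
As a quick sanity check of available capacity, you can inspect what each node has allocated and watch for pods stuck in Pending during peak scanning. These are standard kubectl commands, shown here only as a starting point:

  # Show allocatable CPU/memory and current requests per node
  kubectl describe nodes | grep -A 8 "Allocated resources"

  # List pods stuck in Pending, often a sign of insufficient node capacity
  kubectl get pods -A --field-selector=status.phase=Pending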

3. Default StorageClass supporting single-writer volumes

The deployment requires persistent volumes for several built-in services. Your cluster must have a default StorageClass capable of dynamic provisioning.

Requirements:

  • A default StorageClass must exist; otherwise, PersistentVolumeClaims will fail to bind.
  • The storage backend must support ReadWriteOnce (RWO) single-writer volumes.

Most managed cloud Kubernetes offerings meet these requirements by default (for example, AWS EBS, Azure Managed Disks, Google Cloud Persistent Disks).

If your environment doesn't provide a default StorageClass (common in bare‑metal clusters), create one or configure the chart to use an existing provisioner.

For production setups, reliable block storage with adequate performance is recommended (or NFS if multi-writer access is ever needed).
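
To confirm that a default StorageClass is configured, you can list the classes and, if needed, mark an existing one as the default. The class name below is a placeholder:

  # The default class is shown with "(default)" next to its name
  kubectl get storageclass

  # Mark an existing class as the cluster default
  kubectl patch storageclass <class-name> \
    -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'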

Storage capacity planning

The following persistent volumes are required for the built-in infrastructure components:

  • SeaweedFS: 1000Gi (data) + 100Gi (filer) → 1100Gi total
  • PostgreSQL: 5Gi
  • MongoDB: 3Gi
  • Valkey: 8Gi

Minimum recommended total storage: 1.2 to 1.5 TB for production deployments.

Important

DAST scan results stored in SeaweedFS can grow significantly based on usage. Monitor storage consumption and plan capacity growth accordingly.
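
To keep track of consumption, you can list the PersistentVolumeClaims in the installation namespace and check disk usage from inside a pod. The namespace, pod name, and mount path below are placeholders that depend on your installation:

  # List claims and their requested capacity
  kubectl get pvc -n <namespace>

  # Check actual disk usage on a volume from inside a pod (adjust the mount path)
  kubectl exec -n <namespace> <seaweedfs-data-pod> -- df -h /data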

4. LoadBalancer service support

The Helm chart provisions an NGINX reverse proxy exposed via a Service of type LoadBalancer.

Requirements:

  • The cluster environment must support allocating an external IP or hostname for LoadBalancer services.

On managed cloud platforms (AWS, Azure, GCP), this is built in: creating a Service of type LoadBalancer automatically allocates a cloud load balancer and a public IP. For example, EKS clusters provision AWS Elastic Load Balancers, AKS uses Azure Load Balancers, and GKE uses Google Cloud Load Balancers.

For bare‑metal or private Kubernetes clusters without a cloud provider:

  • Deploy a solution such as MetalLB to assign external IPs to LoadBalancer services, or
  • Use NodePort with external DNS/routing as an alternative.

Ensure your cluster can provide an accessible external address for the NGINX service, either through native cloud provider integration or a custom load balancer implementation.
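
For bare-metal clusters using MetalLB, a minimal layer-2 configuration might look like the sketch below. The pool name and address range are examples only and must be replaced with addresses that are routable in your network:

  apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  metadata:
    name: dast-pool
    namespace: metallb-system
  spec:
    addresses:
      - 192.168.10.240-192.168.10.250   # example range, adjust to your network
  ---
  apiVersion: metallb.io/v1beta1
  kind: L2Advertisement
  metadata:
    name: dast-l2
    namespace: metallb-system
  spec:
    ipAddressPools:
      - dast-pool

Once an address is assigned, kubectl get svc should show an EXTERNAL-IP for the NGINX service instead of <pending>.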

5. Outbound network connectivity

Your cluster requires outbound access to the following:

  • platform-registry.invicti.com - to pull container images
  • SMTP server - required if email notifications are enabled (configurable in values.yaml)
  • Target applications - DAST scanners must reach the applications being tested

Ensure firewall, proxy, or routing rules allow this outbound connectivity.
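
A simple way to verify outbound connectivity from inside the cluster is to run a short-lived test pod against each endpoint. The SMTP host and port below are placeholders, and flag support may vary with the image used:

  # Test reachability of the container registry from within the cluster
  kubectl run net-test --rm -it --restart=Never --image=curlimages/curl --command -- \
    curl -sI https://platform-registry.invicti.com

  # Test SMTP connectivity if email notifications are enabled (replace host and port)
  kubectl run smtp-test --rm -it --restart=Never --image=busybox --command -- \
    nc -zv <smtp-host> 587

Target applications can be checked the same way, substituting the scan target's URL.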

6. Cloud provider considerations

The Helm chart is cloud‑agnostic, but different environments require different preparation.

AWS (EKS)

  • Default storage class (gp2/gp3) supports RWO volumes out of the box.
  • Ensure the cluster is configured for AWS Load Balancer provisioning (usually handled via the IAM roles on the worker nodes).
  • Enable the Cluster Autoscaler for node scaling when the scanning jobs increase (AWS provides a Cluster Autoscaler addon, or you can deploy it manually).
  • Verify subnet tagging, IAM roles, and security groups permit load balancer creation and EBS usage.
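
For example, you can confirm that the EBS CSI driver is available and which storage class is the default; on newer EKS versions, dynamic EBS provisioning requires the EBS CSI driver add-on:

  # Verify the EBS CSI driver is registered in the cluster
  kubectl get csidriver ebs.csi.aws.com

  # Confirm which storage class is the cluster default
  kubectl get storageclass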

GCP (GKE)

  • Default storage class (standard or standard-rwo) supports RWO volumes out of the box.
  • Ensure the cluster can provision a Google Cloud Load Balancer for LoadBalancer services.
  • Enable the Cluster Autoscaler on node pools to scale nodes when scanning jobs increase.
  • Verify node IAM permissions, firewall rules, and quotas allow persistent disk creation, load balancer provisioning, and network access.
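
Node-pool autoscaling can be enabled from the gcloud CLI, for example as below; the cluster, pool, zone, and node limits are placeholders to adjust for your workload:

  # Enable autoscaling on an existing node pool
  gcloud container clusters update <cluster-name> \
    --enable-autoscaling --node-pool <pool-name> \
    --min-nodes 3 --max-nodes 10 \
    --zone <zone>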

On‑premises / Other Kubernetes distributions

For on-premises clusters and distributions such as kubeadm, Rancher, OpenShift, MicroK8s, and kind:

Storage:

  • Install a dynamic storage provisioner (NFS, GlusterFS, or local‑path provisioner).
  • Ensure the default StorageClass supports RWO access.
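
As one option, Rancher's local-path provisioner provides dynamic RWO volumes backed by node-local disks. This is only a sketch (check the project's documentation for the current manifest), and because volumes are tied to a single node it may not suit multi-node production storage:

  # Install the local-path dynamic provisioner
  kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

The resulting local-path class can then be marked as the cluster default as shown in the StorageClass section above.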

Ingress / Load Balancing:

  • Deploy an ingress controller such as ingress-nginx or Traefik (see the example after this list).
  • Use MetalLB or an equivalent solution for external IP allocation.
  • Alternatively, expose services via NodePort with external routing.
  • You may also use DNS entries pointing to node IPs as a workaround, but a proper load balancer solution is recommended for production.
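
If an ingress controller is not already present, ingress-nginx can be installed with Helm; the release and namespace names below are examples:

  helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
  helm repo update
  helm install ingress-nginx ingress-nginx/ingress-nginx \
    --namespace ingress-nginx --create-namespace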

Resources:

  • Autoscaling depends on the capabilities of your Kubernetes distribution; not all on-premises environments support automatic node scaling.
  • Monitor cluster capacity carefully to handle peak scanning loads.
  • The cluster admin should be prepared to add nodes or schedule jobs carefully to avoid saturation.
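
If the metrics-server add-on is installed, current utilization can be checked at a glance to spot saturation early:

  # Live CPU/memory usage per node (requires metrics-server)
  kubectl top nodes

  # Heaviest pods by memory across the cluster
  kubectl top pods -A --sort-by=memory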

Continue with the pre-installation checklist to confirm your environment is fully prepared for deployment.


