This document is for:

Deployment: Invicti Platform on-premises

Troubleshooting

1. Pods stuck in pending state

Symptoms

Pods show STATUS: Pending in the output of:

kubectl get pods -n invicti

Diagnosis

kubectl describe pod <pod-name> -n invicti

Common causes and solutions

1. Insufficient resources

Check node resource availability:

kubectl top nodes
kubectl describe nodes | grep -A 5 "Allocated resources"
  • kubectl top nodes - this only works if the Metrics API is available in your cluster
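
If the Metrics API is not available, it is typically provided by the metrics-server add-on. A minimal sketch of installing it from the upstream manifest (assuming the cluster can reach GitHub; your distribution may ship its own metrics add-on instead):

# Install metrics-server from the official upstream manifest
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# After a minute or two, confirm the Metrics API responds
kubectl top nodes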

Solution

Reduce resource allocations for running pods, or scale your cluster by adding additional nodes to ensure sufficient capacity.
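
To see which pods request the most CPU and memory before deciding what to reduce, one option is:

# List CPU and memory requests per pod
kubectl get pods -n invicti -o custom-columns=NAME:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu,MEM_REQ:.spec.containers[*].resources.requests.memory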

Install with a reduced resource profile:

helm upgrade invicti-platform . \
-f values-resources-basic.yaml \
-n invicti

2. Storage issues

Check PersistentVolumeClaims:

kubectl get pvc -n invicti

Solution

Ensure a default storage class exists and is properly configured:

Verify that one storage class is marked as (default):

kubectl get storageclass
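
If no class is marked as (default), you can set one; a sketch using the standard Kubernetes annotation (replace <storage-class-name> with a class from the list above):

kubectl patch storageclass <storage-class-name> \
-p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'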

2. ImagePullBackOff or ErrImagePull

Symptoms

Pods show STATUS: ImagePullBackOff or ErrImagePull in the output of:

kubectl get pods -n invicti

Diagnosis

Look for authentication or image pull errors in the pod events:

kubectl describe pod <pod-name> -n invicti

Solutions

Registry authentication failed

Check that your email address and license keys are correct, then redeploy.
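
To confirm which credentials the cluster is actually using, inspect the image pull secret referenced by the failing pod; the secret name below is a placeholder:

# Find which pull secret the pod references
kubectl get pod <pod-name> -n invicti -o jsonpath='{.spec.imagePullSecrets}'

# Decode the registry credentials stored in that secret
kubectl get secret <pull-secret-name> -n invicti \
-o jsonpath='{.data.\.dockerconfigjson}' | base64 -d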

Network connectivity

Test connectivity to the registry from within the cluster:

kubectl run test-pull --rm -it --restart=Never \
--image=curlimages/curl \
-- curl -I https://platform-registry.invicti.com

It should return HTTP 200 or 301.

3. CrashLoopBackOff

Symptoms

Pods show STATUS: CrashLoopBackOff in the output of:

kubectl get pods -n invicti

Diagnosis

Check the pod logs and recent events:

# Check logs from the previous (crashed) container
kubectl logs <pod-name> -n invicti --previous

# Check events
kubectl get events -n invicti --sort-by='.lastTimestamp' | grep <pod-name>

Common causes and solutions

Database connection issues for PostgreSQL-related services

# Verify PostgreSQL is running
kubectl get pods -n invicti -l app=postgresql

# Check PostgreSQL logs
kubectl logs -n invicti deployment/postgresql

# Test connectivity from a pod
kubectl run psql-test -n invicti --rm -it --restart=Never \
--image=postgres:16 \
--env="PGPASSWORD=$(kubectl get secret -n invicti invicti-generated-secrets -o jsonpath='{.data.data_databasePassword}' | base64 -d)" \
-- psql -h postgresql -U postgres -c "SELECT 1"
Database connection issues for MongoDB-related services

# Verify MongoDB is running
kubectl get pods -n invicti -l app=mongodb

# Check MongoDB logs
kubectl logs -n invicti statefulset/mongodb

# Test connectivity
kubectl run mongo-test -n invicti --rm -it --restart=Never \
--image=mongo:8.0 \
-- mongosh "mongodb://mongo-service:27017" --eval "db.adminCommand('ping')"

4. LoadBalancer external IP stuck on pending

Symptoms

EXTERNAL-IP shows <pending> in the output of:

kubectl get svc nginx-service -n invicti

Diagnosis

kubectl describe svc nginx-service -n invicti

Solutions

For cloud providers (AWS, GCP, Azure)

Ensure your cluster has the permissions required to create load balancers:

  • AWS: Check EKS cluster role has elasticloadbalancing:* permissions
  • GCP: Verify GKE cluster has compute load balancer admin role
  • Azure: Confirm AKS has network contributor role
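
As an illustration for AWS (the role name is a placeholder for your cluster's node or cluster role), you can list the policies attached to the role with the AWS CLI:

aws iam list-attached-role-policies --role-name <eks-cluster-role-name>
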
For bare-metal clusters

Bare-metal Kubernetes has no built-in LoadBalancer implementation, so the service stays in <pending> until one is installed.
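
A common choice on bare metal is MetalLB. A minimal sketch of installing it with Helm (the pool name and address range are placeholders you must adapt to your network):

# Add the MetalLB chart repository and install it
helm repo add metallb https://metallb.github.io/metallb
helm install metallb metallb/metallb -n metallb-system --create-namespace

# Define an address pool MetalLB can assign to LoadBalancer services
kubectl apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: invicti-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250
EOF

# Advertise the pool on the local network (layer 2 mode)
kubectl apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: invicti-l2
  namespace: metallb-system
EOF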

5. NATS connection failures

Symptoms

Services cannot connect to the NATS messaging system.

Diagnosis

# Check NATS pods
kubectl get pods -n invicti -l app=natsio

# Verify NATS StatefulSet
kubectl describe statefulset natsio -n invicti

# Check NATS logs
kubectl logs -n invicti statefulset/natsio

# Test NATS connectivity
kubectl run nats-test -n invicti --rm -it --restart=Never \
--image=natsio/nats-box:0.18.1 \
-- nats server check --server=nats://natsio:4222

6. SeaweedFS storage issues

Symptoms

File uploads fail or scan data is not persisted.

Diagnosis

# Check SeaweedFS pods
kubectl get pods -n invicti | grep seaweedfs

# Verify StatefulSets
kubectl get statefulset -n invicti | grep seaweedfs

# Check master logs
kubectl logs -n invicti seaweedfs-master-0

# Check filer logs
kubectl logs -n invicti seaweedfs-filer-0

# Verify S3 service
kubectl get svc -n invicti | grep seaweedfs

Test S3 connectivity:

# Run the AWS CLI in a temporary pod
kubectl run aws-cli -n invicti --rm -it --restart=Never \
--image=amazon/aws-cli \
-- s3 ls --endpoint-url=http://seaweedfs:8333 \
--no-sign-request

Bucket verification:

# Exec into filer pod
kubectl exec -it -n invicti seaweedfs-filer-0 -- sh

# List buckets
weed shell
> s3.bucket.list

Expected buckets should include:

  • dast-scan-results
  • dast-scan-data
  • dast-scan-configuration
  • dast-lsr-logs
  • dast-http-data
  • apihub
  • loki-bucket
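
If any of these buckets are missing, they can be recreated from the same weed shell session; for example, for one of the expected names above:

# From inside the seaweedfs-filer-0 pod
weed shell
> s3.bucket.create -name dast-scan-results
> s3.bucket.list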

7. DAST scanner not scaling

Symptoms

Scan jobs are queued but scanner pods are not being created.

Diagnosis

# Check KEDA ScaledJob
kubectl get scaledjob -n invicti

# Describe ScaledJob
kubectl describe scaledjob dast-scanner -n invicti

# Check KEDA operator logs
kubectl logs -n keda -l app=keda-operator

# Verify ElasticMQ queue
kubectl get pods -n invicti -l app=elasticmq
kubectl logs -n invicti deployment/elasticmq
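
If the ScaledJob looks healthy, also check whether any scanner Jobs were actually created and whether KEDA reported scaling errors as events; a quick sketch:

# List Jobs spawned for the scanner (names typically start with the ScaledJob name)
kubectl get jobs -n invicti | grep dast-scanner

# Look for scaling-related events on the ScaledJob
kubectl get events -n invicti --field-selector involvedObject.name=dast-scanner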

Advanced diagnostics

Get all resources in namespace

kubectl get all -n invicti

View recent events

kubectl get events -n invicti \
--sort-by='.lastTimestamp' \
--field-selector type!=Normal

Check resource consumption

kubectl top pods -n invicti
kubectl top nodes

Export Helm values for review

helm get values invicti-platform -n invicti > current-values.yaml

View Helm release history

helm history invicti-platform -n invicti

Rollback to previous version

helm rollback invicti-platform <revision> -n invicti

Debug pod startup issues

# View init container logs
kubectl logs <pod-name> -n invicti -c <init-container-name>

# Get pod YAML for inspection
kubectl get pod <pod-name> -n invicti -o yaml

# Execute into running pod
kubectl exec -it <pod-name> -n invicti -- /bin/sh

Get support

If issues persist after troubleshooting:

Collect logs

Use this command to collect logs:

kubectl logs -n invicti -l app --all-containers=true > logs.txt

What this command does:

  • kubectl logs - retrieves logs from running containers in your Kubernetes cluster.
  • -n invicti - tells kubectl which namespace to look in.
  • -l app - a label selector that tells Kubernetes to gather logs from all pods that have the label key app. Most Helm charts, including this one, assign the app label automatically to all components of the release, so this flag ensures you capture logs from every related pod.
  • --all-containers=true - includes logs from every container within each pod, not just the first one. This is useful because some pods run multiple containers (for example, a main app and a sidecar).

This will print all recent logs from every container in every pod that belongs to the application.

Collect diagnostic information

# Save all pod logs
kubectl logs -n invicti -l app --all-containers=true \
--prefix=true > invicti-platform-logs.txt

# Export cluster info
kubectl cluster-info dump --namespaces invicti > cluster-dump.txt

# Get Helm release details
helm get all invicti-platform -n invicti > helm-release.txt

Contact Invicti support

Contact Invicti support with this information prepared:

  • Platform version
  • Kubernetes version and distribution
  • Diagnostic files collected above
  • Description of the issue and steps to reproduce
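
The version details in the first two items can be gathered with standard commands, for example:

# Kubernetes server version and node details
kubectl version
kubectl get nodes -o wide

# Chart and app version of the deployed release
helm list -n invicti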

If you need to remove your deployment, continue to the uninstallation instructions.



