availability

Package: Invicti API Security Standalone or Bundle

NTA with eBPF Sniffer

The Invicti Network Traffic Analyzer (NTA) eBPF Sniffer captures encrypted API traffic at the Linux kernel level using eBPF technology. It intercepts plaintext data before encryption and after decryption — enabling automatic API discovery for HTTPS/TLS traffic without requiring TLS termination, certificate manipulation, or application changes.

The eBPF Sniffer works only with TLS-encrypted connections. It does not capture unencrypted HTTP traffic — for that, use the Tap Plugin. Compared to the Tap Plugin, the eBPF Sniffer supports:

  • HTTPS/TLS traffic — no TLS termination or certificate injection required
  • HTTP/2 — full protocol support beyond HTTP/1.1
  • Cross-container capture — one DaemonSet pod per node captures all containers on that node
  • No service mesh dependency — works without Istio or any proxy

Prerequisites

  • A Kubernetes cluster (EKS, GKE, AKS, k3s, or self-managed) with Linux worker nodes
  • Helm command-line tool installed (version 3+)
  • Cluster access configured (for example, via kubeconfig)
  • Linux kernel version 4.18 or later on cluster worker nodes (5.4 or later recommended)

Overview

The eBPF Sniffer attaches eBPF probes (uprobes) to TLS library functions (for example, OpenSSL's SSL_write and SSL_read), capturing plaintext data before encryption and after decryption. For Go and Rust applications, the sniffer attaches to their built-in TLS implementations directly. This approach requires no changes to your applications, no proxy configuration, and no service mesh.

How it works

[Diagram] eBPF Sniffer data flow: from SSL_write call through eBPF uprobe, user-space parsing, Reconstructor, APIHub, to Platform UI

Supported SSL/TLS libraries

| Library | Supported runtimes | Status |
|---|---|---|
| OpenSSL (libssl) | Python, Node.js, Ruby, PHP, curl, nginx, Apache | Fully supported |
| GnuTLS (libgnutls) | Linux system tools | Fully supported |
| NSS (libnspr4) | Firefox, some Linux tools | Supported |
| Go crypto/tls | Any Go application using crypto/tls (for example, Traefik, Caddy, CoreDNS) | Supported (opt-in) |
| Rust rustls | Any Rust application using rustls (for example, linkerd2-proxy, vector, cloudflared) | Supported (opt-in) |

Supported protocols

  • HTTP/1.1
  • HTTP/2

Data handling

The eBPF Sniffer captures HTTP metadata (method, path, headers, status code) and request/response bodies from decrypted TLS traffic. This data is processed locally inside the sniffer pod and sent to the Reconstructor service over an encrypted (TLS) connection. The Reconstructor is a companion service, deployed alongside the sniffer, that assembles captured HTTP telemetry into OpenAPI specifications and forwards only the relevant API endpoint metadata to the Invicti Platform.

Sensitive headers (Authorization, Cookie, Set-Cookie, X-API-Key) are redacted by the sniffer before leaving the pod. No captured data is written to disk on the node.
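The redaction step can be illustrated with a small shell sketch. This is an illustration of the concept only, not Invicti's actual implementation; the header list matches the defaults named above:

```shell
# Illustrative only: blank out sensitive header values the way the sniffer
# does before telemetry leaves the pod (not the actual Invicti code).
printf 'GET /api/v1/users HTTP/1.1\nHost: api.example.com\nAuthorization: Bearer abc123\nX-API-Key: s3cr3t\n' |
  sed -E 's/^(Authorization|Cookie|Set-Cookie|X-API-Key):.*/\1: [REDACTED]/'
```

The header names survive (so the Reconstructor can still see that authentication was used), while the values never leave the pod.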

caution

The sniffer captures plaintext data from TLS library calls. Ensure that deploying traffic capture tools in your cluster complies with your organization's security and privacy policies.

Installation

Step 1: Generate a registration token

  1. Select Discovery from the left-side menu.
  2. Under API configuration, select API sources.
  3. Click Add source.
  4. Leave the Import type as External platform.
  5. Enter a name for the source configuration. This helps you identify it later in your list of API sources.
  6. Select Invicti Network Traffic Analyzer as the Source type.
  7. Click Generate token.
  8. Click the copy icon next to the newly generated registration token.
  9. Click Save at the bottom of the page. Do not skip this step.

Step 2: Authenticate with the Invicti registry

  1. Open a terminal on a machine with access to your cluster.
  2. Run the following command:

helm registry login registry.invicti.com

Authentication credentials

  • Username: your Invicti Platform email address
  • Password: your valid Invicti Platform license key

Step 3: Verify kernel compatibility

Verify that your cluster worker nodes are running a supported kernel version. You can check this with:

kubectl get nodes -o wide

The KERNEL-VERSION column shows the kernel version on each node.
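If you want to gate on the version from a script, `sort -V` gives a version-aware comparison against the 4.18 minimum. A sketch (the kernel string is hardcoded here for illustration; in practice it comes from the KERNEL-VERSION column or `uname -r` on the node):

```shell
# Version-aware check that a node kernel meets the sniffer's 4.18 minimum.
# kver is hardcoded for illustration; substitute the node's actual version.
min="4.18"
kver="5.10.186-179.751.amzn2.x86_64"
if [ "$(printf '%s\n%s\n' "$min" "$kver" | sort -V | head -n1)" = "$min" ]; then
  echo "kernel $kver meets the $min minimum"
else
  echo "kernel $kver is too old"
fi
```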

Kernel headers are not required in most cases

The sniffer container ships with a pre-compiled eBPF program (CO-RE) that works on kernel 5.2 or later with BTF support. Most modern cloud kernels (EKS with Amazon Linux 2023, GKE with Ubuntu, AKS with Ubuntu) include BTF by default — no manual kernel header installation is needed.

For older kernels without BTF, the sniffer falls back to runtime compilation using in-kernel headers (CONFIG_IKHEADERS), which is available on many cloud kernels automatically.
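To see which mode a given node will use, you can check for the two kernel features directly on the node (for example, from a debug shell opened with `kubectl debug node/<node> -it --image=busybox`). The paths below are standard kernel interfaces, not Invicti-specific:

```shell
# Check whether the running kernel exposes BTF type information (CO-RE mode)
# or in-kernel headers (runtime-compilation fallback).
if [ -e /sys/kernel/btf/vmlinux ]; then
  echo "BTF present: pre-compiled CO-RE eBPF program will be used"
elif [ -e /sys/kernel/kheaders.tar.xz ]; then
  echo "No BTF, but CONFIG_IKHEADERS present: runtime compilation fallback"
else
  echo "Neither BTF nor in-kernel headers found: kernel headers may be needed"
fi
```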

Step 4: Deploy the Helm chart

Run the following command to install the NTA with the eBPF Sniffer into your Kubernetes cluster:

helm install invicti-api-discovery \
  oci://registry.invicti.com/invicti-api-discovery \
  --version <version> \
  -n <your-namespace> \
  --set trafficSource.ebpfSniffer.enabled=true \
  --set imageRegistryUsername=<email-address> \
  --set imageRegistryPassword=<license-key> \
  --set reconstructor.JWT_TOKEN="<registration-token>" \
  --create-namespace

Replace the following placeholders:

| Placeholder | Description |
|---|---|
| <version> | The Helm chart version (for example, 25.11). Omit to pull the latest version automatically. |
| <your-namespace> | The Kubernetes namespace for the deployment |
| <email-address> | Your Invicti Platform email address |
| <license-key> | Your Invicti Platform license key |
| <registration-token> | The token generated in Step 1. Keep it enclosed in double quotes. |

Step 5: Verify the installation

Check that the pods are running. You should see one sniffer pod per worker node (DaemonSet) and one reconstructor pod:

kubectl get pods -n <your-namespace>

Expected output:

NAME                                        READY   STATUS    RESTARTS   AGE
invicti-api-discovery-ebpf-sniffer-abc12    1/1     Running   0          2m
invicti-api-discovery-ebpf-sniffer-def34    1/1     Running   0          2m
invicti-api-discovery-reconstructor-xyz56   1/1     Running   0          2m

Verify that the sniffer is capturing traffic by checking the logs:

kubectl logs -n <your-namespace> -l app.kubernetes.io/name=ebpf-sniffer --tail=20

You should see output indicating that SSL libraries were discovered and eBPF probes were attached:

INFO  Discovered 3 SSL libraries across 12 processes
INFO  Attached 6 eBPF probes (SSL_read, SSL_write, SSL_read_ex, SSL_write_ex)
INFO  Sniffer ready — capturing traffic

Pod name

The pod name suffix is randomized. Copy the pod name from the kubectl get pods output.

If everything looks good, the NTA with the eBPF Sniffer is now capturing and analyzing encrypted traffic in your Kubernetes cluster.

Update or reinstall

Use the same registration token and credentials from your initial installation. If your token has expired, generate a new one following Step 1.

  1. Log in to the Invicti registry as described in Step 2.
  2. Run the upgrade command:
helm upgrade invicti-api-discovery \
  oci://registry.invicti.com/invicti-api-discovery \
  --version <version> \
  -n <your-namespace> \
  --set trafficSource.ebpfSniffer.enabled=true \
  --set imageRegistryUsername=<email-address> \
  --set imageRegistryPassword=<license-key> \
  --set reconstructor.JWT_TOKEN="<registration-token>"

Uninstall

To remove the NTA eBPF Sniffer from your cluster:

helm uninstall invicti-api-discovery -n <your-namespace>

This removes all sniffer and reconstructor pods, services, and related resources. No data is persisted on the nodes, so no additional cleanup is required.

Configuration

You can customize the eBPF Sniffer deployment using Helm values. Pass them using --set or a values file. The following example shows an installation command with commonly used parameters:

helm install invicti-api-discovery \
  oci://registry.invicti.com/invicti-api-discovery \
  --version <version> \
  -n <your-namespace> \
  --set trafficSource.ebpfSniffer.enabled=true \
  --set trafficSource.ebpfSniffer.logLevel=info \
  --set trafficSource.ebpfSniffer.capture.maxBufferSize=8192 \
  --set trafficSource.ebpfSniffer.capture.perfBufferPages=64 \
  --set trafficSource.ebpfSniffer.goDiscovery.enabled=false \
  --set trafficSource.ebpfSniffer.rustDiscovery.enabled=false \
  --set imageRegistryUsername=<email-address> \
  --set imageRegistryPassword=<license-key> \
  --set reconstructor.JWT_TOKEN="<registration-token>"

Configuration reference

| Parameter | Description | Default |
|---|---|---|
| trafficSource.ebpfSniffer.enabled | Enable the eBPF Sniffer DaemonSet | false |
| trafficSource.ebpfSniffer.logLevel | Log level: debug, info, warning, error | info |
| trafficSource.ebpfSniffer.capture.maxBufferSize | Maximum bytes captured per SSL event | 8192 |
| trafficSource.ebpfSniffer.capture.perfBufferPages | Perf buffer size in pages (power of 2). Increase for high-throughput environments. | 64 |
| trafficSource.ebpfSniffer.goDiscovery.enabled | Enable Go crypto/tls binary discovery | false |
| trafficSource.ebpfSniffer.rustDiscovery.enabled | Enable Rust rustls binary discovery | false |
| trafficSource.ebpfSniffer.discovery.rescanInterval | Interval in seconds to rescan for new SSL libraries. 0 = scan only at startup. | 0 |
| trafficSource.ebpfSniffer.namespace | A custom label added to telemetry data to identify the source cluster or environment (for example, production-us or staging). This does not filter which Kubernetes namespace is monitored. | "" |
| trafficSource.ebpfSniffer.privileged | Run as fully privileged (required for kernels earlier than 5.8) | false |
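The same parameters can be kept in a values file instead of repeated --set flags, and passed with helm install ... -f values.yaml. A sketch using the parameter names from the table above (the values shown are illustrative, not recommendations):

```yaml
# values.yaml - illustrative values for the eBPF Sniffer chart
trafficSource:
  ebpfSniffer:
    enabled: true
    logLevel: info
    capture:
      maxBufferSize: 8192
      perfBufferPages: 64
    discovery:
      rescanInterval: 60    # rescan for new SSL libraries every 60 seconds
    goDiscovery:
      enabled: false
    rustDiscovery:
      enabled: false
    namespace: "production-us"    # telemetry label, not a Kubernetes namespace
```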

Frequently asked questions

What does the eBPF Sniffer do?

The eBPF Sniffer monitors SSL/TLS library function calls at the Linux kernel level using eBPF uprobes. It captures the plaintext data before encryption (on outgoing requests) and after decryption (on incoming responses), extracts HTTP telemetry, and sends it to the Reconstructor service for API discovery.

Does it capture both internal and external API traffic?

The eBPF Sniffer captures SSL/TLS traffic from all processes running on the node. This includes any encrypted HTTP traffic that those processes send or receive, whether to other services within the cluster or to external endpoints.

In shared or multi-tenant clusters, be aware that the sniffer captures traffic from all containers on the node, not just a specific namespace or application. Plan your deployment accordingly and consult your security team if namespace isolation is a concern.

Does it support HTTPS traffic?

Yes. This is the primary advantage of the eBPF Sniffer over the Tap Plugin. It captures plaintext data directly from SSL library functions, so TLS encryption is transparent to the sniffer. No TLS termination, proxy, or certificate injection is required.

Does it require a service mesh?

No. The eBPF Sniffer operates at the kernel level and does not require Istio, Linkerd, or any other service mesh. It works with any Kubernetes setup.

What kernel version is required?

The minimum kernel version is 4.18. Kernel 5.4 or later is recommended. Kernel 5.8 or later enables fine-grained Linux capabilities instead of requiring fully privileged pods.

| Kernel version | Support level |
|---|---|
| Earlier than 4.18 | Not supported |
| 4.18–5.3 | Basic support |
| 5.4–5.7 | Recommended |
| 5.8 and later | Full support (fine-grained capabilities, BPF ring buffer) |

Which Kubernetes platforms are supported?

| Platform | Status | Notes |
|---|---|---|
| Amazon EKS (Amazon Linux 2 / Ubuntu) | Supported | BTF available by default |
| Google GKE (Ubuntu node pools) | Supported | Use Ubuntu image type |
| Google GKE (Container-Optimized OS) | Limited | Requires kernel 5.8+ with BTF (CO-RE mode) |
| Azure AKS (Ubuntu) | Supported | BTF available by default |
| k3s / RKE2 | Supported | Works out of the box |
| OpenShift | Supported | Requires SecurityContextConstraint for privileged pods |

Are Go and Rust applications supported?

Yes, with opt-in configuration. Go and Rust applications typically link their TLS implementations statically, so there is no shared .so library for the sniffer to discover. Enable support via:

  • trafficSource.ebpfSniffer.goDiscovery.enabled=true for Go applications using crypto/tls
  • trafficSource.ebpfSniffer.rustDiscovery.enabled=true for Rust applications using rustls

note

Go and Rust support uses uretprobes, which may have a minor performance impact on the target application. Enable only if needed.

Does the eBPF Sniffer capture request and response bodies?

Yes. Request bodies up to 256 KB and response bodies up to 1 MB are captured. Larger bodies are truncated. Sensitive headers (such as Authorization, Cookie, and X-API-Key) are redacted by default.

What is the performance impact?

The eBPF Sniffer operates in kernel space with minimal overhead. The probes execute only during TLS read/write calls and copy a small buffer of plaintext data. Based on internal testing with typical production workloads (up to 10,000 requests per second), the overhead is less than 1% additional CPU and less than 5 ms additional latency per request.

How many pods does it deploy?

The eBPF Sniffer runs as a Kubernetes DaemonSet — one pod per node. Because eBPF operates at the kernel level, a single pod captures traffic from all containers on that node.

Do pods run in privileged mode?

  • Kernel 5.8 and later: No. The sniffer uses fine-grained Linux capabilities instead of running as a fully privileged container:
    • CAP_BPF — load and attach eBPF programs
    • CAP_PERFMON — access perf events for data transfer from kernel to user space
    • CAP_SYS_ADMIN — required by older eBPF verifier paths (cannot be avoided on current kernels)
    • CAP_SYS_PTRACE — read /proc/*/maps to discover SSL libraries in other containers
  • Kernel earlier than 5.8: Yes. These fine-grained capabilities are not available, so the pod must run as fully privileged. Set trafficSource.ebpfSniffer.privileged=true in your Helm values.
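On kernel 5.8 and later, the resulting container securityContext looks roughly like the sketch below. This is an illustration of the capability model described above (the Helm chart configures this for you; it is not something you set by hand):

```yaml
# Illustrative securityContext for the sniffer container on kernel 5.8+.
securityContext:
  privileged: false
  capabilities:
    drop: ["ALL"]
    add: ["BPF", "PERFMON", "SYS_ADMIN", "SYS_PTRACE"]
```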

What happens if a new application is deployed after the sniffer?

By default, the sniffer discovers SSL libraries at startup only. If you deploy new applications after the sniffer is running, their traffic will not be captured until the sniffer rescans.

To enable automatic discovery of new applications, set trafficSource.ebpfSniffer.discovery.rescanInterval to a non-zero value (in seconds). For example, 60 rescans every minute. This is recommended for clusters where applications are deployed or restarted frequently.

Troubleshooting

Pods are stuck in Init state

The init container is attempting to set up kernel header symlinks. Check the init container logs:

kubectl logs -n <your-namespace> <sniffer-pod> -c fix-kernel-headers

Check the kernel version on the node (see Step 3) and restart the pod.

Sniffer reports "0 probes attached"

No SSL libraries were found on the node. Possible causes:

  • No applications using OpenSSL, GnuTLS, or NSS are running on the node
  • The /proc volume is not mounted correctly
  • The sniffer does not have sufficient permissions to read /proc/*/maps

Check the sniffer logs for detailed discovery output:

kubectl logs -n <your-namespace> <sniffer-pod> | grep -i "discover"

Readiness probe fails (503)

The readiness probe returns 503 during startup while the sniffer loads eBPF programs and attaches probes. This typically takes 10–30 seconds. If it persists:

  1. Check for compilation errors in the logs.
  2. Ensure the kernel version is 4.18 or later (see Step 3).
  3. For kernels without BTF support, verify that kernel headers or CONFIG_IKHEADERS are available on the node.
No traffic appears in the Platform UI

  1. Verify the sniffer is running and ready:
kubectl get pods -n <your-namespace> -l app.kubernetes.io/name=ebpf-sniffer
  2. Check connectivity to the Reconstructor:
kubectl logs -n <your-namespace> <sniffer-pod> | grep -i "reconstructor\|telemetry\|send"
  3. Verify the registration token is valid and the Reconstructor is authenticated with the platform.
  4. Confirm that the target applications use TLS. The eBPF Sniffer does not capture unencrypted HTTP traffic. For plain HTTP, use the Tap Plugin.
High memory usage

If the sniffer pod is consuming more memory than expected:

  • Reduce trafficSource.ebpfSniffer.capture.perfBufferPages (for example, from 64 to 32).
  • Increase the memory limit if your environment has high SSL/TLS throughput.

For additional issues, refer to NTA troubleshooting.

Need help?

The Invicti Support team is ready to provide technical help. Go to the Help Center.
