eBPF networking, observability, and security for Kubernetes. Free practice questions sampled from our full 120-question bank, with detailed explanations for every option.
Format
multi-choice
Duration
90 min
Pass mark
75%
Study time
2–8 wks
Mock exams
2
What the CCA exam tests
The Cilium Certified Associate exam is structured around 8 weighted domains. Each domain link below opens a focused practice page with sample questions from that area.
One representative question per domain, drawn from the 120-question pool. Click "Reveal answer" to see the correct option plus explanations for every distractor.
Architecture
Q1. In a Cilium cluster, if you observe that the cilium-operator pod is unavailable for an extended period, which of the following will be directly impacted?
Reveal answer and explanations
A. New nodes joining the cluster will not receive IPAM pool allocations
Correct. The operator is responsible for allocating IP pools to nodes; without it, new nodes cannot be onboarded with IP allocations.
B. All eBPF programs on every node will be removed
Incorrect. eBPF programs are loaded by cilium-agent and persist independently.
C. DNS resolution will fail cluster-wide
Incorrect. DNS is handled by the individual cilium-agents, not the operator.
D. Existing pod-to-pod connectivity will immediately stop functioning
Incorrect. The cilium-agent maintains datapath functionality independent of the operator.
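To see this distinction in practice, you can check operator health and per-node IPAM state directly. A quick sketch, assuming a default install in the kube-system namespace (the grep pattern is illustrative):

```shell
# Is the operator deployment healthy? (assumes kube-system)
kubectl -n kube-system get deployment cilium-operator

# IPAM allocations as reported by one agent; 'ds/cilium' picks any agent pod
kubectl -n kube-system exec ds/cilium -- cilium status | grep IPAM
```

Even with the operator down, the second command keeps working, because the agent's datapath and status reporting do not depend on the operator.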
Network Policy
Q2. How does a CiliumNetworkPolicy differ from a standard Kubernetes NetworkPolicy in terms of supported features?
Reveal answer and explanations
A. They are functionally identical; CiliumNetworkPolicy is just a different API version
Incorrect. CiliumNetworkPolicy has significantly more capabilities.
B. CiliumNetworkPolicy only works with Cilium CNI, while NetworkPolicy works with any CNI
Incorrect. True as far as it goes, but this describes CNI compatibility, not the feature difference the question asks about.
C. Kubernetes NetworkPolicy supports more features; CiliumNetworkPolicy is a simplified alternative
Incorrect. CiliumNetworkPolicy is a superset with more features.
D. CiliumNetworkPolicy supports identity-based selectors, L7 policies, and DNS-based rules that NetworkPolicy does not
Correct. CiliumNetworkPolicy extends NetworkPolicy with identity-based policies, L7 rules, and DNS-based endpoint selection.
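The feature gap is easiest to see in a manifest. A minimal sketch of a CiliumNetworkPolicy that combines an L7 HTTP rule with a DNS-based egress rule (the name and labels are illustrative):

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-api-and-dns        # illustrative name
spec:
  endpointSelector:
    matchLabels:
      app: frontend              # illustrative label
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: gateway
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:                    # L7 rule: standard NetworkPolicy cannot express this
        - method: GET
          path: "/api/.*"
  egress:
  - toFQDNs:                     # DNS-based rule: also Cilium-specific
    - matchPattern: "*.example.com"
```

Neither the `rules.http` block nor `toFQDNs` has any equivalent in a standard Kubernetes NetworkPolicy, which stops at L3/L4 selectors and ports.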
Service Mesh
Q3. In a Cilium-based service mesh, what is the purpose of the 'destinationRule' concept in traffic management?
Reveal answer and explanations
A. Replaces CiliumNetworkPolicy for security
Incorrect. Service mesh routing and network policy are separate concerns.
B. Specifies connection pool settings, outlier detection, and load balancing policy for services
Correct. Destination rules configure connection parameters, load balancing, and resilience behaviors for traffic to services.
C. Defines how traffic is routed to individual pods
Incorrect. Individual pod routing is defined at a different level.
D. Configures DNS-based service discovery
Incorrect. Service discovery is handled separately from traffic management rules.
Network Observability
Q4. A pod generates thousands of flows per second. You want to sample 1 in 100 flows in Hubble to reduce overhead. Where is this sampling configured?
Reveal answer and explanations
A. Sampling is automatically applied when the Hubble event buffer exceeds a threshold
Incorrect. Sampling is configured explicitly, not applied automatically based on buffer conditions.
B. In the Hubble Relay deployment via the '--flow-sample-rate' flag
Incorrect. Hubble Relay aggregates flows; sampling is configured at the source (agent level), not the relay.
C. In the per-node Hubble server configuration using 'monitoring-sampling-ratio'
Incorrect. The per-node Hubble server doesn't have a sampling ratio config; sampling is cluster-wide.
D. In the cilium-config ConfigMap using 'hubble-flow-sample-rate: 100'
Correct. Flow sampling is configured globally in the cilium-config ConfigMap via the hubble-flow-sample-rate parameter.
Installation and Configuration
Q5. You're upgrading Cilium from 1.14 to 1.15. Which command performs a rolling upgrade while maintaining network policies?
Reveal answer and explanations
Correct. 'helm upgrade' is the standard method to upgrade Cilium while preserving configuration and policies.
D. 'kubectl set image daemonset/cilium -n kube-system cilium=cilium:v1.15'
Incorrect. Manual image updates bypass Helm state management and can cause inconsistencies.
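A sketch of what the Helm-based upgrade looks like, assuming a default Helm install in kube-system (pin the exact 1.15.x patch release you actually want; the version shown is a placeholder):

```shell
# Refresh the chart repo, then upgrade in place while keeping existing values
helm repo update
helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --version 1.15.0 \
  --reuse-values
```

`--reuse-values` keeps the configuration from the previous release, which is what preserves your existing policy-related settings across the rolling upgrade.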
Cluster Mesh
Q6. In ClusterMesh, a service is annotated with 'service.cilium.io/global: "true"'. What does this enable?
Reveal answer and explanations
A. The service is accessible cluster-wide with the same cluster-local IP address
Incorrect. ClusterIP addresses are cluster-local; they can't be shared across clusters.
B. The service's endpoints are distributed across all connected clusters for load balancing
Incorrect. Endpoints aren't distributed; the service stays in its origin cluster with all endpoints there.
C. The service is replicated to all connected clusters with the same name and namespace
Incorrect. The service isn't replicated; it remains in its original cluster.
D. The service is advertised to all connected clusters; pods in remote clusters can access it using its FQDN
Correct. Global services are advertised to all connected clusters; remote pods can resolve and access them via the standard service DNS name (e.g. 'service.namespace.svc.cluster.local').
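As a concrete sketch (recent Cilium releases document the annotation as service.cilium.io/global), the same Service manifest is applied in each connected cluster; the name, namespace, and selector here are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rebel-base               # must match across clusters
  namespace: default
  annotations:
    service.cilium.io/global: "true"   # mark the service as global
spec:
  selector:
    app: rebel-base
  ports:
  - port: 80
```

With the annotation in place, a pod in any connected cluster can reach the service through its ordinary DNS name, with Cilium handling the cross-cluster path.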
eBPF
Q7. A Cilium BPF map is configured as LRU (Least Recently Used) type. When the map is full, what happens to the least recently used entry?
Reveal answer and explanations
A. The least recently used entry is automatically evicted to make space for the new entry
Correct. LRU maps automatically evict the least recently used entry when full, enabling bounded map size without explicit cleanup.
B. New entries are rejected until map space is freed by userspace
Incorrect. LRU maps automatically evict; new entries aren't rejected.
C. The kernel blocks the inserting eBPF program until userspace deletes entries
Incorrect. LRU doesn't block programs; eviction is automatic and synchronous.
D. LRU entries are persisted to disk and reloaded on demand
Incorrect. BPF maps are kernel memory; they don't persist to disk.
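Kernel internals aside, the eviction semantics can be sketched in a few lines of Python, with an OrderedDict standing in for the kernel's LRU hash map (this is a model of the behavior, not the kernel implementation):

```python
from collections import OrderedDict

class LruMap:
    """Toy model of BPF_MAP_TYPE_LRU_HASH eviction semantics."""

    def __init__(self, max_entries: int):
        self.max_entries = max_entries
        self._d = OrderedDict()

    def lookup(self, key):
        if key in self._d:
            self._d.move_to_end(key)     # a lookup refreshes recency
            return self._d[key]
        return None

    def update(self, key, value):
        if key in self._d:
            self._d.move_to_end(key)
        elif len(self._d) == self.max_entries:
            self._d.popitem(last=False)  # full: silently evict the LRU entry
        self._d[key] = value             # insert never fails


m = LruMap(max_entries=2)
m.update("a", 1)
m.update("b", 2)
m.lookup("a")       # "a" is now the most recently used entry
m.update("c", 3)    # map is full, so "b" (least recently used) is evicted
print(sorted(m._d))  # ['a', 'c']
```

The key point the question tests: the insert succeeds unconditionally, with the cost paid by the coldest entry rather than by blocking or rejecting the writer.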
BGP and External Networking
Q8. A multi-cluster setup uses Cilium with BGP on each cluster, both advertising the same pod CIDR (10.0.0.0/8) to external routers. External routers receive BGP announcements from both clusters. What is the expected behavior for traffic to pod IPs?
Reveal answer and explanations
A. Traffic is always routed to the cluster that advertised first; subsequent announcements are ignored
Incorrect. BGP doesn't prioritize based on announcement order; metrics and path attributes determine routing.
B. Routers load-balance traffic between clusters using equal-cost multipath (ECMP)
Correct. When both clusters advertise the same CIDR, external routers typically use ECMP to load-balance traffic across both paths if available and metrics are equal.
C. Duplicate CIDR advertisements cause a BGP routing conflict; external routers reject one path
Incorrect. BGP allows multipath announcements; no conflict prevents both paths from coexisting.
D. Routers select the route based on BGP AS path length; the shorter path is preferred
Incorrect. Both clusters would have similar AS paths; path length preference doesn't clearly resolve the choice.
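For context, advertising a cluster's pod CIDR over BGP is configured per cluster with a peering policy along these lines (ASNs, labels, and the peer address are placeholders; the field names follow the cilium.io/v2alpha1 CiliumBGPPeeringPolicy CRD):

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: tor-peering              # illustrative name
spec:
  nodeSelector:
    matchLabels:
      bgp: enabled               # illustrative node label
  virtualRouters:
  - localASN: 64512              # placeholder private ASN
    exportPodCIDR: true          # advertise this cluster's pod CIDR upstream
    neighbors:
    - peerAddress: "192.0.2.1/32"   # placeholder ToR router address
      peerASN: 64513
```

When two clusters each apply a policy like this and export the same CIDR, the upstream router sees two equally attractive paths, which is what makes ECMP the expected outcome.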
How long should I study?
Roughly 2–8 weeks of focused study, though it depends heavily on what you already know. Engineers with hands-on production Kubernetes experience (or Cilium / Argo / OTel / etc. for project-specific certs) can compress this to a week or two of mocks; people coming in cold should expect the upper end. The exam is multiple-choice and recall-heavy, so practice exams matter more than reading the documentation cover to cover. Aim for 85%+ on full timed mocks before booking the real exam.
Why this practice library
This library was built by a Platform Engineer chasing the Golden Kubestronaut title who got frustrated by the lack of decent practice material for the associate-tier CNCF exams. Question banks track curriculum updates from the CNCF and the Linux Foundation.