Correct. Monitoring answers known unknowns via predefined signals, while observability provides high-cardinality data for exploring previously unanticipated questions.
Q2. Which statement most accurately distinguishes `service.instance.id` from `host.id`?
A. Both attributes are aliases of one another and are interchangeable in OpenTelemetry semantic conventions
Incorrect. They are independent attributes with different semantics and different cardinality.
B. `service.instance.id` is unique per request, while `host.id` is unique per process
Incorrect. Neither attribute changes per request; both are resource attributes set once per process or host.
C. `service.instance.id` is a Kubernetes-only attribute, while `host.id` is for bare-metal deployments only
Incorrect. Both attributes apply across deployment models, not exclusively to Kubernetes or bare metal.
D. `service.instance.id` identifies one service instance (e.g., a Pod replica); `host.id` identifies the underlying node
Correct. `service.instance.id` distinguishes one instance of a logical service (e.g., one replica) and `host.id` identifies the physical or virtual host the instance runs on, so multiple service instances can share a `host.id`.
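To make the distinction concrete, here is a minimal sketch using the OpenTelemetry Python SDK's `Resource.create`; the service name, instance IDs, and node name are hypothetical. Two replicas of one service carry different `service.instance.id` values but share a `host.id`:

```python
from opentelemetry.sdk.resources import Resource

# Hypothetical values: two replicas of the "checkout" service scheduled on the same node.
replica_a = Resource.create({
    "service.name": "checkout",
    "service.instance.id": "checkout-7d9f-abcde",  # unique per instance (e.g. the Pod name)
    "host.id": "node-01",                          # identifies the underlying node
})

replica_b = Resource.create({
    "service.name": "checkout",
    "service.instance.id": "checkout-7d9f-fghij",  # a different replica
    "host.id": "node-01",                          # same node, so the host.id is shared
})
```

Each resource would then be attached to that instance's providers (for example `TracerProvider(resource=replica_a)`), so every signal the replica emits carries both identifiers.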
Q3. What is the primary purpose of OpenTelemetry semantic conventions?
A. To compress telemetry payloads on the wire across SDK boundaries
Incorrect. Semantic conventions concern naming, not transport-level compression.
B. To define the on-disk storage format used by backends such as Prometheus and Loki for long-term retention
Incorrect. Storage formats are a backend concern; semantic conventions are about telemetry attribute keys and values.
C. To standardize attribute names so telemetry is interoperable across sources
Correct. Semantic conventions define standard attribute names such as `http.request.method` and `db.system` so telemetry is consistent across vendors and languages.
D. To enforce default sampling rates across all OpenTelemetry SDK languages
Incorrect. Sampling configuration is independent from semantic conventions.
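As a minimal sketch (OpenTelemetry Python API; the span name and attribute values are illustrative), the standardized keys are simply attached as span attributes:

```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

# Because the keys follow the semantic conventions, any backend that understands
# http.request.method and db.system can interpret this span the same way.
with tracer.start_as_current_span("GET /orders") as span:
    span.set_attribute("http.request.method", "GET")  # HTTP convention key
    span.set_attribute("db.system", "postgresql")     # database convention key
```

Most SDKs also ship generated constants for these keys (in Python, the `opentelemetry-semantic-conventions` package), which avoids typos in attribute names.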
Q4. A team complains their Prometheus storage doubled overnight after they added a new label. Which mitigation directly addresses the root cause rather than the symptom?
A. Drop the high-cardinality label using a metric `View` or processor
Correct. The cardinality cost is driven by unique label-value combinations, so dropping or aggregating the offending label at the SDK (`View`) or pipeline level removes the new time series at the source.
B. Increase the Prometheus retention period to amortize the cost of new series
Incorrect. Longer retention multiplies storage cost; it does not reduce the cardinality explosion.
C. Switch the exporter from `prometheusremotewrite` to `otlphttp`
Incorrect. Changing the exporter protocol does not change how many time series are emitted.
D. Increase the SDK's `BatchSpanProcessor` queue size to absorb the new load
Incorrect. `BatchSpanProcessor` operates on traces, not on metric cardinality.
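As an illustration of option A, here is a minimal sketch with the OpenTelemetry Python SDK's metric `View`; the metric name `http_requests_total` and the offending `user_id` label are hypothetical stand-ins for the team's new label:

```python
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.view import View

# Keep only the low-cardinality attribute keys on this instrument; any other
# key (here the hypothetical user_id label) is dropped before aggregation,
# so the extra time series never reach Prometheus.
drop_user_id = View(
    instrument_name="http_requests_total",
    attribute_keys={"http.request.method", "http.response.status_code"},
)

provider = MeterProvider(views=[drop_user_id])
meter = provider.get_meter("checkout")
requests = meter.create_counter("http_requests_total")

# The user_id attribute is discarded by the View at recording time.
requests.add(1, {"http.request.method": "GET", "user_id": "42"})
```

The same effect can be achieved further down the pipeline (for example with a Collector attributes or transform processor), but the `View` removes the series closest to the source.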
Q5. Which set of telemetry signals is commonly referred to as the three pillars of observability that OpenTelemetry standardizes?
A. Traces, metrics, and logs
Correct. OpenTelemetry defines traces, metrics, and logs as its three primary signals, each with its own data model and SDK.
B. Alerts, dashboards, and runbooks
Incorrect. These are downstream consumer artifacts produced from telemetry, not the underlying signals themselves.
C. Events, alarms, and notifications
Incorrect. These describe operational notification mechanisms, not the OpenTelemetry signal types.
D. CPU, memory, and disk usage
Incorrect. These are specific resource dimensions captured as metrics, not the categories of telemetry signal.
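For orientation, a minimal sketch of how the three signals surface in the OpenTelemetry Python API (no exporter configuration shown; without an SDK configured these calls are no-ops):

```python
import logging
from opentelemetry import metrics, trace

tracer = trace.get_tracer(__name__)      # traces: spans and their causal structure
meter = metrics.get_meter(__name__)      # metrics: aggregated measurements over time
request_counter = meter.create_counter("app.requests")
logger = logging.getLogger(__name__)     # logs: usually bridged from the stdlib logger

with tracer.start_as_current_span("handle-request"):
    request_counter.add(1)
    logger.info("handled one request")
```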
The Fundamentals of Observability domain accounts for 18% of the OTCA exam. Expect questions that test terminology recall and the ability to read short scenarios, not deep configuration. Use the sample questions above to calibrate difficulty; if any feel hard, the rest of our 22-question domain bank will close those gaps.