Newt OpenTelemetry Review
Overview
This document summarises the current OpenTelemetry (OTel) instrumentation in Newt, assesses
compliance with OTel guidelines, and lists concrete improvements to pursue before release.
It is based on the implementation in `internal/telemetry` and the call sites that emit
metrics and traces across the code base.
Current metric instrumentation
All instruments are registered in `internal/telemetry/metrics.go`. They are grouped
into site, tunnel, connection, configuration, build, WebSocket, and proxy domains.
A global attribute filter (see `buildMeterProvider`) constrains exposed label keys to
`site_id`, `region`, and a curated list of low-cardinality dimensions so that Prometheus
exports stay bounded.
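Conceptually, the filter amounts to a single SDK view applied to every instrument. The sketch below is illustrative rather than the shipped `buildMeterProvider`, and the key list is an assumption:

```go
package telemetry

import (
	"go.opentelemetry.io/otel/attribute"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

// newFilteredProvider mirrors the shape of buildMeterProvider: one view
// whose attribute filter drops every label key outside the allow-list,
// keeping Prometheus cardinality bounded. The key list is illustrative.
func newFilteredProvider(reader sdkmetric.Reader) *sdkmetric.MeterProvider {
	allowed := attribute.NewAllowKeysFilter("site_id", "region", "result", "reason")
	return sdkmetric.NewMeterProvider(
		sdkmetric.WithReader(reader),
		sdkmetric.WithView(sdkmetric.NewView(
			sdkmetric.Instrument{Name: "*"}, // match all instruments
			sdkmetric.Stream{AttributeFilter: allowed},
		)),
	)
}
```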
- Site lifecycle: `newt_site_registrations_total`, `newt_site_online`, and `newt_site_last_heartbeat_timestamp_seconds` capture registration attempts and liveness. They are fed either manually (`IncSiteRegistration`) or via the `TelemetryView` state callback that publishes observable gauges for the active site.
- Tunnel health and usage: Counters and histograms track bytes, latency, reconnects, and active sessions per tunnel (the `newt_tunnel_*` family). Attribute helpers respect the `NEWT_METRICS_INCLUDE_TUNNEL_ID` toggle to keep cardinality manageable on larger fleets (illustrated in the sketch below).
- Connection attempts: `newt_connection_attempts_total` and `newt_connection_errors_total` are emitted throughout the WebSocket client to classify authentication, dial, and transport failures.
- Operations/configuration: `newt_config_reloads_total`, `process_start_time_seconds`, `newt_config_apply_seconds`, and `newt_cert_rotation_total` provide visibility into blueprint reloads, process boots, configuration timings, and certificate rotation outcomes.
- Build metadata: `newt_build_info` records the binary version/commit together with optional site metadata when build information is supplied at startup.
- WebSocket control-plane: `newt_websocket_connect_latency_seconds`, `newt_websocket_messages_total`, `newt_websocket_connected`, and `newt_websocket_reconnects_total` report connect latency, ping/pong/text activity, connection state, and reconnect reasons.
- Proxy data-plane: Observable gauges (`newt_proxy_active_connections`, `newt_proxy_buffer_bytes`, `newt_proxy_async_backlog_bytes`) plus counters for drops, accepts, and connection lifecycle events (`newt_proxy_connections_total`), and duration histograms (`newt_proxy_connection_duration_seconds`) surface backlog, drop behaviour, and churn alongside per-protocol byte counters.
Refer to `docs/observability.md` for a tabular catalogue with instrument types,
attributes, and sample exposition lines.
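As an illustration of the attribute helpers mentioned above, a tunnel byte counter that honours the `NEWT_METRICS_INCLUDE_TUNNEL_ID` toggle might look like the following; the instrument and helper names are assumptions, and the real registrations live in `internal/telemetry/metrics.go`:

```go
package telemetry

import (
	"context"
	"os"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

// bytesTx sketches one instrument in the newt_tunnel_* family; the name
// is illustrative.
var bytesTx, _ = otel.Meter("newt").Int64Counter(
	"newt_tunnel_tx_bytes_total",
	metric.WithDescription("Bytes transmitted per tunnel"),
	metric.WithUnit("By"),
)

// RecordTunnelBytes shows how an attribute helper can honour the
// NEWT_METRICS_INCLUDE_TUNNEL_ID toggle to bound label cardinality.
func RecordTunnelBytes(ctx context.Context, tunnelID string, n int64) {
	attrs := []attribute.KeyValue{attribute.String("direction", "tx")}
	if os.Getenv("NEWT_METRICS_INCLUDE_TUNNEL_ID") == "true" {
		attrs = append(attrs, attribute.String("tunnel_id", tunnelID))
	}
	bytesTx.Add(ctx, n, metric.WithAttributes(attrs...))
}
```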
Tracing coverage
Tracing is optional and enabled only when OTLP export is configured. When active:
- The admin HTTP mux is wrapped with `otelhttp.NewHandler`, producing spans for `/metrics` and `/healthz` requests (sketched below).
- The WebSocket dial path creates a `ws.connect` span around the gRPC-based handshake.
No other subsystems currently create spans, so data-plane operations, blueprint fetches, Docker discovery, and WireGuard reconfiguration happen without trace context.
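The admin-mux wrapping is a single wrapper around the existing mux; this sketch assumes a hypothetical port and operation name:

```go
package main

import (
	"log"
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	// One wrapper around the whole mux yields a server span per request,
	// covering /metrics and /healthz alike; "admin" names the operation.
	log.Fatal(http.ListenAndServe(":9090", otelhttp.NewHandler(mux, "admin")))
}
```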
Guideline & best-practice alignment
The implementation adheres to most OTel Go recommendations:
- Naming & units – Every instrument follows the `newt_*` prefix with `_total` suffixes for counters and `_seconds`/`_bytes` unit conventions. Histograms are registered with explicit second-based buckets.
- Resource attributes – Service name/version and optional `site_id`/`region` populate the `resource.Resource`. Metric labels mirror these by default (and on per-site gauges) but can be disabled with `NEWT_METRICS_INCLUDE_SITE_LABELS=false` to avoid unnecessary cardinality growth.
- Attribute hygiene – A single attribute filter (`sdkmetric.WithView`) enforces the allow-list of label keys to prevent accidental high-cardinality emission.
- Runtime metrics – Go runtime instrumentation is enabled automatically through `runtime.Start`.
- Configuration via environment – `telemetry.FromEnv` honours `OTEL_*` variables alongside `NEWT_*` overrides so operators can configure exporters without code changes.
- Shutdown handling – `Setup.Shutdown` iterates exporters in reverse order to flush buffers before process exit (see the sketch after this list).
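The shutdown ordering in the last bullet corresponds to a pattern like the following; `Setup`'s real fields differ, and this is only a minimal sketch:

```go
package telemetry

import "context"

// Setup is sketched here only to show the shutdown ordering; the real
// struct in internal/telemetry carries exporters and providers.
type Setup struct {
	shutdownFuncs []func(context.Context) error // registered in start order
}

// Shutdown flushes in reverse registration order so that providers
// drain into their exporters before the exporters themselves close.
func (s *Setup) Shutdown(ctx context.Context) error {
	var firstErr error
	for i := len(s.shutdownFuncs) - 1; i >= 0; i-- {
		if err := s.shutdownFuncs[i](ctx); err != nil && firstErr == nil {
			firstErr = err
		}
	}
	return firstErr
}
```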
Adjustments & improvements
The review identified a few actionable adjustments:
- Record registration failures – `newt_site_registrations_total` is currently incremented only on success. Emit `result="failure"` samples whenever Pangolin rejects a registration or credential exchange so operators can alert on churn (see the sketch after this list).
- Surface config reload failures – `telemetry.IncConfigReload` is invoked with `result="success"` only. Callers should record a failure result when blueprint parsing or application aborts before the success counter is incremented.
- Expose robust uptime – Document using `time() - process_start_time_seconds` to derive uptime now that the restart counter has been replaced with a timestamp gauge.
- Propagate contexts where available – Many emitters call metric helpers with `context.Background()`. Passing real contexts (when inexpensive) would allow future exporters to correlate spans and metrics.
- Extend tracing coverage – Instrument critical flows such as blueprint fetches, WireGuard reconfiguration, proxy accept loops, and Docker discovery to provide end-to-end visibility when OTLP tracing is enabled.
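A sketch of the first adjustment, assuming the helper gains a `result` parameter (the signature is a proposal, not current code):

```go
package telemetry

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

var siteRegistrations, _ = otel.Meter("newt").Int64Counter(
	"newt_site_registrations_total",
	metric.WithDescription("Site registration attempts by result"),
)

// IncSiteRegistration is sketched with a result parameter; today the
// helper records successes only, so result="failure" is the proposed
// addition that makes registration churn alertable.
func IncSiteRegistration(ctx context.Context, result string) {
	siteRegistrations.Add(ctx, 1,
		metric.WithAttributes(attribute.String("result", result)))
}
```

The same `result` attribute pattern applies to `telemetry.IncConfigReload` when blueprint parsing or application aborts.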
Metrics to add before release
Prioritised additions that would close visibility gaps:
- Config reload error taxonomy – Split reload attempts into a dedicated `newt_config_reload_errors_total{phase}` counter to make blueprint validation failures visible alongside the existing success counter.
- Config source visibility – Export `newt_config_source_info{source,version}` so operators can audit the active blueprint origin/commit during incidents.
- Certificate expiry – Emit `newt_cert_expiry_timestamp_seconds` (per certificate) to enable proactive alerts before mTLS credentials lapse (see the sketch below).
- Blueprint/config pull latency – Measure Pangolin blueprint fetch durations and HTTP status distributions to expose slow control-plane operations.
- Tunnel setup latency – Add histograms for DNS resolution and tunnel handshakes to correlate connect-latency spikes with network dependencies.
These metrics rely on data that is already available in the code paths mentioned above and would round out operational dashboards.
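For instance, the proposed certificate-expiry gauge could be registered as an observable gauge; `loadActiveCerts` is a placeholder for however Newt actually tracks its mTLS credentials:

```go
package telemetry

import (
	"context"
	"crypto/x509"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

// loadActiveCerts is a placeholder; Newt's real credential store differs.
var loadActiveCerts func() []*x509.Certificate

func registerCertExpiryGauge() error {
	meter := otel.Meter("newt")
	gauge, err := meter.Float64ObservableGauge(
		"newt_cert_expiry_timestamp_seconds",
		metric.WithDescription("NotAfter of each active certificate"),
		metric.WithUnit("s"),
	)
	if err != nil {
		return err
	}
	// The callback runs on every collection, so the expiry always
	// reflects the certificates active at scrape time, even after
	// rotation. Note: "subject" would need adding to the label allow-list.
	_, err = meter.RegisterCallback(func(ctx context.Context, o metric.Observer) error {
		for _, cert := range loadActiveCerts() {
			o.ObserveFloat64(gauge, float64(cert.NotAfter.Unix()),
				metric.WithAttributes(attribute.String("subject", cert.Subject.CommonName)))
		}
		return nil
	}, gauge)
	return err
}
```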
Tracing wishlist
To benefit from tracing when OTLP is active, add spans around:
- Pangolin REST calls (wrap the HTTP client with `otelhttp.NewTransport`; sketched below).
- Docker discovery cycles and target registration callbacks.
- WireGuard reconfiguration (interface bring-up, peer updates).
- Proxy dial/accept loops for both TCP and UDP targets.
Capturing these stages will let operators correlate latency spikes with reconnects and proxy drops using distributed traces in addition to the metric signals.
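The first wishlist item is essentially a one-line change at client construction time; the constructor name here is an assumption:

```go
package telemetry

import (
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

// newPangolinClient returns an HTTP client whose transport emits a
// client span per request, so Pangolin REST calls join the trace.
func newPangolinClient() *http.Client {
	return &http.Client{
		Transport: otelhttp.NewTransport(http.DefaultTransport),
	}
}
```

For spans to parent correctly, call sites must build requests with `http.NewRequestWithContext` so the active trace context travels with each call.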