
Distributed tracing with OpenTelemetry in Ktor Server

Ktor integrates with OpenTelemetry — an open-source observability framework for collecting telemetry data such as traces, metrics, and logs. It provides a standard way to instrument applications and export data to monitoring and observability tools like Grafana or Jaeger.

The KtorServerTelemetry plugin enables distributed tracing of incoming HTTP requests in a Ktor server application. It automatically creates spans containing route, HTTP method, and status code information, extracts existing trace context from incoming request headers, and allows customizing span names, attributes, and span kinds.

Add dependencies

To use KtorServerTelemetry, you need to include the opentelemetry-ktor-3.0 artifact in the build script:

implementation("io.opentelemetry.instrumentation:opentelemetry-ktor-3.0:2.18.1-alpha")
implementation "io.opentelemetry.instrumentation:opentelemetry-ktor-3.0:2.18.1-alpha"
<dependencies> <dependency> <groupId>io.opentelemetry.instrumentation</groupId> <artifactId>opentelemetry-ktor-3.0</artifactId> <version>2.18.1-alpha</version> </dependency> </dependencies>

Configure OpenTelemetry

Before installing the KtorServerTelemetry plugin in your Ktor application, you need to configure and initialize an OpenTelemetry instance. This instance is responsible for managing telemetry data, including traces and metrics.

Automatic configuration

A common way to configure OpenTelemetry is to use AutoConfiguredOpenTelemetrySdk. This simplifies setup by automatically configuring exporters and resources based on system properties and environment variables.

You can still customize the automatically detected configuration — for example, by adding a service.name resource attribute:

package com.example

import io.opentelemetry.api.OpenTelemetry
import io.opentelemetry.sdk.autoconfigure.AutoConfiguredOpenTelemetrySdk
import io.opentelemetry.semconv.ServiceAttributes

fun getOpenTelemetry(serviceName: String): OpenTelemetry {
    return AutoConfiguredOpenTelemetrySdk.builder().addResourceCustomizer { oldResource, _ ->
        oldResource.toBuilder()
            .putAll(oldResource.attributes)
            .put(ServiceAttributes.SERVICE_NAME, serviceName)
            .build()
    }.build().openTelemetrySdk
}

Programmatic configuration

To define exporters, processors, and propagators in code, instead of relying on environment-based configuration, you can use OpenTelemetrySdk.

The following example shows how to configure OpenTelemetry programmatically with an OTLP exporter, a span processor, and a trace context propagator:

import io.opentelemetry.api.OpenTelemetry
import io.opentelemetry.api.trace.propagation.W3CTraceContextPropagator
import io.opentelemetry.context.propagation.ContextPropagators
import io.opentelemetry.exporter.otlp.trace.OtlpGrpcSpanExporter
import io.opentelemetry.sdk.OpenTelemetrySdk
import io.opentelemetry.sdk.trace.SdkTracerProvider
import io.opentelemetry.sdk.trace.export.BatchSpanProcessor

fun configureOpenTelemetry(): OpenTelemetry {
    val spanExporter = OtlpGrpcSpanExporter.builder()
        .setEndpoint("http://localhost:4317")
        .build()

    val tracerProvider = SdkTracerProvider.builder()
        .addSpanProcessor(BatchSpanProcessor.builder(spanExporter).build())
        .build()

    return OpenTelemetrySdk.builder()
        .setTracerProvider(tracerProvider)
        .setPropagators(ContextPropagators.create(W3CTraceContextPropagator.getInstance()))
        .buildAndRegisterGlobal()
}

Use this approach if you require full control over the telemetry setup or your deployment environment cannot rely on automatic configuration.

Install KtorServerTelemetry

To install the KtorServerTelemetry plugin, pass it to the install function in the specified module and provide the configured OpenTelemetry instance:

import io.ktor.server.engine.*
import io.ktor.server.netty.*
import io.ktor.server.application.*
import io.opentelemetry.instrumentation.*

fun main() {
    embeddedServer(Netty, port = 8080) {
        val openTelemetry = getOpenTelemetry(serviceName = "opentelemetry-ktor-sample-server")
        install(KtorServerTelemetry) {
            setOpenTelemetry(openTelemetry)
        }
        // ...
    }.start(wait = true)
}
import io.ktor.server.application.*
import io.opentelemetry.instrumentation.*

// ...

fun Application.module() {
    val openTelemetry = getOpenTelemetry(serviceName = "opentelemetry-ktor-sample-server")
    install(KtorServerTelemetry) {
        setOpenTelemetry(openTelemetry)
    }
    // ...
}

Configure tracing

You can customize how the Ktor server records and exports OpenTelemetry spans. The options below allow you to adjust which requests are traced, how spans are named, what attributes they contain, and how span kinds are determined.

Trace additional HTTP methods

By default, the plugin traces standard HTTP methods (GET, POST, PUT, etc.). To trace additional or custom methods, configure the knownMethods property:

install(KtorServerTelemetry) {
    // ...
    knownMethods(HttpMethod.DefaultMethods + CUSTOM_METHOD)
}
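
In this example, CUSTOM_METHOD stands in for any non-standard method you want to trace. As a minimal sketch (the name and the LOCK verb are illustrative assumptions, not part of the plugin API), it could be defined with Ktor's HttpMethod constructor:

import io.ktor.http.HttpMethod

// Hypothetical custom method referenced in the example above; any non-standard verb works.
val CUSTOM_METHOD = HttpMethod("LOCK")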

Capture headers

To include specific HTTP request headers as span attributes, use the capturedRequestHeaders property:

install(KtorServerTelemetry) {
    // ...
    capturedRequestHeaders(HttpHeaders.UserAgent)
}
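
The instrumentation can also record response headers. Assuming your plugin version exposes a matching capturedResponseHeaders function (check the builder of the version you depend on), the configuration mirrors the request case:

install(KtorServerTelemetry) {
    // ...
    // Assumes a capturedResponseHeaders counterpart exists in your plugin version.
    capturedResponseHeaders(HttpHeaders.ContentType)
}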

Select span kind

To override the span kind (such as SERVER, CLIENT, PRODUCER, CONSUMER) based on request characteristics, use the spanKindExtractor property:

install(KtorServerTelemetry) {
    // ...
    spanKindExtractor {
        if (httpMethod == HttpMethod.Post) {
            SpanKind.PRODUCER
        } else {
            SpanKind.CLIENT
        }
    }
}

Add custom attributes

To attach custom attributes at the start or end of a span, use the attributesExtractor property:

install(KtorServerTelemetry) {
    // ...
    attributesExtractor {
        onStart {
            attributes.put("start-time", System.currentTimeMillis())
        }
        onEnd {
            attributes.put("end-time", Instant.now().toEpochMilli())
        }
    }
}

Additional properties

To fine-tune tracing behavior across your application, you can also configure additional OpenTelemetry properties like propagators, attribute limits, and enabling/disabling instrumentation. For more details, see the OpenTelemetry Java configuration guide.
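
For example, if you build the SDK with AutoConfiguredOpenTelemetrySdk, such options can be supplied as system properties (or the equivalent environment variables). The property names below come from the OpenTelemetry Java autoconfiguration, and the values are purely illustrative:

-Dotel.propagators=tracecontext,baggage
-Dotel.attribute.value.length.limit=2048
-Dotel.span.attribute.count.limit=128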

Verify telemetry data with Grafana LGTM

To visualize and verify your telemetry data, you can export traces, metrics, and logs to a distributed tracing backend, such as Grafana. The grafana/otel-lgtm all-in-one image bundles Grafana, Tempo (traces), Loki (logs), and Mimir (metrics).

Using Docker Compose

Create a docker-compose.yml file with the following content:

services:
  grafana-lgtm:
    image: grafana/otel-lgtm:latest
    ports:
      - "4317:4317" # OTLP gRPC receiver (traces, metrics, logs)
      - "4318:4318" # OTLP HTTP receiver
      - "3000:3000" # Grafana UI
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=admin
    restart: unless-stopped

To start the Grafana LGTM all-in-one container, run the following command:

docker compose up -d

Using Docker CLI

Alternatively, you can run Grafana directly using the Docker command line:

# Ports: 4317 – OTLP gRPC receiver (traces, metrics, logs), 4318 – OTLP HTTP receiver, 3000 – Grafana UI
docker run -d --name grafana_lgtm \
  -p 4317:4317 \
  -p 4318:4318 \
  -p 3000:3000 \
  -e GF_SECURITY_ADMIN_USER=admin \
  -e GF_SECURITY_ADMIN_PASSWORD=admin \
  grafana/otel-lgtm:latest

Application export configuration

To send telemetry from your Ktor application to an OTLP endpoint, configure the OpenTelemetry SDK to use the gRPC protocol. You can set these values via environment variables before building the SDK:

export OTEL_TRACES_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_PROTOCOL=grpc
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317

Or use JVM flags:

-Dotel.traces.exporter=otlp
-Dotel.exporter.otlp.protocol=grpc
-Dotel.exporter.otlp.endpoint=http://localhost:4317

Accessing Grafana UI

Once the container is running, the Grafana UI is available at http://localhost:3000/.

  1. Open the Grafana UI at http://localhost:3000/.

  2. Log in with the default credentials:

    • User: admin

    • Password: admin

  3. In the left-hand navigation menu, go to Drilldown → Traces:

    [Image: Grafana UI Drilldown traces view]

    Once in the Traces view, you can:

    • Select Rate, Errors, or Duration metrics.

    • Apply span filters (e.g., by service name or span name) to narrow down your data.

    • View traces, inspect details, and interact with span timelines.
