Changelog 3.1 version
3.1.3
released 6th May 2025
Client
OkHttp: Cancelling of SSE request job doesn't cancel the connection
I'm seeing active OkHttp connections after cancellation of the request itself even after upgrading to 3.1.2 (KTOR-8244).
Details
I've created something in Compose where I subscribe to an SSE endpoint within a LaunchedEffect. When the Compose element leaves the view, the LaunchedEffect is cancelled, cancelling the SSE request. When this is done, I'm still seeing an active OkHttp connection to my server.
Analysis
- It seems that the current implementation of OkHttpSSESession is very passive and relies on OkHttp to issue either an onFailure or onClosed event for it to cancel() the event source. This seems backwards? If OkHttp is issuing a failure or a closure, the source is already being closed, so there really isn't a need to cancel it.
- When an event is received, it doesn't throw a CancellationException as intended. I believe this is because if the incoming consuming flow is cancelled because the request coroutine is cancelled, it closes the channel without an exception. So it should be using onClosed here as well.
- Nothing is directly cancelling the event source when the awaiting coroutine is cancelled. So if no new events are received, nothing is sent to the _incoming channel, which means that the OkHttp connection is still parked listening for a response.
Recommendation
I think that the serverSentEventsSource should be created externally from the session and cancelled not in the onFailure or onClosed overrides, but when the upstream coroutine is cancelled (similar to how Call is currently handled). I'd be happy to attempt this fix and open a PR if that is something you are interested in.
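As a rough sketch of that direction (not the actual Ktor change; connectSse is a hypothetical helper), the EventSource lifetime could be tied to the calling coroutine so that cancelling the coroutine releases the parked connection:

import kotlinx.coroutines.currentCoroutineContext
import kotlinx.coroutines.job
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.sse.EventSource
import okhttp3.sse.EventSourceListener
import okhttp3.sse.EventSources

// Open an SSE connection and tie its lifetime to the calling coroutine: when that coroutine
// completes or is cancelled, the EventSource is cancelled too, closing the OkHttp connection.
suspend fun connectSse(client: OkHttpClient, request: Request, listener: EventSourceListener): EventSource {
    val source = EventSources.createFactory(client).newEventSource(request, listener)
    currentCoroutineContext().job.invokeOnCompletion { source.cancel() }
    return source
}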
OkHttp: Exceptions are not propagated to flow collectors
The current implementation of OkHttpSSESession.onFailure invokes _incoming.close() without a parameter in case of a failure.
https://github.com/ktorio/ktor/blob/main/ktor-client/ktor-client-okhttp/jvm/src/io/ktor/client/engine/okhttp/OkHttpSSESession.kt#L57
Because of that, exceptions are not propagated to flow collectors, which makes proper exception handling (e.g. of a socket timeout) impossible.
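A minimal, self-contained illustration of the behaviour being asked for (plain kotlinx.coroutines, not Ktor code): closing the channel with a cause is what lets flow collectors observe the failure.

import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking
import java.net.SocketTimeoutException

fun main() = runBlocking {
    val incoming = Channel<String>(8)
    // Closing with a cause makes collectors of receiveAsFlow() fail with that exception;
    // closing without a cause completes the flow silently, which is the behaviour reported above.
    incoming.close(SocketTimeoutException("read timed out"))
    try {
        incoming.receiveAsFlow().collect { }
    } catch (cause: SocketTimeoutException) {
        println("Collector observed: $cause")
    }
}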
Don't send Authorization header for requests marked with markAsRefreshTokenRequest
Hello, as discussed in another issue, it would be beneficial to adjust the "markAsRefreshTokenRequest()" functionality. This should prevent the inclusion of the authorization token in marked requests. Currently, the refresh request is sent with an outdated authorization token in the Auth header. This can cause problems with authorization servers that may refuse to respond to requests with invalid headers.
The current workaround involves removing tokens from BearerAuthProvider. However, this creates another issue: requests with a 401 response are not queued correctly for the "refreshTokens" lambda.
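For context, a typical bearer setup looks roughly like this (a sketch; the token endpoint URL and form fields are illustrative). The request below is marked with markAsRefreshTokenRequest(), yet today it still carries the stale Authorization header that this issue asks to drop:

import io.ktor.client.*
import io.ktor.client.plugins.auth.*
import io.ktor.client.plugins.auth.providers.*
import io.ktor.client.request.forms.*
import io.ktor.http.*

fun refreshingClient(): HttpClient = HttpClient {
    install(Auth) {
        bearer {
            refreshTokens {
                client.submitForm(
                    url = "https://auth.example.com/token",
                    formParameters = parameters {
                        append("grant_type", "refresh_token")
                        append("refresh_token", oldTokens?.refreshToken.orEmpty())
                    }
                ) {
                    markAsRefreshTokenRequest()
                }
                // Response parsing omitted for brevity.
                BearerTokens(accessToken = "new-access-token", refreshToken = "new-refresh-token")
            }
        }
    }
}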
Apache5: "ProtocolException: OPTIONS request must have Content-Type header" is thrown when body isn't set
We are using Apache5 as the engine for our Ktor client. Currently on 3.0.3.
When we upgrade to 3.1.x, an OPTIONS call without a body starts failing, because a dependency of the Apache5 engine has added a guard that forces a Content-Type header.
Is this correct behaviour? From the comments in Apache5 it seems to me it should only be enforced if there is some content.
The code part that triggers: https://github.com/apache/httpcomponents-core/blame/master/httpcore5/src/main/java/org/apache/hc/core5/http/protocol/RequestContent.java#L162
We can add a content type, but as we are not getting it from the source it would be a workaround.
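A minimal reproduction sketch (the URL is illustrative):

import io.ktor.client.*
import io.ktor.client.engine.apache5.*
import io.ktor.client.request.*
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    val client = HttpClient(Apache5)
    // On 3.1.x this fails with "ProtocolException: OPTIONS request must have Content-Type header"
    // because no body (and therefore no Content-Type) is set; on 3.0.3 the same call succeeds.
    val response = client.options("https://example.com/resource")
    println(response.status)
    client.close()
}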
HttpTimeout: Reference to nonexistent INFINITE_TIMEOUT_MS in the exception message
The message in the HttpTimeout validation needs to be updated, as it points to a non-existent variable.
private fun checkTimeoutValue(value: Long?): Long? {
require(value == null || value > 0) {
"Only positive timeout values are allowed, for infinite timeout use HttpTimeout.INFINITE_TIMEOUT_MS"
}
return value
}
It used to be HttpTimeout.INFINITE_TIMEOUT_MS in 2.x.x, but it is now HttpTimeoutConfig.INFINITE_TIMEOUT_MS in 3.x.x.
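Presumably the fix is just to point the message at the constant's new home:

private fun checkTimeoutValue(value: Long?): Long? {
    require(value == null || value > 0) {
        "Only positive timeout values are allowed, for infinite timeout use HttpTimeoutConfig.INFINITE_TIMEOUT_MS"
    }
    return value
}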
IO
ByteChannel single-byte operations are slow
The following code snippet takes longer than 30 seconds:
val oneBillionBytes = 1_000_000_000L
val buffer = ByteArray(8192)
val randomBytes = writer {
var count = 0
while (count < oneBillionBytes) {
Random.nextBytes(buffer)
channel.writeByteArray(buffer)
count += buffer.size
}
}.channel
var count = 0L
while (!randomBytes.exhausted() && count < oneBillionBytes) {
randomBytes.readByte()
count++
}
...but it ought to complete in under 2 or 3 seconds.
Server
Websockets: Unable to send a frame when ktor-serialization-kotlinx-json-jvm dependency is defined in Maven build
Hello,
Steps to reproduce are:
Select Routing, Content Negotiation, kotlinx.serialization, Websockets
Replace Application.kt with:
package com.example
import io.ktor.serialization.kotlinx.*
import io.ktor.server.application.*
import io.ktor.server.routing.*
import io.ktor.server.websocket.*
import kotlinx.serialization.*
import kotlinx.serialization.json.*
@Serializable
data class Customer(val id: Int, val firstName: String, val lastName: String)
fun main(args: Array<String>): Unit = io.ktor.server.netty.EngineMain.main(args)
fun Application.module() {
install(WebSockets) {
contentConverter = KotlinxWebsocketSerializationConverter(Json)
}
routing {
webSocket("/customer/1") {
sendSerialized(Customer(1, "Jane", "Smith"))
while (true){}
}
webSocket("/customer") {
val customer = receiveDeserialized<Customer>()
println("A customer with id ${customer.id} is received by the server.")
}
}
}
This is the same as the demo here: https://github.com/ktorio/ktor-documentation/tree/3.0.0/codeSnippets/snippets/server-websockets-serialization but with a while(true) on the webSocket("/customer/1") endpoint.
Try to connect a client with ws to webSocket("/customer/1") with postman or any other tool.
You should find the connection just closes and nothing is sent.
The expected behaviour is the serialized object is sent, and the connection stays open, due to the while true.
Interestingly this works completely fine in:
Ktor 3.0.0 with Gradle
Ktor 2.3.12
Kotlin version 2.0.21 in all circumstances (default of the start.ktor.io)
The peculiar behaviour occurs only with Ktor 3.0.0 + Maven; specifically, the ktor-serialization-kotlinx-json-jvm dependency seems to be the culprit. Even without using serialization, if that dependency is present, this strange connection-closing behaviour occurs everywhere.
I have reproduced this across two different Windows 10 environments.
Thanks, please reach out for any other details
Implement toString for staticContentRoute
The issue was created from a pull request: https://github.com/ktorio/ktor/pull/4807
Route rendering is important in several places. For example, in metrics collection.
Currently, for static files we have metrics like
ktor_http_server_requests_seconds_sum{method="GET",route="/io.ktor.server.http.content.StaticContentKt$staticContentRoute$1@70830e08/images/{...}",status="404",throwable="n/a"} 0.227668303
OOM in CountedByteReadChannel while copying from multipart/form-data part channel
Problem
I am getting an OOM when attempting to pipe a form part to a file, coming from the io.ktor.utils.io.CountedByteReadChannel#buffer property via io.ktor.utils.io.ByteReadChannelOperationsKt.readUntil.
The code I initially had for parsing is based on the example in the Ktor docs at https://ktor.io/docs/server-requests.html#file-uploads
fun main() {
embeddedServer(Netty, port = 9090) {
routing {
post("/") {
call.receiveMultipart(Long.MAX_VALUE).forEachPart { part ->
if (part is PartData.FileItem) {
val output = File("output")
part.provider().copyAndClose(output.writeChannel())
}
call.respond(HttpStatusCode.OK, "ok")
}
}
}
}.start(true)
}
Running a POST request with a large file against the above code results in an OOM exception.
I didn't run into this issue until upgrading from 2.3.x
Reproducing
I am able to reproduce this consistently with the setup below.
The resulting .hprof file from my testing indicates that ~90% of the heap is being eaten by io.ktor.utils.io.CountedByteReadChannel, specifically the buffer property.
Setup
build.gradle.kts
plugins {
id("com.gradleup.shadow") version "8.3.6"
kotlin("jvm")
}
dependencies {
implementation("io.ktor:ktor-server-core-jvm:3.1.1")
implementation("io.ktor:ktor-server-netty-jvm:3.1.1")
}
tasks.jar {
manifest {
attributes["Main-Class"] = "MainKt"
}
}
Test input file:
dd if=/dev/urandom of=bigfile bs=1G count=1 iflag=fullblock
Server startup:
java -jar -Xms16m -Xmx64m -XX:+CrashOnOutOfMemoryError -XX:+HeapDumpOnOutOfMemoryError build/libs/demo-all.jar
Request:
curl -F file=@bigfile localhost:9090/
Receiving multipart without Content-Length is very slow
To reproduce, send a 2 GB file as part of a multipart/form-data request to the following server:
embeddedServer(Netty, port = 8080, host = "127.0.0.1") {
routing {
post("/multipart") {
val multipartData = call.receiveMultipart(formFieldLimit = Long.MAX_VALUE)
val part = multipartData.readPart() as PartData.FileItem
val provider = part.provider()
val bufferSize = DEFAULT_BUFFER_SIZE
while (!provider.isClosedForRead) {
val source = provider.readRemaining(bufferSize.toLong())
if (source.exhausted()) break
val bytes = source.readByteArray()
}
call.respond(HttpStatusCode.OK)
}
}
}.start(wait = true)
As a result, the server handles the file for about 1 minute on my machine when using the loopback interface.
I expect the handling duration to be on par with receiving a binary body (on my machine, 2.2 seconds).
The original issue https://github.com/ktorio/ktor/issues/4788
Workaround
When submitting files through multipart uploads, specifying the Content-Length with the file size fixes the performance problem.
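A client-side sketch of that workaround (assuming the Ktor client is the sender; the file name and URL are illustrative): providing the part size via InputProvider lets the client send the upload with a known length instead of taking the slow no-Content-Length path.

import io.ktor.client.*
import io.ktor.client.engine.cio.*
import io.ktor.client.request.forms.*
import io.ktor.http.*
import kotlinx.coroutines.runBlocking
import kotlinx.io.buffered
import kotlinx.io.files.Path
import kotlinx.io.files.SystemFileSystem

fun main() = runBlocking {
    val client = HttpClient(CIO)
    val path = Path("bigfile")
    val size = SystemFileSystem.metadataOrNull(path)?.size
    client.submitFormWithBinaryData(
        url = "http://127.0.0.1:8080/multipart",
        formData = formData {
            append(
                "file",
                // Passing the known size up front is what allows a Content-Length to be sent.
                InputProvider(size) { SystemFileSystem.source(path).buffered() },
                Headers.build { append(HttpHeaders.ContentDisposition, "filename=\"bigfile\"") }
            )
        }
    )
    client.close()
}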
MicrometerMetrics: different path 404s requests can be abused to trigger OOM
To reproduce, run the following test:
@Test
fun test() = testApplication {
val appMicrometerRegistry = SimpleMeterRegistry()
install(MicrometerMetrics) { registry = appMicrometerRegistry }
routing {
get("/metrics-micrometer") {
call.respond(appMicrometerRegistry.metersAsString)
}
}
(1..1000).forEach {
(1..10_000).map {
client.get("/${UUID.randomUUID()}")
}
println("Metrics page size: ${client.get("/metrics-micrometer").bodyAsText().length}")
}
}
As a result, an OOM exception is thrown:
java.lang.OutOfMemoryError: Java heap space
at kotlinx.io.SourcesKt.readByteArrayImpl(Sources.kt:268)
at kotlinx.io.SourcesKt.readByteArray(Sources.kt:252)
at io.ktor.utils.io.core.BuffersKt.readBytes(Buffers.kt:16)
at io.ktor.utils.io.core.BuffersKt.readBytes$default(Buffers.kt:15)
at io.ktor.utils.io.ByteReadChannelOperationsKt.toByteArray(ByteReadChannelOperations.kt:38)
at io.ktor.utils.io.ByteReadChannelOperationsKt$toByteArray$1.invokeSuspend(ByteReadChannelOperations.kt)
at _COROUTINE._BOUNDARY._(CoroutineDebugging.kt:42)
at io.ktor.server.testing.TestApplicationKt.runTestApplication(TestApplication.kt:464)
at io.ktor.server.testing.TestApplicationKt$testApplication$1.invokeSuspend(TestApplication.kt:447)
at io.ktor.test.dispatcher.TestCommonKt$runTestWithRealTime$1.invokeSuspend(TestCommon.kt:40)
at kotlinx.coroutines.test.TestBuildersKt__TestBuildersKt$runTest$2$1$1.invokeSuspend(TestBuilders.kt:317)
Caused by: java.lang.OutOfMemoryError: Java heap space
at kotlinx.io.SourcesKt.readByteArrayImpl(Sources.kt:268)
at kotlinx.io.SourcesKt.readByteArray(Sources.kt:252)
at io.ktor.utils.io.core.BuffersKt.readBytes(Buffers.kt:16)
at io.ktor.utils.io.core.BuffersKt.readBytes$default(Buffers.kt:15)
at io.ktor.utils.io.ByteReadChannelOperationsKt.toByteArray(ByteReadChannelOperations.kt:38)
at io.ktor.utils.io.ByteReadChannelOperationsKt$toByteArray$1.invokeSuspend(ByteReadChannelOperations.kt)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:100)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:586)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:829)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:717)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:704)
Compression & Static Content: No Vary Header when serving a compressed resource
The Compression & StaticContent plugins do not add the Vary header to the HTTP response when it is compressed.
According to MDN (emphasis my own):
To select the algorithm to use, browsers and servers use proactive content negotiation. The browser sends an Accept-Encoding header with the algorithm it supports and its order of precedence, the server picks one, uses it to compress the body of the response and uses the Content-Encoding header to tell the browser the algorithm it has chosen. As content negotiation has been used to choose a representation based on its encoding, the server must send a Vary header containing at least Accept-Encoding alongside this header in the response; that way, caches will be able to cache the different representations of the resource.
However, this is not the case with the Compression plugin.
Here is a reproduction sample:
fun main() {
embeddedServer(Netty, port = 8080) {
install(Compression) {
gzip {
minimumSize(0)
}
}
routing {
get("/") {
call.respondText("Hello, world!")
}
}
}.start(wait = true)
}
If you perform a curl request, you can see that the Vary header is not added:
curl localhost:8080 -Lsv --compressed
* Host localhost:8080 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
* Trying [::1]:8080...
* Connected to localhost (::1) port 8080
* using HTTP/1.x
> GET / HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/8.12.1
> Accept: */*
> Accept-Encoding: deflate, gzip, br, zstd
>
* Request completely sent off
< HTTP/1.1 200 OK
< Content-Encoding: gzip
< Content-Type: text/plain; charset=UTF-8
< transfer-encoding: chunked
<
* Connection #0 to host localhost left intact
Hello, world!
The same is also true when serving files using the StaticContent plugin via staticResources and staticFiles.
Reproduction sample:
fun main() {
embeddedServer(Netty, port = 8080) {
install(Compression) {
gzip {
minimumSize(0)
}
}
routing {
get("/") {
call.respondText("Hello, world!")
}
staticResources("/resource/", "") {
preCompressed(CompressedFileType.GZIP)
}
staticFiles("/file/", File("./")) {
preCompressed(CompressedFileType.GZIP)
}
}
}.start(wait = true)
}
Now, in the resources directory, execute the following:
$ cat <<EOF > test.txt
This is a test file
EOF
$ gzip -9 -k test.txt
$ cp test.* ../../../
This will create test.txt and test.txt.gz in both the project root and the resources directory.
Now, if you execute the following curl commands, you can see that the issue is also present:
curl localhost:8080/resource/test.txt -Lsv --compressed
* Host localhost:8080 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
* Trying [::1]:8080...
* Connected to localhost (::1) port 8080
* using HTTP/1.x
> GET /resource/test.txt HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/8.12.1
> Accept: */*
> Accept-Encoding: deflate, gzip, br, zstd
>
* Request completely sent off
< HTTP/1.1 200 OK
< Content-Encoding: gzip
< Content-Length: 47
< Content-Type: text/plain; charset=UTF-8
<
This is a test file
* Connection #0 to host localhost left intact
curl localhost:8080/file/test.txt -Lsv --compressed
* Host localhost:8080 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
* Trying [::1]:8080...
* Connected to localhost (::1) port 8080
* using HTTP/1.x
> GET /file/test.txt HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/8.12.1
> Accept: */*
> Accept-Encoding: deflate, gzip, br, zstd
>
* Request completely sent off
< HTTP/1.1 200 OK
< Content-Encoding: gzip
< Content-Length: 47
< Content-Type: text/plain; charset=UTF-8
<
This is a test file
* Connection #0 to host localhost left intact
Neither of them has the Vary header, even though it should be set.
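Until the plugins set the header themselves, a blunt interim workaround (a sketch, not an official recommendation) is to add Vary: Accept-Encoding to every response via DefaultHeaders:

import io.ktor.http.*
import io.ktor.server.application.*
import io.ktor.server.engine.*
import io.ktor.server.netty.*
import io.ktor.server.plugins.compression.*
import io.ktor.server.plugins.defaultheaders.*
import io.ktor.server.response.*
import io.ktor.server.routing.*

fun main() {
    embeddedServer(Netty, port = 8080) {
        install(DefaultHeaders) {
            // Sent on every response, compressed or not, which is coarser than the plugin doing it itself.
            header(HttpHeaders.Vary, HttpHeaders.AcceptEncoding)
        }
        install(Compression) {
            gzip { minimumSize(0) }
        }
        routing {
            get("/") { call.respondText("Hello, world!") }
        }
    }.start(wait = true)
}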
Netty/Websockets: server processes hanging in CLOSE_WAIT state after many concurrent requests
Hi!
We've noticed that our WebSocket-based application is consuming more resources over time, with a large number of open file descriptors, eventually leading to crashes.
After investigating, we discovered thousands of TCP connections stuck in the CLOSE_WAIT state on our servers. We also managed to reproduce the issue locally using a simple test scenario:
My simple server that runs locally:
package org.example
import io.ktor.server.application.*
import io.ktor.server.response.*
import io.ktor.server.routing.*
import io.ktor.server.engine.*
import io.ktor.server.websocket.*
import io.ktor.server.netty.*
fun main() {
embeddedServer(Netty, port = 82) {
install(WebSockets)
routing {
webSocket("/test") {}
}
}.start(wait = true)
}
I also run this testing script multiple times in a Docker environment (as I couldn't reproduce the issue directly on localhost):
import socket
import base64
import hashlib
import concurrent.futures
def create_websocket_handshake_headers(host, port, resource):
key = base64.b64encode(hashlib.sha1().digest()).decode('utf-8')
headers = (
f"GET {resource} HTTP/1.1\r\n"
f"Host: {host}:{port}\r\n"
f"Upgrade: websocket\r\n"
f"Connection: Upgrade\r\n"
f"Sec-WebSocket-Key: {key}\r\n"
f"Sec-WebSocket-Version: 13\r\n\r\n"
)
return headers
def websocket_client(host, port, resource):
try:
# Create a socket connection
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("host.docker.internal", port))
# Send WebSocket handshake headers
headers = create_websocket_handshake_headers(host, port, resource)
sock.send(headers.encode('utf-8'))
for i in range(4):
sock.send(b"TEST IT")
sock.close()
except Exception as e:
print(f"Error: {e}")
host = 'host.docker.internal'
port = 82
resource = '/test'
# Run 10 WebSocket clients concurrently
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
futures = [executor.submit(websocket_client, host, port, resource) for _ in range(10)]
concurrent.futures.wait(futures)
And then after a few seconds, checking the sockets on my machine:
netstat -tlan -p tcp | grep "\.82 " | grep CLOSE
tcp4 0 0 127.0.0.1.82 127.0.0.1.50471 CLOSE_WAIT
tcp4 0 0 127.0.0.1.82 127.0.0.1.50466 CLOSE_WAIT
tcp4 0 0 127.0.0.1.82 127.0.0.1.50463 CLOSE_WAIT
tcp4 0 0 127.0.0.1.82 127.0.0.1.50461 CLOSE_WAIT
tcp4 0 0 127.0.0.1.82 127.0.0.1.50405 CLOSE_WAIT
tcp4 0 0 127.0.0.1.82 127.0.0.1.50400 CLOSE_WAIT
tcp4 7 0 127.0.0.1.82 127.0.0.1.50349 CLOSE_WAIT
tcp4 14 0 127.0.0.1.82 127.0.0.1.50347 CLOSE_WAIT
tcp4 0 0 127.0.0.1.82 127.0.0.1.50345 CLOSE_WAIT
tcp4 0 0 127.0.0.1.82 127.0.0.1.50344 CLOSE_WAIT
tcp4 0 0 127.0.0.1.82 127.0.0.1.50295 CLOSE_WAIT
tcp4 0 0 127.0.0.1.82 127.0.0.1.50274 CLOSE_WAIT
tcp4 0 0 127.0.0.1.82 127.0.0.1.50248 CLOSE_WAIT
tcp4 0 0 127.0.0.1.82 127.0.0.1.50247 CLOSE_WAIT
tcp4 7 0 127.0.0.1.82 127.0.0.1.50246 CLOSE_WAIT
tcp4 7 0 127.0.0.1.82 127.0.0.1.50245 CLOSE_WAIT
tcp4 7 0 127.0.0.1.82 127.0.0.1.50244 CLOSE_WAIT
tcp4 0 0 127.0.0.1.82 127.0.0.1.50206 CLOSE_WAIT
tcp4 7 0 127.0.0.1.82 127.0.0.1.50187 CLOSE_WAIT
tcp4 0 0 127.0.0.1.82 127.0.0.1.50176 CLOSE_WAIT
tcp4 0 0 127.0.0.1.82 127.0.0.1.50154 CLOSE_WAIT
tcp4 0 0 127.0.0.1.82 127.0.0.1.50147 CLOSE_WAIT
tcp4 0 0 127.0.0.1.82 127.0.0.1.50145 CLOSE_WAIT
tcp4 0 0 127.0.0.1.82 127.0.0.1.50115 CLOSE_WAIT
tcp4 0 0 127.0.0.1.82 127.0.0.1.50114 CLOSE_WAIT
tcp4 0 0 127.0.0.1.82 127.0.0.1.50108 CLOSE_WAIT
tcp4 0 0 127.0.0.1.82 127.0.0.1.50106 CLOSE_WAIT
[...]
I'm using a MacBook Pro M2 and managed to reproduce the issue with Ktor versions 2.3.9, 2.3.13, and 3.0.2.
The problem occurs only when using Netty as the engine. I tested with Jetty and CIO but couldn't reproduce it with either of them.
Update JTE to the version supporting Kotlin 2.1.0
After the update to Kotlin 2.1.0, tests for JTE started failing with: Module was compiled with an incompatible version of Kotlin. The binary version of its metadata is 2.1.0, expected version is 1.9.0.
Not to block the entire project from updating to Kotlin 2.1.0, we've disabled JteTest
until JTE is updated to the new version supporting Kotlin 2.1. This class should be un-ignored after the JTE update.
PR to JTE to track progress: https://github.com/casid/jte/pull/411
3.1.2
released 28th March 2025
Client
OkHttp: Cancelling of SSESession.incoming flow doesn't cancel connection
Look at this source code.
In the OkHttpSSESession implementation of SSESession, cancelling the incoming flow will NOT trigger serverSentEventsSource.cancel(), which keeps the SSE connection open!
internal class OkHttpSSESession(
engine: OkHttpClient,
engineRequest: Request,
override val coroutineContext: CoroutineContext,
) : SSESession, EventSourceListener() {
private val serverSentEventsSource = EventSources.createFactory(engine).newEventSource(engineRequest, this)
...
private val _incoming = Channel<ServerSentEvent>(8)
override val incoming: Flow<ServerSentEvent>
get() = _incoming.receiveAsFlow()
...
override fun onEvent(eventSource: EventSource, id: String?, type: String?, data: String) {
_incoming.trySendBlocking(ServerSentEvent(data, type, id))
}
override fun onFailure(eventSource: EventSource, t: Throwable?, response: Response?) {
...
_incoming.close()
serverSentEventsSource.cancel()
}
override fun onClosed(eventSource: EventSource) {
_incoming.close()
serverSentEventsSource.cancel()
}
}
But in the DefaultClientSSESession implementation of SSESession, cancelling the incoming flow will trigger coroutineContext.cancel() and input.cancel(), which does disconnect the SSE connection!
@OptIn(InternalAPI::class)
@Deprecated("It should be marked with `@InternalAPI`, please use `ClientSSESession` instead")
public class DefaultClientSSESession(
content: SSEClientContent,
private var input: ByteReadChannel,
override val coroutineContext: CoroutineContext
) : SSESession {
...
public constructor(
content: SSEClientContent,
input: ByteReadChannel
) : this(content, input, content.callContext + Job() + CoroutineName("DefaultClientSSESession"))
private var _incoming = flow {
// inner while for parsing events of current input (=connection), and when the current input is closed,
// we have an outer while to obtain new input
while (this@DefaultClientSSESession.coroutineContext.isActive) {
while (this@DefaultClientSSESession.coroutineContext.isActive) {
val event = input.parseEvent() ?: break
if (event.isCommentsEvent() && !showCommentEvents) continue
if (event.isRetryEvent() && !showRetryEvents) continue
emit(event)
}
if (needToReconnect) {
doReconnection()
} else {
close()
}
}
}.catch { cause ->
when (cause) {
is CancellationException -> {
close()
}
else -> {
LOGGER.trace { "Error during SSE session processing: $cause" }
close()
throw cause
}
}
}
init {
coroutineContext.job.invokeOnCompletion {
close()
}
}
private suspend fun doReconnection() {
...
}
...
override val incoming: Flow<ServerSentEvent>
get() = _incoming
private fun close() {
coroutineContext.cancel()
input.cancel()
}
....
}
Created from GitHub issue: #4692
Auth: AuthTokenHolder.clearToken executed in the middle of an ongoing token update doesn't actually clear
We are always setting the newToken. This could cause an issue if clearTokens (logout) is called while suspended in line 67.
Docs
Fix autoreload-embedded-server snippet project
Use the workaround provided in the linked issue until a fix is delivered.
Replace parameter access syntax in the documentation
From
val id = call.parameters["id"]?.toInt() ?: error("...")
To
val id: Int by call.parameters
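For context, a sketch of how the delegated form reads inside a handler (the route path is illustrative; the io.ktor.server.util.getValue import is what enables the delegation):

import io.ktor.server.application.*
import io.ktor.server.response.*
import io.ktor.server.routing.*
import io.ktor.server.util.getValue

fun Route.articleRoutes() {
    get("/articles/{id}") {
        val id: Int by call.parameters // resolved from call.parameters and converted to Int when accessed
        call.respondText("Article $id")
    }
}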
Infrastructure
Remove empty artifacts from publication
We have some empty artifacts published accidentally:
ktor-client-plugins
ktor-server-jetty-test-http2
ktor-server-jetty-test-http2-jakarta
ktor-server-plugins
ktor-shared
These artifacts can be safely removed from dependencies and should not be published anymore.
Server
Android: "Array has more than one element" error when starting a server with release build
Please see the stack trace below (sorry I was unable to fully deobfuscate it)
This happens on Ktor 1.6.8, 2.3.1 and 3.0.0-beta-2. A non-obfuscated build works.
I have used proguard rules found here: https://github.com/ktorio/ktor-documentation/blob/3.0.0-beta-1/codeSnippets/snippets/proguard/proguard.pro
Any ideas? Any assistance would be greatly appreciated - I have spent the whole day on this and I am really stuck. Pretty desperate for a solution.
2024-08-08 16:24:04.050 7186-7249 E No implementation found for int io.netty.channel.kqueue.Native.sizeofKEvent() (tried Java_io_netty_channel_kqueue_Native_sizeofKEvent and Java_io_netty_channel_kqueue_Native_sizeofKEvent__) - is the library loaded, e.g. System.loadLibrary?
2024-08-08 16:24:04.058 7186-7249 E No implementation found for int io.netty.channel.epoll.Native.offsetofEpollData() (tried Java_io_netty_channel_epoll_Native_offsetofEpollData and Java_io_netty_channel_epoll_Native_offsetofEpollData__) - is the library loaded, e.g. System.loadLibrary?
2024-08-08 16:24:04.081 7186-7244 ShareServer$start$flow E Server error
java.lang.IllegalArgumentException: Array has more than one element.
at kotlin.collections.ArraysKt___ArraysKt.single(ArraysKt___ArraysKt.java:2849)
at io.ktor.server.engine.internal.CallableUtilsKt.executeModuleFunction(CallableUtilsKt.java:43)
at io.ktor.server.engine.EmbeddedServer.launchModuleByName$lambda$25(EmbeddedServer.java:349)
at io.ktor.server.engine.EmbeddedServer.$r8$lambda$OR12ThJzS6-whiNLKrW82GJqXkU(EmbeddedServer.java:0)
at C5.h.invoke(SourceFile:33)
at io.ktor.server.engine.EmbeddedServer.avoidingDoubleStartupFor(SourceFile:32)
at io.ktor.server.engine.EmbeddedServer.launchModuleByName(SourceFile:8)
at io.ktor.server.engine.EmbeddedServer.instantiateAndConfigureApplication$lambda$24(SourceFile:64)
at io.ktor.server.engine.EmbeddedServer.b(SourceFile:1)
at C5.h.invoke(SourceFile:50)
at io.ktor.server.engine.EmbeddedServer.avoidingDoubleStartup(SourceFile:1)
at io.ktor.server.engine.EmbeddedServer.instantiateAndConfigureApplication(SourceFile:64)
at io.ktor.server.engine.EmbeddedServer.createApplication(SourceFile:16)
at io.ktor.server.engine.EmbeddedServer.start(SourceFile:36)
at com.example.share_server.server.pub.ShareServer$start$flow$1$1.invokeSuspend(SourceFile:27)
at Y8.a.resumeWith(SourceFile:9)
at u9.P.run(SourceFile:113)
at B9.b.run(SourceFile:96)
Shared
URL-safe base64 decoding problem
URL-safe base64 encoding replaces + with - and / with _; however, our implementation fails to account for this when decoding. As a result, encoding and then decoding some strings will not work.
For a more thorough description of the problem, see https://github.com/ktorio/ktor/pull/4721
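For illustration, the two alphabets differ like this (using the JDK codec, not Ktor's implementation):

import java.util.Base64

fun main() {
    // 0xFB 0xEF 0xBE encodes to four sextets of value 62, so the difference is maximally visible.
    val bytes = byteArrayOf(-5, -17, -66)
    println(Base64.getEncoder().encodeToString(bytes))    // "++++" (standard alphabet)
    println(Base64.getUrlEncoder().encodeToString(bytes)) // "----" (URL-safe alphabet)
    // A decoder that only understands the standard alphabet fails on the URL-safe form,
    // which is the class of bug described above.
}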
WebSockets: extensions in sec-websocket-extensions header must be separated by comma
Hello,
I was playing with WebSockets and I ended up in a case where I had multiple extensions. Looking in Wireshark, I could monitor the HTTP upgrade and find:
sec-websocket-extensions: frame-metadata ;frame-debug ;permessage-deflate
with parameters I ended with:
frame-metadata , testParameter;frame-debug , testParameter;permessage-deflate
Reading around, it sounds like the header should look like this instead:
sec-websocket-extensions: frame-metadata, frame-debug, permessage-deflate
and with parameters:
sec-websocket-extensions: frame-metadata; testParameter, frame-debug; testParameter, permessage-deflate
Am I missing something?
Other
Update Kotlin to 2.1.20
TBD when it's released
This issue was created in advance to track steps that should be done after the upgrade.
- [x] Update Kotlin version badge in readme
- [x] Remove a workaround for compatibility with test-retry (KT-49155)
- [x] Remove withAndroidNativeArm32Fixed (KT-71866)
3.1.1
released 25th February 2025
Client
JS/WASM fails with "IllegalStateException: Content-Length mismatch" on requesting gzipped content
Issue was introduced in https://github.com/ktorio/ktor/pull/4505
This happens when the client sends Accept-Encoding and the server returns a gzipped response. It looks like the client checks the size of the unpacked data against the Content-Length header, which contains the gzipped size.
Example: https://vooft.github.io/pepper-bdd/
IllegalStateException: Content-Length mismatch: expected 229 bytes, but received 863 bytes
The unpacked file size is 863 bytes, but if you try to download it with Accept-Encoding: gzip, then the server returns 229.
HttpCache: Cache isn't updated when Vary header for 304 response matches but not equal to Vary for 200 response
We use a standard Apache server with some common PHP frameworks.
The 2xx response returns http header Vary: X-Requested-With, Accept-Encoding
The 304 response returns http header Vary: X-Requested-With
Ktor HttpCache decides to create a duplicate entry on this line instead of updating the original record, and when the cache is requested, it returns the original, non-updated record. Full findAndRefresh method logic here:
val varyKeysFrom304 = response.varyKeys()
val cache = findResponse(storage, varyKeysFrom304, url, request) ?: return null
val newVaryKeys = varyKeysFrom304.ifEmpty { cache.varyKeys }
storage.store(request.url, cache.copy(newVaryKeys, response.cacheExpires(isSharedClient)))
return cache.createResponse(request.call.client, request, response.coroutineContext)
We think this is a bug because browsers like Chrome, Firefox or Safari have no problem with this behavior.
Logging: messages are printed per line with OkHttp logger format
This causes loggers like log4j to add a bunch more data, making the logs denser, more intermixed, and harder to follow.
The messages should be printed as blocks containing newlines.
Race condition when writing to a buffer leads to NPE inside CIOReaderKt.readFrom
We've just received an NPE in Ktor 3.0.3:
java.lang.NullPointerException: null
at kotlinx.io.Buffer.write(Buffer.kt:452)
at kotlinx.io.Buffer.readAtMostTo(Buffer.kt:319)
at kotlinx.io.Buffer.transferFrom(Buffer.kt:486)
at io.ktor.utils.io.ByteChannel.flushWriteBuffer(ByteChannel.kt:105)
at io.ktor.utils.io.ByteChannel.flush(ByteChannel.kt:91)
at io.ktor.utils.io.ByteWriteChannelOperations_jvmKt.write(ByteWriteChannelOperations.jvm.kt:31)
at io.ktor.utils.io.ByteWriteChannelOperations_jvmKt.write$default(ByteWriteChannelOperations.jvm.kt:24)
at io.ktor.network.sockets.CIOReaderKt.readFrom(CIOReader.kt:133)
at io.ktor.network.sockets.CIOReaderKt.access$readFrom(CIOReader.kt:1)
at io.ktor.network.sockets.CIOReaderKt$attachForReadingDirectImpl$1.invokeSuspend(CIOReader.kt:109)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:101)
at kotlinx.coroutines.internal.LimitedDispatcher$Worker.run(LimitedDispatcher.kt:113)
at kotlinx.coroutines.scheduling.TaskImpl.run(Tasks.kt:89)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:589)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:823)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:720)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:707)
WebSocket and SSE don't respect connection timeout set in the HttpTimeout plugin
Hi, I am using the Ktor WebSocket client. The HTTP client is configured with a connection timeout.
During experiments I noticed that the connection timeout is not honored.
I believe that the problem is in the plugin - it exits early for all WebSocket requests.
HTTP client:
private val client = HttpClient(Java) {
install(WebSockets) { pingInterval = PING_INTERVAL.inWholeMilliseconds }
install(HttpTimeout) {
connectTimeoutMillis = 3000
}
}
public companion object Plugin :
HttpClientPlugin<HttpTimeoutCapabilityConfiguration, HttpTimeout>,
HttpClientEngineCapability<HttpTimeoutCapabilityConfiguration> {
...
@OptIn(InternalAPI::class)
override fun install(plugin: HttpTimeout, scope: HttpClient) {
scope.plugin(HttpSend).intercept { request ->
val isWebSocket = request.url.protocol.isWebsocket()
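// Early exit below: none of the plugin's timeouts are applied to WebSocket (or upgrade) requests.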
if (isWebSocket || request.body is ClientUpgradeContent) return@intercept execute(request)
var configuration = request.getCapabilityOrNull(HttpTimeout)
if (configuration == null && plugin.hasNotNullTimeouts()) {
configuration = HttpTimeoutCapabilityConfiguration()
request.setCapability(HttpTimeout, configuration)
}
configuration?.apply {
connectTimeoutMillis = connectTimeoutMillis ?: plugin.connectTimeoutMillis
socketTimeoutMillis = socketTimeoutMillis ?: plugin.socketTimeoutMillis
requestTimeoutMillis = requestTimeoutMillis ?: plugin.requestTimeoutMillis
val requestTimeout = requestTimeoutMillis ?: plugin.requestTimeoutMillis
if (requestTimeout == null || requestTimeout == INFINITE_TIMEOUT_MS) return@apply
val executionContext = request.executionContext
val killer = scope.launch {
delay(requestTimeout)
val cause = HttpRequestTimeoutException(request)
LOGGER.trace("Request timeout: ${request.url}")
executionContext.cancel(cause.message!!, cause)
}
request.executionContext.invokeOnCompletion {
killer.cancel()
}
}
execute(request)
}
}
}
ArrayIndexOutOfBounds kotlinx-io
Encountered during Ktor 3.0.3 testing in the AI platform https://jetbrains.slack.com/archives/C07U498LLUR/p1737441643894639?thread_ts=1737356811.807179&cid=C07U498LLUR
java.lang.ArrayIndexOutOfBoundsException: arraycopy: length -3044 is negative
at java.base/java.lang.System.arraycopy(Native Method)
at kotlin.collections.ArraysKt___ArraysJvmKt.copyInto(_ArraysJvm.kt:955)
at kotlinx.io.Segment.readTo$kotlinx_io_core(Segment.kt:339)
at kotlinx.io.Buffer.readAtMostTo(Buffer.kt:305)
at kotlinx.io.SourcesKt.readTo(Sources.kt:294)
at kotlinx.io.SourcesKt.readTo$default(Sources.kt:290)
at kotlinx.io.SourcesKt.readByteArrayImpl(Sources.kt:269)
at kotlinx.io.SourcesKt.readByteArray(Sources.kt:252)
at kotlinx.io.Utf8Kt.commonReadUtf8(Utf8.kt:620)
at kotlinx.io.Utf8Kt.readString(Utf8.kt:221)
at io.ktor.utils.io.ByteReadChannelOperationsKt.readUTF8LineTo(ByteReadChannelOperations.kt:387)
Core
formData: implementation of copying Source is broken
To reproduce, run the following code:
val client = HttpClient(CIO) {}
val response = client.post("https://httpbin.org/post") {
setBody(MultiPartFormDataContent(formData {
append(
key = "key",
value = SystemFileSystem.source(Path("build.gradle.kts")).buffered(),
headers = Headers.build {
append(HttpHeaders.ContentType, "text/plain")
append(HttpHeaders.ContentDisposition, "filename=\"build.gradle.kts\"")
},
)
}))
}
println(response.bodyAsText())
As a result, the file contents for the key key aren't sent. The reason, according to this comment, is that Ktor's implementation of the Source.copy method is deeply broken.
TLS client: IOException while writing to a closed TLS socket since 3.0.0
If you close a TCP TLS socket while you are writing a buffer, the app crashes with the following error:
FATAL EXCEPTION: DefaultDispatcher-worker-9
PID: 29107
java.io.IOException: Channel is closed for write
at io.ktor.utils.io.ByteChannel.getWriteBuffer(ByteChannel.kt:49)
at io.ktor.utils.io.ByteWriteChannelOperationsKt.writeByte(ByteWriteChannelOperations.kt:18)
at io.ktor.network.tls.RenderKt.writeRecord(Render.kt:18)
at io.ktor.network.tls.TLSClientHandshake$output$1.invokeSuspend(TLSClientHandshake.kt:119)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:101)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:589)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:832)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:720)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:707)
Suppressed: kotlinx.coroutines.internal.DiagnosticCoroutineContextException: [CoroutineName(cio-tls-encoder), ActorCoroutine{Cancelling}@cf8623c, Dispatchers.Default]
I'm only able to reproduce this error using version 3.0.1 with a TCP TLS socket. If you use the same code on version 3.0.1 without TLS, or on version 2.3.13 with or without TLS, everything works fine.
Docs
Docs: Docker example includes unnecessary copy statements in the Dockerfile
In the deployment docs, example Dockerfile seems to include unnecessary COPY
operation.
Stage 2 (gradle):
# Stage 2: Build Application
FROM gradle:latest AS build
COPY --from=cache /home/gradle/cache_home /home/gradle/.gradle
COPY . /usr/src/app/
WORKDIR /usr/src/app
COPY --chown=gradle:gradle . /home/gradle/src
WORKDIR /home/gradle/src
# Build the fat JAR, Gradle also supports shadow
# and boot JAR by default.
RUN gradle buildFatJar --no-daemon
Isn't this copy unnecessary?
COPY . /usr/src/app/
WORKDIR /usr/src/app
Gradle Plugin
Support enabling development mode from the CLI
The development mode of the generated projects is currently configured with the following code:
application {
// ...
val isDevelopment: Boolean = project.ext.has("development")
applicationDefaultJvmArgs = listOf("-Dio.ktor.development=$isDevelopment")
}
We need to have this logic in our Gradle plugin to be able to enable development mode from the CLI for any project that applies the plugin.
For example, ./gradlew run -Pio.ktor.development=true could run the application with development mode on.
Server
Exception thrown in onCallRespond makes the client wait for response indefinitely
To reproduce, make a / request to the following server:
embeddedServer(Netty, port = 8080) {
install(createApplicationPlugin("MyPlugin") {
onCallRespond { call ->
error("oh nooooo")
}
})
routing {
get {
call.respondText("hello world")
}
}
}.start(wait = true)
As a result, the client unexpectedly waits for the response until a timeout, but a connection reset or 200 OK is expected.
Resources: a / route isn't resolved when there is a sibling `staticResources`
This type-safe route:
@Resource("/")
class Home
With this routing:
get<Home> { call.respondText("OK") }
Results in a 404.
Changing it back to
get("/") { call.respondText("OK") }
works around the issue.
This limitation should either be documented or fixed.
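A fuller reproduction sketch based on the description above (the static path and resource package are illustrative):

import io.ktor.resources.Resource
import io.ktor.server.application.*
import io.ktor.server.engine.*
import io.ktor.server.http.content.*
import io.ktor.server.netty.*
import io.ktor.server.resources.Resources
import io.ktor.server.resources.get
import io.ktor.server.response.*
import io.ktor.server.routing.*

@Resource("/")
class Home

fun main() {
    embeddedServer(Netty, port = 8080) {
        install(Resources)
        routing {
            get<Home> { call.respondText("OK") } // responds 404 while the sibling static route below exists
            staticResources("/static", "static") // sibling staticResources triggering the issue
        }
    }.start(wait = true)
}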
Other
NPE in readBuffer
Encountered when the Grazie team was migrating from Ktor 2 to 3. Occurs when there are many concurrent requests.
https://jetbrains.slack.com/archives/C07U498LLUR/p1737356811807179
java.lang.NullPointerException
at kotlinx.io.Buffer.write(Buffer.kt:452)
at kotlinx.io.Buffer.readAtMostTo(Buffer.kt:319)
at kotlinx.io.Buffer.transferFrom(Buffer.kt:486)
at io.ktor.utils.io.ByteReadChannelOperationsKt.readBuffer(ByteReadChannelOperations.kt:84)
at io.ktor.utils.io.ByteReadChannelOperationsKt.toByteArray(ByteReadChannelOperations.kt:38)
at io.ktor.client.plugins.DefaultTransformKt$defaultTransformers$2.invokeSuspend(DefaultTransform.kt:81)
at io.ktor.client.plugins.DefaultTransformKt$defaultTransformers$2.invoke(DefaultTransform.kt)
at io.ktor.client.plugins.DefaultTransformKt$defaultTransformers$2.invoke(DefaultTransform.kt)
at io.ktor.util.pipeline.DebugPipelineContext.proceedLoop(DebugPipelineContext.kt:79)
at io.ktor.util.pipeline.DebugPipelineContext.proceed(DebugPipelineContext.kt:57)
at io.ktor.client.HttpClient$4.invokeSuspend(HttpClient.kt:1379)
at io.ktor.client.HttpClient$4.invoke(HttpClient.kt)
at io.ktor.client.HttpClient$4.invoke(HttpClient.kt)
at io.ktor.util.pipeline.DebugPipelineContext.proceedLoop(DebugPipelineContext.kt:79)
at io.ktor.util.pipeline.DebugPipelineContext.proceed(DebugPipelineContext.kt:57)
at io.ktor.client.plugins.ReceiveError$install$1.invokeSuspend(HttpCallValidator.kt:149)
at io.ktor.client.plugins.ReceiveError$install$1.invoke(HttpCallValidator.kt)
at io.ktor.client.plugins.ReceiveError$install$1.invoke(HttpCallValidator.kt)
at io.ktor.util.pipeline.DebugPipelineContext.proceedLoop(DebugPipelineContext.kt:79)
at io.ktor.util.pipeline.DebugPipelineContext.proceed(DebugPipelineContext.kt:57)
at io.ktor.util.pipeline.DebugPipelineContext.execute$ktor_utils(DebugPipelineContext.kt:63)
at io.ktor.util.pipeline.Pipeline.execute(Pipeline.kt:86)
at io.ktor.client.call.HttpClientCall.bodyNullable(HttpClientCall.kt:87)
at ai.grazie.client.ktor.GrazieKtorHTTPClient.toResponse(GrazieKtorHTTPClient.kt:342)
at ai.grazie.client.ktor.GrazieKtorHTTPClient.access$toResponse(GrazieKtorHTTPClient.kt:31)
at ai.grazie.client.ktor.GrazieKtorHTTPClient$send$2.invoke(GrazieKtorHTTPClient.kt:62)
at ai.grazie.client.ktor.GrazieKtorHTTPClient$send$2.invoke(GrazieKtorHTTPClient.kt:62)
at ai.grazie.client.ktor.GrazieKtorHTTPClient$sendAndWaitBody$2.invokeSuspend(GrazieKtorHTTPClient.kt:172)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:101)
at kotlinx.coroutines.EventLoopImplBase.processNextEvent(EventLoop.common.kt:263)
at kotlinx.coroutines.BlockingCoroutine.joinBlocking(Builders.kt:95)
at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking(Builders.kt:69)
at kotlinx.coroutines.BuildersKt.runBlocking(Unknown Source)
at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking$default(Builders.kt:47)
at kotlinx.coroutines.BuildersKt.runBlocking$default(Unknown Source)
3.1.0
released 12th February 2025
Client
Support static linking for curl on all platforms
There is a proposal to provide a common set of curl features across all target platforms, e.g. WebSockets.
For that we need to use static linking to libcurl on all platforms: linuxArm64/linuxX64/macosArm64/macosX64/mingwX64.
But at the same time we need to keep these binaries up-to-date due to newly discovered security vulnerabilities.
WasmJS WebSocket client sometimes drops a frame received immediately after handshake
Please see all the details in KTOR-6883, as this issue is exactly the same: the code of the JS engine was copy-pasted for wasmJs before the fix for KTOR-6883 was made, so the fix has to be applied to the copied code as well.
Add reconnection in ClientSSESession
If the connection with a server is broken, a client should reestablish the connection.
https://html.spec.whatwg.org/multipage/server-sent-events.html#sse-processing-model
OutOfMemoryError when sending a large binary file through ByteReadChannel converted from InputStream
I'm trying to upload a 5 GB video and I see a crash on an Android device.
This is the crash:
Fatal Exception: java.lang.OutOfMemoryError: Failed to allocate a 48 byte allocation with 4278960 free bytes and 4178KB until OOM, target footprint 536870912, growth limit 536870912; giving up on allocation because <1% of heap free after GC.
at java.nio.ByteBuffer.wrap(ByteBuffer.java:322)
at io.ktor.utils.io.bits.MemoryFactoryJvmKt.useMemory$default(MemoryFactoryJvm.kt:20)
at io.ktor.utils.io.bits.MemoryJvmKt.copyTo-SG11BkQ(MemoryJvm.kt:191)
at io.ktor.utils.io.core.BufferPrimitivesJvmKt.writeFully(BufferPrimitivesJvm.kt:22)
at io.ktor.utils.io.ByteBufferChannel.readAsMuchAsPossible(ByteBufferChannel.kt:516)
at io.ktor.utils.io.ByteBufferChannel.readAsMuchAsPossible$default(ByteBufferChannel.kt:499)
at io.ktor.utils.io.ByteBufferChannel.readRemainingSuspend(ByteBufferChannel.kt:2046)
at io.ktor.utils.io.ByteBufferChannel.access$getWriteOp(ByteBufferChannel.kt:23)
at io.ktor.utils.io.ByteBufferChannel.access$readRemainingSuspend(ByteBufferChannel.kt:23)
at io.ktor.utils.io.ByteBufferChannel$readRemainingSuspend$1.invokeSuspend(ByteBufferChannel.kt:13)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.internal.DispatchedContinuation.resumeWith(DispatchedContinuation.kt:202)
at io.ktor.utils.io.internal.CancellableReusableContinuation.resumeWith(CancellableReusableContinuation.kt:93)
at io.ktor.utils.io.ByteBufferChannel.resumeReadOp(ByteBufferChannel.kt:2059)
at io.ktor.utils.io.ByteBufferChannel.flushImpl(ByteBufferChannel.kt:185)
at io.ktor.utils.io.ByteBufferChannel.flush(ByteBufferChannel.kt:195)
at io.ktor.utils.io.ByteBufferChannel.writing(ByteBufferChannel.kt:446)
at io.ktor.utils.io.ByteBufferChannel.writeAsMuchAsPossible(ByteBufferChannel.kt:1347)
at io.ktor.utils.io.ByteBufferChannel.writeSuspend(ByteBufferChannel.kt:1438)
at io.ktor.utils.io.ByteBufferChannel.access$getWriteOp(ByteBufferChannel.kt:23)
at io.ktor.utils.io.ByteBufferChannel.access$writeSuspend(ByteBufferChannel.kt:23)
at io.ktor.utils.io.ByteBufferChannel.access$getWriteOp(ByteBufferChannel.kt:23)
at io.ktor.utils.io.ByteBufferChannel.access$writeSuspend(ByteBufferChannel.kt:23)
at io.ktor.utils.io.ByteBufferChannel$writeSuspend$1.invokeSuspend(ByteBufferChannel.kt:13)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:584)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:793)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:697)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:684)
This is my code:
val call = client.patch(url) {
headers {
append("Upload-Offset", offset)
append("Tus-Resumable", "1.0.0")
append("Content-Type", "application/offset+octet-stream")
if (sizeBytes != null)
append("Content-Length", (sizeBytes - offset.toLong()).toString())
}
setBody(byteReadChannel)
onUpload { bytesSentTotal, size ->
trySendBlocking(
UploadStatus(
bytesSentTotal + offset.toLong(),
sizeBytes ?: size,
null
)
)
}
}.call
..........................................................................................................................
File(entry.deviceMedia.data).inputStream()
.toByteReadChannelWithOffset(offset = offset.toInt())
..........................................................................................................................
@OptIn(DelicateCoroutinesApi::class)
@Suppress("BlockingMethodInNonBlockingContext")
@JvmName("toByteReadChannelWithArrayPool")
fun InputStream.toByteReadChannelWithOffset(
context: CoroutineContext = Dispatchers.IO,
pool: ObjectPool<ByteArray> = ByteArrayPool,
offset: Int = 0
): ByteReadChannel = GlobalScope.writer(context, autoFlush = true) {
val buffer = pool.borrow()
try {
this@toByteReadChannelWithOffset.skip(offset.toLong())
while (true) {
val readCount = read(buffer, 0, buffer.size)
if (readCount < 0) break
if (readCount == 0) continue
channel.writeFully(buffer, 0, readCount)
}
} catch (cause: Throwable) {
channel.close(cause)
} finally {
pool.recycle(buffer)
close()
}
}.channel
Auth: BasicAuthProvider caches credentials until process death
BasicAuthProvider caches credentials until process death. Unlike BearerAuthProvider, there is no public method for clearing tokensHolder to refresh credentials with a new login/password. Therefore users need to manually add Basic Auth headers using interceptors if they want to change credentials at runtime.
package io.ktor.client.plugins.auth.providers
// BearerAuthProvider.kt
public class BearerAuthProvider {
public fun clearToken() {
tokensHolder.clearToken()
}
}
// BasicAuthProvider.kt
public class BasicAuthProvider {
// There is no clearToken
}
Recommended solution: Add a clearToken() method to BasicAuthProvider.
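In the meantime, a workaround sketch along the lines described above: set the header manually via defaultRequest so the credentials can change at runtime (currentCredentials is an illustrative name).

import io.ktor.client.*
import io.ktor.client.plugins.*
import io.ktor.http.*
import io.ktor.util.*

var currentCredentials = "user" to "password" // swapped at runtime by the application

val client = HttpClient {
    defaultRequest {
        // This block runs for every request, so it always picks up the latest credentials.
        val (user, password) = currentCredentials
        headers.append(HttpHeaders.Authorization, "Basic " + "$user:$password".encodeBase64())
    }
}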
Support WebSockets in Curl engine
Curl 7.86.0 added experimental support for WebSockets.
Ktor Curl client engine could use newly introduced functions to support WebSocket protocol on native platforms.
Darwin: Ambiguous DarwinHttpRequestException for SSL Pinning failure
In case of an SSL pinning failure on iOS (Darwin engine), it throws DarwinHttpRequestException, which is very ambiguous, instead of throwing a proper exception. For Android (OkHttp) it throws javax.net.ssl.SSLPeerUnverifiedException, which is correct.
API1:::: io.ktor.client.engine.darwin.DarwinHttpRequestException: Exception in http request: Error Domain=NSURLErrorDomain Code=-999 "cancelled" UserInfo={NSErrorFailingURLStringKey=https://dummyjson.com/products/1, NSErrorFailingURLKey=https://dummyjson.com/products/1, _NSURLErrorRelatedURLSessionTaskErrorKey=(
"LocalDataTask <87944327-ACED-46AB-A30E-468079AB5D05>.<1>"
), _NSURLErrorFailingURLSessionTaskErrorKey=LocalDataTask <87944327-ACED-46AB-A30E-468079AB5D05>.<1>, NSLocalizedDescription=cancelled}
"AbortError: BodyStreamBuffer was aborted" error when canceling parent job
While testing our browser JS app after migrating to the latest Ktor 3, we encountered an AbortError. In our logic, we make multiple requests and then read data from the BodyReadChannel. Sometimes we need to cancel the recent requests, and at this point the error randomly occurs somewhere inside the Ktor library.
Uncaught (in promise) AbortError: BodyStreamBuffer was aborted
at InvokeOnCancelling.handler_1 (JsClientEngine.kt:48:1)
at protoOf.invoke_py2q9a_k$ (JobSupport.kt:1571:1)
at notifyCancelling (JobSupport.kt:365:1)
at tryMakeCompletingSlowPath (Standard.kt:158:1)
at tryMakeCompleting (JobSupport.kt:894:1)
at cancelMakeCompleting (JobSupport.kt:727:1)
at protoOf.cancelImpl_465b6c_k$ (JobSupport.kt:698:1)
at protoOf.cancelInternal_fraw7c_k$ (JobSupport.kt:663:1)
at protoOf.cancel_hkmm2i_k$ (JobSupport.kt:648:1)
at InvokeOnCancelling.handler_1 (HttpClientEngine.kt:103:1)
In our app we use the following Ktor dependencies:
- io.ktor:ktor-client-core:3.0.1
- io.ktor:ktor-client-js:3.0.1
Tested on Kotlin version 2.0.21
Java, Js, Darwin: Response header Sec-WebSocket-Protocol is missing
After opening a web socket session from the client side, it is important to be able to access the subprotocol accepted by the server from the list of protocols sent by the client. Ktor transmits the Sec-WebSocket-Protocol header to the server in the different engines, but it doesn't give access to the accepted/negotiated subprotocol from that list (which is the Sec-WebSocket-Protocol header of the handshake response).
In the Java, JS, and Darwin engines (at least), session.call.response.headers doesn't contain any header from the WebSocket handshake response:
val wsKtorSession = httpClient.webSocketSession(url) { ... }
println(wsKtorSession.call.response.headers) // Headers []
It would be great to at least provide the Sec-WebSocket-Protocol, which is available in all cases in the platform-specific web socket implementation:
- In the JDK 11 engine, it can be accessed via webSocket.subprotocol?.ifEmpty { null }
- In the JS engine, it can be accessed via webSocket.protocol.ifEmpty { null }
- In the Darwin engine, the protocol is directly given in the URLSession open callback as the didOpenWithProtocol parameter, and should just be passed to Ktor's wsSession.didOpen() so it can be added to the response data
Auth: Make re-auth/refresh status codes configurable
Currently, the Auth plugin is hard-coded against 401, but some backends return 403 instead and they can't be changed because we have either no control over them or they went through a certification process and are basically set in stone now.
We should allow configuring this in the Auth plugin.
I've already prepared the PR: https://github.com/ktorio/ktor/pull/4420
Logging: HTTP method is logged with the class name
Description:
When logging HTTP requests using Ktor's client plugins, the HttpMethod is being logged as the entire data class object instead of just its value. For example:
Code:
install(Logging) {
logger = Logger.DEFAULT
level = LogLevel.INFO
}
Logging output:
REQUEST: http://localhost:8080
METHOD: HttpMethod(value=POST)
This issue arises because the HttpMethod is a data class, and the entire object is being stringified when logged.
Expected behavior:
The HttpMethod should be logged as its value only. For example:
REQUEST: http://localhost:8080
METHOD: POST
Impact:
- Logs are currently less readable and harder to debug because they contain unnecessary information.
- Including the entire data class object increases log size unnecessarily.
Proposed solutions:
- Directly access the value property (as implemented): This is a simple and effective solution that directly accesses the value property of HttpMethod to obtain the HTTP method string.
- Override the toString method (proposed addition): Overriding the toString method of HttpMethod to return only the value provides a more elegant and potentially more maintainable solution. This approach aligns with common practices for object string representation and allows for future modifications to the HttpMethod class without affecting the logging behavior.
Priority:
This issue is of low priority due to its relatively minor impact on overall functionality, though addressing it would improve log readability and maintenance.
HttpRequestRetry: race condition for isClosedForRead leads to EOFException: Channel is already closed
In rare cases, this line of code can throw a "Channel is already closed" exception. I could only reproduce this regression on 3.0.2 or newer.
java.io.EOFException: Channel is already closed
at io.ktor.utils.io.ByteReadChannelOperationsKt.readFully(ByteReadChannelOperations.kt:464)
at io.ktor.utils.io.ByteReadChannelOperationsKt.readFully$default(ByteReadChannelOperations.kt:462)
at io.ktor.client.statement.ReadersKt.readBytes(Readers.kt:15)
at io.ktor.client.plugins.HttpRequestRetryKt$HttpRequestRetry$2$1.invokeSuspend(HttpRequestRetry.kt:300)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTaskKt.resume(DispatchedTask.kt:165)
at kotlinx.coroutines.DispatchedTaskKt.dispatch(DispatchedTask.kt:154)
at kotlinx.coroutines.CancellableContinuationImpl.dispatchResume(CancellableContinuationImpl.kt:470)
at kotlinx.coroutines.CancellableContinuationImpl.resumeImpl$kotlinx_coroutines_core(CancellableContinuationImpl.kt:504)
at kotlinx.coroutines.CancellableContinuationImpl.resumeImpl$kotlinx_coroutines_core$default(CancellableContinuationImpl.kt:493)
at kotlinx.coroutines.CancellableContinuationImpl.resumeWith(CancellableContinuationImpl.kt:359)
at kotlinx.coroutines.ResumeAwaitOnCompletion.invoke(JobSupport.kt:1557)
at kotlinx.coroutines.JobSupport.notifyCompletion(JobSupport.kt:1625)
at kotlinx.coroutines.JobSupport.completeStateFinalization(JobSupport.kt:316)
at kotlinx.coroutines.JobSupport.finalizeFinishingState(JobSupport.kt:233)
at kotlinx.coroutines.JobSupport.tryMakeCompletingSlowPath(JobSupport.kt:946)
at kotlinx.coroutines.JobSupport.tryMakeCompleting(JobSupport.kt:894)
at kotlinx.coroutines.JobSupport.makeCompletingOnce$kotlinx_coroutines_core(JobSupport.kt:859)
at kotlinx.coroutines.AbstractCoroutine.resumeWith(AbstractCoroutine.kt:98)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:46)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:101)
at kotlinx.coroutines.internal.LimitedDispatcher$Worker.run(LimitedDispatcher.kt:113)
at kotlinx.coroutines.scheduling.TaskImpl.run(Tasks.kt:89)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:589)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:823)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:720)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:707)
The raw stream and the response stream do not agree on whether the stream is actually closed:
Snapshot taken at line ByteReadChannelOperations.kt:464
Snapshot from the same breakpoint, taken at a higher stack frame at line HttpRequestRetry.kt:304
You can reproduce it by running this minimal reproduction example:
Curl: Error linking curl in linkDebugExecutableLinuxX64 on macOS
I have a KMP project using the curl Ktor client, and when compiling the project on Linux, everything links OK.
When compiling the project on macOS, though, I get the following log:
> Task :gameplay-app:linkDebugExecutableLinuxX64 FAILED
e: /Users/brunojcm/.konan/dependencies/apple-llvm-20200714-macos-x64-essentials/bin/ld.lld invocation reported errors
The /Users/brunojcm/.konan/dependencies/apple-llvm-20200714-macos-x64-essentials/bin/ld.lld command returned non-zero exit code: 1.
output:
ld.lld: error: undefined symbol: curl_global_init
>>> referenced by out
>>> /private/var/folders/0b/kkc0ysjs6m357dwbhcj2n_n00000gn/T/konan_temp17233035539013036940/result.o:(libcurl_curl_global_init_wrapper23)
>>> did you mean: _curl_global_init
>>> defined in: /usr/local/opt/curl/lib/libcurl.a
ld.lld: error: undefined symbol: curl_slist_append
>>> referenced by out
>>> /private/var/folders/0b/kkc0ysjs6m357dwbhcj2n_n00000gn/T/konan_temp17233035539013036940/result.o:(libcurl_curl_slist_append_wrapper27)
>>> did you mean: _curl_slist_append
>>> defined in: /usr/local/opt/curl/lib/libcurl.a
ld.lld: error: undefined symbol: curl_slist_free_all
>>> referenced by out
>>> /private/var/folders/0b/kkc0ysjs6m357dwbhcj2n_n00000gn/T/konan_temp17233035539013036940/result.o:(libcurl_curl_slist_free_all_wrapper28)
ld.lld: error: undefined symbol: curl_easy_strerror
>>> referenced by out
>>> /private/var/folders/0b/kkc0ysjs6m357dwbhcj2n_n00000gn/T/konan_temp17233035539013036940/result.o:(libcurl_curl_easy_strerror_wrapper33)
ld.lld: error: undefined symbol: curl_easy_pause
>>> referenced by out
>>> /private/var/folders/0b/kkc0ysjs6m357dwbhcj2n_n00000gn/T/konan_temp17233035539013036940/result.o:(libcurl_curl_easy_pause_wrapper35)
ld.lld: error: undefined symbol: curl_easy_init
>>> referenced by out
>>> /private/var/folders/0b/kkc0ysjs6m357dwbhcj2n_n00000gn/T/konan_temp17233035539013036940/result.o:(libcurl_curl_easy_init_wrapper36)
ld.lld: error: undefined symbol: curl_easy_cleanup
>>> referenced by out
>>> /private/var/folders/0b/kkc0ysjs6m357dwbhcj2n_n00000gn/T/konan_temp17233035539013036940/result.o:(libcurl_curl_easy_cleanup_wrapper38)
ld.lld: error: undefined symbol: curl_multi_init
>>> referenced by out
>>> /private/var/folders/0b/kkc0ysjs6m357dwbhcj2n_n00000gn/T/konan_temp17233035539013036940/result.o:(libcurl_curl_multi_init_wrapper44)
ld.lld: error: undefined symbol: curl_multi_add_handle
>>> referenced by out
>>> /private/var/folders/0b/kkc0ysjs6m357dwbhcj2n_n00000gn/T/konan_temp17233035539013036940/result.o:(libcurl_curl_multi_add_handle_wrapper45)
ld.lld: error: undefined symbol: curl_multi_remove_handle
>>> referenced by out
>>> /private/var/folders/0b/kkc0ysjs6m357dwbhcj2n_n00000gn/T/konan_temp17233035539013036940/result.o:(libcurl_curl_multi_remove_handle_wrapper46)
ld.lld: error: undefined symbol: curl_multi_poll
>>> referenced by out
>>> /private/var/folders/0b/kkc0ysjs6m357dwbhcj2n_n00000gn/T/konan_temp17233035539013036940/result.o:(libcurl_curl_multi_poll_wrapper49)
ld.lld: error: undefined symbol: curl_multi_wakeup
>>> referenced by out
>>> /private/var/folders/0b/kkc0ysjs6m357dwbhcj2n_n00000gn/T/konan_temp17233035539013036940/result.o:(libcurl_curl_multi_wakeup_wrapper50)
ld.lld: error: undefined symbol: curl_multi_perform
>>> referenced by out
>>> /private/var/folders/0b/kkc0ysjs6m357dwbhcj2n_n00000gn/T/konan_temp17233035539013036940/result.o:(libcurl_curl_multi_perform_wrapper51)
ld.lld: error: undefined symbol: curl_multi_cleanup
>>> referenced by out
>>> /private/var/folders/0b/kkc0ysjs6m357dwbhcj2n_n00000gn/T/konan_temp17233035539013036940/result.o:(libcurl_curl_multi_cleanup_wrapper52)
ld.lld: error: undefined symbol: curl_multi_info_read
>>> referenced by out
>>> /private/var/folders/0b/kkc0ysjs6m357dwbhcj2n_n00000gn/T/konan_temp17233035539013036940/result.o:(libcurl_curl_multi_info_read_wrapper53)
ld.lld: error: undefined symbol: curl_multi_strerror
>>> referenced by out
>>> /private/var/folders/0b/kkc0ysjs6m357dwbhcj2n_n00000gn/T/konan_temp17233035539013036940/result.o:(libcurl_curl_multi_strerror_wrapper54)
ld.lld: error: undefined symbol: curl_easy_setopt
>>> referenced by out
>>> /private/var/folders/0b/kkc0ysjs6m357dwbhcj2n_n00000gn/T/konan_temp17233035539013036940/result.o:(knifunptr_libcurl39_curl_easy_setopt)
ld.lld: error: undefined symbol: curl_easy_getinfo
>>> referenced by out
>>> /private/var/folders/0b/kkc0ysjs6m357dwbhcj2n_n00000gn/T/konan_temp17233035539013036940/result.o:(knifunptr_libcurl42_curl_easy_getinfo)
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':gameplay-app:linkDebugExecutableLinuxX64'.
> Compilation finished with errors
Is this an issue with Ktor?
Apache5 client: Upgrade HttpClient to 5.4
To make Ktor Apache5 client compatible with HttpClient 5.4, we should adjust the nullability of some overridden methods and add support for configuring hostname verification policy.
Access to the configuration options of an HttpClient plugin to tweak or wrap them with additional logic
The use case is passing a preconfigured HttpClient instance to the SDK client factory to construct an SDK client instance for some server (for instance, Space SDK client for accessing Space HTTP API). In Space SDK, we use Ktor HttpClient to make requests to Space HTTP API and allow SDK users to customize the underlying HttpClient according to their needs:
public fun ktorClientForSpace(block: HttpClientConfig<*>.() -> Unit = {}): HttpClient = HttpClient {
block()
configureKtorClientForSpace()
}
This seems to be a good idea, because the SDK client is basically just a wrapper around HttpClient, and we'd better rely on Ktor's own customization options rather than hide them completely behind the SDK API.
At the same time, the SDK handles request authentication and, specifically, refreshing access tokens when they expire and retrying the request with the newly obtained token. This refresh & retry does not rely on the HttpRequestRetry plugin. But it turns out this token refresh doesn't work well with an HttpRequestRetry plugin configured by the SDK user. When the retry intervals are long enough, the access token expires before the retry count is exceeded. The SDK then performs an access token refresh and retries the request transparently, but the request with the new token is seen by the HttpRequestRetry plugin as a completely new one, so the retry count is reset. The result is an indefinite loop of retries with intermittent access token updates.
To fix this issue, I considered using the HttpRequestRetry plugin for token refresh as well. But the problem is that in the SDK code I can neither tweak nor access the plugin configuration specified by the caller, because HttpRequestRetry.Configuration is write-only due to the visibility restrictions of its members:
public fun HttpClientConfig<*>.configureKtorClientForSpace(configureRetry: HttpRequestRetry.Configuration.() -> Unit = {}) {
val retryWithCustomConfig = HttpRequestRetry.prepare(configureRetry)
install(HttpRequestRetry) {
retryIf {
// DOESN'T WORK - no access to retryWithCustomConfig.retryIf
// retryWithCustomConfig.retryIf || authentication token has expired
}
modifyRequest {
// DOESN'T WORK - no access to retryWithCustomConfig.modifyRequest
// if authentication token has expired, refresh the token first
// then apply retryWithCustomConfig.modifyRequest
}
}
}
The only idea I currently have is to re-implement some basic subset of HttpRequestRetry in the Space SDK code itself and strongly advise against using the HttpRequestRetry plugin for the underlying HttpClient. But this is not a good solution, for two reasons. First, we would need to mirror part of the HttpRequestRetry plugin API in the Space SDK client API. Second, we have no means to ensure that the caller doesn't still install the HttpRequestRetry plugin into the client: we can only put a warning into the Javadoc or SDK documentation, but we cannot do anything at compile or run time to prevent that.
Logging: Format log like OkHttp client does
Consider formatting the request/response log the same way the okhttp3 logger does: it's a de-facto standard and is more readable and compact.
Support ARM target in Ktor client with Kotlin/Native and Curl
Windows: undefined symbols in linker when ktor-client-curl is used
I've just added ktor-client-curl version 2.0.1 to my Windows Kotlin/Native target.
However, it fails to compile on GitHub Actions, with errors like these:
> Task :client:linkDebugTestMingwX64
w: duplicate library name: org.jetbrains.kotlinx:atomicfu
w: duplicate library name: org.jetbrains.kotlinx:atomicfu-cinterop-interop
e: C:\Users\runneradmin\.konan\dependencies\llvm-11.1.0-windows-x64-essentials/bin/clang++ invocation reported errors
The C:\Users\runneradmin\.konan\dependencies\llvm-11.1.0-windows-x64-essentials/bin/clang++ command returned non-zero exit code: 1.
output:
lld-link: error: undefined symbol: __mingw_init_ehandler
>>> referenced by E:/mingwbuild/mingw-w64-crt-git/src/mingw-w64/mingw-w64-crt/crt\crtexe.c:288
>>> C:\Users\runneradmin\.konan\dependencies\msys2-mingw-w64-x86_64-1\x86_64-w64-mingw32\lib\crt2.o:(__tmainCRTStartup)
lld-link: error: undefined symbol: __security_init_cookie
>>> referenced by E:/mingwbuild/mingw-w64-crt-git/src/mingw-w64/mingw-w64-crt/crt\crtexe.c:194
>>> C:\Users\runneradmin\.konan\dependencies\msys2-mingw-w64-x86_64-1\x86_64-w64-mingw32\lib\crt2.o:(.l_startw)
>>> referenced by E:/mingwbuild/mingw-w64-crt-git/src/mingw-w64/mingw-w64-crt/crt\crtexe.c:222
>>> C:\Users\runneradmin\.konan\dependencies\msys2-mingw-w64-x86_64-1\x86_64-w64-mingw32\lib\crt2.o:(.l_start)
lld-link: error: undefined symbol: mingw_app_type
>>> referenced by C:\Users\runneradmin\.konan\dependencies\msys2-mingw-w64-x86_64-1\x86_64-w64-mingw32\lib\crt2.o:(.refptr.mingw_app_type)
lld-link: error: undefined symbol: mingw_initcharmax
>>> referenced by C:\Users\runneradmin\.konan\dependencies\msys2-mingw-w64-x86_64-1\x86_64-w64-mingw32\lib\crt2.o:(.refptr.mingw_initcharmax)
lld-link: error: undefined symbol: mingw_initltssuo_force
>>> referenced by C:\Users\runneradmin\.konan\dependencies\msys2-mingw-w64-x86_64-1\x86_64-w64-mingw32\lib\crt2.o:(.refptr.mingw_initltssuo_force)
lld-link: error: undefined symbol: mingw_initltsdyn_force
>>> referenced by C:\Users\runneradmin\.konan\dependencies\msys2-mingw-w64-x86_64-1\x86_64-w64-mingw32\lib\crt2.o:(.refptr.mingw_initltsdyn_force)
lld-link: error: undefined symbol: mingw_initltsdrot_force
>>> referenced by C:\Users\runneradmin\.konan\dependencies\msys2-mingw-w64-x86_64-1\x86_64-w64-mingw32\lib\crt2.o:(.refptr.mingw_initltsdrot_force)
clang++: error: linker command failed with exit code 1 (use -v to see invocation)
> Task :client:linkDebugTestMingwX64 FAILED
I've confirmed that libcurl is installed (e.g. https://github.com/batect/docker-client/runs/6404281016?check_suite_focus=true#step:12:24).
SaveBodyPlugin: UninitializedPropertyAccessException when reading response body within receivePipeline
Plugin:
import io.ktor.client.HttpClient
import io.ktor.client.engine.cio.CIO
import io.ktor.client.plugins.HttpClientPlugin
import io.ktor.client.plugins.contentnegotiation.ContentNegotiation
import io.ktor.client.request.HttpRequestPipeline
import io.ktor.client.request.get
import io.ktor.client.statement.HttpReceivePipeline
import io.ktor.client.statement.HttpResponse
import io.ktor.client.statement.bodyAsText
import io.ktor.serialization.kotlinx.json.json
import io.ktor.util.AttributeKey
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.launch
import kotlinx.serialization.json.Json
import org.slf4j.LoggerFactory
suspend fun main() = coroutineScope {
val logger = LoggerFactory.getLogger("repro")
val client = HttpClient(CIO) {
install(MyRequestLogger) {
saveFunction = {
launch(Dispatchers.Default) {
logger.info("Received response in plugin: {}", it.bodyAsText())
}
}
}
install(ContentNegotiation) {
json(json = Json {
ignoreUnknownKeys = true
})
}
}
while (true) {
client.get("https://jsonplaceholder.typicode.com/todos/1")
}
}
class MyRequestLogger(
private val config: Config,
) {
class Config {
var saveFunction: suspend (HttpResponse) -> Unit = {}
}
companion object Plugin : HttpClientPlugin<Config, MyRequestLogger> {
override val key = AttributeKey<MyRequestLogger>("HttpRequestLogger")
override fun prepare(block: Config.() -> Unit): MyRequestLogger {
val config = Config().apply(block)
return MyRequestLogger(config)
}
override fun install(plugin: MyRequestLogger, scope: HttpClient) {
scope.requestPipeline.intercept(HttpRequestPipeline.Before) {
proceed()
}
scope.receivePipeline.intercept(HttpReceivePipeline.After) { response ->
plugin.config.saveFunction(response)
proceedWith(response)
}
}
}
}
Exception:
Exception in thread "main" java.io.IOException: lateinit property writerJob has not been initialized
at io.ktor.utils.io.CloseToken.getCause(CloseToken.kt:37)
at io.ktor.utils.io.ByteChannel.cancel(ByteChannel.kt:136)
at io.ktor.utils.io.ByteWriteChannelOperationsKt$writer$job$1.invokeSuspend(ByteWriteChannelOperations.kt:150)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith$$$capture(ContinuationImpl.kt:33)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt)
at --- Async.Stack.Trace --- (captured by IntelliJ IDEA debugger)
at kotlinx.coroutines.debug.internal.DebugProbesImpl$CoroutineOwner.<init>(DebugProbesImpl.kt:531)
at kotlinx.coroutines.debug.internal.DebugProbesImpl.createOwner(DebugProbesImpl.kt:510)
at kotlinx.coroutines.debug.internal.DebugProbesImpl.probeCoroutineCreated$kotlinx_coroutines_core(DebugProbesImpl.kt:497)
at kotlin.coroutines.jvm.internal.DebugProbesKt.probeCoroutineCreated(DebugProbes.kt:7)
at kotlin.coroutines.intrinsics.IntrinsicsKt__IntrinsicsJvmKt.createCoroutineUnintercepted(IntrinsicsJvm.kt:161)
at kotlinx.coroutines.intrinsics.CancellableKt.startCoroutineCancellable(Cancellable.kt:26)
at kotlinx.coroutines.CoroutineStart.invoke(CoroutineStart.kt:358)
at kotlinx.coroutines.AbstractCoroutine.start(AbstractCoroutine.kt:124)
at kotlinx.coroutines.BuildersKt__Builders_commonKt.launch(Builders.common.kt:52)
at kotlinx.coroutines.BuildersKt.launch(Unknown Source)
at kotlinx.coroutines.BuildersKt__Builders_commonKt.launch$default(Builders.common.kt:43)
at kotlinx.coroutines.BuildersKt.launch$default(Unknown Source)
at io.ktor.utils.io.ByteWriteChannelOperationsKt.writer(ByteWriteChannelOperations.kt:139)
at io.ktor.utils.io.ByteWriteChannelOperationsKt.writer(ByteWriteChannelOperations.kt:131)
at io.ktor.utils.io.ByteWriteChannelOperationsKt.writer$default(ByteWriteChannelOperations.kt:126)
at io.ktor.client.plugins.internal.ByteChannelReplay.replay(ByteChannelReplay.kt:32)
at io.ktor.client.plugins.DoubleReceivePluginKt$SaveBodyPlugin$2$1.invokeSuspend$lambda$0(DoubleReceivePlugin.kt:70)
at io.ktor.client.plugins.observer.DelegatedResponse.getRawContent(DelegatedCall.kt:78)
at io.ktor.client.call.HttpClientCall.getResponseContent$suspendImpl(HttpClientCall.kt:67)
at io.ktor.client.call.HttpClientCall.getResponseContent(HttpClientCall.kt)
at io.ktor.client.call.HttpClientCall.bodyNullable(HttpClientCall.kt:84)
at io.ktor.client.statement.HttpResponseKt.bodyAsText(HttpResponse.kt:123)
at io.ktor.client.statement.HttpResponseKt.bodyAsText$default(HttpResponse.kt:102)
at io.heapy.kotbot.KotbotReproKt$main$2$client$1$1$1$1.invokeSuspend(KotbotRepro.kt:26)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith$$$capture(ContinuationImpl.kt:33)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt)
at --- Async.Stack.Trace --- (captured by IntelliJ IDEA debugger)
at kotlinx.coroutines.debug.internal.DebugProbesImpl$CoroutineOwner.<init>(DebugProbesImpl.kt:531)
at kotlinx.coroutines.debug.internal.DebugProbesImpl.createOwner(DebugProbesImpl.kt:510)
at kotlinx.coroutines.debug.internal.DebugProbesImpl.probeCoroutineCreated$kotlinx_coroutines_core(DebugProbesImpl.kt:497)
at kotlin.coroutines.jvm.internal.DebugProbesKt.probeCoroutineCreated(DebugProbes.kt:7)
at kotlin.coroutines.intrinsics.IntrinsicsKt__IntrinsicsJvmKt.createCoroutineUnintercepted(IntrinsicsJvm.kt:161)
at kotlinx.coroutines.intrinsics.CancellableKt.startCoroutineCancellable(Cancellable.kt:26)
at kotlinx.coroutines.CoroutineStart.invoke(CoroutineStart.kt:358)
at kotlinx.coroutines.AbstractCoroutine.start(AbstractCoroutine.kt:124)
at kotlinx.coroutines.BuildersKt__Builders_commonKt.launch(Builders.common.kt:52)
at kotlinx.coroutines.BuildersKt.launch(Unknown Source)
at kotlinx.coroutines.BuildersKt__Builders_commonKt.launch$default(Builders.common.kt:43)
at kotlinx.coroutines.BuildersKt.launch$default(Unknown Source)
at io.heapy.kotbot.KotbotReproKt$main$2$client$1$1$1.invokeSuspend(KotbotRepro.kt:25)
at io.heapy.kotbot.KotbotReproKt$main$2$client$1$1$1.invoke(KotbotRepro.kt)
at io.heapy.kotbot.KotbotReproKt$main$2$client$1$1$1.invoke(KotbotRepro.kt)
at io.heapy.kotbot.MyRequestLogger$Plugin$install$2.invokeSuspend(KotbotRepro.kt:63)
at io.heapy.kotbot.MyRequestLogger$Plugin$install$2.invoke(KotbotRepro.kt)
at io.heapy.kotbot.MyRequestLogger$Plugin$install$2.invoke(KotbotRepro.kt)
at io.ktor.util.pipeline.DebugPipelineContext.proceedLoop(DebugPipelineContext.kt:79)
at io.ktor.util.pipeline.DebugPipelineContext.proceed(DebugPipelineContext.kt:57)
at io.ktor.util.pipeline.DebugPipelineContext.proceedWith(DebugPipelineContext.kt:42)
at io.ktor.client.plugins.DoubleReceivePluginKt$SaveBodyPlugin$2$1.invokeSuspend(DoubleReceivePlugin.kt:72)
at io.ktor.client.plugins.DoubleReceivePluginKt$SaveBodyPlugin$2$1.invoke(DoubleReceivePlugin.kt)
at io.ktor.client.plugins.DoubleReceivePluginKt$SaveBodyPlugin$2$1.invoke(DoubleReceivePlugin.kt)
at io.ktor.util.pipeline.DebugPipelineContext.proceedLoop(DebugPipelineContext.kt:79)
at io.ktor.util.pipeline.DebugPipelineContext.proceed(DebugPipelineContext.kt:57)
at io.ktor.util.pipeline.DebugPipelineContext.execute$ktor_utils(DebugPipelineContext.kt:63)
at io.ktor.util.pipeline.Pipeline.execute(Pipeline.kt:86)
at io.ktor.client.HttpClient$2.invokeSuspend(HttpClient.kt:1345)
at io.ktor.client.HttpClient$2.invoke(HttpClient.kt)
at io.ktor.client.HttpClient$2.invoke(HttpClient.kt)
at io.ktor.util.pipeline.DebugPipelineContext.proceedLoop(DebugPipelineContext.kt:79)
at io.ktor.util.pipeline.DebugPipelineContext.proceed(DebugPipelineContext.kt:57)
at io.ktor.util.pipeline.DebugPipelineContext.proceedWith(DebugPipelineContext.kt:42)
at io.ktor.client.engine.HttpClientEngine$install$1.invokeSuspend(HttpClientEngine.kt:82)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith$$$capture(ContinuationImpl.kt:33)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt)
at kotlinx.coroutines.DispatchedTaskKt.resume(DispatchedTask.kt:165)
at kotlinx.coroutines.DispatchedTaskKt.dispatch(DispatchedTask.kt:154)
at kotlinx.coroutines.CancellableContinuationImpl.dispatchResume(CancellableContinuationImpl.kt:470)
at kotlinx.coroutines.CancellableContinuationImpl.resumeImpl$kotlinx_coroutines_core(CancellableContinuationImpl.kt:504)
at kotlinx.coroutines.CancellableContinuationImpl.resumeImpl$kotlinx_coroutines_core$default(CancellableContinuationImpl.kt:493)
at kotlinx.coroutines.CancellableContinuationImpl.resumeWith(CancellableContinuationImpl.kt:359)
at kotlinx.coroutines.ResumeAwaitOnCompletion.invoke(JobSupport.kt:1557)
at kotlinx.coroutines.JobSupport.notifyCompletion(JobSupport.kt:1625)
at kotlinx.coroutines.JobSupport.completeStateFinalization(JobSupport.kt:316)
at kotlinx.coroutines.JobSupport.finalizeFinishingState(JobSupport.kt:233)
at kotlinx.coroutines.JobSupport.tryMakeCompletingSlowPath(JobSupport.kt:946)
at kotlinx.coroutines.JobSupport.tryMakeCompleting(JobSupport.kt:894)
at kotlinx.coroutines.JobSupport.makeCompletingOnce$kotlinx_coroutines_core(JobSupport.kt:859)
at kotlinx.coroutines.AbstractCoroutine.resumeWith(AbstractCoroutine.kt:98)
at kotlinx.coroutines.debug.internal.DebugProbesImpl$CoroutineOwner.resumeWith(DebugProbesImpl.kt:545)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith$$$capture(ContinuationImpl.kt:46)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt)
at kotlinx.coroutines.UndispatchedCoroutine.afterResume(CoroutineContext.kt:266)
at kotlinx.coroutines.AbstractCoroutine.resumeWith(AbstractCoroutine.kt:100)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith$$$capture(ContinuationImpl.kt:46)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:101)
at kotlinx.coroutines.internal.LimitedDispatcher$Worker.run(LimitedDispatcher.kt:113)
at kotlinx.coroutines.scheduling.TaskImpl.run(Tasks.kt:89)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:589)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:823)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:720)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:707)
Caused by: java.io.IOException: lateinit property writerJob has not been initialized
at io.ktor.utils.io.CloseToken.<init>(CloseToken.kt:27)
at io.ktor.utils.io.ByteChannel.cancel(ByteChannel.kt:134)
... 96 more
Caused by: kotlin.UninitializedPropertyAccessException: lateinit property writerJob has not been initialized
at io.ktor.client.plugins.internal.ByteChannelReplay$CopyFromSourceTask.getWriterJob(ByteChannelReplay.kt:47)
at io.ktor.client.plugins.internal.ByteChannelReplay$CopyFromSourceTask.awaitImpatiently(ByteChannelReplay.kt:84)
at io.ktor.client.plugins.internal.ByteChannelReplay$replay$1.invokeSuspend(ByteChannelReplay.kt:33)
at io.ktor.client.plugins.internal.ByteChannelReplay$replay$1.invoke(ByteChannelReplay.kt)
at io.ktor.client.plugins.internal.ByteChannelReplay$replay$1.invoke(ByteChannelReplay.kt)
at io.ktor.utils.io.ByteWriteChannelOperationsKt$writer$job$1.invokeSuspend(ByteWriteChannelOperations.kt:142)
... 95 more
ContentNegotiation client plugin: no way to opt out of Accept on a per-request basis
The Ktor ContentNegotiation client plugin always adds the Accept header for all registered content types.
This makes it difficult to opt out on a per-request basis. For example, let's say a client has json installed. The server has an endpoint which can return either application/pdf or application/json, depending on the Accept header. We want to use the client to obtain application/pdf for one particular request.
Out of the box this appears to be impossible to control deterministically, because application/json is always added to the Accept header by the ContentNegotiation plugin, with no q parameter, which implicitly means q=1.0. Therefore, specifying accept(ContentType.Application.Pdf) in the RequestBuilder ends up producing an Accept header like this:
Accept: application/json,application/pdf
and now the server is free to send a JSON response.
In addition, it is not possible to override the automatically added value with accept(ContentType.Application.Json).withParameter("q", "0.8") either, because the logic checks for an exact match, and so ends up sending:
Accept: application/json,application/json; q=0.8,application/pdf
Possible Solutions
I would recommend that the ContentNegotiation plugin add the registered content types with a lower q setting, e.g. 0.8, by default (though this could be made configurable; see the sketch after this section). That way, if the user does not specify anything for accept explicitly, the registered value applies; however, if the user does specify accept explicitly, that value will be preferred by the server over the automatically set value.
Alternatively, a request attribute could be used to opt out of the automatically added Accept header for particular requests, perhaps by exposing a new RequestBuilder extension called acceptOnly, which, if specified by the user, would add the user's Accept specification and opt out of the registered specifications.
In addition to the above, the gate for adding the registered content type should match the content type ignoring parameters, so that if the user explicitly adds the registered content type with a lower q value or other parameters, that change is not ignored by the ContentNegotiation plugin.
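To make the first proposal concrete, here is a minimal sketch of how a registered type could be rendered with a lower q value so that an explicitly requested type (implicit q=1.0) wins; the helper name and the 0.8 default are assumptions, not existing Ktor API:
import io.ktor.http.*

// Hypothetical helper: render a registered content type with a lower default q value.
fun registeredAcceptValue(registered: ContentType, q: Double = 0.8): String =
    registered.withParameter("q", q.toString()).toString()

// registeredAcceptValue(ContentType.Application.Json) == "application/json; q=0.8"
// A request that also calls accept(ContentType.Application.Pdf) would then send
// something like: Accept: application/json; q=0.8,application/pdf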
Workaround
Perhaps there is an easier workaround, but I ended up creating a plugin to modify the request headers:
import io.ktor.client.plugins.api.*
import io.ktor.client.request.*
import io.ktor.util.*
val RequestModOperationKey = AttributeKey<(HttpRequestBuilder, Any) -> Unit>("request-mod-operation-key")
val RequestModPlugin = createClientPlugin("RequestModPlugin") {
on(SendingRequest) { request, content ->
request.attributes.getOrNull(RequestModOperationKey)?.invoke(request, content)
}
}
fun HttpRequestBuilder.modRequest(block: (HttpRequestBuilder, Any) -> Unit) {
setAttributes {
put(RequestModOperationKey, block)
}
}
and used like this:
httpClient.request {
// workaround ContentNegotiation plugin always inserting the Accept application/json header
// at implicit q=1.0 -- so the server returns JSON instead of PDF
modRequest { request, _ ->
request.headers.remove(HttpHeaders.Accept)
request.accept(ContentType.Application.Pdf)
}
}
Support receiving multipart data with Ktor client
Hey,
please add support for receiving multipart data with the Ktor client. You can send multipart data with the client and receive and send multipart data with the server. So this is the last part missing for complete multipart data support.
Thanks!
Client CIO engine support for wasm-js and js
After implementation of KTOR-6004, it's possible to commonize ktor-client-cio to work for the wasm-js and js targets.
Core
Write readable name of the application to the logs
2024-12-02 15:39:16.843 [main] INFO io.ktor.server.Application - Application started: io.ktor.server.application.Application@71391b3f
Add operator contains to ContentType objects
Currently, checking if a media type is a subtype of some well-known type is not very convenient:
val contentType = "Application/JSON"
// 1. Using `startsWith` (It is easy to forget `ignoreCase`)
contentType.startsWith("application/", ignoreCase = true)
// 2. Using `match(Any)` (We might not really need full parsing)
ContentType.parse(contentType).match(ContentType.Application.Any)
It would be great to hide this logic behind operator fun contains declared for ContentType objects:
contentType in ContentType.Application
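As a rough sketch of what such an operator could look like (the receiver object exists in Ktor, but the operator itself is the proposal and only an assumption here):
import io.ktor.http.*

// Hypothetical operator so that `"Application/JSON" in ContentType.Application` works;
// it reuses the existing parse/match machinery under the hood.
operator fun ContentType.Application.contains(value: String): Boolean =
    ContentType.parse(value).match(ContentType.Application.Any)

fun main() {
    println("Application/JSON" in ContentType.Application) // prints: true
}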
IOException: Fail to select descriptor for ACCEPT
Some of my users receive IOException: Fail to select descriptor 134 for ACCEPT. It seems this comes from this line.
https://kotlinlang.slack.com/archives/C0A974TJ9/p1722533815088519
Fail to parse url: file:/path/to/file.txt
Trying to create a Url from the string file:/path/to/file.txt fails unexpectedly with:
Url("file:/path/to/file.txt")
io.ktor.http.URLParserException: Fail to parse url: file:/path/to/file.txt
at io.ktor.http.URLParserKt.takeFrom(URLParser.kt:21)
at io.ktor.http.URLUtilsKt.URLBuilder(URLUtils.kt:25)
at io.ktor.http.URLUtilsKt.Url(URLUtils.kt:13)
Changing the URL string to file:///path/to/file.txt succeeds.
It looks like the Ktor URL parser is not correct, as this syntax is described in Section 2 of the RFC and shown in the examples in Appendix B of the RFC.
This form of simplified file: URL is common. For example, on the JVM platform many JDK APIs return file: URLs in this simplified syntax.
Encountered in https://github.com/Kamel-Media/Kamel/issues/88
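Until the parser accepts this form, normalizing the string before calling Url() is a possible workaround; this helper is our own sketch, not part of Ktor:
import io.ktor.http.*

// Workaround sketch: turn "file:/path" into the "file:///path" form the parser accepts.
fun parseFileUrl(raw: String): Url {
    val normalized =
        if (raw.startsWith("file:/") && !raw.startsWith("file://"))
            "file://" + raw.removePrefix("file:")
        else
            raw
    return Url(normalized)
}

// parseFileUrl("file:/path/to/file.txt") now parses successfully.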
Add media type for Yaml
Currently, Ktor does not provide a default MIME type for YAML. According to RFC 9512, the MIME type application/yaml can be used for YAML payloads. In practice, several variants are used, including application/yaml, application/x-yaml, text/yaml, and text/x-yaml.
Proposal:
1. Add application/yaml as a predefined constant ContentType.Application.Yaml in Ktor.
2. Include the deprecated MIME type aliases (application/x-yaml, text/yaml, text/x-yaml) in io.ktor.http.FileContentType for better compatibility with existing applications.
3. Update swagger to serve documentation.yaml as application/yaml, similar to https://petstore.swagger.io/v2/swagger.yaml
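In the meantime, the constant can be approximated in application code; the extension property below is only a sketch of the proposed name, not existing Ktor API:
import io.ktor.http.*

// Sketch: expose the proposed constant as an extension property on the existing
// ContentType.Application object.
val ContentType.Application.Yaml: ContentType
    get() = ContentType("application", "yaml")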
Support conversion between byte channel interfaces and kotlinx-io primitives
The following conversion methods are missing:
- [x] OutputStream -> ByteWriteChannel
- [x] RawSink -> ByteWriteChannel
- [x] Sink -> ByteWriteChannel
- [x] ByteReadChannel -> RawSource and maybe toSource
- [x] ByteWriteChannel -> RawSink and maybe toSink
Uncaught cannot write to a channel errors from ws-pinger
We migrated a server (websockets) app from Ktor 2 to Ktor 3, and since then we've been getting exceptions like the following logged rather frequently:
java.io.IOException: Cannot write to a channel
at io.ktor.utils.io.CloseToken.getCause(CloseToken.kt:37)
at io.ktor.utils.io.ByteChannel.cancel(ByteChannel.kt:135)
at io.ktor.server.netty.cio.NettyHttpResponsePipeline.respondWithFailure(NettyHttpResponsePipeline.kt:106)
at io.ktor.server.netty.cio.NettyHttpResponsePipeline.respondWithBodyAndTrailerMessage(NettyHttpResponsePipeline.kt:253)
at io.ktor.server.netty.cio.NettyHttpResponsePipeline.access$respondWithBodyAndTrailerMessage(NettyHttpResponsePipeline.kt:24)
at io.ktor.server.netty.cio.NettyHttpResponsePipeline$respondWithBodyAndTrailerMessage$1.invokeSuspend(NettyHttpResponsePipeline.kt)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:99)
at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:173)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:166)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:569)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:840)
Suppressed: kotlinx.coroutines.internal.DiagnosticCoroutineContextException: [CoroutineName(ws-pinger), StandaloneCoroutine{Cancelling}@3b00c365, io.ktor.server.netty.EventLoopGroupProxy@7d87fde9]
Caused by: java.io.IOException: Cannot write to a channel
at io.ktor.utils.io.CloseToken.<init>(CloseToken.kt:27)
at io.ktor.utils.io.ByteChannel.cancel(ByteChannel.kt:133)
... 14 more
Caused by: io.ktor.util.cio.ChannelWriteException: Cannot write to a channel
at io.ktor.server.netty.cio.NettyHttpResponsePipeline.respondWithFailure(NettyHttpResponsePipeline.kt:100)
... 13 more
Caused by: io.netty.channel.StacklessClosedChannelException
at io.netty.channel.AbstractChannel.close(ChannelPromise)(Unknown Source)
This is skipping all of our existing error handlers and getting logged in an uncaught exception handler registered with the threads. The exception mentions ws-pinger, so it seems like this is an issue with the Ktor WebSocket pinger. Is this a bug? Is there some way to catch and gracefully handle this error in the application code?
ktor-server-core: Test files are part of the distribution code
The file ktorio/ktor/ktor-server/ktor-server-core/jvm/test/io/ktor/tests/config/ConfigJvmTest.kt refers to 2 custom test files, but they are in the resources folder instead of the testResources folder, making them part of the distribution, which can cause issues with users' own config files.
ktor-server/ktor-server-core/jvm/resources/custom.config.conf
ktor-server/ktor-server-core/jvm/resources/custom.config.yaml
ByteWriteChannel is missing writeFloat()/readFloat()
Up until 2.3, the ByteWriteChannel API had writeFloat(). It is now missing in 3.0. I used it extensively in my app for a one-off binary protocol in Ktor 2.3.
There is no 3.0 migration documentation regarding this removal/deprecation, nor are there documentation entries suggesting a workaround.
As workarounds, I can write to a buffer (an excess copy), use Float.toRawBits() to try mimicking the old behavior (undesirable and not stable), switch to a different socket framework, or try to ship updates to use a different serialization format for the protocol (expensive). None of these are very appealing.
What was the rationale for this removal, and what is the official recommended workaround for this removed functionality?
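As a stop-gap, the old behaviour can be approximated on top of the integer read/write extensions; this is a sketch that assumes the writeInt/readInt extensions in io.ktor.utils.io are still available in 3.x, and the *Compat names are our own:
import io.ktor.utils.io.*

// Workaround sketch: encode the float as its raw IEEE-754 bits, which is what writeFloat() did.
suspend fun ByteWriteChannel.writeFloatCompat(value: Float) {
    writeInt(value.toRawBits())
}

suspend fun ByteReadChannel.readFloatCompat(): Float =
    Float.fromBits(readInt())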
UrlBuilder: Support telephone scheme
Url("tel:1-408-555-5555").toString()
Expected
"tel:1-408-555-5555"
Received
tel://localhost/1-408-555-5555
UDPSocketBuilder missing `bind` overload with `hostName` and `port`
https://api.ktor.io/ktor-network/io.ktor.network.sockets/-tcp-socket-builder/index.html has -
suspend fun bind(hostname: String = "0.0.0.0", port: Int = 0, configure: SocketOptions.AcceptorOptions.() -> Unit = {}): ServerSocket
https://api.ktor.io/ktor-network/io.ktor.network.sockets/-u-d-p-socket-builder/index.html does not
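For reference, the missing convenience overload can be defined as an extension in user code today; this sketch mirrors the TCP signature and assumes the existing bind(SocketAddress?, configure) member:
import io.ktor.network.sockets.*

// Sketch of the requested overload, delegating to the existing bind that takes a SocketAddress.
suspend fun UDPSocketBuilder.bind(
    hostname: String = "0.0.0.0",
    port: Int = 0,
    configure: SocketOptions.UDPSocketOptions.() -> Unit = {}
): BoundDatagramSocket = bind(InetSocketAddress(hostname, port), configure)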
Make Url class @Serializable and JVM Serializable
In our project we had to define our own UrlSerializer. It would be much nicer to have this in the Ktor library itself, so it works out of the box (similar to how Cookie was recently extended).
Also, types like Url and Cookie should be java.io.Serializable. Otherwise Android crashes when using those types as e.g. screen arguments. This happens very quickly when Url is used indirectly as part of a data class where we wanted type safety.
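For context, this is roughly the kind of serializer we had to write ourselves; a minimal sketch, not part of Ktor:
import io.ktor.http.*
import kotlinx.serialization.KSerializer
import kotlinx.serialization.descriptors.PrimitiveKind
import kotlinx.serialization.descriptors.PrimitiveSerialDescriptor
import kotlinx.serialization.descriptors.SerialDescriptor
import kotlinx.serialization.encoding.Decoder
import kotlinx.serialization.encoding.Encoder

// Minimal string-based kotlinx.serialization serializer for io.ktor.http.Url.
object UrlSerializer : KSerializer<Url> {
    override val descriptor: SerialDescriptor =
        PrimitiveSerialDescriptor("io.ktor.http.Url", PrimitiveKind.STRING)

    override fun serialize(encoder: Encoder, value: Url) {
        encoder.encodeString(value.toString())
    }

    override fun deserialize(decoder: Decoder): Url =
        Url(decoder.decodeString())
}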
Support NodeJs target for ktor-network
As I got no response on KTOR-4950, I am making a new issue.
Currently, ktor-network is not working on Node.js, but it would be great to have low-level socket APIs on Node.js.
Improve parsing of supported media types (MIME types)
The current format has downsides when there are multiple extensions associated with the same media type:
.htmls,text/html
.html,text/html
.htm,text/html
.htx,text/html
For such cases we parse and create a new instance of ContentType for each extension. It would be better to reorganize this list to use the media type as a key corresponding to multiple values:
text/html,.htmls .html .htm .htx
Or without dots:
text/html,htmls html htm htx
With this format we will be able to parse each media type once and add multiple entries for it.
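A small sketch of how the proposed one-line-per-media-type format could be parsed; the function name and format details are illustrative only:
// Parse lines of the form "text/html,htmls html htm htx" into a map from media type
// to its extensions, so each media type string is parsed only once.
fun parseMimeTable(lines: List<String>): Map<String, List<String>> =
    lines.associate { line ->
        val (mediaType, extensions) = line.split(',', limit = 2)
        mediaType to extensions.trim().split(' ')
    }

fun main() {
    println(parseMimeTable(listOf("text/html,htmls html htm htx")))
    // prints: {text/html=[htmls, html, htm, htx]}
}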
Migrate to kotlin.AutoCloseable
Since AutoCloseable became a part of the common stdlib (KT-31066), we could migrate from our Closeable interface to it.
We should (see the sketch below):
- Make our Closeable implement AutoCloseable from the stdlib
- Deprecate our implementation of Closeable.use { ... } and use AutoCloseable.use { ... } from the stdlib instead.
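A rough sketch of the direction, assuming Ktor's Closeable lives in io.ktor.utils.io.core; the deprecation message and the helper body are illustrative only:
// Make the Ktor Closeable a subtype of the stdlib kotlin.AutoCloseable so the
// stdlib `use { ... }` extension applies to it, and deprecate the Ktor-specific helper.
public interface Closeable : AutoCloseable {
    override fun close()
}

@Deprecated("Use the AutoCloseable.use extension from the standard library instead.")
public inline fun <C : Closeable, R> C.use(block: (C) -> R): R =
    try {
        block(this)
    } finally {
        close()
    }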
Docs
Migrate Docker Compose to V2
The currently supported version of Docker Compose is v2. We should switch from v1.
Changes in v2
- Recommended command changed: https://docs.docker.com/compose/releases/migrate/
  Unlike Compose V1, Compose V2 integrates into the Docker CLI platform, and the recommended command-line syntax is docker compose.
- Compose file changed: https://docs.docker.com/reference/compose-file/
  Legacy versions 2.x and 3.x of the Compose file format were merged into the Compose Specification. It is implemented in versions 1.27.0 and above (also known as Compose V2) of the Docker Compose CLI.
  https://github.com/compose-spec/compose-spec/blob/main/spec.md#compose-file
  docker-compose.yaml, docker-compose.yml → compose.yaml, compose.yml
- Compose doesn't use version to select an exact schema to validate the Compose file, but prefers the most recent schema when it's implemented.
It seems some docs and snippets use Docker Compose v1 and need to be updated.
https://github.com/search?q=repo%3Aktorio%2Fktor-documentation docker-compose&type=code
cURL Engine: Update documentation on how to install libcurl for different Linux distributions
It is extremely difficult to use Kotlin/Native binaries on a Red Hat-based Linux operating system because no package manager provides libcurl-gnutls.so.4. Users must either build cURL from source and apply patches or contact us to use a custom prebuilt .so library.
Neither option is ideal.
Curl: Document how to install libcurl on Ubuntu 22.04
Hi, I am trying to get a Kotlin native client running on Ubuntu 22.04 and am running into errors with the libcurl bindings.
I have confirmed that the code works as expected on MacOS, so I'm fairly confident that this is either an issue with the library, or with my local environment. If the latter, a more descriptive exception would be appreciated, because as far as I can tell, all the necessary dependencies have been installed on my end. Thanks!
Additionally, I tried disabling the cache as suggested, to no effect.
8:31:35 AM: Executing 'runDebugExecutableNative'...
> Task :compileKotlinNative UP-TO-DATE
> Task :linkDebugExecutableNative FAILED
2 actionable tasks: 1 executed, 1 up-to-date
e: /home/ryan/.konan/dependencies/x86_64-unknown-linux-gnu-gcc-8.3.0-glibc-2.19-kernel-4.9-2/x86_64-unknown-linux-gnu/bin/ld.gold invocation reported errors
Please try to disable compiler caches and rerun the build. To disable compiler caches, add the following line to the gradle.properties file in the project's root directory:
kotlin.native.cacheKind.linuxX64=none
Also, consider filing an issue with full Gradle log here: https://kotl.in/issue
The /home/ryan/.konan/dependencies/x86_64-unknown-linux-gnu-gcc-8.3.0-glibc-2.19-kernel-4.9-2/x86_64-unknown-linux-gnu/bin/ld.gold command returned non-zero exit code: 1.
output:
/home/ryan/.konan/dependencies/x86_64-unknown-linux-gnu-gcc-8.3.0-glibc-2.19-kernel-4.9-2/x86_64-unknown-linux-gnu/bin/ld.gold: error: cannot find -lcurl
/home/ryan/.konan/kotlin-native-prebuilt-linux-x86_64-1.8.20/klib/cache/linux_x64-gSTATIC/io.ktor:ktor-client-curl-cinterop-libcurl/unspecified/69d1614c6d0a41ecbd731874ee49c865c1c8bac290758825bc4d66cfffd1d903/io.ktor:ktor-client-curl-cinterop-libcurl-cache/bin/libio.ktor:ktor-client-curl-cinterop-libcurl-cache.a(result.o):out:function libcurl_curl_global_init_wrapper23: error: undefined reference to 'curl_global_init'
/home/ryan/.konan/kotlin-native-prebuilt-linux-x86_64-1.8.20/klib/cache/linux_x64-gSTATIC/io.ktor:ktor-client-curl-cinterop-libcurl/unspecified/69d1614c6d0a41ecbd731874ee49c865c1c8bac290758825bc4d66cfffd1d903/io.ktor:ktor-client-curl-cinterop-libcurl-cache/bin/libio.ktor:ktor-client-curl-cinterop-libcurl-cache.a(result.o):out:function libcurl_curl_slist_append_wrapper27: error: undefined reference to 'curl_slist_append'
/home/ryan/.konan/kotlin-native-prebuilt-linux-x86_64-1.8.20/klib/cache/linux_x64-gSTATIC/io.ktor:ktor-client-curl-cinterop-libcurl/unspecified/69d1614c6d0a41ecbd731874ee49c865c1c8bac290758825bc4d66cfffd1d903/io.ktor:ktor-client-curl-cinterop-libcurl-cache/bin/libio.ktor:ktor-client-curl-cinterop-libcurl-cache.a(result.o):out:function libcurl_curl_slist_free_all_wrapper28: error: undefined reference to 'curl_slist_free_all'
/home/ryan/.konan/kotlin-native-prebuilt-linux-x86_64-1.8.20/klib/cache/linux_x64-gSTATIC/io.ktor:ktor-client-curl-cinterop-libcurl/unspecified/69d1614c6d0a41ecbd731874ee49c865c1c8bac290758825bc4d66cfffd1d903/io.ktor:ktor-client-curl-cinterop-libcurl-cache/bin/libio.ktor:ktor-client-curl-cinterop-libcurl-cache.a(result.o):out:function libcurl_curl_easy_strerror_wrapper33: error: undefined reference to 'curl_easy_strerror'
/home/ryan/.konan/kotlin-native-prebuilt-linux-x86_64-1.8.20/klib/cache/linux_x64-gSTATIC/io.ktor:ktor-client-curl-cinterop-libcurl/unspecified/69d1614c6d0a41ecbd731874ee49c865c1c8bac290758825bc4d66cfffd1d903/io.ktor:ktor-client-curl-cinterop-libcurl-cache/bin/libio.ktor:ktor-client-curl-cinterop-libcurl-cache.a(result.o):out:function libcurl_curl_easy_pause_wrapper35: error: undefined reference to 'curl_easy_pause'
/home/ryan/.konan/kotlin-native-prebuilt-linux-x86_64-1.8.20/klib/cache/linux_x64-gSTATIC/io.ktor:ktor-client-curl-cinterop-libcurl/unspecified/69d1614c6d0a41ecbd731874ee49c865c1c8bac290758825bc4d66cfffd1d903/io.ktor:ktor-client-curl-cinterop-libcurl-cache/bin/libio.ktor:ktor-client-curl-cinterop-libcurl-cache.a(result.o):out:function libcurl_curl_easy_init_wrapper36: error: undefined reference to 'curl_easy_init'
/home/ryan/.konan/kotlin-native-prebuilt-linux-x86_64-1.8.20/klib/cache/linux_x64-gSTATIC/io.ktor:ktor-client-curl-cinterop-libcurl/unspecified/69d1614c6d0a41ecbd731874ee49c865c1c8bac290758825bc4d66cfffd1d903/io.ktor:ktor-client-curl-cinterop-libcurl-cache/bin/libio.ktor:ktor-client-curl-cinterop-libcurl-cache.a(result.o):out:function libcurl_curl_easy_cleanup_wrapper38: error: undefined reference to 'curl_easy_cleanup'
/home/ryan/.konan/kotlin-native-prebuilt-linux-x86_64-1.8.20/klib/cache/linux_x64-gSTATIC/io.ktor:ktor-client-curl-cinterop-libcurl/unspecified/69d1614c6d0a41ecbd731874ee49c865c1c8bac290758825bc4d66cfffd1d903/io.ktor:ktor-client-curl-cinterop-libcurl-cache/bin/libio.ktor:ktor-client-curl-cinterop-libcurl-cache.a(result.o):out:function libcurl_curl_multi_init_wrapper44: error: undefined reference to 'curl_multi_init'
/home/ryan/.konan/kotlin-native-prebuilt-linux-x86_64-1.8.20/klib/cache/linux_x64-gSTATIC/io.ktor:ktor-client-curl-cinterop-libcurl/unspecified/69d1614c6d0a41ecbd731874ee49c865c1c8bac290758825bc4d66cfffd1d903/io.ktor:ktor-client-curl-cinterop-libcurl-cache/bin/libio.ktor:ktor-client-curl-cinterop-libcurl-cache.a(result.o):out:function libcurl_curl_multi_add_handle_wrapper45: error: undefined reference to 'curl_multi_add_handle'
/home/ryan/.konan/kotlin-native-prebuilt-linux-x86_64-1.8.20/klib/cache/linux_x64-gSTATIC/io.ktor:ktor-client-curl-cinterop-libcurl/unspecified/69d1614c6d0a41ecbd731874ee49c865c1c8bac290758825bc4d66cfffd1d903/io.ktor:ktor-client-curl-cinterop-libcurl-cache/bin/libio.ktor:ktor-client-curl-cinterop-libcurl-cache.a(result.o):out:function libcurl_curl_multi_remove_handle_wrapper46: error: undefined reference to 'curl_multi_remove_handle'
/home/ryan/.konan/kotlin-native-prebuilt-linux-x86_64-1.8.20/klib/cache/linux_x64-gSTATIC/io.ktor:ktor-client-curl-cinterop-libcurl/unspecified/69d1614c6d0a41ecbd731874ee49c865c1c8bac290758825bc4d66cfffd1d903/io.ktor:ktor-client-curl-cinterop-libcurl-cache/bin/libio.ktor:ktor-client-curl-cinterop-libcurl-cache.a(result.o):out:function libcurl_curl_multi_poll_wrapper49: error: undefined reference to 'curl_multi_poll'
/home/ryan/.konan/kotlin-native-prebuilt-linux-x86_64-1.8.20/klib/cache/linux_x64-gSTATIC/io.ktor:ktor-client-curl-cinterop-libcurl/unspecified/69d1614c6d0a41ecbd731874ee49c865c1c8bac290758825bc4d66cfffd1d903/io.ktor:ktor-client-curl-cinterop-libcurl-cache/bin/libio.ktor:ktor-client-curl-cinterop-libcurl-cache.a(result.o):out:function libcurl_curl_multi_wakeup_wrapper50: error: undefined reference to 'curl_multi_wakeup'
/home/ryan/.konan/kotlin-native-prebuilt-linux-x86_64-1.8.20/klib/cache/linux_x64-gSTATIC/io.ktor:ktor-client-curl-cinterop-libcurl/unspecified/69d1614c6d0a41ecbd731874ee49c865c1c8bac290758825bc4d66cfffd1d903/io.ktor:ktor-client-curl-cinterop-libcurl-cache/bin/libio.ktor:ktor-client-curl-cinterop-libcurl-cache.a(result.o):out:function libcurl_curl_multi_perform_wrapper51: error: undefined reference to 'curl_multi_perform'
/home/ryan/.konan/kotlin-native-prebuilt-linux-x86_64-1.8.20/klib/cache/linux_x64-gSTATIC/io.ktor:ktor-client-curl-cinterop-libcurl/unspecified/69d1614c6d0a41ecbd731874ee49c865c1c8bac290758825bc4d66cfffd1d903/io.ktor:ktor-client-curl-cinterop-libcurl-cache/bin/libio.ktor:ktor-client-curl-cinterop-libcurl-cache.a(result.o):out:function libcurl_curl_multi_cleanup_wrapper52: error: undefined reference to 'curl_multi_cleanup'
/home/ryan/.konan/kotlin-native-prebuilt-linux-x86_64-1.8.20/klib/cache/linux_x64-gSTATIC/io.ktor:ktor-client-curl-cinterop-libcurl/unspecified/69d1614c6d0a41ecbd731874ee49c865c1c8bac290758825bc4d66cfffd1d903/io.ktor:ktor-client-curl-cinterop-libcurl-cache/bin/libio.ktor:ktor-client-curl-cinterop-libcurl-cache.a(result.o):out:function libcurl_curl_multi_info_read_wrapper53: error: undefined reference to 'curl_multi_info_read'
/home/ryan/.konan/kotlin-native-prebuilt-linux-x86_64-1.8.20/klib/cache/linux_x64-gSTATIC/io.ktor:ktor-client-curl-cinterop-libcurl/unspecified/69d1614c6d0a41ecbd731874ee49c865c1c8bac290758825bc4d66cfffd1d903/io.ktor:ktor-client-curl-cinterop-libcurl-cache/bin/libio.ktor:ktor-client-curl-cinterop-libcurl-cache.a(result.o):out:function libcurl_curl_multi_strerror_wrapper54: error: undefined reference to 'curl_multi_strerror'
/home/ryan/.konan/kotlin-native-prebuilt-linux-x86_64-1.8.20/klib/cache/linux_x64-gSTATIC/io.ktor:ktor-client-curl-cinterop-libcurl/unspecified/69d1614c6d0a41ecbd731874ee49c865c1c8bac290758825bc4d66cfffd1d903/io.ktor:ktor-client-curl-cinterop-libcurl-cache/bin/libio.ktor:ktor-client-curl-cinterop-libcurl-cache.a(result.o):out:knifunptr_libcurl39_curl_easy_setopt: error: undefined reference to 'curl_easy_setopt'
/home/ryan/.konan/kotlin-native-prebuilt-linux-x86_64-1.8.20/klib/cache/linux_x64-gSTATIC/io.ktor:ktor-client-curl-cinterop-libcurl/unspecified/69d1614c6d0a41ecbd731874ee49c865c1c8bac290758825bc4d66cfffd1d903/io.ktor:ktor-client-curl-cinterop-libcurl-cache/bin/libio.ktor:ktor-client-curl-cinterop-libcurl-cache.a(result.o):out:knifunptr_libcurl42_curl_easy_getinfo: error: undefined reference to 'curl_easy_getinfo'
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':linkDebugExecutableNative'.
> Compilation finished with errors
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
* Get more help at https://help.gradle.org
BUILD FAILED in 1s
8:31:37 AM: Execution finished 'runDebugExecutableNative'.
Infrastructure
ktor-client-curl artifacts aren’t published after EAP 1146
For a while, ktor-client-curl-* artifacts have been missing from EAP publishing.
The latest published version is 3.1.0-eap-1146. EAPs for 3.0.3 aren't published at all.
Update to Kotlin 2.1.0
Kotlin 2.1.0 is out! So we should update it in the project.
- [x] Update dependency
- [x] Fix problems and apply migration according to the release notes
- [x] Update "Kotlin Version" badge
Network
EXC_GUARD in SelectorHelper.selectionLoop
Some users of my app get the following crash on iOS:
Crashed: Thread
EXC_GUARD 0x08fd4dbfade2dead
0 libsystem_kernel.dylib 0x1d7c close + 8
1 RemoteGamepad 0x133f0f8 kfun:io.ktor.network.selector.SelectorHelper.selectionLoop#internal + 247 (SelectUtilsNix.kt:247)
2 RemoteGamepad 0x1340abc kfun:io.ktor.network.selector.SelectorHelper.SelectorHelper$start$job$2.invoke#internal + 54 (SelectUtilsNix.kt:54)
3 RemoteGamepad 0x109eb8 kfun:kotlin.coroutines.intrinsics.createCoroutineUnintercepted$$inlined$createCoroutineFromSuspendFunction$2.invokeSuspend#internal + 4365459128 (IntrinsicsNative.kt:4365459128)
4 RemoteGamepad 0x107ad8 kfun:kotlin.coroutines.native.internal.BaseContinuationImpl#resumeWith(kotlin.Result<kotlin.Any?>){} + 50 (ContinuationImpl.kt:50)
5 RemoteGamepad 0x262ad4 kfun:kotlinx.coroutines.DispatchedTask#run(){} + 26 (Continuation.kt:26)
6 RemoteGamepad 0x2825c8 kfun:kotlinx.coroutines.MultiWorkerDispatcher.$workerRunLoop$lambda$2COROUTINE$0.invokeSuspend#internal + 12 (Runnable.kt:12)
7 RemoteGamepad 0x282fa8 kfun:kotlinx.coroutines.MultiWorkerDispatcher.MultiWorkerDispatcher$workerRunLoop$2.invoke#internal + 123 (MultithreadedDispatchers.kt:123)
8 RemoteGamepad 0x109eb8 kfun:kotlin.coroutines.intrinsics.createCoroutineUnintercepted$$inlined$createCoroutineFromSuspendFunction$2.invokeSuspend#internal + 4365459128 (IntrinsicsNative.kt:4365459128)
9 RemoteGamepad 0x107ad8 kfun:kotlin.coroutines.native.internal.BaseContinuationImpl#resumeWith(kotlin.Result<kotlin.Any?>){} + 50 (ContinuationImpl.kt:50)
10 RemoteGamepad 0x262ad4 kfun:kotlinx.coroutines.DispatchedTask#run(){} + 26 (Continuation.kt:26)
11 RemoteGamepad 0x21044c kfun:kotlinx.coroutines.EventLoopImplBase#processNextEvent(){}kotlin.Long + 15 (ObjectiveCUtils.kt:15)
12 RemoteGamepad 0x279f28 kfun:kotlinx.coroutines#runBlocking(kotlin.coroutines.CoroutineContext;kotlin.coroutines.SuspendFunction1<kotlinx.coroutines.CoroutineScope,0:0>){0§<kotlin.Any?>}0:0 + 49 (EventLoop.common.kt:49)
13 RemoteGamepad 0x283298 kfun:kotlinx.coroutines.MultiWorkerDispatcher.MultiWorkerDispatcher$1$$inlined$apply$2.$<bridge-DNN>invoke(){}#internal + 123 (MultithreadedDispatchers.kt:123)
14 RemoteGamepad 0x1e281a4 Worker::processQueueElement(bool) + 46868
15 RemoteGamepad 0x1e27804 (anonymous namespace)::workerRoutine(void*) + 44404
16 libsystem_pthread.dylib 0x17d0 _pthread_start + 136
17 libsystem_pthread.dylib 0x1480 thread_start + 8
From a quick Google search it seems EXC_GUARD is thrown when closing a socket which Ktor does not own (see close in the stack trace). How does this happen?
https://stackoverflow.com/questions/32429431/exc-guard-exception
This issue could maybe be related to https://youtrack.jetbrains.com/issue/KTOR-7299/IOException-Fail-to-select-descriptor-for-ACCEPT
Unable to close socket with open read/write channels on Native
Find below a slightly modified version of the existing test TCPSocketTest.testDisconnect, which succeeds on the JVM but fails on Native (e.g. macosArm64Test). This actually causes the TCP socket (client side) to not close at all, even though close is called on it. This is due to a still-open channel which is not properly cancelled by a close call.
@Test
fun testDisconnect() = testSockets { selector ->
val tcp = aSocket(selector).tcp()
val server = tcp.bind("127.0.0.1", 8003)
val serverConnectionPromise = async {
server.accept()
}
val clientConnection = tcp.connect("127.0.0.1", 8003)
val serverConnection = serverConnectionPromise.await()
val serverInput = serverConnection.openReadChannel()
// MODIFIED START
launch {
val channel = clientConnection.openReadChannel()
}
// MODIFIED END
// Need to make sure reading from server is done first, which will suspend because there is nothing to read.
// Then close the connection from client side, which should cancel the reading because the socket disconnected.
launch {
delay(100)
clientConnection.close()
}
assertFailsWith<EOFException> {
serverInput.readByte()
}
serverConnection.close()
server.close()
}
Socket.accept doesn't throw an exception on closing a socket on Native
Test which fails on e.g. macosArm64 but succeeds on JVM:
@Test
fun testAcceptErrorOnSocketClose() = testSockets { selector ->
val socket = aSocket(selector)
.tcp()
.bind(InetSocketAddress("127.0.0.1", 0))
launch {
assertFailsWith<IOException> {
socket.accept()
}
}
delay(100) // Make sure socket is awaiting connection using ACCEPT
socket.close()
}
Server
Confusing import error in routing
When importing only io.ktor.server.routing.* and trying to call call.respond(object), you'll get a missing typeInfo parameter error.
This is a misleading error message on account of the respond function in this package.
The resolution is to include io.ktor.server.response.*, but we ought to solve this in Ktor by introducing overloads or moving functions around.
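A minimal sketch of the resolution (responding with a plain String here, just to show which imports are involved):
import io.ktor.server.application.*
import io.ktor.server.response.*  // without this import, call.respond(obj) reports the missing typeInfo parameter error
import io.ktor.server.routing.*

fun Application.module() {
    routing {
        get("/demo") {
            // Resolves to the respond overload from io.ktor.server.response
            call.respond("hello")
        }
    }
}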
SessionStorage.read() is called for non-authenticated routes and static assets
Hello,
I have user authentication configured with a custom session storage that is set up to read and write from an external database. I am running into an issue where a simple GET call to an unauthenticated route causes 10+ external calls to be made from SessionStorage.read() to retrieve the web session. This is especially a problem for web pages that serve many static assets from the server, since each call to /assets tries to retrieve the web session unnecessarily. It seems like if a session cookie exists, Ktor will try to read it even if authenticate() is not used for the route.
Authentication Configuration
fun Application.configureSecurity() {
install(Sessions) {
cookie<UserSessionPrincipal>("user_session", DatabaseSessionStorage()) {
cookie.secure = KTOR_ENV == "prod"
cookie.maxAgeInSeconds = 2629746 // 1 Month
serializer = KotlinxSessionSerializer(Json)
}
}
install(Authentication) {
session<UserSessionPrincipal>("auth-session") {
validate { session ->
if (Clock.System.now() > session.expireTimestampUtc) {
sessions.clear<UserSessionPrincipal>()
throw UnauthorizedException()
}
return@validate session
}
challenge { throw UnauthorizedException() }
}
}
}
Session Storage
// client is a REST client that sends requests to an external service
private class DatabaseSessionStorage : SessionStorage {
override suspend fun write(id: String, value: String) {
val session = Json.decodeFromString<UserSessionPrincipal>(value)
val request = WebSessionPostRequest().JsonBuilder()
.sessionId(id)
.expireTimestampUtc(session.expireTimestampUtc)
.build()
client.upsertWebSessionForUser(session.userId, request)
}
override suspend fun read(id: String): String {
val session = try {
client.getWebSessionBySessionId(id)
} catch (e: Exception) {
throw NoSuchElementException("Session with id $id not found")
}
return Json.encodeToString(
UserSessionPrincipal(session.userId, session.email, session.isEmailVerified, session.expireTimestampUtc)
)
}
override suspend fun invalidate(id: String) {
client.deleteWebSession(id)
}
}
Routing
fun Application.configureRouting() {
routing {
get("/test") {
call.respond(FreeMarkerContent("index.ftl", null))
}
staticResources("/assets", "assets")
authenticate("auth-form") {
// Routes responsible for setting web session from a login page
}
authenticate("auth-session") {
// Routes requiring session authentication
}
}
}
A call to /test with a session cookie set will then result in 10+ calls to the external API to try and retrieve the session details: first for the original call to the unauthenticated route, then a call for each of the requested static assets located in /assets from the web page.
Please let me know if this is just a misconfiguration on my end or if a bug fix is warranted. Either way, it is very unexpected behavior that a simple GET request for a web page results in the server trying to retrieve web session details when they are not required or explicitly asked for.
Thanks!
Ktor Server on Android: java.nio.file.ClosedWatchServiceException
I am running a Ktor server on Android. I also start up a hotspot from the host device, which other devices then use to connect to my server.
The first time I start a server everything works fine, but if I stop the server in the same app session and start it again, I get the following exception:
2022-02-22 15:48:19.978 15112-15127/com.example.app E/System: Uncaught exception thrown by finalizer
2022-02-22 15:48:19.980 15112-15127/com.example.app E/System: java.nio.file.ClosedWatchServiceException
at sun.nio.fs.AbstractPoller.invoke(AbstractPoller.java:216)
at sun.nio.fs.AbstractPoller.close(AbstractPoller.java:144)
at sun.nio.fs.LinuxWatchService$Poller.finalize(LinuxWatchService.java:321)
at java.lang.Daemons$FinalizerDaemon.doFinalize(Daemons.java:289)
at java.lang.Daemons$FinalizerDaemon.runInternal(Daemons.java:276)
at java.lang.Daemons$Daemon.run(Daemons.java:137)
at java.lang.Thread.run(Thread.java:919)
The server continues to run, but this causes my hotspot to stop working...
I can't see where the exception is occurring and I am unable to catch it, but I am quite sure it has something to do with Ktor, as it only happens when a client device hits the server.
I understand I am not providing much info to go on, and that my use case is unique and complicated, so I am just looking for some general advice to solve these kinds of errors.
Any ideas?
CallLogging: Unhelpful log output "Application started: ..."
When my application starts up, the CallLogging feature logs the following unhelpful log message:
INFO Application started: io.ktor.server.application.Application@2e52fb3e
The code that does this is here. This doesn't seem useful and I would much prefer CallLogging did not log any output as part of the application lifecycle.
CIO Server Engine fails for requests with more than 64 headers
64 headers is not nearly enough. Lots of enterprise clients are unable to work because of this limit. This has to be restructured so that the map can dynamically grow to the required size as headers are added. CIO cannot be used for anything serious with such a limitation.
Jetty idleTimeout not working
After setting idleTimeout on the Jetty server engine, it is not honoured, and idle connections never time out.
https://jetbrains.slack.com/archives/C07U498LLUR/p1738321716261179
Read multipart upload regression from 2.x to 3.x
We read the multipart upload channel in chunks with this code, which worked perfectly in 2.x:
IoByteBufferPool.useInstance { buffer ->
val bufferCapacity = buffer.capacity().toLong()
while (writtenBytes < limit) {
val bufferLimit = minOf(limit - writtenBytes, bufferCapacity).toInt()
buffer.clear()
buffer.limit(bufferLimit)
val readCount = this.read(buffer)
if (readCount < 0) break
if (readCount == 0) {
continue
}
buffer.flip()
while (buffer.hasRemaining()) {
writtenBytes += channel.write(buffer)
}
}
}
3.x starts failing with an exception:
kotlinx.coroutines.JobCancellationException Parent job is Cancelling
Caused by: java.io.IOException Limit of 52428800 bytes exceeded while scanning for "
--------------------------b3ed4acffac321e2"
at io.ktor.utils.io.ByteReadChannelOperationsKt.readUntil(ByteReadChannelOperations.kt:549)
at io.ktor.utils.io.ByteReadChannelOperationsKt$readUntil$1.invokeSuspend(ByteReadChannelOperations.kt)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTaskKt.resume(DispatchedTask.kt:221)
at kotlinx.coroutines.DispatchedTaskKt.resumeUnconfined(DispatchedTask.kt:177)
at kotlinx.coroutines.DispatchedTaskKt.dispatch(DispatchedTask.kt:149)
at kotlinx.coroutines.CancellableContinuationImpl.dispatchResume(CancellableContinuationImpl.kt:470)
at kotlinx.coroutines.CancellableContinuationImpl.resumeImpl$kotlinx_coroutines_core(CancellableContinuationImpl.kt:504)
at kotlinx.coroutines.CancellableContinuationImpl.resumeImpl$kotlinx_coroutines_core$default(CancellableContinuationImpl.kt:493)
at kotlinx.coroutines.CancellableContinuationImpl.resumeWith(CancellableContinuationImpl.kt:359)
at io.ktor.utils.io.ByteChannel$Slot$Task$DefaultImpls.resume(ByteChannel.kt:233)
at io.ktor.utils.io.ByteChannel$Slot$Write.resume(ByteChannel.kt:253)
at io.ktor.utils.io.ByteChannel.moveFlushToReadBuffer(ByteChannel.kt:330)
at io.ktor.utils.io.ByteChannel.getReadBuffer(ByteChannel.kt:41)
at io.ktor.utils.io.jvm.javaio.BlockingKt$toInputStream$1.read(Blocking.kt:30)
at java.base/java.nio.channels.Channels$ReadableByteChannelImpl.read(Unknown Source)
where 52428800 is exactly the limit from the snippet above.
DoubleReceive: NullPointerException caused by race condition
We are experiencing random test failures in our CI pipeline with the following error:
java.lang.NullPointerException
at kotlinx.io.Buffer.recycleHead$kotlinx_io_core(Buffer.kt:585)
at kotlinx.io.Buffer.skip(Buffer.kt:295)
at io.ktor.utils.io.core.ByteReadPacketKt.discard(ByteReadPacket.kt:64)
at io.ktor.utils.io.core.ByteReadPacketKt.discard$default(ByteReadPacket.kt:60)
at io.ktor.server.plugins.doublereceive.MemoryCache$dispose$1.invokeSuspend(ByteArrayCache.kt:56)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:100)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:586)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:829)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:717)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:704)
After analyzing the issue, we identified that the behavior of MemoryCache.dispose was modified in PR #4231, which introduced the following change:
GlobalScope.launch {
    reader.discard()
    fullBody?.discard()
}
The issue appears to be a race condition caused by the concurrent execution of a newly launched coroutine. Upon further investigation, we found that the dispose function is being invoked twice in our error scenarios.
The DoubleReceive plugin registers a ResponseSent hook that calls the dispose method. The ResponseSent hook is triggered every time call.response is invoked (as detailed in this StackOverflow post). In our case, the StatusPage plugin makes a second call during error handling, leading to multiple invocations of dispose.
We created a minimal example with a unit test to reproduce the issue (project attached).
@RepeatedTest(1000)
fun `my test`() = testApplication {
    application {
        install(DoubleReceive) { cacheRawRequest = true }
        install(StatusPages) {
            status(HttpStatusCode.BadRequest) { call, status ->
                call.respondText(text = "400: Bad Request", status = status)
            }
        }
        routing {
            post("/") {
                val request = call.receiveText()
                call.respond(HttpStatusCode.BadRequest, request)
            }
        }
    }
    val client = createClient { }
    client.post("/") {
        setBody("Hello World")
    }.also {
        assertEquals(HttpStatusCode.BadRequest, it.status)
    }
}
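For reference, the crash disappears if the cleanup can tolerate being triggered twice. The following is only a minimal sketch of that idempotency idea; the class and names are illustrative, not the actual Ktor code or the eventual fix:
import java.util.concurrent.atomic.AtomicBoolean
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch

// Illustrative cache wrapper: dispose() may be called from several ResponseSent
// invocations, but the cleanup runs exactly once.
class DisposableCache(private val cleanup: suspend () -> Unit) {
    private val disposed = AtomicBoolean(false)

    fun dispose() {
        // compareAndSet lets only the first caller through; later calls are no-ops,
        // so two concurrent dispose() calls can no longer race on the same buffers.
        if (!disposed.compareAndSet(false, true)) return
        CoroutineScope(Dispatchers.Default).launch { cleanup() }
    }
}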
Uncaught ClosedWatchServiceException exception thrown by finalizer when closing the server
Subject: ClosedWatchServiceException when stopping the embedded server
Description:
I am encountering an issue with the embedded server in my project. When the server instance is stopped and the garbage collector is triggered, the following error occurs:
Uncaught exception thrown by finalizer
java.nio.file.ClosedWatchServiceException
at sun.nio.fs.AbstractPoller.invoke(AbstractPoller.java:216)
at sun.nio.fs.AbstractPoller.close(AbstractPoller.java:144)
at sun.nio.fs.LinuxWatchService$Poller.finalize(LinuxWatchService.java:321)
at java.lang.Daemons$FinalizerDaemon.doFinalize(Daemons.java:370)
at java.lang.Daemons$FinalizerDaemon.processReference(Daemons.java:350)
at java.lang.Daemons$FinalizerDaemon.runInternal(Daemons.java:322)
at java.lang.Daemons$Daemon.run(Daemons.java:131)
at java.lang.Thread.run(Thread.java:1012)
Steps to Reproduce:
- Start the provided project.
- On the main screen, click the button labeled "Open Second Activity." Wait for a second.
- Click the button labeled "Go Back." Wait for a few seconds.
- Observe the exception in the logs.
Notes:
- If the exception does not appear on the first attempt, repeat steps 2–3 several times until the error occurs.
- The number of exceptions increases with the number of repetitions.
- To reproduce the issue faster, I manually trigger the garbage collector during testing. However, the bug can still occur without manual triggering; you may just need to repeat steps 2–3 several times.
Attachments:
- A minimal reproducible project is provided to help debug the issue.
- A video demonstration of the issue is included, showcasing how the error occurs.
Request for Assistance:
I’m unsure if this behavior is due to a mistake in how I’m managing the embeddedServer or if it is a bug in the implementation. I would greatly appreciate your help in understanding this issue. If there’s a better approach to handling the WatchService lifecycle during server shutdown, or a way to avoid the ClosedWatchServiceException, please let me know.
If this is a bug, I’d be grateful for any guidance on how it might be resolved. Thank you for taking the time to review my issue and for any assistance you can provide!
Support CIO server on WasmJS and JS targets
Swagger: Add deepLinking configuration
A customer asked at work whether we could provide deep links into the Swagger documentation. It turns out deep linking is a Swagger UI setting (https://swagger.io/docs/open-source-tools/swagger-ui/usage/deep-linking/) that is not exposed by the plugin.
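A sketch of how the setting could be surfaced; the swaggerUI route below is the existing plugin API, while the deepLinking property is hypothetical and only illustrates the request:
routing {
    swaggerUI(path = "swagger", swaggerFile = "openapi/documentation.yaml") {
        // Hypothetical option: would be forwarded into the generated Swagger UI initializer.
        deepLinking = true
    }
}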
Micrometer: Add UptimeMetrics to standard meterBinders
Lots of standard dashboards include the JVM process uptime / start time. The default Ktor Micrometer Plugin doesn't include this metric, causing panels referencing it to be empty.
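As a workaround, the binder can be added explicitly when installing the plugin. A minimal sketch, assuming a SimpleMeterRegistry (any registry works); note that setting meterBinders replaces the default list, so the JVM binders are declared again alongside UptimeMetrics:
import io.ktor.server.application.*
import io.ktor.server.metrics.micrometer.*
import io.micrometer.core.instrument.binder.jvm.JvmGcMetrics
import io.micrometer.core.instrument.binder.jvm.JvmMemoryMetrics
import io.micrometer.core.instrument.binder.system.ProcessorMetrics
import io.micrometer.core.instrument.binder.system.UptimeMetrics
import io.micrometer.core.instrument.simple.SimpleMeterRegistry

fun Application.configureMetrics() {
    install(MicrometerMetrics) {
        registry = SimpleMeterRegistry()
        // Overriding meterBinders replaces the defaults, so list them again
        // and add UptimeMetrics to get process.uptime / process.start.time.
        meterBinders = listOf(
            JvmMemoryMetrics(),
            JvmGcMetrics(),
            ProcessorMetrics(),
            UptimeMetrics()
        )
    }
}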
MicrometerMetrics: Do not write unknown HTTP method names to metrics
With the default configuration it is possible to write really long values into the HTTP method name of Ktor Micrometer metrics, for example with this sample:
https://github.com/ktorio/ktor-documentation/tree/3.0.0/codeSnippets/snippets/micrometer-metrics
Just start this example and send a request with a long method name:
curl -X $(printf 'A%.0s' {1..500}) http://localhost:8080/
The long method name then appears verbatim in the metrics. It was unexpected for me that the method tag could hold any value, and it does by default.
I might suggest processing only known HTTP request methods by default.
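Until the defaults change, a Micrometer MeterFilter on the registry can bound the cardinality. A sketch of that workaround; the "method" tag name matches the tag the plugin records, and the filter itself is plain Micrometer API, not part of Ktor:
import io.ktor.server.application.*
import io.ktor.server.metrics.micrometer.*
import io.micrometer.core.instrument.config.MeterFilter
import io.micrometer.core.instrument.simple.SimpleMeterRegistry

fun Application.configureMetricsWithBoundedMethods() {
    val knownMethods = setOf("GET", "POST", "PUT", "DELETE", "PATCH", "HEAD", "OPTIONS")
    val meterRegistry = SimpleMeterRegistry().apply {
        // Collapse unexpected values of the "method" tag to "UNKNOWN" so arbitrary
        // request methods cannot blow up metric cardinality.
        config().meterFilter(
            MeterFilter.replaceTagValues("method", { value ->
                if (value in knownMethods) value else "UNKNOWN"
            })
        )
    }
    install(MicrometerMetrics) {
        registry = meterRegistry
    }
}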
Add heartbeat to SSE
Swagger UI: Missing Favicon while browsing the UI
Description
The Ktor Swagger UI plugin does not include a favicon, leading to a missing resource error in the browser’s network panel and making it harder to locate the Swagger UI tab in the browser when multiple tabs are open. This impacts usability and creates unnecessary noise in the network logs.
Steps to Reproduce
1. Configure the Ktor Swagger UI plugin.
2. Open the Swagger UI in a browser.
3. Observe the browser tab and the network panel.
Expected Behavior
• Swagger UI should include a favicon (default Swagger UI favicon or a configurable option) that displays in the browser tab.
• No missing resource errors should appear in the network panel.
Actual Behavior
• The favicon is missing, resulting in a blank icon in the browser tab.
• A 404 or similar error is shown in the network panel for the missing favicon request.
Impact
• Makes it harder to distinguish the Swagger UI tab among multiple open tabs.
• Creates unnecessary errors in the browser’s network panel, potentially confusing developers.
Suggested Fix
• Include a default Swagger UI favicon in the Ktor Swagger UI plugin.
• Provide an option for developers to configure a custom favicon if desired.
`receiveMultipart` throws IllegalStateException instead of UnsupportedMediaTypeException
Given this routing
fun Route.myRouting() {
    post<MyResource> { request ->
        call.receiveMultipart()
    }
}
Action
A request is sent which does not have a Content-Type header
Expected
UnsupportedMediaTypeException is thrown, 415 code is returned
Actual
IllegalStateException is thrown, 500 code is returned.
java.lang.IllegalStateException: Content-Type header is required for multipart processing
at io.ktor.server.engine.DefaultTransformJvmKt.multiPartData(DefaultTransformJvm.kt:37)
at io.ktor.server.engine.DefaultTransformJvmKt.defaultPlatformTransformations(DefaultTransformJvm.kt:29)
at io.ktor.server.engine.DefaultTransformKt$installDefaultTransformations$2.invokeSuspend(DefaultTransform.kt:69)
at io.ktor.server.engine.DefaultTransformKt$installDefaultTransformations$2.invoke(DefaultTransform.kt)
at io.ktor.server.engine.DefaultTransformKt$installDefaultTransformations$2.invoke(DefaultTransform.kt)
at io.ktor.util.pipeline.DebugPipelineContext.proceedLoop(DebugPipelineContext.kt:80)
at io.ktor.util.pipeline.DebugPipelineContext.proceed(DebugPipelineContext.kt:57)
at io.ktor.util.pipeline.DebugPipelineContext.execute$ktor_utils(DebugPipelineContext.kt:63)
at io.ktor.util.pipeline.Pipeline.execute(Pipeline.kt:77)
at io.ktor.server.request.ApplicationReceiveFunctionsKt.receiveNullable(ApplicationReceiveFunctions.kt:103)
...
at io.ktor.server.resources.RoutingKt$handle$2.invokeSuspend(Routing.kt:265)
at io.ktor.server.resources.RoutingKt$handle$2.invoke(Routing.kt)
at io.ktor.server.resources.RoutingKt$handle$2.invoke(Routing.kt)
at io.ktor.server.routing.Route$buildPipeline$1$1.invokeSuspend(Route.kt:116)
at io.ktor.server.routing.Route$buildPipeline$1$1.invoke(Route.kt)
at io.ktor.server.routing.Route$buildPipeline$1$1.invoke(Route.kt)
at io.ktor.util.pipeline.DebugPipelineContext.proceedLoop(DebugPipelineContext.kt:80)
at io.ktor.util.pipeline.DebugPipelineContext.proceed(DebugPipelineContext.kt:57)
at io.ktor.util.pipeline.DebugPipelineContext.execute$ktor_utils(DebugPipelineContext.kt:63)
at io.ktor.util.pipeline.Pipeline.execute(Pipeline.kt:77)
at io.ktor.server.routing.Routing$executeResult$$inlined$execute$1.invokeSuspend(Pipeline.kt:478)
at io.ktor.server.routing.Routing$executeResult$$inlined$execute$1.invoke(Pipeline.kt)
at io.ktor.server.routing.Routing$executeResult$$inlined$execute$1.invoke(Pipeline.kt)
at io.ktor.util.debug.ContextUtilsKt$initContextInDebugMode$2.invokeSuspend(ContextUtils.kt:20)
at io.ktor.util.debug.ContextUtilsKt$initContextInDebugMode$2.invoke(ContextUtils.kt)
at io.ktor.util.debug.ContextUtilsKt$initContextInDebugMode$2.invoke(ContextUtils.kt)
at kotlinx.coroutines.intrinsics.UndispatchedKt.startUndispatchedOrReturn(Undispatched.kt:78)
at kotlinx.coroutines.BuildersKt__Builders_commonKt.withContext(Builders.common.kt:167)
at kotlinx.coroutines.BuildersKt.withContext(Unknown Source)
at io.ktor.util.debug.ContextUtilsKt.initContextInDebugMode(ContextUtils.kt:20)
at io.ktor.server.routing.Routing.executeResult(Routing.kt:190)
at io.ktor.server.routing.Routing.interceptor(Routing.kt:64)
at io.ktor.server.routing.Routing$Plugin$install$1.invokeSuspend(Routing.kt:140)
at io.ktor.server.routing.Routing$Plugin$install$1.invoke(Routing.kt)
at io.ktor.server.routing.Routing$Plugin$install$1.invoke(Routing.kt)
at io.ktor.util.pipeline.DebugPipelineContext.proceedLoop(DebugPipelineContext.kt:80)
at io.ktor.util.pipeline.DebugPipelineContext.proceed(DebugPipelineContext.kt:57)
at kotlinx.coroutines.intrinsics.UndispatchedKt.startUndispatchedOrReturn(Undispatched.kt:78)
at kotlinx.coroutines.BuildersKt__Builders_commonKt.withContext(Builders.common.kt:167)
at kotlinx.coroutines.BuildersKt.withContext(Unknown Source)
at com.deviceinsight.mosaix.tenantcontext.TenantContextKt$TenantContextPlugin$2$1.invokeSuspend(TenantContext.kt:31)
at com.deviceinsight.mosaix.tenantcontext.TenantContextKt$TenantContextPlugin$2$1.invoke(TenantContext.kt)
at com.deviceinsight.mosaix.tenantcontext.TenantContextKt$TenantContextPlugin$2$1.invoke(TenantContext.kt)
at io.ktor.util.pipeline.DebugPipelineContext.proceedLoop(DebugPipelineContext.kt:80)
at io.ktor.util.pipeline.DebugPipelineContext.proceed(DebugPipelineContext.kt:57)
at io.ktor.server.engine.BaseApplicationEngineKt$installDefaultTransformationChecker$1.invokeSuspend(BaseApplicationEngine.kt:124)
at io.ktor.server.engine.BaseApplicationEngineKt$installDefaultTransformationChecker$1.invoke(BaseApplicationEngine.kt)
at io.ktor.server.engine.BaseApplicationEngineKt$installDefaultTransformationChecker$1.invoke(BaseApplicationEngine.kt)
at io.ktor.util.pipeline.DebugPipelineContext.proceedLoop(DebugPipelineContext.kt:80)
at io.ktor.util.pipeline.DebugPipelineContext.proceed(DebugPipelineContext.kt:57)
at io.ktor.server.application.hooks.CallFailed$install$1$1.invokeSuspend(CommonHooks.kt:45)
at io.ktor.server.application.hooks.CallFailed$install$1$1.invoke(CommonHooks.kt)
at io.ktor.server.application.hooks.CallFailed$install$1$1.invoke(CommonHooks.kt)
at kotlinx.coroutines.intrinsics.UndispatchedKt.startUndispatchedOrReturn(Undispatched.kt:78)
at kotlinx.coroutines.CoroutineScopeKt.coroutineScope(CoroutineScope.kt:264)
at io.ktor.server.application.hooks.CallFailed$install$1.invokeSuspend(CommonHooks.kt:44)
at io.ktor.server.application.hooks.CallFailed$install$1.invoke(CommonHooks.kt)
at io.ktor.server.application.hooks.CallFailed$install$1.invoke(CommonHooks.kt)
at io.ktor.util.pipeline.DebugPipelineContext.proceedLoop(DebugPipelineContext.kt:80)
at io.ktor.util.pipeline.DebugPipelineContext.proceed(DebugPipelineContext.kt:57)
at io.ktor.server.application.hooks.CallFailed$install$1$1.invokeSuspend(CommonHooks.kt:45)
at io.ktor.server.application.hooks.CallFailed$install$1$1.invoke(CommonHooks.kt)
at io.ktor.server.application.hooks.CallFailed$install$1$1.invoke(CommonHooks.kt)
at kotlinx.coroutines.intrinsics.UndispatchedKt.startUndispatchedOrReturn(Undispatched.kt:78)
at kotlinx.coroutines.CoroutineScopeKt.coroutineScope(CoroutineScope.kt:264)
at io.ktor.server.application.hooks.CallFailed$install$1.invokeSuspend(CommonHooks.kt:44)
at io.ktor.server.application.hooks.CallFailed$install$1.invoke(CommonHooks.kt)
at io.ktor.server.application.hooks.CallFailed$install$1.invoke(CommonHooks.kt)
at io.ktor.util.pipeline.DebugPipelineContext.proceedLoop(DebugPipelineContext.kt:80)
at io.ktor.util.pipeline.DebugPipelineContext.proceed(DebugPipelineContext.kt:57)
at io.ktor.util.pipeline.DebugPipelineContext.execute$ktor_utils(DebugPipelineContext.kt:63)
at io.ktor.util.pipeline.Pipeline.execute(Pipeline.kt:77)
at io.ktor.server.engine.DefaultEnginePipelineKt$defaultEnginePipeline$1$invokeSuspend$$inlined$execute$1.invokeSuspend(Pipeline.kt:478)
at io.ktor.server.engine.DefaultEnginePipelineKt$defaultEnginePipeline$1$invokeSuspend$$inlined$execute$1.invoke(Pipeline.kt)
at io.ktor.server.engine.DefaultEnginePipelineKt$defaultEnginePipeline$1$invokeSuspend$$inlined$execute$1.invoke(Pipeline.kt)
at io.ktor.util.debug.ContextUtilsKt$initContextInDebugMode$2.invokeSuspend(ContextUtils.kt:20)
at io.ktor.util.debug.ContextUtilsKt$initContextInDebugMode$2.invoke(ContextUtils.kt)
at io.ktor.util.debug.ContextUtilsKt$initContextInDebugMode$2.invoke(ContextUtils.kt)
at kotlinx.coroutines.intrinsics.UndispatchedKt.startUndispatchedOrReturn(Undispatched.kt:78)
at kotlinx.coroutines.BuildersKt__Builders_commonKt.withContext(Builders.common.kt:167)
at kotlinx.coroutines.BuildersKt.withContext(Unknown Source)
at io.ktor.util.debug.ContextUtilsKt.initContextInDebugMode(ContextUtils.kt:20)
at io.ktor.server.engine.DefaultEnginePipelineKt$defaultEnginePipeline$1.invokeSuspend(DefaultEnginePipeline.kt:123)
at io.ktor.server.engine.DefaultEnginePipelineKt$defaultEnginePipeline$1.invoke(DefaultEnginePipeline.kt)
at io.ktor.server.engine.DefaultEnginePipelineKt$defaultEnginePipeline$1.invoke(DefaultEnginePipeline.kt)
at io.ktor.util.pipeline.DebugPipelineContext.proceedLoop(DebugPipelineContext.kt:80)
at io.ktor.util.pipeline.DebugPipelineContext.proceed(DebugPipelineContext.kt:57)
at io.ktor.util.pipeline.DebugPipelineContext.execute$ktor_utils(DebugPipelineContext.kt:63)
at io.ktor.util.pipeline.Pipeline.execute(Pipeline.kt:77)
at io.ktor.server.netty.NettyApplicationCallHandler$handleRequest$1$invokeSuspend$$inlined$execute$1.invokeSuspend(Pipeline.kt:478)
at io.ktor.server.netty.NettyApplicationCallHandler$handleRequest$1$invokeSuspend$$inlined$execute$1.invoke(Pipeline.kt)
at io.ktor.server.netty.NettyApplicationCallHandler$handleRequest$1$invokeSuspend$$inlined$execute$1.invoke(Pipeline.kt)
at io.ktor.util.debug.ContextUtilsKt$initContextInDebugMode$2.invokeSuspend(ContextUtils.kt:20)
at io.ktor.util.debug.ContextUtilsKt$initContextInDebugMode$2.invoke(ContextUtils.kt)
at io.ktor.util.debug.ContextUtilsKt$initContextInDebugMode$2.invoke(ContextUtils.kt)
at kotlinx.coroutines.intrinsics.UndispatchedKt.startUndispatchedOrReturn(Undispatched.kt:78)
at kotlinx.coroutines.BuildersKt__Builders_commonKt.withContext(Builders.common.kt:167)
at kotlinx.coroutines.BuildersKt.withContext(Unknown Source)
at io.ktor.util.debug.ContextUtilsKt.initContextInDebugMode(ContextUtils.kt:20)
at io.ktor.server.netty.NettyApplicationCallHandler$handleRequest$1.invokeSuspend(NettyApplicationCallHandler.kt:140)
at io.ktor.server.netty.NettyApplicationCallHandler$handleRequest$1.invoke(NettyApplicationCallHandler.kt)
at io.ktor.server.netty.NettyApplicationCallHandler$handleRequest$1.invoke(NettyApplicationCallHandler.kt)
at kotlinx.coroutines.intrinsics.UndispatchedKt.startCoroutineUndispatched(Undispatched.kt:44)
at kotlinx.coroutines.CoroutineStart.invoke(CoroutineStart.kt:112)
at kotlinx.coroutines.AbstractCoroutine.start(AbstractCoroutine.kt:126)
at kotlinx.coroutines.BuildersKt__Builders_commonKt.launch(Builders.common.kt:56)
at kotlinx.coroutines.BuildersKt.launch(Unknown Source)
at io.ktor.server.netty.NettyApplicationCallHandler.handleRequest(NettyApplicationCallHandler.kt:41)
at io.ktor.server.netty.NettyApplicationCallHandler.channelRead(NettyApplicationCallHandler.kt:33)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
at io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:61)
at io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:425)
at io.netty.util.concurrent.AbstractEventExecutor.runTask$$$capture(AbstractEventExecutor.java:173)
at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute$$$capture(AbstractEventExecutor.java:166)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:413)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.ktor.server.netty.EventLoopGroupProxy$Companion.create$lambda$1$lambda$0(NettyApplicationEngine.kt:296)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:1583)
A similar thing happens with a wrong Content-Type header
Action
A request is sent which has a Content-Type header equal to text/plain
Expected
UnsupportedMediaTypeException is thrown, 415 code is returned
Actual
IOException is thrown, 500 code is returned.
2024-09-19 10:59:23.414 [eventLoopGroupProxy-4-3] ERROR c.d.mosaix.validation.StatusPages - Unhandled exception: Failed to parse multipart: Content-Type should be multipart/* but it is text/plain.
java.io.IOException: Failed to parse multipart: Content-Type should be multipart/* but it is text/plain
at io.ktor.http.cio.MultipartKt.parseMultipart(Multipart.kt:260)
at io.ktor.http.cio.CIOMultipartDataBase.<init>(CIOMultipartDataBase.kt:34)
at io.ktor.http.cio.CIOMultipartDataBase.<init>(CIOMultipartDataBase.kt:26)
at io.ktor.server.engine.DefaultTransformJvmKt.multiPartData(DefaultTransformJvm.kt:40)
at io.ktor.server.engine.DefaultTransformJvmKt.defaultPlatformTransformations(DefaultTransformJvm.kt:29)
at io.ktor.server.engine.DefaultTransformKt$installDefaultTransformations$2.invokeSuspend(DefaultTransform.kt:69)
at io.ktor.server.engine.DefaultTransformKt$installDefaultTransformations$2.invoke(DefaultTransform.kt)
at io.ktor.server.engine.DefaultTransformKt$installDefaultTransformations$2.invoke(DefaultTransform.kt)
at io.ktor.util.pipeline.DebugPipelineContext.proceedLoop(DebugPipelineContext.kt:80)
at io.ktor.util.pipeline.DebugPipelineContext.proceed(DebugPipelineContext.kt:57)
at io.ktor.util.pipeline.DebugPipelineContext.execute$ktor_utils(DebugPipelineContext.kt:63)
at io.ktor.util.pipeline.Pipeline.execute(Pipeline.kt:77)
at io.ktor.server.request.ApplicationReceiveFunctionsKt.receiveNullable(ApplicationReceiveFunctions.kt:103)
...
at io.ktor.server.resources.RoutingKt$handle$2.invokeSuspend(Routing.kt:265)
at io.ktor.server.resources.RoutingKt$handle$2.invoke(Routing.kt)
at io.ktor.server.resources.RoutingKt$handle$2.invoke(Routing.kt)
at io.ktor.server.routing.Route$buildPipeline$1$1.invokeSuspend(Route.kt:116)
at io.ktor.server.routing.Route$buildPipeline$1$1.invoke(Route.kt)
at io.ktor.server.routing.Route$buildPipeline$1$1.invoke(Route.kt)
at io.ktor.util.pipeline.DebugPipelineContext.proceedLoop(DebugPipelineContext.kt:80)
at io.ktor.util.pipeline.DebugPipelineContext.proceed(DebugPipelineContext.kt:57)
at io.ktor.util.pipeline.DebugPipelineContext.execute$ktor_utils(DebugPipelineContext.kt:63)
at io.ktor.util.pipeline.Pipeline.execute(Pipeline.kt:77)
at io.ktor.server.routing.Routing$executeResult$$inlined$execute$1.invokeSuspend(Pipeline.kt:478)
at io.ktor.server.routing.Routing$executeResult$$inlined$execute$1.invoke(Pipeline.kt)
at io.ktor.server.routing.Routing$executeResult$$inlined$execute$1.invoke(Pipeline.kt)
at io.ktor.util.debug.ContextUtilsKt$initContextInDebugMode$2.invokeSuspend(ContextUtils.kt:20)
at io.ktor.util.debug.ContextUtilsKt$initContextInDebugMode$2.invoke(ContextUtils.kt)
at io.ktor.util.debug.ContextUtilsKt$initContextInDebugMode$2.invoke(ContextUtils.kt)
at kotlinx.coroutines.intrinsics.UndispatchedKt.startUndispatchedOrReturn(Undispatched.kt:78)
at kotlinx.coroutines.BuildersKt__Builders_commonKt.withContext(Builders.common.kt:167)
at kotlinx.coroutines.BuildersKt.withContext(Unknown Source)
at io.ktor.util.debug.ContextUtilsKt.initContextInDebugMode(ContextUtils.kt:20)
at io.ktor.server.routing.Routing.executeResult(Routing.kt:190)
at io.ktor.server.routing.Routing.interceptor(Routing.kt:64)
at io.ktor.server.routing.Routing$Plugin$install$1.invokeSuspend(Routing.kt:140)
at io.ktor.server.routing.Routing$Plugin$install$1.invoke(Routing.kt)
at io.ktor.server.routing.Routing$Plugin$install$1.invoke(Routing.kt)
at io.ktor.util.pipeline.DebugPipelineContext.proceedLoop(DebugPipelineContext.kt:80)
at io.ktor.util.pipeline.DebugPipelineContext.proceed(DebugPipelineContext.kt:57)
at kotlinx.coroutines.intrinsics.UndispatchedKt.startUndispatchedOrReturn(Undispatched.kt:78)
at kotlinx.coroutines.BuildersKt__Builders_commonKt.withContext(Builders.common.kt:167)
at kotlinx.coroutines.BuildersKt.withContext(Unknown Source)
at com.deviceinsight.mosaix.tenantcontext.TenantContextKt$TenantContextPlugin$2$1.invokeSuspend(TenantContext.kt:31)
at com.deviceinsight.mosaix.tenantcontext.TenantContextKt$TenantContextPlugin$2$1.invoke(TenantContext.kt)
at com.deviceinsight.mosaix.tenantcontext.TenantContextKt$TenantContextPlugin$2$1.invoke(TenantContext.kt)
at io.ktor.util.pipeline.DebugPipelineContext.proceedLoop(DebugPipelineContext.kt:80)
at io.ktor.util.pipeline.DebugPipelineContext.proceed(DebugPipelineContext.kt:57)
at io.ktor.server.engine.BaseApplicationEngineKt$installDefaultTransformationChecker$1.invokeSuspend(BaseApplicationEngine.kt:124)
at io.ktor.server.engine.BaseApplicationEngineKt$installDefaultTransformationChecker$1.invoke(BaseApplicationEngine.kt)
at io.ktor.server.engine.BaseApplicationEngineKt$installDefaultTransformationChecker$1.invoke(BaseApplicationEngine.kt)
at io.ktor.util.pipeline.DebugPipelineContext.proceedLoop(DebugPipelineContext.kt:80)
at io.ktor.util.pipeline.DebugPipelineContext.proceed(DebugPipelineContext.kt:57)
at io.ktor.server.application.hooks.CallFailed$install$1$1.invokeSuspend(CommonHooks.kt:45)
at io.ktor.server.application.hooks.CallFailed$install$1$1.invoke(CommonHooks.kt)
at io.ktor.server.application.hooks.CallFailed$install$1$1.invoke(CommonHooks.kt)
at kotlinx.coroutines.intrinsics.UndispatchedKt.startUndispatchedOrReturn(Undispatched.kt:78)
at kotlinx.coroutines.CoroutineScopeKt.coroutineScope(CoroutineScope.kt:264)
at io.ktor.server.application.hooks.CallFailed$install$1.invokeSuspend(CommonHooks.kt:44)
at io.ktor.server.application.hooks.CallFailed$install$1.invoke(CommonHooks.kt)
at io.ktor.server.application.hooks.CallFailed$install$1.invoke(CommonHooks.kt)
at io.ktor.util.pipeline.DebugPipelineContext.proceedLoop(DebugPipelineContext.kt:80)
at io.ktor.util.pipeline.DebugPipelineContext.proceed(DebugPipelineContext.kt:57)
at io.ktor.server.application.hooks.CallFailed$install$1$1.invokeSuspend(CommonHooks.kt:45)
at io.ktor.server.application.hooks.CallFailed$install$1$1.invoke(CommonHooks.kt)
at io.ktor.server.application.hooks.CallFailed$install$1$1.invoke(CommonHooks.kt)
at kotlinx.coroutines.intrinsics.UndispatchedKt.startUndispatchedOrReturn(Undispatched.kt:78)
at kotlinx.coroutines.CoroutineScopeKt.coroutineScope(CoroutineScope.kt:264)
at io.ktor.server.application.hooks.CallFailed$install$1.invokeSuspend(CommonHooks.kt:44)
at io.ktor.server.application.hooks.CallFailed$install$1.invoke(CommonHooks.kt)
at io.ktor.server.application.hooks.CallFailed$install$1.invoke(CommonHooks.kt)
at io.ktor.util.pipeline.DebugPipelineContext.proceedLoop(DebugPipelineContext.kt:80)
at io.ktor.util.pipeline.DebugPipelineContext.proceed(DebugPipelineContext.kt:57)
at io.ktor.util.pipeline.DebugPipelineContext.execute$ktor_utils(DebugPipelineContext.kt:63)
at io.ktor.util.pipeline.Pipeline.execute(Pipeline.kt:77)
at io.ktor.server.engine.DefaultEnginePipelineKt$defaultEnginePipeline$1$invokeSuspend$$inlined$execute$1.invokeSuspend(Pipeline.kt:478)
at io.ktor.server.engine.DefaultEnginePipelineKt$defaultEnginePipeline$1$invokeSuspend$$inlined$execute$1.invoke(Pipeline.kt)
at io.ktor.server.engine.DefaultEnginePipelineKt$defaultEnginePipeline$1$invokeSuspend$$inlined$execute$1.invoke(Pipeline.kt)
at io.ktor.util.debug.ContextUtilsKt$initContextInDebugMode$2.invokeSuspend(ContextUtils.kt:20)
at io.ktor.util.debug.ContextUtilsKt$initContextInDebugMode$2.invoke(ContextUtils.kt)
at io.ktor.util.debug.ContextUtilsKt$initContextInDebugMode$2.invoke(ContextUtils.kt)
at kotlinx.coroutines.intrinsics.UndispatchedKt.startUndispatchedOrReturn(Undispatched.kt:78)
at kotlinx.coroutines.BuildersKt__Builders_commonKt.withContext(Builders.common.kt:167)
at kotlinx.coroutines.BuildersKt.withContext(Unknown Source)
at io.ktor.util.debug.ContextUtilsKt.initContextInDebugMode(ContextUtils.kt:20)
at io.ktor.server.engine.DefaultEnginePipelineKt$defaultEnginePipeline$1.invokeSuspend(DefaultEnginePipeline.kt:123)
at io.ktor.server.engine.DefaultEnginePipelineKt$defaultEnginePipeline$1.invoke(DefaultEnginePipeline.kt)
at io.ktor.server.engine.DefaultEnginePipelineKt$defaultEnginePipeline$1.invoke(DefaultEnginePipeline.kt)
at io.ktor.util.pipeline.DebugPipelineContext.proceedLoop(DebugPipelineContext.kt:80)
at io.ktor.util.pipeline.DebugPipelineContext.proceed(DebugPipelineContext.kt:57)
at io.ktor.util.pipeline.DebugPipelineContext.execute$ktor_utils(DebugPipelineContext.kt:63)
at io.ktor.util.pipeline.Pipeline.execute(Pipeline.kt:77)
at io.ktor.server.netty.NettyApplicationCallHandler$handleRequest$1$invokeSuspend$$inlined$execute$1.invokeSuspend(Pipeline.kt:478)
at io.ktor.server.netty.NettyApplicationCallHandler$handleRequest$1$invokeSuspend$$inlined$execute$1.invoke(Pipeline.kt)
at io.ktor.server.netty.NettyApplicationCallHandler$handleRequest$1$invokeSuspend$$inlined$execute$1.invoke(Pipeline.kt)
at io.ktor.util.debug.ContextUtilsKt$initContextInDebugMode$2.invokeSuspend(ContextUtils.kt:20)
at io.ktor.util.debug.ContextUtilsKt$initContextInDebugMode$2.invoke(ContextUtils.kt)
at io.ktor.util.debug.ContextUtilsKt$initContextInDebugMode$2.invoke(ContextUtils.kt)
at kotlinx.coroutines.intrinsics.UndispatchedKt.startUndispatchedOrReturn(Undispatched.kt:78)
at kotlinx.coroutines.BuildersKt__Builders_commonKt.withContext(Builders.common.kt:167)
at kotlinx.coroutines.BuildersKt.withContext(Unknown Source)
at io.ktor.util.debug.ContextUtilsKt.initContextInDebugMode(ContextUtils.kt:20)
at io.ktor.server.netty.NettyApplicationCallHandler$handleRequest$1.invokeSuspend(NettyApplicationCallHandler.kt:140)
at io.ktor.server.netty.NettyApplicationCallHandler$handleRequest$1.invoke(NettyApplicationCallHandler.kt)
at io.ktor.server.netty.NettyApplicationCallHandler$handleRequest$1.invoke(NettyApplicationCallHandler.kt)
at kotlinx.coroutines.intrinsics.UndispatchedKt.startCoroutineUndispatched(Undispatched.kt:44)
at kotlinx.coroutines.CoroutineStart.invoke(CoroutineStart.kt:112)
at kotlinx.coroutines.AbstractCoroutine.start(AbstractCoroutine.kt:126)
at kotlinx.coroutines.BuildersKt__Builders_commonKt.launch(Builders.common.kt:56)
at kotlinx.coroutines.BuildersKt.launch(Unknown Source)
at io.ktor.server.netty.NettyApplicationCallHandler.handleRequest(NettyApplicationCallHandler.kt:41)
at io.ktor.server.netty.NettyApplicationCallHandler.channelRead(NettyApplicationCallHandler.kt:33)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
at io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:61)
at io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:425)
at io.netty.util.concurrent.AbstractEventExecutor.runTask$$$capture(AbstractEventExecutor.java:173)
at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute$$$capture(AbstractEventExecutor.java:166)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:413)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.ktor.server.netty.EventLoopGroupProxy$Companion.create$lambda$1$lambda$0(NettyApplicationEngine.kt:296)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:1583)
Implement a suspending version of EmbeddedServer.start(wait=true)
In some cases, such as small local test servers or other utilities that have a short-lived server, I want to suspend on the server instead of blocking on it. Currently, the best way I have for this looks like:
server.start(wait = false) /* I want to wait. However, I want it to be suspending instead of blocking. */
var applicationStopped = false
val handle = server.monitor.subscribe(ApplicationStopped) {
    applicationStopped = true
}
while (true) {
    if (applicationStopped) break
    delay(50.milliseconds)
}
handle.dispose()
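For comparison, the same workaround can be written without the polling loop; this is only a sketch built from the monitor.subscribe and ApplicationStopped APIs already used above:
import io.ktor.server.application.ApplicationStopped
import io.ktor.server.engine.EmbeddedServer
import kotlinx.coroutines.CompletableDeferred

// Suspends until the server reports ApplicationStopped, without a delay loop.
suspend fun awaitStopped(server: EmbeddedServer<*, *>) {
    val stopped = CompletableDeferred<Unit>()
    val handle = server.monitor.subscribe(ApplicationStopped) { stopped.complete(Unit) }
    try {
        stopped.await()
    } finally {
        handle.dispose()
    }
}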
I am requesting a suspend fun EmbeddedServer.startSuspending(wait = true) which would suspend instead of block when wait is true. Maybe the server could also inherit the outer coroutine scope, which would make startSuspending(wait = false) useful as well: it would essentially launch the server in a new coroutine.
Test Infrastructure
Native Windows tests failing due to port exhaustion
This build configuration consistently fails due to socket failures on Windows:
https://ktor.teamcity.com/buildConfiguration/Ktor_KtorMatrixNativeWindowsX64
We'll need to find a way to fix it.
Don't publish internal test artifacts
These artifacts are not supposed to be published. Users couldn't use them as they depend on modules that are not published, so it should be safe just to drop them from publishing.
ktor-client-tests
ktor-server-test-base
ktor-client-content-negotiation-tests
Engine exclusion from clientTests is confusing
The clientTests function is used extensively in our internal tests, but its syntax is confusing. For example, clientTests(listOf("Js", "Jetty")) { ... } reads like "run tests on the Js and Jetty clients", which is the exact opposite of what it does (those engines are excluded).
Created from discussion: https://github.com/ktorio/ktor/pull/4441#discussion_r1829332003
Other
Expose EngineMain server instance
We have engine main functions like io.ktor.server.netty.EngineMain for out-of-the-box execution, but there's no way to access the server instance for integration tests and the like. We ought to have some function like createServer() that returns the embedded server instance with the command-line environment built from the same arguments.
Add serialization for SSE
Right now there is no content negotiation for the individual messages; however, it could be done by analogy with WebSockets: https://ktor.io/docs/server-websocket-serialization.html
Fix concurrent flush and close in the reader
ByteChannel read issue on min > 1
The JVM ByteReadChannelOperations function for writing to a WritableByteChannel currently acts as a no-op when fewer than min bytes are available in the buffer, but it should suspend until these bytes are available.
The current behaviour will lead to an infinite loop with this basic pattern:
channel.writeByte(1)
channel.close()
while (!channel.isClosedForRead) {
    // Only one byte is available and the channel is closed, so read(min = 2)
    // currently returns without consuming anything and the loop never terminates.
    channel.read(min = 2) { bytes ->
        // do stuff
    }
}
Expected behaviour: throws EOFException
Introduce ServerSocket.port to simplify port access for the bound server
The ServerSocket is always bound and has a localAddress of type InetSocketAddress, so we can introduce an extension to simplify obtaining the port:
val ServerSocket.port: Int get() = (localAddress as InetSocketAddress).port
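A usage sketch assuming the extension above is added; the networking calls are the existing ktor-network API, binding to an ephemeral port and reading back the value assigned by the OS:
import io.ktor.network.selector.SelectorManager
import io.ktor.network.sockets.InetSocketAddress
import io.ktor.network.sockets.ServerSocket
import io.ktor.network.sockets.aSocket
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.runBlocking

// The proposed extension.
val ServerSocket.port: Int get() = (localAddress as InetSocketAddress).port

fun main() = runBlocking {
    val selector = SelectorManager(Dispatchers.IO)
    // Bind to port 0 so the OS picks a free port, then read it back via the extension.
    val server = aSocket(selector).tcp().bind("127.0.0.1", 0)
    println("Listening on port ${server.port}")
    selector.close()
}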
Unix Domain Socket Support for Native Targets
Currently there is no support for Unix domain sockets on androidNative, iOS, tvOS, watchOS, or Mingw targets, which throw an exception when trying to use UnixSocketAddress. The platform SDKs do, however, support sockaddr_un via the sys/un.h header, which would allow working implementations of unpack_sockaddr_un and pack_sockaddr_un for those platforms.
ktor-network/nix/interop/un.def
package = io.ktor.network.interop
headers = sys/un.h
headerFilter = sys/un.h
ktor-network/build.gradle.kts
// ...
kotlin {
// ...
createCInterop("un", androidNativeTargets() + iosTargets() + tvosTargets() + watchosTargets()) {
definitionFile = projectDir.resolve("nix/interop/un.def")
}
// ...
}
On Windows it depends on whether afunix.h is available, but the following can be defined:
ktor-network/windows/interop/afunix.def
package = io.ktor.network.interop
---
#ifdef KTOR_HAVE_AF_UNIX_H
#include <afunix.h>
#else
#include <winsock2.h>
#define UNIX_PATH_MAX 108
typedef struct sockaddr_un {
    ADDRESS_FAMILY sun_family;
    char sun_path[UNIX_PATH_MAX];
} SOCKADDR_UN, *PSOCKADDR_UN;
#endif
ktor-network/build.gradle.kts
kotlin {
// ...
createCInterop("afunix", windowsTargets()) {
definitionFile = projectDir.resolve("windows/interop/afunix.def")
}
// ...
}
A runtime check is necessary for Windows to ensure support:
internal val isAFUnixSupported: Boolean by lazy {
    initSocketsIfNeeded() // Currently a private function in SocketUtilsWindows.kt
    val s = socket(AF_UNIX, SOCK_STREAM, 0)
    if (s == INVALID_SOCKET) return@lazy false
    closesocket(s)
    true
}
Allow to Disable Body Decompression on the Server for a specific call
There is a call.suppressEncoding() method, preventing the body from being encoded by the Compression plugin.
In the case of writing proxy servers, it is necessary to have a pairing method to prevent body decompression, like:
call.suppressDecoding()
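A sketch of how the requested call could look in a proxy handler; suppressDecoding is hypothetical, the rest is existing routing API:
routing {
    post("/proxy/{...}") {
        // Hypothetical: keep the request body compressed instead of decoding it,
        // so it can be forwarded to the upstream server byte-for-byte.
        call.suppressDecoding()
        val rawBody = call.receiveChannel()
        // ... stream rawBody to the upstream request unchanged ...
    }
}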