Ktor 3.0.0 Help

Connection pooling and caching

In the previous tutorial, we added persistence to a website using the Exposed framework. In this tutorial, we'll look at how to implement database connection pooling and caching using the HikariCP and Ehcache libraries, respectively.

Add dependencies

First, you need to add dependencies for the HikariCP and Ehcache libraries. Open the gradle.properties file and specify library versions:

hikaricp_version = 5.0.1
ehcache_version = 3.10.8

Then, open build.gradle.kts and add the following dependencies:

val hikaricp_version: String by project
val ehcache_version: String by project

dependencies {
    implementation("com.zaxxer:HikariCP:$hikaricp_version")
    implementation("org.ehcache:ehcache:$ehcache_version")
}

Click the Load Gradle Changes icon in the top right corner of the build.gradle.kts file to install the newly added dependencies.

Connection pooling

Exposed opens a new JDBC connection inside each transaction call, when the transaction first touches the database. Establishing a JDBC connection is resource-expensive, so reusing existing connections improves performance. The connection pooling mechanism solves this problem.

In this section, we'll use the HikariCP framework to manage JDBC connection pooling in our application.
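The core idea behind pooling can be illustrated with a toy pool in plain Kotlin. This is only a simplified sketch of the borrow/release cycle, not how HikariCP is implemented — a real pool adds sizing limits, validation, timeouts, and thread safety:

```kotlin
// Toy stand-in for a JDBC connection; opening a real one is the expensive part.
class PooledConnection(val id: Int)

// A minimal pool: hand out a free connection if one exists, otherwise open a
// new one; released connections return to the pool for reuse.
class SimplePool {
    private val free = ArrayDeque<PooledConnection>()
    var totalOpened = 0
        private set

    fun borrow(): PooledConnection =
        free.removeFirstOrNull() ?: PooledConnection(++totalOpened)

    fun release(connection: PooledConnection) {
        free.addFirst(connection)
    }
}

fun main() {
    val pool = SimplePool()
    // Five sequential "transactions", each borrowing and releasing a connection.
    repeat(5) {
        val connection = pool.borrow()
        pool.release(connection)
    }
    println(pool.totalOpened)  // prints 1: the same connection was reused
}
```

Without the pool, each of the five transactions would have opened its own connection; with it, only one connection is ever established.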

Extract connection settings into a configuration file

In the previous tutorial, we used hardcoded driverClassName and jdbcURL in the com/example/dao/DatabaseFactory.kt file to establish a database connection:

fun init() {
    val driverClassName = "org.h2.Driver"
    val jdbcURL = "jdbc:h2:file:./build/db"
}

Let's extract database connection settings to a custom configuration group.

  1. Open the src/main/resources/application.conf file and add the storage group outside the ktor group as follows:

    storage {
        driverClassName = "org.h2.Driver"
        jdbcURL = "jdbc:h2:file:"
        dbFilePath = build/db
    }
  2. Open com/example/dao/DatabaseFactory.kt and update the init function to load storage settings from the configuration file:

    import io.ktor.server.config.*
    import java.io.*

    object DatabaseFactory {
        fun init(config: ApplicationConfig) {
            val driverClassName = config.property("storage.driverClassName").getString()
            val jdbcURL = config.property("storage.jdbcURL").getString() +
                    (config.propertyOrNull("storage.dbFilePath")?.getString()?.let {
                        File(it).canonicalFile.absolutePath
                    } ?: "")
        }
    }

    The init function now accepts ApplicationConfig and uses config.property to load custom settings.

  3. Finally, open com/example/Application.kt and pass environment.config to DatabaseFactory.init to load connection settings on application startup:

    fun Application.module() {
        DatabaseFactory.init(environment.config)
        configureRouting()
        configureTemplating()
    }
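The init function concatenates the jdbcURL prefix with the canonical absolute path of dbFilePath. A quick stdlib-only sketch of that concatenation logic, with the Ktor configuration lookup replaced by plain parameters (buildJdbcUrl is a hypothetical helper for illustration):

```kotlin
import java.io.File

// Mirrors the concatenation in DatabaseFactory.init: the optional dbFilePath
// is resolved to a canonical absolute path and appended to the URL prefix.
fun buildJdbcUrl(prefix: String, dbFilePath: String?): String =
    prefix + (dbFilePath?.let { File(it).canonicalFile.absolutePath } ?: "")

fun main() {
    // With dbFilePath = "build/db", the relative path is resolved against the
    // working directory, yielding something like "jdbc:h2:file:/.../build/db".
    println(buildJdbcUrl("jdbc:h2:file:", "build/db"))

    // Without a dbFilePath property, the prefix is used as-is.
    println(buildJdbcUrl("jdbc:h2:file:./build/db", null))
}
```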

Enable connection pooling

To enable connection pooling in Exposed, you need to provide a DataSource as a parameter to the Database.connect function. HikariCP provides the HikariDataSource class, which implements the DataSource interface.

  1. To create HikariDataSource, open com/example/dao/DatabaseFactory.kt and add the createHikariDataSource function to the DatabaseFactory object:

    import com.zaxxer.hikari.*

    object DatabaseFactory {
        private fun createHikariDataSource(
            url: String,
            driver: String
        ) = HikariDataSource(HikariConfig().apply {
            driverClassName = driver
            jdbcUrl = url
            maximumPoolSize = 3
            isAutoCommit = false
            transactionIsolation = "TRANSACTION_REPEATABLE_READ"
            validate()
        })
    }

    Here are some notes on the data source settings:

    • The createHikariDataSource function takes the database URL and the driver class name as parameters.

    • The maximumPoolSize property specifies the maximum size the connection pool can reach.

    • isAutoCommit and transactionIsolation are set to match the default settings used by Exposed.

  2. To use HikariDataSource, pass it to the Database.connect function:

    object DatabaseFactory {
        fun init(config: ApplicationConfig) {
            val database = Database.connect(createHikariDataSource(url = jdbcURL, driver = driverClassName))
        }
    }

    You can now run the application and make sure everything works as before.

Caching

You can supplement a database with a cache. Caching stores frequently used data in temporary storage, which reduces both the load on the database and the time needed to read frequently requested data.

In this tutorial, we'll use the Ehcache library to organize the cache in a file.

Add a cache file path to the configuration

Open the src/main/resources/application.conf file and add the ehcacheFilePath property to the storage group:

storage {
    ehcacheFilePath = build/ehcache
}

This property specifies the path to a file used to store the cache data. We'll use it later to configure a DAOFacade implementation for working with a cache.

Implement caching

To implement caching, we need to provide another DAOFacade implementation that returns a value from the cache and, if there is no cached value, delegates the call to the database-backed implementation.
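This is the classic cache-aside pattern. Before wiring in Ehcache, here is a toy sketch of the pattern in plain Kotlin, where a HashMap stands in for the cache and a read-only Map stands in for the database (CachedArticleLookup is a hypothetical class for illustration only):

```kotlin
// Toy cache-aside lookup: check the cache first, fall back to the "database",
// and remember the answer so the next lookup never touches the database.
class CachedArticleLookup(private val database: Map<Int, String>) {
    private val cache = HashMap<Int, String>()
    var databaseReads = 0
        private set

    fun article(id: Int): String? =
        cache[id] ?: run {
            databaseReads++
            database[id]?.also { cache[id] = it }
        }
}

fun main() {
    val lookup = CachedArticleLookup(mapOf(1 to "Connection pooling and caching"))
    lookup.article(1)  // cache miss: reads the database and caches the result
    lookup.article(1)  // cache hit: the database is not touched again
    println(lookup.databaseReads)  // prints 1
}
```

DAOFacadeCacheImpl below follows the same shape, with Ehcache as the cache and the main DAOFacade as the fallback.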

  1. Create a new DAOFacadeCacheImpl.kt file in the com.example.dao package and add the following implementation to it:

    package com.example.dao

    import com.example.models.*
    import org.ehcache.config.builders.*
    import org.ehcache.config.units.*
    import org.ehcache.impl.config.persistence.*
    import java.io.*

    class DAOFacadeCacheImpl(
        private val delegate: DAOFacade,
        storagePath: File
    ) : DAOFacade {
        private val cacheManager = CacheManagerBuilder.newCacheManagerBuilder()
            .with(CacheManagerPersistenceConfiguration(storagePath))
            .withCache(
                "articlesCache",
                CacheConfigurationBuilder.newCacheConfigurationBuilder(
                    Int::class.javaObjectType,
                    Article::class.java,
                    ResourcePoolsBuilder.newResourcePoolsBuilder()
                        .heap(1000, EntryUnit.ENTRIES)
                        .offheap(10, MemoryUnit.MB)
                        .disk(100, MemoryUnit.MB, true)
                )
            )
            .build(true)

        private val articlesCache =
            cacheManager.getCache("articlesCache", Int::class.javaObjectType, Article::class.java)
    }

    Here is a short overview of this code sample:

    • To initialize and configure the cache, we define an Ehcache CacheManager instance. We provide storagePath as the root directory to be used for disk storage.

    • We create a cache for entries that stores articles by their IDs: articlesCache maps Int keys to Article values.

    • Then we provide size constraints for local memory and disk resources. You can read more about these parameters in the Ehcache documentation.

    • Finally, we obtain the created cache by calling cacheManager.getCache() with the provided name, key, and value types.

  2. To be stored in a cache, the Article class must be serializable and implement java.io.Serializable. Open com/example/models/Article.kt and update the code as follows:

    import java.io.Serializable

    data class Article(val id: Int, val title: String, val body: String) : Serializable
  3. Now we're ready to implement the members of DAOFacade. Back in DAOFacadeCacheImpl.kt, add the following methods:

    override suspend fun allArticles(): List<Article> =
        delegate.allArticles()

    override suspend fun article(id: Int): Article? =
        articlesCache[id]
            ?: delegate.article(id)
                ?.also { article -> articlesCache.put(id, article) }

    override suspend fun addNewArticle(title: String, body: String): Article? =
        delegate.addNewArticle(title, body)
            ?.also { article -> articlesCache.put(article.id, article) }

    override suspend fun editArticle(id: Int, title: String, body: String): Boolean {
        articlesCache.put(id, Article(id, title, body))
        return delegate.editArticle(id, title, body)
    }

    override suspend fun deleteArticle(id: Int): Boolean {
        articlesCache.remove(id)
        return delegate.deleteArticle(id)
    }
    • allArticles: we don't try to cache all the articles; we delegate this to the main database.

    • article: when we look up an article, we first check whether it is present in the cache; only if it is not, we delegate the call to the main DAOFacade and also add the returned article to the cache.

    • addNewArticle: when we add a new article, we delegate it to the main DAOFacade, but we also add this article to the cache.

    • editArticle: when editing the existing article, we update both the cache and the database.

    • deleteArticle: on delete, we need to delete the article both from the cache and from the main database.
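Because Article implements java.io.Serializable, Ehcache's disk tier can persist cached entries with standard Java serialization. A quick stdlib-only round-trip sketch of what that makes possible (the byte-array buffers here just stand in for the cache's disk storage):

```kotlin
import java.io.*

// Article as defined in the tutorial, marked Serializable for the disk tier.
data class Article(val id: Int, val title: String, val body: String) : Serializable

fun main() {
    val original = Article(1, "The drive to develop!", "...it's what keeps me going.")

    // Serialize to bytes, as a disk-persistent cache tier does internally.
    val bytes = ByteArrayOutputStream().also { buffer ->
        ObjectOutputStream(buffer).use { it.writeObject(original) }
    }.toByteArray()

    // Deserialize and confirm the round trip preserves the value.
    val restored = ObjectInputStream(ByteArrayInputStream(bytes)).readObject() as Article
    println(restored == original)  // prints true: data classes compare by value
}
```

If Article did not implement Serializable, the disk tier would fail at runtime when writing entries.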

Initialize DAOFacadeCacheImpl

Let's create an instance of DAOFacadeCacheImpl and add a sample article to be inserted into the database before the application is started:

  1. First, open the DAOFacadeImpl.kt file and remove the dao variable initialization at the bottom of the file.

  2. Then, open com/example/plugins/Routing.kt and initialize the dao variable inside the configureRouting block:

    import kotlinx.coroutines.*
    import java.io.*

    fun Application.configureRouting() {
        val dao: DAOFacade = DAOFacadeCacheImpl(
            DAOFacadeImpl(),
            File(environment.config.property("storage.ehcacheFilePath").getString())
        ).apply {
            runBlocking {
                if (allArticles().isEmpty()) {
                    addNewArticle("The drive to develop!", "...it's what keeps me going.")
                }
            }
        }
    }

    And that's it. You can now run the application and make sure everything works as before.

Last modified: 14 December 2022