Basic concepts 


Gigahorse is a helper object that creates many useful things. For the AHC backend, use gigahorse.support.asynchttpclient.Gigahorse; for the Akka HTTP backend, use gigahorse.support.akkahttp.Gigahorse.


The HttpClient represents an HTTP client that's able to handle multiple requests. Once used, it spawns many threads, so the lifetime of an HttpClient must be managed with care; otherwise your program will run out of resources.

There are two ways of creating an HttpClient. The first is using the loan pattern Gigahorse.withHttp(config) { ... }:

scala> import gigahorse._, support.asynchttpclient.Gigahorse
import gigahorse._
import support.asynchttpclient.Gigahorse

scala> Gigahorse.withHttp(Gigahorse.config) { http =>
         // do something
       }
This guarantees that the HttpClient is closed, but the drawback is that it could close prematurely, before the HTTP processing is done, so you would have to block inside the function to wait for all the futures.
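For instance, here is a minimal sketch of blocking inside the loan so the future completes before the client is closed (the URL is hypothetical):

```scala
import gigahorse._, support.asynchttpclient.Gigahorse
import scala.concurrent._, duration._

val body: String = Gigahorse.withHttp(Gigahorse.config) { http =>
  val r = Gigahorse.url("https://example.com").get
  val f = http.run(r, Gigahorse.asString)
  // block here, before withHttp closes the client
  Await.result(f, 30.seconds)
}
```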

The second way is to create one with Gigahorse.http(Gigahorse.config). If you use this, you must close the client yourself:

scala> val http = Gigahorse.http(Gigahorse.config)
http: gigahorse.HttpClient = AchHttpClient(org.asynchttpclient.DefaultAsyncHttpClientConfig@3510671c)

scala> http.close() // must call close()
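To make sure close() runs even when an exception is thrown, one option is to wrap the client in try/finally yourself (a sketch; the URL is hypothetical):

```scala
import gigahorse._, support.asynchttpclient.Gigahorse
import scala.concurrent._, duration._

val http = Gigahorse.http(Gigahorse.config)
try {
  val r = Gigahorse.url("https://example.com").get
  val f = http.run(r, Gigahorse.asString)
  Await.result(f, 30.seconds)
} finally {
  http.close() // release the underlying threads
}
```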


To create an HttpClient you need to pass in a Config. Gigahorse.config reads the settings from application.conf if it exists; otherwise it picks the default values.

scala> Gigahorse.config
res2: gigahorse.Config = Config(120 seconds, 120 seconds, 120 seconds, 200 milliseconds, true, 5, false, None, None, SSLConfig(None,SSLDebugConfig(false,false,false,None,false,false,false,false,None,false,false,false,false,false),false,List(RSA keySize < 2048, DSA keySize < 2048, EC keySize < 224),List(MD2, MD4, MD5),None,Some(List(TLSv1.2, TLSv1.1, TLSv1)),class com.typesafe.sslconfig.ssl.DefaultHostnameVerifier,KeyManagerConfig(SunX509,List()),SSLLooseConfig(false,None,None,false,false,false,false),TLSv1.2,None,None,SSLParametersConfig(Default,List()),TrustManagerConfig(PKIX,List())), 5, false, true, true, 1 minute, Duration.Inf, -1, -1, ConfigMemorySize(1048576), ConfigMemorySize(1048576))
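Since Config is an immutable datatype, you can also adjust settings in code before creating the client. This sketch assumes with-style copy methods on Config such as withRequestTimeout and withFollowRedirects:

```scala
import gigahorse._, support.asynchttpclient.Gigahorse
import scala.concurrent.duration._

// assumed with-style setters on the immutable Config
val config = Gigahorse.config
  .withRequestTimeout(30.seconds)
  .withFollowRedirects(true)

Gigahorse.withHttp(config) { http =>
  // use the client configured above
}
```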


The Request is an immutable datatype that represents a single HTTP request. Unlike HttpClient, it is relatively cheap to create and keep around.

To construct a request, call the Gigahorse.url(...) function:

scala> val r = Gigahorse.url("https://api.duckduckgo.com").get.
           addQueryString(
             "q" -> "1 + 1",
             "format" -> "json"
           )
r: gigahorse.Request = Request(https://api.duckduckgo.com, GET, EmptyBody(), Map(), Map(q -> List(1 + 1), format -> List(json)), None, None, None, None, None, None)

You can chain calls like the above; each call returns a new request value.
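Because a Request is immutable, a partially built request can be shared and extended safely. A sketch, reusing the URL and query parameters from the example above:

```scala
import gigahorse._, support.asynchttpclient.Gigahorse

val base = Gigahorse.url("https://api.duckduckgo.com").get
  .addQueryString("format" -> "json")

// each call returns a new Request; base is unchanged
val r1 = base.addQueryString("q" -> "1 + 1")
val r2 = base.addQueryString("q" -> "2 + 2")
```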

http.run(r, f)

There are many methods on HttpClient, but probably the most useful one is the run(r, f) method:

abstract class HttpClient extends AutoCloseable {
  /** Runs the request and return a Future of A. Errors on non-OK response. */
  def run[A](request: Request, f: FullResponse => A): Future[A]
}


The first parameter takes a Request, and the second takes a function from FullResponse to A. There's a built-in function called Gigahorse.asString that returns the body content as a String.

Since this is a plain function, you can compose it with some other function using andThen:

scala> import scala.concurrent._, duration._
import scala.concurrent._
import duration._

scala> Gigahorse.withHttp(Gigahorse.config) { http =>
         val r = Gigahorse.url("https://api.duckduckgo.com").get.
             addQueryString(
               "q" -> "1 + 1",
               "format" -> "json"
             )
         val f = http.run(r, Gigahorse.asString andThen {_.take(60)})
         Await.result(f, 120.seconds)
       }
res3: String = {"DefinitionSource":"","Heading":"1+1","ImageWidth":0,"Relat


Because run executes a request in a non-blocking fashion, it returns a Future. Normally, you want to keep the value as a Future for as long as you can, but here we block on it to see the value.

One motivation for keeping the Future value as long as you can is working with multiple Futures (HTTP requests) in parallel. See Futures and Promises to learn more about Futures.
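For instance, here is a sketch that fires two requests in parallel and combines the results with Future.sequence (the URLs are hypothetical):

```scala
import gigahorse._, support.asynchttpclient.Gigahorse
import scala.concurrent._, duration._
import ExecutionContext.Implicits.global

Gigahorse.withHttp(Gigahorse.config) { http =>
  val urls = List("https://example.com/a", "https://example.com/b")
  // start both requests before awaiting either, so they run in parallel
  val fs = urls map { u => http.run(Gigahorse.url(u).get, Gigahorse.asString) }
  val f: Future[List[String]] = Future.sequence(fs)
  Await.result(f, 120.seconds)
}
```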

http.runStream(r, f) 

Instead of waiting for the full response, Gigahorse can also treat the incoming response as a Reactive Stream and process it in chunks, for example line by line.
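As a sketch, this assumes an asStringStream handler yielding a stream that can be processed chunk by chunk with foreach; check the streaming docs for the exact signatures:

```scala
import gigahorse._, support.asynchttpclient.Gigahorse
import scala.concurrent._, duration._

Gigahorse.withHttp(Gigahorse.config) { http =>
  val r = Gigahorse.url("https://example.com/events").get
  // assumed API: asStringStream yields a stream of String chunks;
  // foreach processes each chunk as it arrives and returns a Future
  val f = http.runStream(r, Gigahorse.asStringStream andThen { xs =>
    xs.foreach { line => println(line) }
  })
  Await.result(f, 120.seconds)
}
```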