Gigahorse 0.1.0


Update: please use Gigahorse 0.1.1

Gigahorse 0.1.0 is now released. It is an HTTP client for Scala with Async Http Client underneath. Please see Gigahorse docs for the details. Here's an example snippet to get the feel of the library.

scala> import gigahorse._
scala> import scala.concurrent._, duration._
scala> Gigahorse.withHttp(Gigahorse.config) { http =>
         val r = Gigahorse.url("").get.
           addQueryString(
             "q" -> "1 + 1",
             "format" -> "json"
           )
         val f =, Gigahorse.asString andThen {_.take(60)})
         Await.result(f, 120.seconds)
       }

registry and reference pattern


There's a "pattern" that I've been thinking about, which arises in some situation while persisting/serializing objects.

To motivate this, consider the following case class:

scala> case class User(name: String, parents: List[User])
defined class User
scala> val alice = User("Alice", Nil)
alice: User = User(Alice,List())
scala> val bob = User("Bob", alice :: Nil)
bob: User = User(Bob,List(User(Alice,List())))
scala> val charles = User("Charles", bob :: Nil)
charles: User = User(Charles,List(User(Bob,List(User(Alice,List())))))
scala> val users = List(alice, bob, charles)
users: List[User] = List(User(Alice,List()), User(Bob,List(User(Alice,List()))), User(Charles,List(User(Bob,List(User(Alice,List()))))))

The important part is that it contains a parents field, which holds a list of other users.
Now let's say you want to turn this users list into JSON.

[{ "name": "Alice", "parents": [] },
{ "name": "Bob",
  "parents": [{ "name": "Alice", "parents": [] }] },
{ "name": "Charles",
  "parents": [{ "name": "Bob", "parents": [{ "name": "Alice", "parents": [] }] }] }]

sjson-new and the prisoner of Azkaban


This is part 3 on the topic of sjson-new. See also part 1 and part 2.

Within the sbt code base there are a few places where the persisted data is on the order of hundreds of megabytes, which I suspect becomes a performance bottleneck, especially on machines without an SSD drive.
Naturally, my first instinct was to start reading up on the encoding of Google Protocol Buffers to implement my own custom binary format.

microbenchmark using sbt-jmh

What I should've done first is start benchmarking. Using @ktosopl (Konrad Malawski)'s sbt-jmh, setting up a microbenchmark is easy. All you have to do is pop that plugin into your build and create a subproject that enables JmhPlugin.
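
To sketch it out (the plugin version and subproject name here are illustrative; check the sbt-jmh README for the latest):

// project/plugins.sbt
addSbtPlugin("pl.project13.scala" % "sbt-jmh" % "0.2.6")

// build.sbt
lazy val benchmark = (project in file("benchmark")).
  enablePlugins(JmhPlugin)

Annotated benchmark classes go into the benchmark subproject, and jmh:run executes them from the sbt shell.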

sjson-new and custom codecs using LList


Two months ago, I wrote about sjson-new. I was working on that again over the weekend, so here's the update.
In the earlier post, I introduced the family tree of JSON libraries in the Scala ecosystem, and the notion of a backend-independent, typeclass-based JSON codec library. I concluded that we need some easy way of defining a custom codec for it to be usable.

roll your own shapeless

In between the April post and the last weekend, there were flatMap(Oslo) 2016 and Scala Days New York 2016. Unfortunately I wasn't able to attend flatMap, but I was able to catch Daniel Spiewak's "Roll Your Own Shapeless" talk in New York. The full flatMap version is available on vimeo, so I recommend you check it out.

sbt internally uses HLists for caching via sbinary:

implicit def mavenCacheToHL = (m: MavenCache) => :*: m.rootFile.getAbsolutePath :*: HNil
implicit def mavenRToHL = (m: MavenRepository) => :*: m.root :*: HNil

and I've been thinking that something like an HList or Shapeless's LabelledGeneric would be a good intermediate datatype to represent a JSON object, so Daniel's talk gave me the final push.
In this post, I will introduce a special purpose HList called LList.


sjson-new comes with a datatype called LList, which stands for labelled heterogeneous list.
The List[A] that comes with the Standard Library can only store values of one type, namely A. LList, by contrast, can store values of a different type per cell, and it also stores a label per cell. For this reason, each LList has its own type. Here's how it looks in the REPL:

scala> import sjsonnew._, LList.:*:
import sjsonnew._
import LList.$colon$times$colon
scala> import BasicJsonProtocol._
import BasicJsonProtocol._
scala> val x = ("name", "A") :*: ("value", 1) :*: LNil
x: sjsonnew.LList.:*:[String,sjsonnew.LList.:*:[Int,sjsonnew.LNil]] = (name, A) :*: (value, 1) :*: LNil
scala> val y: String :*: Int :*: LNil = x
y: sjsonnew.LList.:*:[String,sjsonnew.LList.:*:[Int,sjsonnew.LNil]] = (name, A) :*: (value, 1) :*: LNil

Can you find String and Int mentioned in that long type name of x? String :*: Int :*: LNil is a shorter way of writing that type, as demonstrated by y.

BasicJsonProtocol is able to convert all LList values into a JSON object.
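
This is where the custom codec story comes in: given an isomorphism between a case class and an LList shape, the class can ride on the LList support. Here's a sketch using the LList.iso helper; Person is a made-up example, so treat the details as an approximation of the API rather than a spec:

scala> case class Person(name: String, value: Int)
defined class Person

scala> implicit val personIso = LList.iso(
         { p: Person => ("name", :*: ("value", p.value) :*: LNil },
         { in: String :*: Int :*: LNil => Person(in.head, in.tail.head) })

With that in scope, Person values convert to and from JSON objects the same way plain LList values do.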



I made a new Github organization called foundweekends for people who like coding on weekends. If you want to join, or if you have project ideas, ping me on twitter or come talk to us on Gitter.

As a starter, it will pick up the maintenance of conscript, giter8, and pamflet from @n8han.



In the spirit of a favorite weekend activity among Scala programmers, I wrote my own JSON library called sjson-new.
sjson-new is a typeclass based JSON codec library, or wit for that Jawn. In other words, it aims to provide sjson-like codec facility in a backend independent way.

In terms of the codebase I based it off of spray-json, but conceptually it's close to Scala Pickling in the way it deals with data. Unlike Pickling, however, sjson-new-core is free of macros and runtime reflection beyond normal pattern matching.

Here's how to use it with Json4s-AST:

libraryDependencies += "com.eed3si9n" %% "sjson-new-json4s" % "0.1.0"

Here's how to use it with Spray:

libraryDependencies += "com.eed3si9n" %% "sjson-new-spray" % "0.1.0"

sbt server reboot


This is a continuation from the sbt 1.0 roadmap that I wrote recently. In this post, I'm going to introduce a new implementation of sbt server. Please post on the sbt-dev mailing list for feedback.

The motivation for sbt server is better IDE integration.

A build is a giant, mutable, shared, state, device. It's called disk! The build works with disk. You cannot get away from disk.

-- Josh Suereth in The road to sbt 1.0 is paved with server

The disk on your machine is fundamentally a stateful thing, and sbt can execute the tasks in parallel only because it has the full control of the effects. Any time you are running both sbt and an IDE, or you're running multiple instances of sbt against the same build, sbt cannot guarantee the state of the build.

The original concept of sbt server was proposed in 2013. Around the same time, the sbt-remote-control project was started as an implementation of the idea. At some point sbt 0.13 stabilized, and Activator became the driver of sbt-remote-control, adding more constraints to it, such as not changing sbt itself and supporting JavaScript as a client.

With sbt 1.0 in mind, I have rebooted the sbt server effort. Instead of building something outside of sbt, I want to underengineer the whole thing. This means throwing out previously made assumptions that I think are non-essential, such as automatic discovery and automatic serialization. Instead, I want to make something small that we can comfortably merge into the sbt/sbt codebase.

Lightbend holds an Engineering Meeting a few times a year where we all fly to one location, have discussions face to face, and also do an internal "hackathon." During the February code retreat in beautiful Budapest, Johan Andrén (@apnylle), Toni Cunei, and Martin Duhem joined my proposal to work on the sbt server reboot. The goal was to make a button in IntelliJ IDEA that can trigger a build in sbt.

sbt 1.0 roadmap


There have been some discussions around sbt 1.0 lately, so here is a writeup to discuss it. This document is intended to be a mid-term mission statement: a refocus to get something out. Please post on the sbt-dev mailing list for feedback.


I don't have a good idea of when sbt 1.0 will ship.
The biggest feature of sbt 1.0 is code reorganization, which is already in progress:

ScalaMatsuri as a lifestyle


For me (and for many of the 27 organizers, I imagine) ScalaMatsuri is a lifestyle. It's true that there was a successful two-day conference in Tokyo with 550 participants. But for us organizers, the preparation had been going on since February 28th, for 11 months. Even though my own contribution was small, planning ScalaMatsuri 2016 was by far the deepest involvement I've committed to anything. Over the months of planning, there were many discussions over Slack, Hangouts, and occasionally even face-to-face. The fun part was coming up with ideas together and seeing them materialize. Sometimes I was the one coming up with radical ideas that others executed; other times it was the opposite, and I was the one getting my hands dirty.

I've already written a lot of what I wanted to say in A regional tech conference that's also global, so there might be some overlap here.

a regional tech conference that's also global
