herding cats 

This is a log of me going through Cats, a functional programming library for Scala that is currently experimental and under active development. I’ll try to follow the structure of learning Scalaz that I wrote in 2012 (!).

Cats’ website says the name is a playful shortening of the word “category.” Some people say coordinating programmers is like herding cats (as in attempting to move dozens of them in a direction like cattle, while cats have their own ideas), and it seems to be true at least for those of us using Scala. Keenly aware of this situation, the cats list approachability as their first motivation.

Cats also looks interesting from the technical perspective. With its approachability and Erik Osheim (@d6/@non)’s awesomeness, people are gathering around the project with new ideas such as Michael Pilquist (@mpilquist)’s simulacrum and Miles Sabin (@milessabin)’s typeclass derivation. Hopefully I’ll learn more in the coming days.

day 1 

typeclasses 101 

Learn You a Haskell for Great Good says:

A typeclass is a sort of interface that defines some behavior. If a type is a part of a typeclass, that means that it supports and implements the behavior the typeclass describes.

Cats says:

We are trying to make the library modular. It will have a tight core which will contain only the typeclasses and the bare minimum of data structures that are needed to support them. Support for using these typeclasses with the Scala standard library will be in the std project.

Let’s keep going with learning me a Haskell.

sbt 

Cats is a new project under active development. Feedback and contributions are welcomed as we look to improve it. This project is evolving quickly and we are making no guarantees about stability until a 1.0 release is made (current est. around Q3 2016).

A released version of Cats is now available.

You can then try it out with a build.sbt like this:

val catsVersion = "0.7.2"
val catsAll = "org.typelevel" %% "cats" % catsVersion
val macroParadise = compilerPlugin("org.scalamacros" % "paradise" % "2.1.0" cross CrossVersion.full)
val kindProjector = compilerPlugin("org.spire-math" %% "kind-projector" % "0.6.3")
val resetAllAttrs = "org.scalamacros" %% "resetallattrs" % "1.0.0-M1"

val specs2Version = "3.6" // use the version used by discipline
val specs2Core  = "org.specs2" %% "specs2-core" % specs2Version
val specs2Scalacheck = "org.specs2" %% "specs2-scalacheck" % specs2Version
val scalacheck = "org.scalacheck" %% "scalacheck" % "1.12.4"

lazy val root = (project in file(".")).
  settings(
    organization := "com.example",
    name := "something",
    scalaVersion := "2.11.6",
    libraryDependencies ++= Seq(
      catsAll,
      specs2Core % Test, specs2Scalacheck % Test, scalacheck % Test,
      macroParadise, kindProjector, resetAllAttrs
    ),
    scalacOptions ++= Seq(
      "-deprecation",
      "-encoding", "UTF-8",
      "-feature",
      "-language:_"
    )
  )

You can then open the REPL using sbt 0.13.12:

$ sbt
> console
Welcome to Scala version 2.11.6 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_79).
Type in expressions to have them evaluated.
Type :help for more information.

scala>

There’s also API docs generated for Cats.

Eq 

LYAHFGG:

Eq is used for types that support equality testing. The functions its members implement are == and /=.

Cats’ equivalent for the Eq typeclass is also called Eq. Eq was originally defined in non/algebra, but it was moved into the cats-kernel subproject and became part of Cats:

scala> import cats._, cats.instances.all._, cats.syntax.eq._
import cats._
import cats.instances.all._
import cats.syntax.eq._

scala> 1 === 1
res0: Boolean = true

scala> 1 === "foo"
<console>:21: error: type mismatch;
 found   : String("foo")
 required: Int
       1 === "foo"
             ^

scala> 1 == "foo"
<console>:21: warning: comparing values of types Int and String using `==' will always yield false
       1 == "foo"
         ^
res2: Boolean = false

scala> (Some(1): Option[Int]) =!= (Some(2): Option[Int])
res3: Boolean = true

Instead of the standard ==, Eq enables the === and =!= syntax by declaring the eqv method. The main difference is that === fails compilation if you try to compare an Int and a String.

In algebra, neqv is implemented based on eqv.

/**
 * A type class used to determine equality between 2 instances of the same
 * type. Any 2 instances `x` and `y` are equal if `eqv(x, y)` is `true`.
 * Moreover, `eqv` should form an equivalence relation.
 */
trait Eq[@sp A] extends Any with Serializable { self =>

  /**
   * Returns `true` if `x` and `y` are equivalent, `false` otherwise.
   */
  def eqv(x: A, y: A): Boolean

  /**
   * Returns `false` if `x` and `y` are equivalent, `true` otherwise.
   */
  def neqv(x: A, y: A): Boolean = !eqv(x, y)

  ....
}

This is an example of polymorphism. Whatever equality means for the type A, neqv is the opposite of it. It does not matter if it’s String, Int, or whatever. Another way of looking at it is that given Eq[A], === is universally the opposite of =!=.

I’m a bit concerned that Eq seems to be using the terms “equal” and “equivalent” interchangeably. An equivalence relation could include “having the same birthday,” whereas equality also requires the substitution property.

Order 

LYAHFGG:

Ord is for types that have an ordering. Ord covers all the standard comparing functions such as >, <, >= and <=.

Cats’ equivalent for the Ord typeclass is Order.

scala> import cats._, cats.instances.all._, cats.syntax.order._
import cats._
import cats.instances.all._
import cats.syntax.order._

scala> 1 > 2.0
res0: Boolean = false

scala> 1 compare 2.0
<console>:21: error: type mismatch;
 found   : Double(2.0)
 required: Int
       1 compare 2.0
                 ^

scala> 1.0 compare 2.0
res2: Int = -1

scala> 1.0 max 2.0
res3: Double = 2.0

Order enables the compare syntax, which returns an Int: negative, zero, or positive. It also enables the min and max operators. Similar to Eq, comparing an Int and a Double fails compilation.
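Here’s a sketch of defining an Order instance for our own type, assuming Order.by (which delegates to an existing Order) is available in the version you’re on; the Employee case class is just a made-up example:

// Order Employee values by their number, reusing Order[Int].
import cats._, cats.instances.all._, cats.syntax.order._

case class Employee(name: String, number: Int)

implicit val employeeOrder: Order[Employee] =
  Order.by((e: Employee) => e.number)

// Employee("a", 1) compare Employee("b", 2)  // expected: a negative Int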

PartialOrder 

In addition to Order, Cats also defines PartialOrder.

scala> import cats._, cats.instances.all._, cats.syntax.partialOrder._
import cats._
import cats.instances.all._
import cats.syntax.partialOrder._

scala> 1 tryCompare 2
res0: Option[Int] = Some(-1)

scala> 1.0 tryCompare Double.NaN
res1: Option[Int] = Some(-1)

PartialOrder enables tryCompare syntax which returns Option[Int]. According to algebra, it’ll return None if operands are not comparable. It’s returning Some(-1) when comparing 1.0 and Double.NaN, so I’m not sure when things are incomparable.

scala> def lt[A: PartialOrder](a1: A, a2: A): Boolean = a1 <= a2
lt: [A](a1: A, a2: A)(implicit evidence$1: cats.PartialOrder[A])Boolean

scala> lt[Int](1, 2.0)
<console>:22: error: type mismatch;
 found   : Double(2.0)
 required: Int
       lt[Int](1, 2.0)
                  ^

scala> lt(1, 2)
res3: Boolean = true

PartialOrder also enables >, >=, <, and <= operators, but these are tricky to use because if you’re not careful you could end up using the built-in comparison operators.

Show 

LYAHFGG:

Members of Show can be presented as strings.

Cats’ equivalent for the Show typeclass is Show:

scala> import cats._, cats.instances.all._, cats.syntax.show._
import cats._
import cats.instances.all._
import cats.syntax.show._

scala> 3.show
res0: String = 3

scala> "hello".show
res1: String = hello

Here’s the typeclass contract:

@typeclass trait Show[T] {
  def show(f: T): String
}

At first, it might seem silly to define Show because Scala already has toString on Any. But because toString is defined on Any, anything matches the criteria, so you lose type safety. The toString could be junk supplied by some parent class:

scala> (new {}).toString
res2: String = $anon$1@3d50801

scala> (new {}).show
<console>:21: error: value show is not a member of AnyRef
       (new {}).show
                ^

object Show provides two functions to create a Show instance:

object Show {
  /** creates an instance of [[Show]] using the provided function */
  def show[A](f: A => String): Show[A] = new Show[A] {
    def show(a: A): String = f(a)
  }

  /** creates an instance of [[Show]] using object toString */
  def fromToString[A]: Show[A] = new Show[A] {
    def show(a: A): String = a.toString
  }

  implicit val catsContravariantForShow: Contravariant[Show] = new Contravariant[Show] {
    def contramap[A, B](fa: Show[A])(f: B => A): Show[B] =
      show[B](fa.show _ compose f)
  }
}

Let’s try using them:

scala> :paste
// Entering paste mode (ctrl-D to finish)
case class Person(name: String)
case class Car(model: String)


// Exiting paste mode, now interpreting.

defined class Person
defined class Car

scala> implicit val personShow = Show.show[Person](_.name)
personShow: cats.Show[Person] = cats.Show$$anon$2@1bbdc034

scala> Person("Alice").show
res4: String = Alice

scala> implicit val carShow = Show.fromToString[Car]
carShow: cats.Show[Car] = cats.Show$$anon$3@778bc812

scala> Car("CR-V").show
res5: String = Car(CR-V)

Read 

LYAHFGG:

Read is sort of the opposite typeclass of Show. The read function takes a string and returns a type which is a member of Read.

I could not find Cats’ equivalent for this typeclass.

I find myself defining Read and its variant ReadJs time and time again. Stringly typed programming is ugly, but String is a robust data format for crossing platform boundaries (e.g. JSON), and we humans know how to deal with strings directly (e.g. command line options), so it’s hard to get away from String parsing. If we’re going to do it anyway, having Read makes it easier.
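Since Cats doesn’t provide one, here’s a minimal sketch of what such a Read typeclass could look like; the names Read, read, and fromParse are all made up for this example:

trait Read[A] {
  // parse a String into A, returning None when it doesn't parse
  def read(s: String): Option[A]
}
object Read {
  def apply[A](implicit instance: Read[A]): Read[A] = instance
  def fromParse[A](f: String => Option[A]): Read[A] = new Read[A] {
    def read(s: String): Option[A] = f(s)
  }
  implicit val intRead: Read[Int] =
    fromParse { s =>
      try Some(s.toInt)
      catch { case _: NumberFormatException => None }
    }
}

// Read[Int].read("1")    // expected: Some(1)
// Read[Int].read("foo")  // expected: None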

Enum 

LYAHFGG:

Enum members are sequentially ordered types — they can be enumerated. The main advantage of the Enum typeclass is that we can use its types in list ranges. They also have defined successors and predecessors, which you can get with the succ and pred functions.

I could not find Cats’ equivalent for this typeclass.

It’s not an Enum or range, but non/spire has an interesting data structure called Interval. Check out Erik’s Intervals: Unifying Uncertainty, Ranges, and Loops talk from nescala 2015.

Numeric 

LYAHFGG:

Num is a numeric typeclass. Its members have the property of being able to act like numbers.

I could not find Cats’ equivalent for this typeclass, but spire defines Numeric. Cats doesn’t define Bounded either.

So far we’ve seen some of the typeclasses that are not defined in Cats. This is not necessarily a bad thing, because keeping a tight core is part of the design goal of Cats.

typeclasses 102 

I am now going to skip over to Chapter 8 Making Our Own Types and Typeclasses (Chapter 7 if you have the book) since the chapters in between are mostly about Haskell syntax.

A traffic light datatype 

data TrafficLight = Red | Yellow | Green

In Scala this would be:

scala> import cats._, cats.instances.all._
import cats._
import cats.instances.all._

scala> :paste
// Entering paste mode (ctrl-D to finish)
sealed trait TrafficLight
object TrafficLight {
  case object Red extends TrafficLight
  case object Yellow extends TrafficLight
  case object Green extends TrafficLight
}

// Exiting paste mode, now interpreting.

defined trait TrafficLight
defined object TrafficLight

Now let’s define an instance for Eq.

scala> implicit val trafficLightEq: Eq[TrafficLight] =
  new Eq[TrafficLight] {
    def eqv(a1: TrafficLight, a2: TrafficLight): Boolean = a1 == a2
  }
trafficLightEq: cats.Eq[TrafficLight] = $anon$1@1eff7ecd

Note: The latest algebra.Eq includes Eq.instance and Eq.fromUniversalEquals.
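If those helpers are available in the version you’re on, the instance above could also be written more concisely; this is just a sketch of the same universal-equality instance:

// Same behavior as the hand-written instance: delegate to ==.
implicit val trafficLightEq: Eq[TrafficLight] = Eq.fromUniversalEquals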

Can I use the Eq?

scala> import cats.syntax.eq._
import cats.syntax.eq._

scala> TrafficLight.Red === TrafficLight.Yellow
<console>:26: error: value === is not a member of object TrafficLight.Red
       TrafficLight.Red === TrafficLight.Yellow
                        ^

So apparently Eq[TrafficLight] doesn’t get picked up, because Eq is invariant in its type parameter. One way to work around this issue is to define helper functions that return the values upcast to TrafficLight:

scala> :paste
// Entering paste mode (ctrl-D to finish)
sealed trait TrafficLight
object TrafficLight {
  def red: TrafficLight = Red
  def yellow: TrafficLight = Yellow
  def green: TrafficLight = Green
  case object Red extends TrafficLight
  case object Yellow extends TrafficLight
  case object Green extends TrafficLight
}

// Exiting paste mode, now interpreting.

defined trait TrafficLight
defined object TrafficLight

scala> implicit val trafficLightEq: Eq[TrafficLight] =
  new Eq[TrafficLight] {
    def eqv(a1: TrafficLight, a2: TrafficLight): Boolean = a1 == a2
  }
trafficLightEq: cats.Eq[TrafficLight] = $anon$1@79df5bb7

scala> TrafficLight.red === TrafficLight.yellow
res1: Boolean = false

It is a bit of boilerplate, but it works.

day 2 

Yesterday we reviewed a few basic typeclasses from Cats like Eq by using Learn You a Haskell for Great Good as the guide.

Making our own typeclass with simulacrum 

LYAHFGG:

In JavaScript and some other weakly typed languages, you can put almost anything inside an if expression. …. Even though strictly using Bool for boolean semantics works better in Haskell, let’s try and implement that JavaScript-ish behavior anyway. For fun!

The conventional steps of defining a modular typeclass in Scala used to look like:

  1. Define the typeclass contract trait Foo.
  2. Define a companion object Foo with a helper method apply that acts like implicitly, and a way of defining Foo instances, typically from a function.
  3. Define a FooOps class that defines unary or binary operators.
  4. Define a FooSyntax trait that implicitly provides FooOps from a Foo instance.

Frankly, these steps are mostly copy-paste boilerplate except for the first one. Enter Michael Pilquist (@mpilquist)’s simulacrum. simulacrum magically generates most of steps 2-4 just by putting an @typeclass annotation on the contract. By chance, Stew O’Connor (@stewoconnor/@stew)’s #294 got merged, which refactors Cats to use it.
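For reference, here’s a minimal sketch of what those four steps look like written by hand, for a hypothetical Foo typeclass with a single unary operation; simulacrum generates the equivalent of steps 2 through 4:

// step 1: the typeclass contract
trait Foo[A] {
  def foo(a: A): String
}
// step 2: companion with an implicitly-like apply and a way to define instances
object Foo {
  def apply[A](implicit instance: Foo[A]): Foo[A] = instance
  def instance[A](f: A => String): Foo[A] = new Foo[A] {
    def foo(a: A): String = f(a)
  }
}
// step 3: ops class exposing the unary operator
class FooOps[A](val self: A)(implicit val typeClassInstance: Foo[A]) {
  def foo: String = typeClassInstance.foo(self)
}
// step 4: syntax trait that implicitly provides FooOps from a Foo instance
trait FooSyntax {
  implicit def toFooOps[A: Foo](target: A): FooOps[A] = new FooOps(target)
}
object FooSyntax extends FooSyntax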

Yes-No typeclass 

In any case, let’s see if we can make our own truthy value typeclass. Note the @typeclass annotation:

scala> import simulacrum._
import simulacrum._

scala> :paste
// Entering paste mode (ctrl-D to finish)
@typeclass trait CanTruthy[A] { self =>
  /** Return true, if `a` is truthy. */
  def truthy(a: A): Boolean
}
object CanTruthy {
  def fromTruthy[A](f: A => Boolean): CanTruthy[A] = new CanTruthy[A] {
    def truthy(a: A): Boolean = f(a)
  }
}

// Exiting paste mode, now interpreting.

defined trait CanTruthy
defined object CanTruthy

According to the README, the macro will generate all the operator enrichment stuff:

// This is the supposed generated code. You don't have to write it!
object CanTruthy {
  def fromTruthy[A](f: A => Boolean): CanTruthy[A] = new CanTruthy[A] {
    def truthy(a: A): Boolean = f(a)
  }

  def apply[A](implicit instance: CanTruthy[A]): CanTruthy[A] = instance

  trait Ops[A] {
    def typeClassInstance: CanTruthy[A]
    def self: A
    def truthy: A = typeClassInstance.truthy(self)
  }

  trait ToCanTruthyOps {
    implicit def toCanTruthyOps[A](target: A)(implicit tc: CanTruthy[A]): Ops[A] = new Ops[A] {
      val self = target
      val typeClassInstance = tc
    }
  }

  trait AllOps[A] extends Ops[A] {
    def typeClassInstance: CanTruthy[A]
  }

  object ops {
    implicit def toAllCanTruthyOps[A](target: A)(implicit tc: CanTruthy[A]): AllOps[A] = new AllOps[A] {
      val self = target
      val typeClassInstance = tc
    }
  }
}

To make sure it works, let’s define an instance for Int and use it. The eventual goal is to get 1.truthy to return true:

scala> implicit val intCanTruthy: CanTruthy[Int] = CanTruthy.fromTruthy({
         case 0 => false
         case _ => true
       })
intCanTruthy: CanTruthy[Int] = CanTruthy$$anon$1@37dbbd35

scala> import CanTruthy.ops._
import CanTruthy.ops._

scala> 10.truthy
res0: Boolean = true

It works. This is quite nifty. One caveat is that this requires Macro Paradise plugin to compile. Once it’s compiled the user of CanTruthy can use it without Macro Paradise.

Symbolic operators 

For CanTruthy the injected operator happened to be unary, and it matched the name of the function on the typeclass contract. simulacrum can also define operator with symbolic names using @op annotation:

scala> @typeclass trait CanAppend[A] {
  @op("|+|") def append(a1: A, a2: A): A
}
defined trait CanAppend
defined object CanAppend

scala> implicit val intCanAppend: CanAppend[Int] = new CanAppend[Int] {
  def append(a1: Int, a2: Int): Int = a1 + a2
}
intCanAppend: CanAppend[Int] = $anon$1@7837f88

scala> import CanAppend.ops._
import CanAppend.ops._

scala> 1 |+| 2
res1: Int = 3

Functor 

LYAHFGG:

And now, we’re going to take a look at the Functor typeclass, which is basically for things that can be mapped over.

Like the book, let’s look at how it’s implemented:

/**
 * Functor.
 *
 * The name is short for "covariant functor".
 *
 * Must obey the laws defined in cats.laws.FunctorLaws.
 */
@typeclass trait Functor[F[_]] extends functor.Invariant[F] { self =>
  def map[A, B](fa: F[A])(f: A => B): F[B]

  ....
}

Here’s how we can use this:

scala> import cats._, cats.instances.all._
import cats._
import cats.instances.all._

scala> Functor[List].map(List(1, 2, 3)) { _ + 1 }
res0: List[Int] = List(2, 3, 4)

Let’s call the above usage the function syntax.

We now know that the @typeclass annotation automatically turns the map function into a map operator. The fa part becomes the this of the method, and the second parameter list becomes the parameter list of the map operator:

// Supposed generated code
object Functor {
  trait Ops[F[_], A] {
    def typeClassInstance: Functor[F]
    def self: F[A]
    def map[B](f: A => B): F[B] = typeClassInstance.map(self)(f)
  }
}

This looks almost like the map method on the Scala collection library, except this map doesn’t do the CanBuildFrom auto conversion.

Either as a functor 

Cats defines a Functor instance for Either[A, B].

scala> import cats.syntax.functor._
import cats.syntax.functor._

scala> (Right(1): Either[String, Int]) map { _ + 1 }
res1: scala.util.Either[String,Int] = Right(2)

scala> (Left("boom!"): Either[String, Int]) map { _ + 1 }
res2: scala.util.Either[String,Int] = Left(boom!)

Note that the above demonstration only works because Either[A, B] at the moment does not implement its own map. If I used List(1, 2, 3) it would call List’s implementation of map instead of Functor[List]’s map. Therefore, even though the operator syntax looks familiar, we should avoid using it unless we’re sure the standard library doesn’t implement the map, or we’re calling it from a polymorphic function. One workaround is to opt for the function syntax.
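Here’s a sketch of that function-syntax workaround, assuming the imports above and kind-projector’s ? syntax from the build:

// Goes through Functor[Either[String, ?]] directly, so any map defined on
// the datatype itself never comes into play.
val e: Either[String, Int] = Right(1)
Functor[Either[String, ?]].map(e) { _ + 1 }  // expected: Right(2)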

Function as a functor 

Cats also defines a Functor instance for Function1.

scala> val h = ((x: Int) => x + 1) map {_ * 7}
h: Int => Int = <function1>

scala> h(3)
res3: Int = 28

This is interesting. Basically map gives us a way to compose functions, except the order is in reverse from f compose g. Another way of looking at Function1 is that it’s an infinite map from the domain to the range. Now let’s skip the input and output stuff and go to Functors, Applicative Functors and Monoids.

How are functions functors? …

What does the type fmap :: (a -> b) -> (r -> a) -> (r -> b) for this instance tell us? Well, we see that it takes a function from a to b and a function from r to a and returns a function from r to b. Does this remind you of anything? Yes! Function composition!

Oh man, LYAHFGG came to the same conclusion as I did about the function composition. But wait…

ghci> fmap (*3) (+100) 1
303
ghci> (*3) . (+100) $ 1  
303

In Haskell, the fmap seems to be working in the same order as f compose g. Let’s check in Scala using the same numbers:

scala> (((_: Int) * 3) map {_ + 100}) (1)
res4: Int = 103

Something is not right. Let’s compare the declaration of fmap and Cats’ map function:

fmap :: (a -> b) -> f a -> f b

and here’s Cats:

def map[A, B](fa: F[A])(f: A => B): F[B]

So the order is flipped. Here’s Paolo Giarrusso (@blaisorblade)’s explanation:

That’s a common Haskell-vs-Scala difference.

In Haskell, to help with point-free programming, the “data” argument usually comes last. For instance, I can write map f . map g . map h and get a list transformer, because the argument order is map f list. (Incidentally, map is a restriction of fmap to the List functor).

In Scala instead, the “data” argument is usually the receiver. That’s often also important to help type inference, so defining map as a method on functions would not bring you very far: think of the mess Scala type inference would make of (x => x + 1) map List(1, 2, 3).

This seems to be the popular explanation.
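Here’s a quick check of that flipped order, assuming the cats.syntax.functor._ import from above: Cats’ map on Function1 lines up with andThen, while Haskell’s fmap for functions lines up with compose.

val viaMap     = (((_: Int) * 3) map { _ + 100 })(1)             // 103
val viaAndThen = (((_: Int) * 3) andThen { _ + 100 })(1)         // 103
val viaCompose = (((_: Int) * 3) compose { (_: Int) + 100 })(1)  // 303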

Lifting a function 

LYAHFGG:

[We can think of fmap as] a function that takes a function and returns a new function that’s just like the old one, only it takes a functor as a parameter and returns a functor as the result. It takes an a -> b function and returns a function f a -> f b. This is called lifting a function.

ghci> :t fmap (*2)  
fmap (*2) :: (Num a, Functor f) => f a -> f a  
ghci> :t fmap (replicate 3)  
fmap (replicate 3) :: (Functor f) => f a -> f [a]  

If the parameter order has been flipped, are we going to miss out on this lifting goodness? Fortunately, Cats implements derived functions under the Functor typeclass:

@typeclass trait Functor[F[_]] extends functor.Invariant[F] { self =>
  def map[A, B](fa: F[A])(f: A => B): F[B]

  ....

  // derived methods

  /**
   * Lift a function f to operate on Functors
   */
  def lift[A, B](f: A => B): F[A] => F[B] = map(_)(f)

  /**
   * Empty the fa of the values, preserving the structure
   */
  def void[A](fa: F[A]): F[Unit] = map(fa)(_ => ())

  /**
   * Tuple the values in fa with the result of applying a function
   * with the value
   */
  def fproduct[A, B](fa: F[A])(f: A => B): F[(A, B)] = map(fa)(a => a -> f(a))

  /**
   * Replaces the `A` value in `F[A]` with the supplied value.
   */
  def as[A, B](fa: F[A], b: B): F[B] = map(fa)(_ => b)
}

As you see, we have lift!

scala> val lifted = Functor[List].lift {(_: Int) * 2}
lifted: List[Int] => List[Int] = <function1>

scala> lifted(List(1, 2, 3))
res5: List[Int] = List(2, 4, 6)

We’ve just lifted the function {(_: Int) * 2} to List[Int] => List[Int]. Here are the other derived functions using the operator syntax:

scala> List(1, 2, 3).void
res6: List[Unit] = List((), (), ())

scala> List(1, 2, 3) fproduct {(_: Int) * 2}
res7: List[(Int, Int)] = List((1,2), (2,4), (3,6))

scala> List(1, 2, 3) as "x"
res8: List[String] = List(x, x, x)

Functor Laws 

LYAHFGG:

In order for something to be a functor, it should satisfy some laws. All functors are expected to exhibit certain kinds of functor-like properties and behaviors. … The first functor law states that if we map the id function over a functor, the functor that we get back should be the same as the original functor.

We can check this for Either[A, B].

scala> val x: Either[String, Int] = Right(1)
x: Either[String,Int] = Right(1)

scala> import cats.syntax.eq._
import cats.syntax.eq._

scala> assert { (x map identity) === x }

The second law says that composing two functions and then mapping the resulting function over a functor should be the same as first mapping one function over the functor and then mapping the other one.

In other words,

scala> val f = {(_: Int) * 3}
f: Int => Int = <function1>

scala> val g = {(_: Int) + 1}
g: Int => Int = <function1>

scala> assert { (x map (f map g)) === (x map f map g) }

These are laws that the implementer of the functor must abide by, not something the compiler can check for you.

Checking laws with Discipline 

The compiler can’t check for the laws, but Cats ships with a FunctorLaws trait that describes this in code:

/**
 * Laws that must be obeyed by any [[Functor]].
 */
trait FunctorLaws[F[_]] extends InvariantLaws[F] {
  implicit override def F: Functor[F]

  def covariantIdentity[A](fa: F[A]): IsEq[F[A]] =
    fa.map(identity) <-> fa

  def covariantComposition[A, B, C](fa: F[A], f: A => B, g: B => C): IsEq[F[C]] =
    fa.map(f).map(g) <-> fa.map(f andThen g)
}

Checking laws from the REPL 

This is based on a library called Discipline, which is a wrapper around ScalaCheck. We can run these tests from the REPL with ScalaCheck.

scala> import cats._, cats.instances.all._
import cats._
import cats.instances.all._

scala> import cats.laws.discipline.FunctorTests
import cats.laws.discipline.FunctorTests

scala> val rs = FunctorTests[Either[Int, ?]].functor[Int, Int, Int]
rs: cats.laws.discipline.FunctorTests[[X_kp1]scala.util.Either[Int,X_kp1]]#RuleSet = cats.laws.discipline.FunctorTests$$anon$2@7993373d

scala> rs.all.check
+ functor.covariant composition: OK, passed 100 tests.
+ functor.covariant identity: OK, passed 100 tests.
+ functor.invariant composition: OK, passed 100 tests.
+ functor.invariant identity: OK, passed 100 tests.

rs.all returns org.scalacheck.Properties, which implements check method.

Checking laws with Discipline + Specs2 

You can also bake your own cake pattern into a test framework of your choice. Here’s one for specs2:

package example

import org.specs2.Specification
import org.typelevel.discipline.specs2.Discipline
import cats.instances.AllInstances
import cats.syntax.AllSyntax

trait CatsSpec extends Specification with Discipline with AllInstances with AllSyntax

Cats’ source includes one for ScalaTest.

The spec to check the functor law for Either[Int, Int] looks like this:

package example

import cats._
import cats.laws.discipline.FunctorTests

class EitherSpec extends CatsSpec { def is = s2"""
  Either[Int, ?] forms a functor                           $e1
  """

  def e1 = checkAll("Either[Int, Int]", FunctorTests[Either[Int, ?]].functor[Int, Int, Int])
}

The Either[Int, ?] is using non/kind-projector. Running the test from sbt displays the following output:

> test
[info] EitherSpec
[info]   
[info] 
[info] functor laws must hold for Either[Int, Int]
[info] 
[info]  + functor.covariant composition
[info]  + functor.covariant identity
[info]  + functor.invariant composition
[info]  + functor.invariant identity
[info] 
[info]   
[info] Total for specification EitherSpec
[info] Finished in 14 ms
[info] 4 examples, 400 expectations, 0 failure, 0 error
[info] Passed: Total 4, Failed 0, Errors 0, Passed 4

Breaking the law 

LYAHFGG:

Let’s take a look at a pathological example of a type constructor being an instance of the Functor typeclass but not really being a functor, because it doesn’t satisfy the laws.

Let’s try breaking the law.

package example

import cats._

sealed trait COption[+A]
case class CSome[A](counter: Int, a: A) extends COption[A]
case object CNone extends COption[Nothing]

object COption {
  implicit def coptionEq[A]: Eq[COption[A]] = new Eq[COption[A]] {
    def eqv(a1: COption[A], a2: COption[A]): Boolean = a1 == a2
  }
  implicit val coptionFunctor = new Functor[COption] {
    def map[A, B](fa: COption[A])(f: A => B): COption[B] =
      fa match {
        case CNone => CNone
        case CSome(c, a) => CSome(c + 1, f(a))
      }
  }
}

Here’s how we can use this:

scala> import cats._, cats.syntax.functor._
import cats._
import cats.syntax.functor._

scala> import example._
import example._

scala> (CSome(0, "ho"): COption[String]) map {identity}
res0: example.COption[String] = CSome(1,ho)

This breaks the first law because the result of the identity function is not equal to the input. To catch this we need to supply an “arbitrary” COption[A] implicitly:

package example

import cats._
import cats.laws.discipline.{ FunctorTests }
import org.scalacheck.{ Arbitrary, Gen }

class COptionSpec extends CatsSpec {
  implicit def coptionArbiterary[A](implicit arbA: Arbitrary[A]): Arbitrary[COption[A]] =
    Arbitrary {
      val arbSome = for {
        i <- implicitly[Arbitrary[Int]].arbitrary
        a <- arbA.arbitrary
      } yield (CSome(i, a): COption[A])
      val arbNone = Gen.const(CNone: COption[Nothing])
      Gen.oneOf(arbSome, arbNone)
    }

  def is = s2"""
  COption[Int] forms a functor                             $e1
  """

  def e1 = checkAll("COption[Int]", FunctorTests[COption].functor[Int, Int, Int])
}

Here’s the output:

[info] COptionSpec
[info]   
[info] 
[info] functor laws must hold for COption[Int]
[info] 
[info]  x functor.covariant composition
[error]    A counter-example is [CSome(-1,-1), <function1>, <function1>] (after 0 try)
[error]    (CSome(1,1358703086) ?== CSome(0,1358703086)) failed
[info] 
[info]  x functor.covariant identity
[error]    A counter-example is 'CSome(1781926821,82888113)' (after 0 try)
[error]    (CSome(1781926822,82888113) ?== CSome(1781926821,82888113)) failed
[info] 
[info]  x functor.invariant composition
[error]    A counter-example is [CSome(-17878015,0), <function1>, <function1>, <function1>, <function1>] (after 1 try)
[error]    (CSome(-17878013,-1351608161) ?== CSome(-17878014,-1351608161)) failed
[info] 
[info]  x functor.invariant identity
[error]    A counter-example is 'CSome(-1699259031,1)' (after 0 try)
[error]    (CSome(-1699259030,1) ?== CSome(-1699259031,1)) failed
[info] 
[info] 
[info]   
[info] Total for specification COptionSpec
[info] Finished in 13 ms
[info] 4 examples, 4 failures, 0 error

The tests failed as expected.

day 3 

Yesterday we started with defining our own typeclasses using simulacrum, and ended with checking for functor laws using Discipline.

Kinds and some type-foo 

Learn You a Haskell For Great Good says:

Types are little labels that values carry so that we can reason about the values. But types have their own little labels, called kinds. A kind is more or less the type of a type. … What are kinds and what are they good for? Well, let’s examine the kind of a type by using the :k command in GHCI.

Scala 2.10 didn’t have a :k command, so I wrote kind.scala. Thanks to George Leontiev (@folone) and others, :kind command is now part of Scala 2.11 (scala/scala#2340). Let’s try using it:

scala> :k Int
scala.Int's kind is A

scala> :k -v Int
scala.Int's kind is A
*
This is a proper type.

Int and every other type that you can make a value out of is called a proper type and denoted with the * symbol (read “type”). This is analogous to the value 1 at the value level. Using Scala’s type variable notation this could be written as A.

scala> :k -v Option
scala.Option's kind is F[+A]
* -(+)-> *
This is a type constructor: a 1st-order-kinded type.

scala> :k -v Either
scala.util.Either's kind is F[+A1,+A2]
* -(+)-> * -(+)-> *
This is a type constructor: a 1st-order-kinded type.

These are normally called type constructors. Another way of looking at it is that it’s one step removed from a proper type. So we can call it a first-order-kinded type. This is analogous to a first-order value (_: Int) + 3, which we would normally call a function at the value level.

The curried notation uses arrows like * -> * and * -> * -> *. Note, Option[Int] is *; Option is * -> *. Using Scala’s type variable notation they could be written as F[+A] and F[+A1,+A2].

scala> :k -v Eq
algebra.Eq's kind is F[A]
* -> *
This is a type constructor: a 1st-order-kinded type.

Scala encodes (or complects) the notion of typeclasses using type constructors. When looking at this, think of it as Eq is a typeclass for A, a proper type. This should make sense because you would pass Int into Eq.

scala> :k -v Functor
cats.Functor's kind is X[F[A]]
(* -> *) -> *
This is a type constructor that takes type constructor(s): a higher-kinded type.

Again, Scala encodes typeclasses using type constructors, so when looking at this, think of it as Functor is a typeclass for F[A], a type constructor. This should also make sense because you would pass List into Functor.

In other words, this is a type constructor that accepts another type constructor. This is analogous to a higher-order function, and thus called higher-kinded type. These are denoted as (* -> *) -> *. Using Scala’s type variable notation this could be written as X[F[A]].

forms-a vs is-a 

The terminology around typeclasses tends to get jumbled up. For example, the pair (Int, +) forms a monoid. Colloquially, we say things like “is X a monoid?” to mean “can X form a monoid under some operation?”

An example of this is Either[A, B], which we implied “is-a” functor yesterday. This is not completely accurate because, even though it might not be useful, we could have defined another left-biased functor.
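To make that concrete, here’s a sketch of such a left-biased instance written with kind-projector; this is a hypothetical instance for illustration, not the one Cats ships:

// A left-biased Functor for Either: it maps over the Left side and leaves
// Right values untouched. NOT the instance Cats provides.
import cats._

def leftBiasedFunctor[R]: Functor[Either[?, R]] =
  new Functor[Either[?, R]] {
    def map[A, B](fa: Either[A, R])(f: A => B): Either[B, R] =
      fa match {
        case Left(a)  => Left(f(a))
        case Right(r) => Right(r)
      }
  }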

Cartesian 

Functors, Applicative Functors and Monoids:

So far, when we were mapping functions over functors, we usually mapped functions that take only one parameter. But what happens when we map a function like *, which takes two parameters, over a functor?

scala> import cats._, cats.instances.all._
import cats._
import cats.instances.all._

scala> val hs = Functor[List].map(List(1, 2, 3, 4)) ({(_: Int) * (_:Int)}.curried)
hs: List[Int => Int] = List(<function1>, <function1>, <function1>, <function1>)

scala> Functor[List].map(hs) {_(9)}
res1: List[Int] = List(9, 18, 27, 36)

LYAHFGG:

But what if we have a functor value of Just (3 *) and a functor value of Just 5, and we want to take out the function from Just(3 *) and map it over Just 5?

Meet the Applicative typeclass. It lies in the Control.Applicative module and it defines two methods, pure and <*>.

Cats splits this into Cartesian, Apply, and Applicative. Here’s the contract for Cartesian:

/**
 * [[Cartesian]] captures the idea of composing independent effectful values.
 * It is of particular interest when taken together with [[Functor]] - where [[Functor]]
 * captures the idea of applying a unary pure function to an effectful value,
 * calling `product` with `map` allows one to apply a function of arbitrary arity to multiple
 * independent effectful values.
 *
 * That same idea is also manifested in the form of [[Apply]], and indeed [[Apply]] extends both
 * [[Cartesian]] and [[Functor]] to illustrate this.
 */
@typeclass trait Cartesian[F[_]] {
  def product[A, B](fa: F[A], fb: F[B]): F[(A, B)]
}

Cartesian defines the product function, which produces a pair of (A, B) wrapped in the effect F[_] out of F[A] and F[B]. The symbolic alias for product is |@|, also known as the applicative style.

Catnip 

Before we move on, let’s port a small DSL from Scalaz for creating Option values that are typed as Option.

scala> :paste
// Entering paste mode (ctrl-D to finish)
object Catnip {
  implicit class IdOp[A](val a: A) extends AnyVal {
    def some: Option[A] = Some(a)
  }
  def none[A]: Option[A] = None
}
import Catnip._

// Exiting paste mode, now interpreting.

defined object Catnip
import Catnip._

This allows us to shorten (Some(9): Option[Int]) to 9.some.

scala> 9.some
res2: Option[Int] = Some(9)

scala> none[Int]
res3: Option[Int] = None

The Applicative Style 

LYAHFGG:

With the Applicative type class, we can chain the use of the <*> function, thus enabling us to seamlessly operate on several applicative values instead of just one.

Here’s an example in Haskell:

ghci> pure (-) <*> Just 3 <*> Just 5
Just (-2)

Cats comes with the CartesianBuilder syntax.

scala> import cats.syntax.cartesian._
import cats.syntax.cartesian._

scala> (3.some |@| 5.some) map { _ - _ }
res4: Option[Int] = Some(-2)

scala> (none[Int] |@| 5.some) map { _ - _ }
res5: Option[Int] = None

scala> (3.some |@| none[Int]) map { _ - _ }
res6: Option[Int] = None

This shows that Option forms Cartesian.
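We can also call product directly to see the F[(A, B)] pairing that |@| builds on; this assumes the imports and the Catnip helpers from above:

Cartesian[Option].product(3.some, 5.some)     // expected: Some((3,5))
Cartesian[Option].product(3.some, none[Int])  // expected: None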

List as a Cartesian 

LYAHFGG:

Lists (actually the list type constructor, []) are applicative functors. What a surprise!

Let’s see if we can use the CartesianBuilder syntax:

scala> (List("ha", "heh", "hmm") |@| List("?", "!", ".")) map {_ + _}
res7: List[String] = List(ha?, ha!, ha., heh?, heh!, heh., hmm?, hmm!, hmm.)

<* and *> operators 

Cartesian enables two operators, <* and *>, which combine product with map to keep only one side of the pair:

abstract class CartesianOps[F[_], A] extends Cartesian.Ops[F, A] {
  def |@|[B](fb: F[B]): CartesianBuilder[F]#CartesianBuilder2[A, B] =
    new CartesianBuilder[F] |@| self |@| fb

  def *>[B](fb: F[B])(implicit F: Functor[F]): F[B] = F.map(typeClassInstance.product(self, fb)) { case (a, b) => b }

  def <*[B](fb: F[B])(implicit F: Functor[F]): F[A] = F.map(typeClassInstance.product(self, fb)) { case (a, b) => a }
}

The definition looks simple enough, but the effect is cool:

scala> 1.some <* 2.some
res8: Option[Int] = Some(1)

scala> none[Int] <* 2.some
res9: Option[Int] = None

scala> 1.some *> 2.some
res10: Option[Int] = Some(2)

scala> none[Int] *> 2.some
res11: Option[Int] = None

If either side fails, we get None.

Cartesian law 

Cartesian has a single law called associativity:

trait CartesianLaws[F[_]] {
  implicit def F: Cartesian[F]

  def cartesianAssociativity[A, B, C](fa: F[A], fb: F[B], fc: F[C]): (F[(A, (B, C))], F[((A, B), C)]) =
    (F.product(fa, F.product(fb, fc)), F.product(F.product(fa, fb), fc))
}
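For Option, we can look at what the two sides of the law produce; the values line up once you re-associate the nested tuples, which is how the law is meant to be read. This sketch assumes the imports and Catnip helpers from earlier:

val lhs = Cartesian[Option].product(1.some, Cartesian[Option].product(2.some, 3.some))
// expected: Some((1,(2,3)))
val rhs = Cartesian[Option].product(Cartesian[Option].product(1.some, 2.some), 3.some)
// expected: Some(((1,2),3))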

Apply 

Functors, Applicative Functors and Monoids:

LYAHFGG:

But what if we have a functor value of Just (3 *) and a functor value of Just 5, and we want to take out the function from Just(3 *) and map it over Just 5?

Meet the Applicative typeclass. It lies in the Control.Applicative module and it defines two methods, pure and <*>.

Cats splits Applicative into Cartesian, Apply, and Applicative. Here’s the contract for Apply:

/**
 * Weaker version of Applicative[F]; has apply but not pure.
 *
 * Must obey the laws defined in cats.laws.ApplyLaws.
 */
@typeclass(excludeParents = List("ApplyArityFunctions"))
trait Apply[F[_]] extends Functor[F] with Cartesian[F] with ApplyArityFunctions[F] { self =>

  /**
   * Given a value and a function in the Apply context, applies the
   * function to the value.
   */
  def ap[A, B](ff: F[A => B])(fa: F[A]): F[B]

  ....
}

Note that Apply extends Functor, Cartesian, and ApplyArityFunctions. The <*> function is called ap in Cats’ Apply. (This was originally called apply, but was renamed to ap. +1)

LYAHFGG:

You can think of <*> as a sort of a beefed-up fmap. Whereas fmap takes a function and a functor and applies the function inside the functor value, <*> takes a functor that has a function in it and another functor and extracts that function from the first functor and then maps it over the second one.

Option as an Apply 

Here’s how we can use it with Apply[Option].ap:

scala> :paste
// Entering paste mode (ctrl-D to finish)
object Catnip {
  implicit class IdOp[A](val a: A) extends AnyVal {
    def some: Option[A] = Some(a)
  }
  def none[A]: Option[A] = None
}
import Catnip._

// Exiting paste mode, now interpreting.

defined object Catnip
import Catnip._

scala> Apply[Option].ap({{(_: Int) + 3}.some })(9.some)
res12: Option[Int] = Some(12)

scala> Apply[Option].ap({{(_: Int) + 3}.some })(10.some)
res13: Option[Int] = Some(13)

scala> Apply[Option].ap({{(_: String) + "hahah"}.some })(none[String])
res14: Option[String] = None

scala> Apply[Option].ap({ none[String => String] })("woot".some)
res15: Option[String] = None

If either side fails, we get None.

If you remember Making our own typeclass with simulacrum from yesterday, simulacrum will automatically transpose the function defined on the typeclass contract into an operator, magically.

scala> import cats.syntax.apply._
import cats.syntax.apply._

scala> ({(_: Int) + 3}.some) ap 9.some
res16: Option[Int] = Some(12)

scala> ({(_: Int) + 3}.some) ap 10.some
res17: Option[Int] = Some(13)

scala> ({(_: String) + "hahah"}.some) ap none[String]
res18: Option[String] = None

scala> (none[String => String]) ap "woot".some
res19: Option[String] = None

Useful functions for Apply 

LYAHFGG:

Control.Applicative defines a function that’s called liftA2, which has a type of

liftA2 :: (Applicative f) => (a -> b -> c) -> f a -> f b -> f c .

Remember that parameters are flipped around in Scala. What we have is a function that takes F[A] and F[B], then a function (A, B) => Z. This is called map2 on Apply.

@typeclass(excludeParents = List("ApplyArityFunctions"))
trait Apply[F[_]] extends Functor[F] with Cartesian[F] with ApplyArityFunctions[F] { self =>

  /**
   * Given a value and a function in the Apply context, applies the
   * function to the value.
   */
  def ap[A, B](ff: F[A => B])(fa: F[A]): F[B]

  override def product[A, B](fa: F[A], fb: F[B]): F[(A, B)] =
    ap(map(fa)(a => (b: B) => (a, b)))(fb)

  /**
   * ap2 is a binary version of ap, defined in terms of ap.
   */
  def ap2[A, B, Z](ff: F[(A, B) => Z])(fa: F[A], fb: F[B]): F[Z] =
    map(product(fa, product(fb, ff))) { case (a, (b, f)) => f(a, b) }

  /**
   * Applies the pure (binary) function f to the effectful values fa and fb.
   *
   * map2 can be seen as a binary version of [[cats.Functor]]#map.
   */
  def map2[A, B, Z](fa: F[A], fb: F[B])(f: (A, B) => Z): F[Z] =
    map(product(fa, fb)) { case (a, b) => f(a, b) }

  ....
}

For binary operators, map2 can be used to hide the applicative style. Here we can write the same thing in two different ways:

scala> import cats.syntax.cartesian._
import cats.syntax.cartesian._

scala> (3.some |@| List(4).some) map { _ :: _ }
res20: Option[List[Int]] = Some(List(3, 4))

scala> Apply[Option].map2(3.some, List(4).some) { _ :: _ }
res21: Option[List[Int]] = Some(List(3, 4))

The results match up.

The 2-parameter version of Apply[F].ap is called Apply[F].ap2:

scala> Apply[Option].ap2({{ (_: Int) :: (_: List[Int]) }.some })(3.some, List(4).some)
res22: Option[List[Int]] = Some(List(3, 4))

There’s a special case of map2 called tuple2, which works like this:

scala> Apply[Option].tuple2(1.some, 2.some)
res23: Option[(Int, Int)] = Some((1,2))

scala> Apply[Option].tuple2(1.some, none[Int])
res24: Option[(Int, Int)] = None

If you are wondering what happens when you have a function with more than two parameters, note that Apply[F[_]] extends ApplyArityFunctions[F]. This is auto-generated code that defines ap3, map3, tuple3, … up to ap22, map22, tuple22.
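For example, here’s a quick sketch of the arity-3 helpers, assuming the Apply[Option] instance and the Catnip helpers from above:

Apply[Option].map3(1.some, 2.some, 3.some) { _ + _ + _ }  // expected: Some(6)
Apply[Option].tuple3(1.some, 2.some, 3.some)              // expected: Some((1,2,3))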

Apply law 

Apply has a single law called composition:

trait ApplyLaws[F[_]] extends FunctorLaws[F] {
  implicit override def F: Apply[F]

  def applyComposition[A, B, C](fa: F[A], fab: F[A => B], fbc: F[B => C]): IsEq[F[C]] = {
    val compose: (B => C) => (A => B) => (A => C) = _.compose
    fa.ap(fab).ap(fbc) <-> fa.ap(fab.ap(fbc.map(compose)))
  }
}

Applicative 

Note: If you jumped to this page because you’re interested in applicative functors, you should definitely read Cartesian and Apply first.

Functors, Applicative Functors and Monoids:

Meet the Applicative typeclass. It lies in the Control.Applicative module and it defines two methods, pure and <*>.

Let’s see Cats’ Applicative:

@typeclass trait Applicative[F[_]] extends Apply[F] { self =>
  /**
   * `pure` lifts any value into the Applicative Functor
   *
   * Applicative[Option].pure(10) = Some(10)
   */
  def pure[A](x: A): F[A]

  ....
}

It’s an extension of Apply with pure.

LYAHFGG:

pure should take a value of any type and return an applicative value with that value inside it. … A better way of thinking about pure would be to say that it takes a value and puts it in some sort of default (or pure) context—a minimal context that still yields that value.

It seems like it’s basically a constructor that takes value A and returns F[A].

scala> import cats._, cats.instances.all._
import cats._
import cats.instances.all._

scala> Applicative[List].pure(1)
res0: List[Int] = List(1)

scala> Applicative[Option].pure(1)
res1: Option[Int] = Some(1)

This comes in handy with Apply[F].ap, because it lets us avoid writing {{...}.some}.

scala> val F = Applicative[Option]
F: cats.Applicative[Option] = cats.instances.OptionInstances$$anon$1@419e22d5

scala> F.ap({ F.pure((_: Int) + 3) })(F.pure(9))
res2: Option[Int] = Some(12)

We’ve abstracted Option away from the code.

Useful functions for Applicative 

LYAHFGG:

Let’s try implementing a function that takes a list of applicatives and returns an applicative that has a list as its result value. We’ll call it sequenceA.

sequenceA :: (Applicative f) => [f a] -> f [a]  
sequenceA [] = pure []  
sequenceA (x:xs) = (:) <$> x <*> sequenceA xs  

Let’s try implementing this with Cats!

scala> import cats.syntax.cartesian._
import cats.syntax.cartesian._

scala> def sequenceA[F[_]: Applicative, A](list: List[F[A]]): F[List[A]] = list match {
         case Nil     => Applicative[F].pure(Nil: List[A])
         case x :: xs => (x |@| sequenceA(xs)) map {_ :: _} 
       }
sequenceA: [F[_], A](list: List[F[A]])(implicit evidence$1: cats.Applicative[F])F[List[A]]

Let’s test it:

scala> :paste
// Entering paste mode (ctrl-D to finish)
object Catnip {
  implicit class IdOp[A](val a: A) extends AnyVal {
    def some: Option[A] = Some(a)
  }
  def none[A]: Option[A] = None
}
import Catnip._

// Exiting paste mode, now interpreting.

defined object Catnip
import Catnip._

scala> sequenceA(List(1.some, 2.some))
res3: Option[List[Int]] = Some(List(1, 2))

scala> sequenceA(List(3.some, none[Int], 1.some))
res4: Option[List[Int]] = None

scala> sequenceA(List(List(1, 2, 3), List(4, 5, 6)))
res5: List[List[Int]] = List(List(1, 4), List(1, 5), List(1, 6), List(2, 4), List(2, 5), List(2, 6), List(3, 4), List(3, 5), List(3, 6))

We got the right answers. What’s interesting here is that we did end up needing Applicative after all, and sequenceA is generic in a typeclassy way.

Using sequenceA is useful when we have a list of functions and we want to feed the same input to all of them and then view the list of results.

For the Function1 example with Int fixed, we need a type annotation:

scala> val f = sequenceA[Function1[Int, ?], Int](List((_: Int) + 3, (_: Int) + 2, (_: Int) + 1))
f: Int => List[Int] = <function1>

scala> f(3)
res6: List[Int] = List(6, 5, 4)

Applicative Laws 

Here are the laws for Applicative:

  • identity: pure id <*> v = v
  • homomorphism: pure f <*> pure x = pure (f x)
  • interchange: u <*> pure y = pure ($ y) <*> u

Cats defines another law:

  def applicativeMap[A, B](fa: F[A], f: A => B): IsEq[F[B]] =
    fa.map(f) <-> fa.ap(F.pure(f))

This seems to say that if you combine F.ap and F.pure, you should get the same effect as F.map.
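A quick sanity check of that for Option, using the function syntax to stay away from any ambiguity in the direction of the ap operator:

val F = Applicative[Option]
val f = (_: Int) + 3
F.map(F.pure(9))(f) == F.ap(F.pure(f))(F.pure(9))  // expected: true, both are Some(12)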

It took us a while, but I am glad we got this far. We’ll pick it up from here later.

day 4 

Yesterday we reviewed kinds and types, explored Apply, applicative style, and ended with sequenceA.

Let’s move on to Semigroup and Monoid today.

Semigroup 

If you have the book Learn You a Haskell for Great Good you get to start a new chapter: “Monoids.” For the website, it’s still Functors, Applicative Functors and Monoids.

First, it seems like Cats is missing a newtype/tagged type facility. We’ll implement our own later.

Haskell’s Monoid is split into Semigroup and Monoid in Cats. They are also type aliases of algebra.Semigroup and algebra.Monoid. As with Apply and Applicative, Semigroup is a weaker version of Monoid. If you can solve the same problem, weaker is cooler because you’re making fewer assumptions.

LYAHFGG:

It doesn’t matter if we do (3 * 4) * 5 or 3 * (4 * 5). Either way, the result is 60. The same goes for ++. …

We call this property associativity. * is associative, and so is ++, but -, for example, is not.

Let’s check this:

scala> import cats._, cats.instances.all._, cats.syntax.eq._
import cats._
import cats.instances.all._
import cats.syntax.eq._

scala> assert { (3 * 2) * (8 * 5) === 3 * (2 * (8 * 5)) }

scala> assert { List("la") ++ (List("di") ++ List("da")) === (List("la") ++ List("di")) ++ List("da") }

No error means they are equal.

The Semigroup typeclass 

Here’s the typeclass contract for algebra.Semigroup.

/**
 * A semigroup is any set `A` with an associative operation (`combine`).
 */
trait Semigroup[@sp(Int, Long, Float, Double) A] extends Any with Serializable {

  /**
   * Associative operation taking which combines two values.
   */
  def combine(x: A, y: A): A

  ....
}

This enables the combine operator and its symbolic alias |+|. Let’s try using this.

scala> import cats._, cats.instances.all._, cats.syntax.semigroup._
import cats._
import cats.instances.all._
import cats.syntax.semigroup._

scala> List(1, 2, 3) |+| List(4, 5, 6)
res2: List[Int] = List(1, 2, 3, 4, 5, 6)

scala> "one" |+| "two"
res3: String = onetwo

The Semigroup Laws 

Associativity is the only law for Semigroup.

Here’s how we can check the Semigroup laws from the REPL. Review Checking laws with discipline for the details:

scala> import cats._, cats.instances.all._
import cats._
import cats.instances.all._

scala> import cats.kernel.laws.GroupLaws
import cats.kernel.laws.GroupLaws

scala> val rs1 = GroupLaws[Int].semigroup(Semigroup[Int])
rs1: cats.kernel.laws.GroupLaws[Int]#GroupProperties = cats.kernel.laws.GroupLaws$GroupProperties@5a077d1d

scala> rs1.all.check
+ semigroup.associativity: OK, passed 100 tests.
+ semigroup.combineN(a, 1) == a: OK, passed 100 tests.
+ semigroup.combineN(a, 2) == a |+| a: OK, passed 100 tests.
+ semigroup.serializable: OK, proved property.

Lists are Semigroups 

scala> List(1, 2, 3) |+| List(4, 5, 6)
res4: List[Int] = List(1, 2, 3, 4, 5, 6)

Product and Sum 

For Int, a semigroup can be formed under both + and *. Instead of tagged types, Cats provides only the additive instance.

Trying to use operator syntax here is tricky.

scala> def doSomething[A: Semigroup](a1: A, a2: A): A =
         a1 |+| a2
doSomething: [A](a1: A, a2: A)(implicit evidence$1: cats.Semigroup[A])A

scala> doSomething(3, 5)(Semigroup[Int])
res5: Int = 8

I might as well stick to function syntax:

scala> Semigroup[Int].combine(3, 5)
res6: Int = 8

Monoid 

LYAHFGG:

It seems that both * together with 1 and ++ along with [] share some common properties:

  • The function takes two parameters.
  • The parameters and the returned value have the same type.
  • There exists such a value that doesn’t change other values when used with the binary function.

Let’s check it out in Scala:

scala> 4 * 1
res0: Int = 4

scala> 1 * 9
res1: Int = 9

scala> List(1, 2, 3) ++ Nil
res2: List[Int] = List(1, 2, 3)

scala> Nil ++ List(0.5, 2.5)
res3: List[Double] = List(0.5, 2.5)

Looks right.

Monoid typeclass 

Here’s the typeclass contract of algebra.Monoid:

/**
 * A monoid is a semigroup with an identity. A monoid is a specialization of a
 * semigroup, so its operation must be associative. Additionally,
 * `combine(x, empty) == combine(empty, x) == x`. For example, if we have `Monoid[String]`,
 * with `combine` as string concatenation, then `empty = ""`.
 */
trait Monoid[@sp(Int, Long, Float, Double) A] extends Any with Semigroup[A] {

  /**
   * Return the identity element for this monoid.
   */
  def empty: A

  ...
}

Monoid laws 

In addition to the semigroup law, a monoid must satisfy two more laws:

  • left identity: combine(empty, x) == x
  • right identity: combine(x, empty) == x

Here’s how we can check monoid laws from the REPL:

scala> import cats._, cats.instances.all._
import cats._
import cats.instances.all._

scala> import cats.kernel.laws.GroupLaws
import cats.kernel.laws.GroupLaws

scala> val rs1 = GroupLaws[Int].monoid(Monoid[Int])
rs1: cats.kernel.laws.GroupLaws[Int]#GroupProperties = cats.kernel.laws.GroupLaws$GroupProperties@17a695f0

scala> rs1.all.check
+ monoid.associativity: OK, passed 100 tests.
+ monoid.combineAll(Nil) == id: OK, passed 100 tests.
+ monoid.combineN(a, 0) == id: OK, passed 100 tests.
+ monoid.combineN(a, 1) == a: OK, passed 100 tests.
+ monoid.combineN(a, 2) == a |+| a: OK, passed 100 tests.
+ monoid.isEmpty: OK, passed 100 tests.
+ monoid.leftIdentity: OK, passed 100 tests.
+ monoid.rightIdentity: OK, passed 100 tests.
+ monoid.serializable: OK, proved property.

Here’s the specs2 specification of the above:

package example

import cats._
import algebra.laws.GroupLaws

class IntSpec extends CatsSpec { def is = s2"""
  (Int, +) should
     form a monoid                                         $e1
  """

  def e1 = checkAll("Int", GroupLaws[Int].monoid(Monoid[Int]))
}

Value classes 

LYAHFGG:

The newtype keyword in Haskell is made exactly for these cases when we want to just take one type and wrap it in something to present it as another type.

Cats does not ship with a tagged-type facility, but Scala now has value classes. This will remain unboxed under certain conditions, so it should work for simple examples.

scala> :paste
// Entering paste mode (ctrl-D to finish)
class Wrapper(val unwrap: Int) extends AnyVal

// Exiting paste mode, now interpreting.

defined class Wrapper

Disjunction and Conjunction 

LYAHFGG:

Another type which can act like a monoid in two distinct but equally valid ways is Bool. The first way is to have the or function || act as the binary function along with False as the identity value. … The other way for Bool to be an instance of Monoid is to kind of do the opposite: have && be the binary function and then make True the identity value.

Cats does not provide this, but we can implement it ourselves.

scala> import cats._, cats.instances.all._, cats.syntax.semigroup._
import cats._
import cats.instances.all._
import cats.syntax.semigroup._

scala> :paste
// Entering paste mode (ctrl-D to finish)
class Disjunction(val unwrap: Boolean) extends AnyVal
object Disjunction {
  @inline def apply(b: Boolean): Disjunction = new Disjunction(b)
  implicit val disjunctionMonoid: Monoid[Disjunction] = new Monoid[Disjunction] {
    def combine(a1: Disjunction, a2: Disjunction): Disjunction =
      Disjunction(a1.unwrap || a2.unwrap)
    def empty: Disjunction = Disjunction(false)
  }
  implicit val disjunctionEq: Eq[Disjunction] = new Eq[Disjunction] {
    def eqv(a1: Disjunction, a2: Disjunction): Boolean =
      a1.unwrap == a2.unwrap
  }
}

// Exiting paste mode, now interpreting.

defined class Disjunction
defined object Disjunction

scala> val x1 = Disjunction(true) |+| Disjunction(false)
x1: Disjunction = Disjunction@4cf

scala> x1.unwrap
res0: Boolean = true

scala> val x2 = Monoid[Disjunction].empty |+| Disjunction(true)
x2: Disjunction = Disjunction@4cf

scala> x2.unwrap
res1: Boolean = true

Here’s conjunction:

scala> :paste
// Entering paste mode (ctrl-D to finish)
class Conjunction(val unwrap: Boolean) extends AnyVal
object Conjunction {
  @inline def apply(b: Boolean): Conjunction = new Conjunction(b)
  implicit val conjunctionMonoid: Monoid[Conjunction] = new Monoid[Conjunction] {
    def combine(a1: Conjunction, a2: Conjunction): Conjunction =
      Conjunction(a1.unwrap && a2.unwrap)
    def empty: Conjunction = Conjunction(true)    
  }
  implicit val conjunctionEq: Eq[Conjunction] = new Eq[Conjunction] {
    def eqv(a1: Conjunction, a2: Conjunction): Boolean =
      a1.unwrap == a2.unwrap
  }
}

// Exiting paste mode, now interpreting.

defined class Conjunction
defined object Conjunction

scala> val x3 = Conjunction(true) |+| Conjunction(false)
x3: Conjunction = Conjunction@4d5

scala> x3.unwrap
res2: Boolean = false

scala> val x4 = Monoid[Conjunction].empty |+| Conjunction(true)
x4: Conjunction = Conjunction@4cf

scala> x4.unwrap
res3: Boolean = true

We should check if our custom new types satisfy the monoid laws.

scala> import algebra.laws.GroupLaws
import algebra.laws.GroupLaws

scala> import org.scalacheck.{ Arbitrary, Gen }
import org.scalacheck.{Arbitrary, Gen}

scala> implicit def arbDisjunction(implicit ev: Arbitrary[Boolean]): Arbitrary[Disjunction] =
         Arbitrary { ev.arbitrary map { Disjunction(_) } }
arbDisjunction: (implicit ev: org.scalacheck.Arbitrary[Boolean])org.scalacheck.Arbitrary[Disjunction]

scala> val rs1 = GroupLaws[Disjunction].monoid
rs1: algebra.laws.GroupLaws[Disjunction]#GroupProperties = algebra.laws.GroupLaws$GroupProperties@77663edf

scala> rs1.all.check
+ monoid.associativity: OK, passed 100 tests.
+ monoid.combineAll(Nil) == id: OK, passed 100 tests.
+ monoid.combineN(a, 0) == id: OK, passed 100 tests.
+ monoid.combineN(a, 1) == a: OK, passed 100 tests.
+ monoid.combineN(a, 2) == a |+| a: OK, passed 100 tests.
+ monoid.isEmpty: OK, passed 100 tests.
+ monoid.leftIdentity: OK, passed 100 tests.
+ monoid.rightIdentity: OK, passed 100 tests.
! monoid.serializable: Falsified after 0 passed tests.

The test failed because our monoid is not Serializable. I’m confused as to why the monoid law is checking for Serializable. non/algebra#13 says it’s convenient for Spark. I feel like this should be a separate thing.

Update: It turns out the failure is due to the fact that I’m using the REPL to define the typeclass instances!

scala> implicit def arbConjunction(implicit ev: Arbitrary[Boolean]): Arbitrary[Conjunction] =
         Arbitrary { ev.arbitrary map { Conjunction(_) } }
arbConjunction: (implicit ev: org.scalacheck.Arbitrary[Boolean])org.scalacheck.Arbitrary[Conjunction]

scala> val rs2 = GroupLaws[Conjunction].monoid
rs2: algebra.laws.GroupLaws[Conjunction]#GroupProperties = algebra.laws.GroupLaws$GroupProperties@15f279d

scala> rs2.all.check
+ monoid.associativity: OK, passed 100 tests.
+ monoid.combineAll(Nil) == id: OK, passed 100 tests.
+ monoid.combineN(a, 0) == id: OK, passed 100 tests.
+ monoid.combineN(a, 1) == a: OK, passed 100 tests.
+ monoid.combineN(a, 2) == a |+| a: OK, passed 100 tests.
+ monoid.isEmpty: OK, passed 100 tests.
+ monoid.leftIdentity: OK, passed 100 tests.
+ monoid.rightIdentity: OK, passed 100 tests.
! monoid.serializable: Falsified after 0 passed tests.

Apart from the serializable rule, Conjunction looks ok.

Option as Monoids 

LYAHFGG:

One way is to treat Maybe a as a monoid only if its type parameter a is a monoid as well and then implement mappend in such a way that it uses the mappend operation of the values that are wrapped with Just.

Let’s see if this is how Cats does it.

  implicit def optionMonoid[A](implicit ev: Semigroup[A]): Monoid[Option[A]] =
    new Monoid[Option[A]] {
      def empty: Option[A] = None
      def combine(x: Option[A], y: Option[A]): Option[A] =
        x match {
          case None => y
          case Some(xx) => y match {
            case None => x
            case Some(yy) => Some(ev.combine(xx,yy))
          }
        }
    }

If we replace mappend with the equivalent combine, the rest is just pattern matching. Let’s try using it.

scala> :paste
// Entering paste mode (ctrl-D to finish)
object Catnip {
  implicit class IdOp[A](val a: A) extends AnyVal {
    def some: Option[A] = Some(a)
  }
  def none[A]: Option[A] = None
}
import Catnip._

// Exiting paste mode, now interpreting.

defined object Catnip
import Catnip._

scala> none[String] |+| "andy".some
res4: Option[String] = Some(andy)

scala> 1.some |+| none[Int]
res5: Option[Int] = Some(1)

It works.

LYAHFGG:

But if we don’t know if the contents are monoids, we can’t use mappend between them, so what are we to do? Well, one thing we can do is to just discard the second value and keep the first one. For this, the First a type exists.

Haskell uses newtype to implement the First type constructor. Since we can’t prevent allocation for a generic value class in Scala, we’ll just use a normal case class.

scala> :paste
// Entering paste mode (ctrl-D to finish)
case class First[A: Eq](val unwrap: Option[A])
object First {
  implicit def firstMonoid[A: Eq]: Monoid[First[A]] = new Monoid[First[A]] {
    def combine(a1: First[A], a2: First[A]): First[A] =
      First((a1.unwrap, a2.unwrap) match {
        case (Some(x), _) => Some(x)
        case (None, y)    => y
      })
    def empty: First[A] = First(None: Option[A])
  }
  implicit def firstEq[A: Eq]: Eq[First[A]] = new Eq[First[A]] {
    def eqv(a1: First[A], a2: First[A]): Boolean =
      Eq[Option[A]].eqv(a1.unwrap, a2.unwrap)
  }
}

// Exiting paste mode, now interpreting.

defined class First
defined object First

scala> First('a'.some) |+| First('b'.some)
res6: First[Char] = First(Some(a))

scala> First(none[Char]) |+| First('b'.some)
res7: First[Char] = First(Some(b))

Let’s check the laws:

scala> implicit def arbFirst[A: Eq](implicit ev: Arbitrary[Option[A]]): Arbitrary[First[A]] =
         Arbitrary { ev.arbitrary map { First(_) } }
arbFirst: [A](implicit evidence$1: cats.Eq[A], implicit ev: org.scalacheck.Arbitrary[Option[A]])org.scalacheck.Arbitrary[First[A]]

scala> val rs3 = GroupLaws[First[Int]].monoid
rs3: algebra.laws.GroupLaws[First[Int]]#GroupProperties = algebra.laws.GroupLaws$GroupProperties@44736530

scala> rs3.all.check
+ monoid.associativity: OK, passed 100 tests.
+ monoid.combineAll(Nil) == id: OK, passed 100 tests.
+ monoid.combineN(a, 0) == id: OK, passed 100 tests.
+ monoid.combineN(a, 1) == a: OK, passed 100 tests.
+ monoid.combineN(a, 2) == a |+| a: OK, passed 100 tests.
+ monoid.isEmpty: OK, passed 100 tests.
+ monoid.leftIdentity: OK, passed 100 tests.
+ monoid.rightIdentity: OK, passed 100 tests.
! monoid.serializable: Falsified after 0 passed tests.

It thinks First is not serializable either.

LYAHFGG:

If we want a monoid on Maybe a such that the second parameter is kept if both parameters of mappend are Just values, Data.Monoid provides the Last a type.

scala> :paste
// Entering paste mode (ctrl-D to finish)
case class Last[A: Eq](val unwrap: Option[A])
object Last {
  implicit def lastMonoid[A: Eq]: Monoid[Last[A]] = new Monoid[Last[A]] {
    def combine(a1: Last[A], a2: Last[A]): Last[A] =
      Last((a1.unwrap, a2.unwrap) match {
        case (_, Some(y)) => Some(y)
        case (x, None)    => x
      })
    def empty: Last[A] = Last(None: Option[A])
  }
  implicit def lastEq[A: Eq]: Eq[Last[A]] = new Eq[Last[A]] {
    def eqv(a1: Last[A], a2: Last[A]): Boolean =
      Eq[Option[A]].eqv(a1.unwrap, a2.unwrap)
  }
}

// Exiting paste mode, now interpreting.

defined class Last
defined object Last

scala> Last('a'.some) |+| Last('b'.some)
res8: Last[Char] = Last(Some(b))

scala> Last('a'.some) |+| Last(none[Char])
res9: Last[Char] = Last(Some(a))

More law checking:

scala> implicit def arbLast[A: Eq](implicit ev: Arbitrary[Option[A]]): Arbitrary[Last[A]] =
         Arbitrary { ev.arbitrary map { Last(_) } }
arbLast: [A](implicit evidence$1: cats.Eq[A], implicit ev: org.scalacheck.Arbitrary[Option[A]])org.scalacheck.Arbitrary[Last[A]]

scala> val rs4 = GroupLaws[Last[Int]].monoid
rs4: algebra.laws.GroupLaws[Last[Int]]#GroupProperties = algebra.laws.GroupLaws$GroupProperties@121fd6d9

scala> rs4.all.check
+ monoid.associativity: OK, passed 100 tests.
+ monoid.combineAll(Nil) == id: OK, passed 100 tests.
+ monoid.combineN(a, 0) == id: OK, passed 100 tests.
+ monoid.combineN(a, 1) == a: OK, passed 100 tests.
+ monoid.combineN(a, 2) == a |+| a: OK, passed 100 tests.
+ monoid.isEmpty: OK, passed 100 tests.
+ monoid.leftIdentity: OK, passed 100 tests.
+ monoid.rightIdentity: OK, passed 100 tests.
! monoid.serializable: Falsified after 0 passed tests.

I think we got a pretty good feel for monoids.

About Laws 

We covered lots of things around laws today. Why do we need laws anyway?

“Laws are important because laws are important” is a tautology, but there’s a grain of truth to it. Like traffic laws dictating which side of the road we drive on, some laws are convenient when everyone follows them.

What Cats/Haskell-style functional programming allows us to do is write code that abstracts out the data, container, execution model, etc. The abstraction makes only the assumptions stated in the laws, so each A: Monoid needs to satisfy those laws for the abstracted code to behave properly. We can call this the utilitarian view.
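
For example, here’s a tiny function written against nothing but the Monoid abstraction; a minimal sketch (the name combineAllMine is just for illustration; Cats’ Monoid already provides a combineAll):

import cats._, cats.instances.all._, cats.syntax.semigroup._

// Relies only on the Monoid laws: empty is an identity and combine is associative.
def combineAllMine[A: Monoid](as: List[A]): A =
  as.foldLeft(Monoid[A].empty) { _ |+| _ }

// combineAllMine(List(1, 2, 3))   // 6 under the Int addition monoid
// combineAllMine(List.empty[Int]) // 0, because empty must act as an identity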

Even if we accept their utility, why these particular laws? One answer is that they are on HaskellWiki or in one of SPJ’s papers, which offer a starting point with an existing implementation that we can mimic. We can call this the traditionalist view. However, there is a danger of inheriting design choices or even limitations that were made specifically for Haskell. Functor in category theory, for instance, is a more general term than Functor[F]. At least Haskell’s fmap is a function that returns F[A] => F[B], which keeps the connection to the categorical functor visible. By the time we get to map in Scala, we lose even that because of type inference.

Eventually we should tie our understanding back to math. The monoid laws correspond to the mathematical definition of a monoid, and from there we can reap the benefits of its known properties. This is especially relevant for the monoid laws: the three laws are the same as the axioms of a category, since a monoid is a special case of a category.

For the sake of learning, I think it’s ok to start out with cargo cult. We all learn language through imitation and pattern recognition.

Using monoids to fold data structures 

LYAHFGG:

Because there are so many data structures that work nicely with folds, the Foldable type class was introduced. Much like Functor is for things that can be mapped over, Foldable is for things that can be folded up!

The equivalent in Cats is also called Foldable. Here’s the typeclass contract:

/**
 * Data structures that can be folded to a summary value.
 *
 * In the case of a collection (such as `List` or `Set`), these
 * methods will fold together (combine) the values contained in the
 * collection to produce a single result. Most collection types have
 * `foldLeft` methods, which will usually be used by the associated
 * `Fold[_]` instance.
 *
 * Foldable[F] is implemented in terms of two basic methods:
 *
 *  - `foldLeft(fa, b)(f)` eagerly folds `fa` from left-to-right.
 *  - `foldLazy(fa, b)(f)` lazily folds `fa` from right-to-left.
 *
 * Beyond these it provides many other useful methods related to
 * folding over F[A] values.
 *
 * See: [[https://www.cs.nott.ac.uk/~gmh/fold.pdf A tutorial on the universality and expressiveness of fold]]
 */
@typeclass trait Foldable[F[_]] extends Serializable { self =>

  /**
   * Left associative fold on 'F' using the function 'f'.
   */
  def foldLeft[A, B](fa: F[A], b: B)(f: (B, A) => B): B

  /**
   * Right associative lazy fold on `F` using the folding function 'f'.
   *
   * This method evaluates `b` lazily (in some cases it will not be
   * needed), and returns a lazy value. We are using `A => Fold[B]` to
   * support laziness in a stack-safe way.
   *
   * For more detailed information about how this method works see the
   * documentation for `Fold[_]`.
   */
  def foldLazy[A, B](fa: F[A], lb: Lazy[B])(f: A => Fold[B]): Lazy[B] =
    Lazy(partialFold[A, B](fa)(f).complete(lb))

  /**
   * Low-level method that powers `foldLazy`.
   */
  def partialFold[A, B](fa: F[A])(f: A => Fold[B]): Fold[B]
  ....
}

We can use this as follows:

scala> import cats._, cats.instances.all._
import cats._
import cats.instances.all._

scala> Foldable[List].foldLeft(List(1, 2, 3), 1) {_ * _}
res0: Int = 6

Foldable comes with some useful functions/operators, many of them taking advantage of the typeclasses. Let’s try the fold. Monoid[A] gives us empty and combine, so that’s enough information to fold over things.

  /**
   * Fold implemented using the given Monoid[A] instance.
   */
  def fold[A](fa: F[A])(implicit A: Monoid[A]): A =
    foldLeft(fa, A.empty) { (acc, a) =>
      A.combine(acc, a)
    }

Let’s try this out.

scala> Foldable[List].fold(List(1, 2, 3))(Monoid[Int])
res1: Int = 6

There’s a variant called foldMap that accepts a function.

  /**
   * Fold implemented by mapping `A` values into `B` and then
   * combining them using the given `Monoid[B]` instance.
   */
  def foldMap[A, B](fa: F[A])(f: A => B)(implicit B: Monoid[B]): B =
    foldLeft(fa, B.empty) { (b, a) =>
      B.combine(b, f(a))
    }

Since the standard collection library doesn’t implement foldMap, we can now use this as an operator.

scala> import cats.syntax.foldable._
import cats.syntax.foldable._

scala> List(1, 2, 3).foldMap(identity)(Monoid[Int])
res2: Int = 6

Another useful thing is that we can use this to convert the values into a newtype.

scala> :paste
// Entering paste mode (ctrl-D to finish)
class Conjunction(val unwrap: Boolean) extends AnyVal
object Conjunction {
  @inline def apply(b: Boolean): Conjunction = new Conjunction(b)
  implicit val conjunctionMonoid: Monoid[Conjunction] = new Monoid[Conjunction] {
    def combine(a1: Conjunction, a2: Conjunction): Conjunction =
      Conjunction(a1.unwrap && a2.unwrap)
    def empty: Conjunction = Conjunction(true)
  }
  implicit val conjunctionEq: Eq[Conjunction] = new Eq[Conjunction] {
    def eqv(a1: Conjunction, a2: Conjunction): Boolean =
      a1.unwrap == a2.unwrap
  }
}

// Exiting paste mode, now interpreting.

defined class Conjunction
defined object Conjunction

scala> val x = List(true, false, true) foldMap {Conjunction(_)}
x: Conjunction = Conjunction@4d5

scala> x.unwrap
res3: Boolean = false

This surely beats writing Conjunction(true) for each of them and connecting them with |+|.
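
For comparison, here’s roughly what the manual version looks like, reusing the Conjunction defined above (a sketch; it assumes the semigroup syntax import for |+| is in scope):

import cats.syntax.semigroup._

// Wrapping each Boolean by hand and chaining |+| ourselves:
val manual = Conjunction(true) |+| Conjunction(false) |+| Conjunction(true)
// manual.unwrap == false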

We will pick it up from here later.

day 5 

Derived from Bello Nock's Sky Walk by Chris Phutully

Yesterday we reviewed Semigroup and Monoid, implementing custom monoids along the way. We also looked at Foldable, which gives us foldMap among other things.

Apply.ap 

Starting with a few updates today. First, Apply.apply, which we looked at in day 3, has been renamed to Apply.ap (#308).

Serializable typeclass instance 

Second, do you remember that checking the monoid laws on a value class kept tripping on Serializable? That turned out not to be Cats’ fault. I went into Cats’ gitter chat and Erik (@d6/@non) kindly pointed out that the reason my typeclass instances are not serializable is because they are defined on the REPL. Once I moved First to src/ the laws passed fine:

scala> import cats._, cats.instances.all._
import cats._
import cats.instances.all._

scala> import algebra.laws.GroupLaws
import algebra.laws.GroupLaws

scala> import org.scalacheck.{ Arbitrary, Gen }
import org.scalacheck.{Arbitrary, Gen}

scala> import example.First
import example.First

scala> implicit def arbFirst[A: Eq](implicit ev: Arbitrary[Option[A]]): Arbitrary[First[A]] =
     |          Arbitrary { ev.arbitrary map { First(_) } }
arbFirst: [A](implicit evidence$1: cats.Eq[A], implicit ev: org.scalacheck.Arbitrary[Option[A]])org.scalacheck.Arbitrary[example.First[A]]

scala> val rs = GroupLaws[First[Int]].monoid
rs: algebra.laws.GroupLaws[example.First[Int]]#GroupProperties = algebra.laws.GroupLaws$GroupProperties@77fac6ab

scala> rs.all.check
+ monoid.associativity: OK, passed 100 tests.
+ monoid.combineAll(Nil) == id: OK, passed 100 tests.
+ monoid.combineN(a, 0) == id: OK, passed 100 tests.
+ monoid.combineN(a, 1) == a: OK, passed 100 tests.
+ monoid.combineN(a, 2) == a |+| a: OK, passed 100 tests.
+ monoid.isEmpty: OK, passed 100 tests.
+ monoid.leftIdentity: OK, passed 100 tests.
+ monoid.rightIdentity: OK, passed 100 tests.
+ monoid.serializable: OK, proved property.

Jason Zaugg (@retronym) also pointed out that, to support serialization beyond precisely the same version of Cats on both sides of the wire, we would need to do more work than this.

FlatMap 

We get to start a new chapter today on Learn You a Haskell for Great Good.

Monads are a natural extension of applicative functors, and they provide a solution to the following problem: If we have a value with context, m a, how do we apply it to a function that takes a normal a and returns a value with a context?

Cats breaks down the Monad typeclass into two typeclasses: FlatMap and Monad. Here’s the typeclass contract for FlatMap:

@typeclass trait FlatMap[F[_]] extends Apply[F] {
  def flatMap[A, B](fa: F[A])(f: A => F[B]): F[B]

  def tailRecM[A, B](a: A)(f: A => F[Either[A, B]]): F[B]

  ....
}

Note that FlatMap extends Apply, the weaker version of Applicative. And here are the operators:

class FlatMapOps[F[_], A](fa: F[A])(implicit F: FlatMap[F]) {
  def flatMap[B](f: A => F[B]): F[B] = F.flatMap(fa)(f)
  def mproduct[B](f: A => F[B]): F[(A, B)] = F.mproduct(fa)(f)
  def >>=[B](f: A => F[B]): F[B] = F.flatMap(fa)(f)
  def >>[B](fb: F[B]): F[B] = F.flatMap(fa)(_ => fb)
}

It introduces the flatMap operator and its symbolic alias >>=. We’ll worry about the other operators later. We are used to flatMap from the standard library:

scala> import cats._, cats.instances.all._, cats.syntax.flatMap._
import cats._
import cats.instances.all._
import cats.syntax.flatMap._

scala> (Right(3): Either[String, Int]) flatMap { x => Right(x + 1) }
res0: scala.util.Either[String,Int] = Right(4)

Getting our feet wet with Option 

Following the book, let’s explore Option. In this section I’ll be less fussy about whether it’s using Cats’ typeclasses or the standard library’s implementation. Here’s Option as a functor:

scala> :paste
// Entering paste mode (ctrl-D to finish)
object Catnip {
  implicit class IdOp[A](val a: A) extends AnyVal {
    def some: Option[A] = Some(a)
  }
  def none[A]: Option[A] = None
}
import Catnip._

// Exiting paste mode, now interpreting.

defined object Catnip
import Catnip._

scala> "wisdom".some map { _ + "!" }
res1: Option[String] = Some(wisdom!)

scala> none[String] map { _ + "!" }
res2: Option[String] = None

Here’s Option as an Apply:

scala> import cats.syntax.apply._
import cats.syntax.apply._

scala> ({(_: Int) + 3}.some) ap 3.some
res3: Option[Int] = Some(6)

scala> none[String => String] ap "greed".some
res4: Option[String] = None

scala> ({(_: String).toInt}.some) ap none[String]
res5: Option[Int] = None

Here’s Option as a FlatMap:

scala> 3.some flatMap { (x: Int) => (x + 1).some }
res6: Option[Int] = Some(4)

scala> "smile".some flatMap { (x: String) =>  (x + " :)").some }
res7: Option[String] = Some(smile :))

scala> none[Int] flatMap { (x: Int) => (x + 1).some }
res8: Option[Int] = None

scala> none[String] flatMap { (x: String) =>  (x + " :)").some }
res9: Option[String] = None

Just as expected, we get None if the monadic value is None.

FlatMap laws 

FlatMap has a single law called associativity:

(m flatMap f) flatMap g === m flatMap { x => f(x) flatMap { g } }

Cats defines two more laws in FlatMapLaws:

trait FlatMapLaws[F[_]] extends ApplyLaws[F] {
  implicit override def F: FlatMap[F]

  def flatMapAssociativity[A, B, C](fa: F[A], f: A => F[B], g: B => F[C]): IsEq[F[C]] =
    fa.flatMap(f).flatMap(g) <-> fa.flatMap(a => f(a).flatMap(g))

  def flatMapConsistentApply[A, B](fa: F[A], fab: F[A => B]): IsEq[F[B]] =
    fab.ap(fa) <-> fab.flatMap(f => fa.map(f))

  /**
   * The composition of `cats.data.Kleisli` arrows is associative. This is
   * analogous to [[flatMapAssociativity]].
   */
  def kleisliAssociativity[A, B, C, D](f: A => F[B], g: B => F[C], h: C => F[D], a: A): IsEq[F[D]] = {
    val (kf, kg, kh) = (Kleisli(f), Kleisli(g), Kleisli(h))
    ((kf andThen kg) andThen kh).run(a) <-> (kf andThen (kg andThen kh)).run(a)
  }
}
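
We can spot-check the associativity law for Option with concrete values (not a proof, just a sanity check that reuses the Catnip helper from earlier):

val fa = 3.some
val f  = (x: Int) => (x + 1).some
val g  = (x: Int) => (x * 2).some

// Both groupings evaluate to Some(8).
assert((fa flatMap f flatMap g) == (fa flatMap { a => f(a) flatMap g }))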

Monad 

Earlier I wrote that Cats breaks down the Monad typeclass into two typeclasses: FlatMap and Monad. The FlatMap-Monad relationship forms a parallel with the Apply-Applicative relationship:

@typeclass trait Monad[F[_]] extends FlatMap[F] with Applicative[F] {
  ....
}

Monad is a FlatMap with pure. Unlike Haskell, Monad[F] extends Applicative[F], so there’s no return vs pure discrepancy.
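
A quick sketch of what pure adds on top of FlatMap:

import cats._, cats.instances.all._

// pure lifts a plain value into the datatype's default context.
Monad[Option].pure(3) // Some(3)
Monad[List].pure(3)   // List(3)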

Walk the line 

Derived from Bello Nock's Sky Walk by Chris Phutully

LYAHFGG:

Let’s say that [Pierre] keeps his balance if the number of birds on the left side of the pole and on the right side of the pole is within three. So if there’s one bird on the right side and four birds on the left side, he’s okay. But if a fifth bird lands on the left side, then he loses his balance and takes a dive.

Now let’s try implementing the Pole example from the book.

scala> import cats._, cats.instances.all._, cats.syntax.flatMap._
import cats._
import cats.instances.all._
import cats.syntax.flatMap._

scala> :paste
// Entering paste mode (ctrl-D to finish)
object Catnip {
  implicit class IdOp[A](val a: A) extends AnyVal {
    def some: Option[A] = Some(a)
  }
  def none[A]: Option[A] = None
}
import Catnip._

// Exiting paste mode, now interpreting.

defined object Catnip
import Catnip._

scala> type Birds = Int
defined type alias Birds

scala> case class Pole(left: Birds, right: Birds)
defined class Pole

I don’t think it’s common to alias Int like this in Scala, but we’ll go with the flow. I am going to turn Pole into a case class so I can implement landLeft and landRight as methods:

scala> :paste
// Entering paste mode (ctrl-D to finish)
case class Pole(left: Birds, right: Birds) {
  def landLeft(n: Birds): Pole = copy(left = left + n)
  def landRight(n: Birds): Pole = copy(right = right + n)
}

// Exiting paste mode, now interpreting.

defined class Pole

I think it looks better with some OO:

scala> Pole(0, 0).landLeft(2)
res0: Pole = Pole(2,0)

scala> Pole(1, 2).landRight(1)
res1: Pole = Pole(1,3)

scala> Pole(1, 2).landRight(-1)
res2: Pole = Pole(1,1)

We can chain these too:

scala> Pole(0, 0).landLeft(1).landRight(1).landLeft(2)
res3: Pole = Pole(3,1)

scala> Pole(0, 0).landLeft(1).landRight(4).landLeft(-1).landRight(-2)
res4: Pole = Pole(0,2)

As the book says, an intermediate value has failed but the calculation kept going. Now let’s introduce failures as Option[Pole]:

scala> :paste
// Entering paste mode (ctrl-D to finish)
case class Pole(left: Birds, right: Birds) {
  def landLeft(n: Birds): Option[Pole] =
    if (math.abs((left + n) - right) < 4) copy(left = left + n).some
    else none[Pole]
  def landRight(n: Birds): Option[Pole] =
    if (math.abs(left - (right + n)) < 4) copy(right = right + n).some
    else none[Pole]
  }

// Exiting paste mode, now interpreting.

defined class Pole

scala> Pole(0, 0).landLeft(2)
res5: Option[Pole] = Some(Pole(2,0))

scala> Pole(0, 3).landLeft(10)
res6: Option[Pole] = None

Now we can chain the landLeft/landRight using flatMap or its symbolic alias >>=.

scala> val rlr = Monad[Option].pure(Pole(0, 0)) >>= {_.landRight(2)} >>=
  {_.landLeft(2)} >>= {_.landRight(2)}
rlr: Option[Pole] = Some(Pole(2,4))

Let’s see if monadic chaining simulates the pole balancing better:

scala> val lrlr = Monad[Option].pure(Pole(0, 0)) >>= {_.landLeft(1)} >>=
  {_.landRight(4)} >>= {_.landLeft(-1)} >>= {_.landRight(-2)}
lrlr: Option[Pole] = None

It works. Take some time to understand this example, because it highlights what a monad is.

  1. First, pure puts Pole(0, 0) into a default context: Pole(0, 0).some.
  2. Then, Pole(0, 0).some >>= {_.landLeft(1)} happens. Since it’s a Some value, _.landLeft(1) gets applied to Pole(0, 0), resulting in Pole(1, 0).some.
  3. Next, Pole(1, 0).some >>= {_.landRight(4)} takes place. The result is Pole(1, 4).some. We are now at the maximum difference between left and right.
  4. Pole(1, 4).some >>= {_.landLeft(-1)} happens, resulting in none[Pole]. The difference is too great, and the pole goes off balance.
  5. none[Pole] >>= {_.landRight(-2)} automatically results in none[Pole].

In this chain of monadic functions, the effect from one function is carried over to the next.

Banana on wire 

LYAHFGG:

We may also devise a function that ignores the current number of birds on the balancing pole and just makes Pierre slip and fall. We can call it banana.

Here’s the banana that always fails:

scala> :paste
// Entering paste mode (ctrl-D to finish)
case class Pole(left: Birds, right: Birds) {
  def landLeft(n: Birds): Option[Pole] =
    if (math.abs((left + n) - right) < 4) copy(left = left + n).some
    else none[Pole]
  def landRight(n: Birds): Option[Pole] =
    if (math.abs(left - (right + n)) < 4) copy(right = right + n).some
    else none[Pole]
  def banana: Option[Pole] = none[Pole]
}

// Exiting paste mode, now interpreting.

defined class Pole

scala> val lbl = Monad[Option].pure(Pole(0, 0)) >>= {_.landLeft(1)} >>=
  {_.banana} >>= {_.landRight(1)}
lbl: Option[Pole] = None

LYAHFGG:

Instead of making functions that ignore their input and just return a predetermined monadic value, we can use the >> function.

Here’s how >> behaves with Option:

scala> none[Int] >> 3.some
res7: Option[Int] = None

scala> 3.some >> 4.some
res8: Option[Int] = Some(4)

scala> 3.some >> none[Int]
res9: Option[Int] = None

Let’s try replacing banana with >> none[Pole]:

scala> val lbl = Monad[Option].pure(Pole(0, 0)) >>= {_.landLeft(1)} >>
  none[Pole] >>= {_.landRight(1)}
<console>:27: error: missing parameter type for expanded function ((x$1) => x$1.landLeft(1))
       val lbl = Monad[Option].pure(Pole(0, 0)) >>= {_.landLeft(1)} >>
                                                     ^

The type inference broke down all of a sudden. The problem is likely operator precedence. Programming in Scala says:

The one exception to the precedence rule, alluded to above, concerns assignment operators, which end in an equals character. If an operator ends in an equals character (=), and the operator is not one of the comparison operators <=, >=, ==, or !=, then the precedence of the operator is the same as that of simple assignment (=). That is, it is lower than the precedence of any other operator.

Note: The above description is incomplete. Another exception to the assignment operator rule is an operator that starts with an equals character (=), such as ===.

Because >>= (bind) ends in the equals character, its precedence is the lowest, which forces ({_.landLeft(1)} >> (none: Option[Pole])) to evaluate first. There are a few unpalatable workarounds. First, we can use dots and parentheses like normal method calls:

scala> Monad[Option].pure(Pole(0, 0)).>>=({_.landLeft(1)}).>>(none[Pole]).>>=({_.landRight(1)})
res10: Option[Pole] = None

Or we can recognize the precedence issue and place parens around just the right place:

scala> (Monad[Option].pure(Pole(0, 0)) >>= {_.landLeft(1)}) >> none[Pole] >>= {_.landRight(1)}
res11: Option[Pole] = None

Both yield the right result.

for comprehension 

LYAHFGG:

Monads in Haskell are so useful that they got their own special syntax called do notation.

First, let’s write the nested lambda:

scala> import cats._, cats.syntax.show._
import cats._
import cats.syntax.show._

scala> 3.some >>= { x => "!".some >>= { y => (x.show + y).some } }
res12: Option[String] = Some(3!)

By using >>=, any part of the calculation can fail:

scala> 3.some >>= { x => none[String] >>= { y => (x.show + y).some } }
res13: Option[String] = None

scala> (none: Option[Int]) >>= { x => "!".some >>= { y => (x.show + y).some } }
res14: Option[String] = None

scala> 3.some >>= { x => "!".some >>= { y => none[String] } }
res15: Option[String] = None

Instead of the do notation in Haskell, Scala has the for comprehension, which does similar things:

scala> for {
         x <- 3.some
         y <- "!".some
       } yield (x.show + y)
res16: Option[String] = Some(3!)

LYAHFGG:

In a do expression, every line that isn’t a let line is a monadic value.

That’s not quite accurate for for, but we can come back to this later.

Pierre returns 

LYAHFGG:

Our tightwalker’s routine can also be expressed with do notation.

scala> def routine: Option[Pole] =
         for {
           start <- Monad[Option].pure(Pole(0, 0))
           first <- start.landLeft(2)
           second <- first.landRight(2)
           third <- second.landLeft(1)
         } yield third
routine: Option[Pole]

scala> routine
res17: Option[Pole] = Some(Pole(3,2))

We had to extract third since yield expects Pole not Option[Pole].

LYAHFGG:

If we want to throw Pierre a banana peel in do notation, we can do the following:

scala> def routine: Option[Pole] =
         for {
           start <- Monad[Option].pure(Pole(0, 0))
           first <- start.landLeft(2)
           _ <- none[Pole]
           second <- first.landRight(2)
           third <- second.landLeft(1)
         } yield third
routine: Option[Pole]

scala> routine
res18: Option[Pole] = None

Pattern matching and failure 

LYAHFGG:

In do notation, when we bind monadic values to names, we can utilize pattern matching, just like in let expressions and function parameters.

scala> def justH: Option[Char] =
         for {
           (x :: xs) <- "hello".toList.some
         } yield x
justH: Option[Char]

scala> justH
res19: Option[Char] = Some(h)

When pattern matching fails in a do expression, the fail function is called. It’s part of the Monad type class and it enables failed pattern matching to result in a failure in the context of the current monad instead of making our program crash.

scala> def wopwop: Option[Char] =
         for {
           (x :: xs) <- "".toList.some
         } yield x
wopwop: Option[Char]

scala> wopwop
res20: Option[Char] = None

The failed pattern matching returns None here. This is an interesting aspect of for syntax that I haven’t thought about, but totally makes sense.
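
Under the hood, a pattern on the left-hand side of <- desugars into a filtering step followed by a map. Here’s a rough sketch of what the compiler generates for wopwop (simplified, and relying on the Catnip helper above):

// Simplified desugaring sketch: the pattern becomes a withFilter plus a map.
def wopwopDesugared: Option[Char] =
  "".toList.some
    .withFilter {
      case _ :: _ => true
      case _      => false
    }
    .map { case x :: _ => x }

// wopwopDesugared == None, because the empty list fails the filter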

Monad laws 

Monad has three laws: left identity, right identity, and associativity.

LYAHFGG:

The first monad law states that if we take a value, put it in a default context with return and then feed it to a function by using >>=, it’s the same as just taking the value and applying the function to it.

scala> import cats.syntax.eq._
import cats.syntax.eq._

scala> assert { (Monad[Option].pure(3) >>= { x => (x + 100000).some }) ===
         ({ (x: Int) => (x + 100000).some })(3) }

LYAHFGG:

The second law states that if we have a monadic value and we use >>= to feed it to return, the result is our original monadic value.

scala> assert { ("move on up".some >>= {Monad[Option].pure(_)}) === "move on up".some }

LYAHFGG:

The final monad law says that when we have a chain of monadic function applications with >>=, it shouldn’t matter how they’re nested.

scala> Monad[Option].pure(Pole(0, 0)) >>= {_.landRight(2)} >>= {_.landLeft(2)} >>= {_.landRight(2)}
res23: Option[Pole] = Some(Pole(2,4))

scala> Monad[Option].pure(Pole(0, 0)) >>= { x =>
       x.landRight(2) >>= { y =>
       y.landLeft(2) >>= { z =>
       z.landRight(2)
       }}}
res24: Option[Pole] = Some(Pole(2,4))

These laws might look familiar if you remember the monoid laws from day 4. That’s because a monad is a special kind of monoid.

You might be thinking, “But wait. Isn’t Monoid for kind A (or *)?” Yes, you’re right. And that’s the difference between monoid with lowercase m and Monoid[A]. Haskell-style functional programming allows you to abstract out the container and execution model. In category theory, a notion like monoid can be generalized to A, F[A], F[A] => F[B] and all sorts of things. Instead of thinking “omg so many laws,” know that there’s an underlying structure that connects many of them.

Here’s how to check Monad laws using Discipline:

scala> import cats._, cats.instances.all._, cats.laws.discipline.MonadTests
import cats._
import cats.instances.all._
import cats.laws.discipline.MonadTests

scala> val rs = MonadTests[Option].monad[Int, Int, Int]
rs: cats.laws.discipline.MonadTests[Option]#RuleSet = cats.laws.discipline.MonadTests$$anon$2@35e8de37

scala> rs.all.check
+ monad.applicative homomorphism: OK, passed 100 tests.
+ monad.applicative identity: OK, passed 100 tests.
+ monad.applicative interchange: OK, passed 100 tests.
+ monad.applicative map: OK, passed 100 tests.
+ monad.apply composition: OK, passed 100 tests.
+ monad.covariant composition: OK, passed 100 tests.
+ monad.covariant identity: OK, passed 100 tests.
+ monad.flatMap associativity: OK, passed 100 tests.
+ monad.flatMap consistent apply: OK, passed 100 tests.
+ monad.invariant composition: OK, passed 100 tests.
+ monad.invariant identity: OK, passed 100 tests.
+ monad.monad left identity: OK, passed 100 tests.
+ monad.monad right identity: OK, passed 100 tests.

List datatype 

LYAHFGG:

On the other hand, a value like [3,8,9] contains several results, so we can view it as one value that is actually many values at the same time. Using lists as applicative functors showcases this non-determinism nicely.

Let’s look at using List as Applicatives again:

scala> import cats._, cats.instances.all._, cats.syntax.cartesian._
import cats._
import cats.instances.all._
import cats.syntax.cartesian._

scala> (List(1, 2, 3) |@| List(10, 100, 100)) map { _ * _ }
res0: List[Int] = List(10, 100, 100, 20, 200, 200, 30, 300, 300)

Let’s try feeding a non-deterministic value to a function:

scala> import cats.syntax.flatMap._
import cats.syntax.flatMap._

scala> List(3, 4, 5) >>= { x => List(x, -x) }
res1: List[Int] = List(3, -3, 4, -4, 5, -5)

So in this monadic view, a List context represents a mathematical value that could have multiple solutions. Other than that, manipulating Lists using the for notation is just like plain Scala:

scala> for {
         n <- List(1, 2)
         ch <- List('a', 'b')
       } yield (n, ch)
res2: List[(Int, Char)] = List((1,a), (1,b), (2,a), (2,b))

FunctorFilter 

Scala’s for comprehension allows filtering:

scala> import cats._, cats.instances.all._, cats.syntax.show._
import cats._
import cats.instances.all._
import cats.syntax.show._

scala> for {
         x <- (1 to 50).toList if x.show contains '7'
       } yield x
res0: List[Int] = List(7, 17, 27, 37, 47)

Here’s the typeclass contract for `FunctorFilter`:

@typeclass trait FunctorFilter[F[_]] extends Functor[F] {
  /**
   * A combined [[map]] and [[filter]]. Filtering is handled via `Option`
   * instead of `Boolean` such that the output type `B` can be different than
   * the input type `A`.
   * ....
   *
   **/
  def mapFilter[A, B](fa: F[A])(f: A => Option[B]): F[B]
}

We can use it like this:

scala> import cats.syntax.functorFilter._
import cats.syntax.functorFilter._

scala> val english = Map(1 -> "one", 3 -> "three", 10 -> "ten")
english: scala.collection.immutable.Map[Int,String] = Map(1 -> one, 3 -> three, 10 -> ten)

scala> (1 to 50).toList mapFilter { english.get(_) }
res1: List[String] = List(one, three, ten)

This enables the derivative functions/operators collect, flattenOption, and filter:

@typeclass trait FunctorFilter[F[_]] extends Functor[F] {

  def mapFilter[A, B](fa: F[A])(f: A => Option[B]): F[B]

  /**
   * Similar to [[mapFilter]] but uses a partial function instead of a function
   * that returns an `Option`.
   */
  def collect[A, B](fa: F[A])(f: PartialFunction[A, B]): F[B] =
    mapFilter(fa)(f.lift)

  /**
   * "Flatten" out a structure by collapsing `Option`s.
   */
  def flattenOption[A](fa: F[Option[A]]): F[A] = mapFilter(fa)(identity)

  /**
   * Apply a filter to a structure such that the output structure contains all
   * `A` elements in the input structure that satisfy the predicate `f` but none
   * that don't.
   */
  def filter[A](fa: F[A])(f: A => Boolean): F[A] =
    mapFilter(fa)(a => if (f(a)) Some(a) else None)
}

We can use it like this:

scala> def collectEnglish[F[_]: FunctorFilter](f: F[Int]): F[String] =
         f collect {
           case 1  => "one"
           case 3  => "three"
           case 10 => "ten"
         }
collectEnglish: [F[_]](f: F[Int])(implicit evidence$1: cats.FunctorFilter[F])F[String]

scala> collectEnglish((1 to 50).toList)
res2: List[String] = List(one, three, ten)

scala> def filterSeven[F[_]: FunctorFilter](f: F[Int]): F[Int] =
         f filter { _.show contains '7' }
filterSeven: [F[_]](f: F[Int])(implicit evidence$1: cats.FunctorFilter[F])F[Int]

scala> filterSeven((1 to 50).toList)
res3: List[Int] = List(7, 17, 27, 37, 47)

MonadFilter 

Here’s the typeclass contract for `MonadFilter`:

@typeclass trait MonadFilter[F[_]] extends Monad[F] with FunctorFilter[F] {

  def empty[A]: F[A]

  ...
}

We can use this to get an empty value of a datatype that supports it.

scala> import cats._, cats.instances.all._
import cats._
import cats.instances.all._

scala> MonadFilter[List].empty[Int]
res0: List[Int] = List()

A knight’s quest 

LYAHFGG:

Here’s a problem that really lends itself to being solved with non-determinism. Say you have a chess board and only one knight piece on it. We want to find out if the knight can reach a certain position in three moves.

Instead of type aliasing a pair, let’s make this into a case class again:

scala> case class KnightPos(c: Int, r: Int)
defined class KnightPos

Here’s the function to calculate all of the knight’s next positions:

scala> case class KnightPos(c: Int, r: Int) {
         def move: List[KnightPos] =
           for {
             KnightPos(c2, r2) <- List(KnightPos(c + 2, r - 1), KnightPos(c + 2, r + 1),
               KnightPos(c - 2, r - 1), KnightPos(c - 2, r + 1),
               KnightPos(c + 1, r - 2), KnightPos(c + 1, r + 2),
               KnightPos(c - 1, r - 2), KnightPos(c - 1, r + 2)) if (
               ((1 to 8).toList contains c2) && ((1 to 8).toList contains r2))
           } yield KnightPos(c2, r2)
       }
defined class KnightPos

scala> KnightPos(6, 2).move
res1: List[KnightPos] = List(KnightPos(8,1), KnightPos(8,3), KnightPos(4,1), KnightPos(4,3), KnightPos(7,4), KnightPos(5,4))

scala> KnightPos(8, 1).move
res2: List[KnightPos] = List(KnightPos(6,2), KnightPos(7,3))

The answers look good. Now we implement chaining this three times:

scala> case class KnightPos(c: Int, r: Int) {
         def move: List[KnightPos] =
           for {
             KnightPos(c2, r2) <- List(KnightPos(c + 2, r - 1), KnightPos(c + 2, r + 1),
             KnightPos(c - 2, r - 1), KnightPos(c - 2, r + 1),
             KnightPos(c + 1, r - 2), KnightPos(c + 1, r + 2),
             KnightPos(c - 1, r - 2), KnightPos(c - 1, r + 2)) if (
             ((1 to 8).toList contains c2) && ((1 to 8).toList contains r2))
           } yield KnightPos(c2, r2)
         def in3: List[KnightPos] =
           for {
             first <- move
             second <- first.move
             third <- second.move
           } yield third
         def canReachIn3(end: KnightPos): Boolean = in3 contains end
       }
defined class KnightPos

scala> KnightPos(6, 2) canReachIn3 KnightPos(6, 1)
res3: Boolean = true

scala> KnightPos(6, 2) canReachIn3 KnightPos(7, 3)
res4: Boolean = false

As it turns out, from (6, 2) you can reach (6, 1) in three moves, but not (7, 3). As with Pierre’s bird example, one key aspect of the monadic calculation is that the effect of one move can be passed on to the next.
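
The same in3 could also be written by chaining >>= directly instead of using a for comprehension (a sketch against the KnightPos defined above):

import cats.syntax.flatMap._

def in3Bind(start: KnightPos): List[KnightPos] =
  start.move >>= { _.move } >>= { _.move }

// in3Bind(KnightPos(6, 2)) contains KnightPos(6, 1) // true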

We’ll pick up from here.

day 6 

Yesterday we looked at the FlatMap and Monad typeclasses, and how monadic chaining can add contexts to values. Because both Option and List already have flatMap in the standard library, it was more about changing the way we see things than introducing new code. We also reviewed the for syntax as a way of chaining monadic operations.

Before I jump into the topic, I want to mention Pamflet, the Scala-based blog/book platform I’m using here. Nathan Hamblen (@n8han) started Pamflet and I’ve been contributing features to it too. Speaking of which, the source of these posts is also available on eed3si9n/herding-cats if you’re curious how this is built. A special thanks to Leif Wickland (@leifwickland) for proofreading all of the posts and sending me pull requests with fixes!

do vs for 

There are subtle differences in Haskell’s do notation and Scala’s for syntax. Here’s an example of do notation:

foo = do
  x <- Just 3
  y <- Just "!"
  Just (show x ++ y)

Typically one would write return (show x ++ y), but I wrote out Just, so it’s clear that the last line is a monadic value. On the other hand, Scala would look as follows:

scala> def foo = for {
         x <- Some(3)
         y <- Some("!")
       } yield x.toString + y
foo: Option[String]

Looks similar, but there are some differences. Scala’s for is syntactic sugar that desugars into map, flatMap, and withFilter calls resolved by each generator’s own type rather than by a Monad typeclass, and the result type is driven by the first generator.

Here are some demonstrations of these points:

scala> import collection.immutable.BitSet
import collection.immutable.BitSet

scala> val bits = BitSet(1, 2, 3)
bits: scala.collection.immutable.BitSet = BitSet(1, 2, 3)

scala> for {
         x <- bits
       } yield x.toFloat
res5: scala.collection.immutable.SortedSet[Float] = TreeSet(1.0, 2.0, 3.0)

scala> for {
         i <- List(1, 2, 3)
         j <- Some(1)
       } yield i + j
res6: List[Int] = List(2, 3, 4)

scala> for {
         i <- Map(1 -> 2)
         j <- Some(3)
       } yield j
res7: scala.collection.immutable.Iterable[Int] = List(3)

Implementing actM 

There are several DSLs in Scala that transform imperative-looking code into monadic or applicative function calls using macros, such as Async and Effectful.

Covering the full array of Scala syntax in the macro is hard work, but by copy-pasting code from Async and Effectful I put together a toy macro that supports only simple expressions and vals. I’ll omit the details, but the key function is this:

  def transform(group: BindGroup, isPure: Boolean): Tree =
    group match {
      case (binds, tree) =>
        binds match {
          case Nil =>
            if (isPure) q"""$monadInstance.pure($tree)"""
            else tree
          case (name, unwrappedFrom) :: xs =>
            val innerTree = transform((xs, tree), isPure)
            val param = ValDef(Modifiers(Flag.PARAM), name, TypeTree(), EmptyTree)
            q"""$monadInstance.flatMap($unwrappedFrom) { $param => $innerTree }"""
        }
    }

Here’s how we can use actM:

scala> import cats._, cats.instances.all._
import cats._
import cats.instances.all._

scala> :paste
// Entering paste mode (ctrl-D to finish)
object Catnip {
  implicit class IdOp[A](val a: A) extends AnyVal {
    def some: Option[A] = Some(a)
  }
  def none[A]: Option[A] = None
}
import Catnip._

// Exiting paste mode, now interpreting.

defined object Catnip
import Catnip._

scala> import example.MonadSyntax._
import example.MonadSyntax._

scala> actM[Option, String] {
         val x = 3.some.next
         val y = "!".some.next
         x.toString + y
       }
res8: Option[String] = Some(3!)

fa.next expands to a Monad[F].flatMap(fa)() call. So the above code expands into:

scala> Monad[Option].flatMap[String, String]({
         val fa0: Option[Int] = 3.some
         Monad[Option].flatMap[Int, String](fa0) { (arg0: Int) => {
           val next0: Int = arg0
           val x: Int = next0
           val fa1: Option[String] = "!".some
           Monad[Option].flatMap[String, String](fa1)((arg1: String) => {
             val next1: String = arg1
             val y: String = next1
             Monad[Option].pure[String](x.toString + y)
           })
         }}
       }) { (arg2: String) => Monad[Option].pure[String](arg2) }
res9: Option[String] = Some(3!)

Let’s see if this can prevent auto conversion from Option to List.

scala> actM[List, Int] {
         val i = List(1, 2, 3).next
         val j = 1.some.next
         i + j
       }
<console>:32: error: exception during macro expansion:
scala.reflect.macros.TypecheckException: type mismatch;
 found   : fa$macro$15.type (with underlying type Option[Int] @scala.reflect.internal.annotations.uncheckedBounds)
 required: List[?]
	at scala.reflect.macros.contexts.Typers$$anonfun$typecheck$2$$anonfun$apply$1.apply(Typers.scala:34)
	at scala.reflect.macros.contexts.Typers$$anonfun$typecheck$2$$anonfun$apply$1.apply(Typers.scala:28)
	at scala.reflect.macros.contexts.Typers$$anonfun$3.apply(Typers.scala:24)
	at scala.reflect.macros.contexts.Typers$$anonfun$3.apply(Typers.scala:24)
	at scala.reflect.macros.contexts.Typers$$anonfun$withContext$1$1.apply(Typers.scala:25)
	at scala.reflect.macros.contexts.Typers$$anonfun$withContext$1$1.apply(Typers.scala:25)
	at scala.reflect.macros.contexts.Typers$$anonfun$1.apply(Typers.scala:23)
	at scala.reflect.macros.contexts.Typers$$anonfun$1.apply(Typers.scala:23)
	at scala.reflect.macros.contexts.Typers$class.withContext$1(Typers.scala:25)
	at scala.reflect.macros.contexts.Typers$$anonfun$typecheck$2.apply(Typers.scala:28)
	at scala.reflect.macros.contexts.Typers$$anonfun$typecheck$2.apply(Typers.scala:28)
	at scala.reflect.internal.Trees$class.wrappingIntoTerm(Trees.scala:1716)
	at scala.reflect.internal.SymbolTable.wrappingIntoTerm(SymbolTable.scala:16)
	at scala.reflect.macros.contexts.Typers$class.withWrapping$1(Typers.scala:26)
	at scala.reflect.macros.contexts.Typers$class.typecheck(Typers.scala:28)
	at scala.reflect.macros.contexts.Context.typecheck(Context.scala:6)
	at scala.reflect.macros.contexts.Context.typecheck(Context.scala:6)
	at example.internal.ActMTransform$class.actMTransform(ActMTransform.scala:27)
	at example.internal.ActMMacro$$anon$1.actMTransform(ActMMacro.scala:8)
	at example.internal.ActMBase.actMImpl(ActMBase.scala:13)
	at example.internal.ActMImpl$.actMImpl(ActMImpl.scala:9)

       actM[List, Int] {
                       ^

The error message is a bit rough, but we were able to catch this at compile time. This will also work for any monad, including Future.

scala> :paste
// Entering paste mode (ctrl-D to finish)
val x = {
  import scala.concurrent.{ExecutionContext, Future}
  import ExecutionContext.Implicits.global
  actM[Future, Int] {
    val i = Future { 1 }.next
    val j = Future { 2 }.next
    i + j
  }
}

// Exiting paste mode, now interpreting.

x: scala.concurrent.Future[Int] = List()

scala> x.value
res11: Option[scala.util.Try[Int]] = Some(Success(3))

This macro is incomplete toy code, but it demonstrates the potential usefulness of having something like this.

Writer datatype 

Learn You a Haskell for Great Good says:

Whereas the Maybe monad is for values with an added context of failure, and the list monad is for nondeterministic values, Writer monad is for values that have another value attached that acts as a sort of log value.

Let’s follow the book and implement the applyLog function:

scala> def isBigGang(x: Int): (Boolean, String) =
         (x > 9, "Compared gang size to 9.")
isBigGang: (x: Int)(Boolean, String)

scala> implicit class PairOps[A](pair: (A, String)) {
         def applyLog[B](f: A => (B, String)): (B, String) = {
           val (x, log) = pair
           val (y, newlog) = f(x)
           (y, log ++ newlog)
         }
       }
defined class PairOps

scala> (3, "Smallish gang.") applyLog isBigGang
res0: (Boolean, String) = (false,Smallish gang.Compared gang size to 9.)

Since method injection is a common use case for implicits, Scala 2.10 added syntactic sugar called implicit class to make the promotion from a plain class to an enriched class easier. Here’s how we can generalize the log to a Semigroup:

scala> import cats._, cats.instances.all._, cats.syntax.semigroup._
import cats._
import cats.instances.all._
import cats.syntax.semigroup._

scala> implicit class PairOps[A, B: Semigroup](pair: (A, B)) {
         def applyLog[C](f: A => (C, B)): (C, B) = {
           val (x, log) = pair
           val (y, newlog) = f(x)
           (y, log |+| newlog)
         }
       }
defined class PairOps
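
Now applyLog works for any log type that has a Semigroup instance, not just String. For example, with List[String] logs (a sketch; isBigGangL is just an illustrative name):

// Semigroup[List[String]] comes from cats.instances.all._ imported above.
def isBigGangL(x: Int): (Boolean, List[String]) =
  (x > 9, List("Compared gang size to 9."))

(3, List("Smallish gang.")) applyLog isBigGangL
// (false, List(Smallish gang., Compared gang size to 9.))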

Writer 

LYAHFGG:

To attach a monoid to a value, we just need to put them together in a tuple. The Writer w a type is just a newtype wrapper for this.

In Cats, the equivalent is called `Writer`:

type Writer[L, V] = WriterT[Id, L, V]
object Writer {
  def apply[L, V](l: L, v: V): WriterT[Id, L, V] = WriterT[Id, L, V]((l, v))

  def value[L:Monoid, V](v: V): Writer[L, V] = WriterT.value(v)

  def tell[L](l: L): Writer[L, Unit] = WriterT.tell(l)
}

Writer[L, V] is a type alias for WriterT[Id, L, V].

WriterT 

Here’s the simplified version of `WriterT`:

final case class WriterT[F[_], L, V](run: F[(L, V)]) {
  def tell(l: L)(implicit functorF: Functor[F], semigroupL: Semigroup[L]): WriterT[F, L, V] =
    mapWritten(_ |+| l)

  def written(implicit functorF: Functor[F]): F[L] =
    functorF.map(run)(_._1)

  def value(implicit functorF: Functor[F]): F[V] =
    functorF.map(run)(_._2)

  def mapBoth[M, U](f: (L, V) => (M, U))(implicit functorF: Functor[F]): WriterT[F, M, U] =
    WriterT { functorF.map(run)(f.tupled) }

  def mapWritten[M](f: L => M)(implicit functorF: Functor[F]): WriterT[F, M, V] =
    mapBoth((l, v) => (f(l), v))
}

Here’s how we can create Writer values:

scala> import cats.data.Writer
import cats.data.Writer

scala> val w = Writer("Smallish gang.", 3)
w: cats.data.WriterT[cats.Id,String,Int] = WriterT((Smallish gang.,3))

scala> val v = Writer.value[String, Int](3)
v: cats.data.Writer[String,Int] = WriterT((,3))

scala> val l = Writer.tell[String]("Log something")
l: cats.data.Writer[String,Unit] = WriterT((Log something,()))

To run the Writer datatype you can call its run method:

scala> w.run
res1: cats.Id[(String, Int)] = (Smallish gang.,3)

Using for syntax with Writer 

LYAHFGG:

Now that we have a Monad instance, we’re free to use do notation for Writer values.

scala> import cats.syntax.show._
import cats.syntax.show._

scala> def logNumber(x: Int): Writer[List[String], Int] =
         Writer(List("Got number: " + x.show), 3)
logNumber: (x: Int)cats.data.Writer[List[String],Int]

scala> def multWithLog: Writer[List[String], Int] =
         for {
           a <- logNumber(3)
           b <- logNumber(5)
         } yield a * b
multWithLog: cats.data.Writer[List[String],Int]

scala> multWithLog.run
res2: cats.Id[(List[String], Int)] = (List(Got number: 3, Got number: 5),15)

Adding logging to program 

Here’s the gcd example:

scala> import cats.syntax.flatMap._
import cats.syntax.flatMap._

scala> :paste
// Entering paste mode (ctrl-D to finish)
def gcd(a: Int, b: Int): Writer[List[String], Int] = {
  if (b == 0) for {
      _ <- Writer.tell(List("Finished with " + a.show))
    } yield a
  else
    Writer.tell(List(s"${a.show} mod ${b.show} = ${(a % b).show}")) >>= { _ =>
      gcd(b, a % b)
    }
}

// Exiting paste mode, now interpreting.

gcd: (a: Int, b: Int)cats.data.Writer[List[String],Int]

scala> gcd(12, 16).run
res3: cats.Id[(List[String], Int)] = (List(12 mod 16 = 12, 16 mod 12 = 4, 12 mod 4 = 0, Finished with 4),4)

Inefficient List construction 

LYAHFGG:

When using the Writer monad, you have to be careful which monoid to use, because using lists can sometimes turn out to be very slow. That’s because lists use ++ for mappend and using ++ to add something to the end of a list is slow if that list is really long.

Here’s the table of performance characteristics for the major collections. What stands out among the immutable collections is Vector, since it has effectively constant time for all operations. Vector is a tree structure with a branching factor of 32, and it’s able to achieve fast updates through structural sharing.

Here’s the Vector version of gcd:

scala> :paste
// Entering paste mode (ctrl-D to finish)
def gcd(a: Int, b: Int): Writer[Vector[String], Int] = {
  if (b == 0) for {
      _ <- Writer.tell(Vector("Finished with " + a.show))
    } yield a
  else
    Writer.tell(Vector(s"${a.show} mod ${b.show} = ${(a % b).show}")) >>= { _ =>
      gcd(b, a % b)
    }
}

// Exiting paste mode, now interpreting.

gcd: (a: Int, b: Int)cats.data.Writer[Vector[String],Int]

scala> gcd(12, 16).run
res4: cats.Id[(Vector[String], Int)] = (Vector(12 mod 16 = 12, 16 mod 12 = 4, 12 mod 4 = 0, Finished with 4),4)

Comparing performance 

Like the book, let’s write a microbenchmark to compare the performance:

scala> :paste
// Entering paste mode (ctrl-D to finish)
def vectorFinalCountDown(x: Int): Writer[Vector[String], Unit] = {
  import annotation.tailrec
  @tailrec def doFinalCountDown(x: Int, w: Writer[Vector[String], Unit]): Writer[Vector[String], Unit] = x match {
    case 0 => w >>= { _ => Writer.tell(Vector("0")) }
    case x => doFinalCountDown(x - 1, w >>= { _ =>
      Writer.tell(Vector(x.show))
    })
  }
  val t0 = System.currentTimeMillis
  val r = doFinalCountDown(x, Writer.tell(Vector[String]()))
  val t1 = System.currentTimeMillis
  r >>= { _ => Writer.tell(Vector((t1 - t0).show + " msec")) }
}

def listFinalCountDown(x: Int): Writer[List[String], Unit] = {
  import annotation.tailrec
  @tailrec def doFinalCountDown(x: Int, w: Writer[List[String], Unit]): Writer[List[String], Unit] = x match {
    case 0 => w >>= { _ => Writer.tell(List("0")) }
    case x => doFinalCountDown(x - 1, w >>= { _ =>
      Writer.tell(List(x.show))
    })
  }
  val t0 = System.currentTimeMillis
  val r = doFinalCountDown(x, Writer.tell(List[String]()))
  val t1 = System.currentTimeMillis
  r >>= { _ => Writer.tell(List((t1 - t0).show + " msec")) }
}

// Exiting paste mode, now interpreting.

vectorFinalCountDown: (x: Int)cats.data.Writer[Vector[String],Unit]
listFinalCountDown: (x: Int)cats.data.Writer[List[String],Unit]

Here’s the result I got on my machine:

scala> vectorFinalCountDown(10000).run._1.last
res17: String = 6 msec

scala> listFinalCountDown(10000).run._1.last
res18: String = 630 msec

As you can see, List is about 100 times slower.

Reader datatype 

Learn You a Haskell for Great Good says:

In the chapter about applicatives, we saw that the function type, (->) r is an instance of Functor.

scala> import cats._, cats.instances.all._, cats.syntax.functor._
import cats._
import cats.instances.all._
import cats.syntax.functor._

scala> val f = (_: Int) * 2
f: Int => Int = <function1>

scala> val g = (_: Int) + 10
g: Int => Int = <function1>

scala> (g map f)(8)
res0: Int = 36

We’ve also seen that functions are applicative functors. They allow us to operate on the eventual results of functions as if we already had their results.

scala> import cats.syntax.cartesian._
import cats.syntax.cartesian._

scala> val h = (f |@| g) map {_ + _}
h: Int => Int = <function1>

scala> h(3)
res1: Int = 19

Not only is the function type (->) r a functor and an applicative functor, but it’s also a monad. Just like other monadic values that we’ve met so far, a function can also be considered a value with a context. The context for functions is that that value is not present yet and that we have to apply that function to something in order to get its result value.

Let’s try implementing the example:

scala> import cats.syntax.flatMap._
import cats.syntax.flatMap._

scala> val addStuff: Int => Int = for {
         a <- (_: Int) * 2
         b <- (_: Int) + 10
       } yield a + b
addStuff: Int => Int = <function1>

scala> addStuff(3)
res2: Int = 19

Both (*2) and (+10) get applied to the number 3 in this case. return (a+b) does as well, but it ignores it and always presents a+b as the result. For this reason, the function monad is also called the reader monad. All the functions read from a common source.

The Reader monad lets us pretend the value is already there. I am guessing that this works only for functions that accept one parameter.
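
For intuition, here’s a minimal sketch of how flatMap can be defined for plain functions with a fixed input type R (just an illustration of the idea, not Cats’ actual instance):

// The "reader" flatMap: run fa, feed its result to f, and run the resulting
// function on the same input r. Every step reads from the one shared input.
def flatMapFn[R, A, B](fa: R => A)(f: A => (R => B)): R => B =
  r => f(fa(r))(r)

val addStuff2: Int => Int =
  flatMapFn((_: Int) * 2) { a =>
    flatMapFn((_: Int) + 10) { b =>
      (_: Int) => a + b
    }
  }
// addStuff2(3) == 19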

Dependency injection 

At nescala 2012 on March 9th, Rúnar (@runarorama) gave a talk Dead-Simple Dependency Injection. One of the ideas presented there was to use the Reader monad for dependency injection. Later that year, he also gave a longer version of the talk Lambda: The Ultimate Dependency Injection Framework in YOW 2012. In 2013, Jason Arhart wrote Scrap Your Cake Pattern Boilerplate: Dependency Injection Using the Reader Monad, which I’m going to base my example on.

Imagine we have a case class for a user, and a trait that abstracts the data store to get them.

scala> :paste
// Entering paste mode (ctrl-D to finish)
case class User(id: Long, parentId: Long, name: String, email: String)
trait UserRepo {
  def get(id: Long): User
  def find(name: String): User
}

// Exiting paste mode, now interpreting.

defined class User
defined trait UserRepo

Next we define a primitive reader for each operation defined in the UserRepo trait:

scala> :paste
// Entering paste mode (ctrl-D to finish)
trait Users {
  def getUser(id: Long): UserRepo => User = {
    case repo => repo.get(id)
  }
  def findUser(name: String): UserRepo => User = {
    case repo => repo.find(name)
  }
}

// Exiting paste mode, now interpreting.

defined trait Users

That looks like boilerplate. (I thought we were scrapping it.) Moving on.

Based on the primitive readers, we can compose other readers, including the application.

scala> :paste
// Entering paste mode (ctrl-D to finish)
object UserInfo extends Users {
  def userInfo(name: String): UserRepo => Map[String, String] =
    for {
      user <- findUser(name)
      boss <- getUser(user.parentId)
    } yield Map(
      "name" -> s"${user.name}",
      "email" -> s"${user.email}",
      "boss_name" -> s"${boss.name}"
    )
}
trait Program {
  def app: UserRepo => String =
    for {
      fredo <- UserInfo.userInfo("Fredo")
    } yield fredo.toString
}

// Exiting paste mode, now interpreting.

defined object UserInfo
defined trait Program

To run this app, we need something that provides an implementation for UserRepo:

scala> :paste
// Entering paste mode (ctrl-D to finish)
import cats.syntax.eq._

val testUsers = List(User(0, 0, "Vito", "vito@example.com"),
  User(1, 0, "Michael", "michael@example.com"),
  User(2, 0, "Fredo", "fredo@example.com"))
object Main extends Program {
  def run: String = app(mkUserRepo)
  def mkUserRepo: UserRepo = new UserRepo {
    def get(id: Long): User = (testUsers find { _.id === id }).get
    def find(name: String): User = (testUsers find { _.name === name }).get
  }
}
Main.run

// Exiting paste mode, now interpreting.

import cats.syntax.eq._
testUsers: List[User] = List(User(0,0,Vito,vito@example.com), User(1,0,Michael,michael@example.com), User(2,0,Fredo,fredo@example.com))
defined object Main
res3: String = Map(name -> Fredo, email -> fredo@example.com, boss_name -> Vito)

We got the boss man’s name.

We can try using actM instead of a for comprehension:

scala> :paste
// Entering paste mode (ctrl-D to finish)
object UserInfo extends Users {
  import example.MonadSyntax._
  def userInfo(name: String): UserRepo => Map[String, String] =
    actM[UserRepo => ?, Map[String, String]] {
      val user = findUser(name).next
      val boss = getUser(user.parentId).next
      Map(
        "name" -> s"${user.name}",
        "email" -> s"${user.email}",
        "boss_name" -> s"${boss.name}"
      )
    }
}
trait Program {
  import example.MonadSyntax._
  def app: UserRepo => String =
    actM[UserRepo => ?, String] {
      val fredo = UserInfo.userInfo("Fredo").next
      fredo.toString
    }
}
object Main extends Program {
  def run: String = app(mkUserRepo)
  def mkUserRepo: UserRepo = new UserRepo {
    def get(id: Long): User = (testUsers find { _.id === id }).get
    def find(name: String): User = (testUsers find { _.name === name }).get
  }
}
Main.run

// Exiting paste mode, now interpreting.

defined object UserInfo
defined trait Program
defined object Main
res4: String = Map(name -> Fredo, email -> fredo@example.com, boss_name -> Vito)

The code inside the actM block looks more natural than the for version, but the type annotations probably make it more difficult to use.

That’s all for today.

day 7 

On day 6 we compared Scala’s for comprehension with Haskell’s do, and implemented the actM macro. We also looked at the Reader datatype, which is another way of thinking about Function1[A, B].

I had been a bit sloppy in describing List and Reader as monads; to be precise, I should say the List datatype and the Reader datatype, which form monads under certain operations.

State datatype 

When writing code using an immutable data structure, one pattern that arises often is passing of a value that represents some state. The example I like to use is Tetris. Imagine a functional implementation of Tetris where Tetrix.init creates the initial state, and then various transition functions return a transformed state and some return value:

val (s0, _) = Tetrix.init()
val (s1, _) = Tetrix.nextBlock(s0)
val (s2, moved0) = Tetrix.moveBlock(s1, LEFT)
val (s3, moved1) =
  if (moved0) Tetrix.moveBlock(s2, LEFT)
  else (s2, moved0)

The passing of the state objects (s0, s1, s2, …) becomes error-prone boilerplate. The goal is to automate the explicit passing of the states.

To follow along with the book, we'll use its stack example. Here's an implementation without using State.

scala> type Stack = List[Int]
defined type alias Stack

scala> def pop(s0: Stack): (Stack, Int) =
         s0 match {
           case x :: xs => (xs, x)
           case Nil     => sys.error("stack is empty")
         }
pop: (s0: Stack)(Stack, Int)

scala> def push(s0: Stack, a: Int): (Stack, Unit) = (a :: s0, ())
push: (s0: Stack, a: Int)(Stack, Unit)

scala> def stackManip(s0: Stack): (Stack, Int) = {
         val (s1, _) = push(s0, 3)
         val (s2, a) = pop(s1)
         pop(s2)
       }
stackManip: (s0: Stack)(Stack, Int)

scala> stackManip(List(5, 8, 2, 1))
res0: (Stack, Int) = (List(8, 2, 1),5)

State and StateT datatype 

Learn You a Haskell for Great Good says:

Haskell features the State monad, which makes dealing with stateful problems a breeze while still keeping everything nice and pure. ….

We’ll say that a stateful computation is a function that takes some state and returns a value along with some new state. That function would have the following type:

s -> (a, s)

State is a datatype that encapsulates a stateful computation S => (S, A). State forms a monad that passes along the state represented by the type S. Haskell should've named it Stater or Program to avoid the confusion, but by now people already know it as State, so it's too late.
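
Here's a hand-rolled sketch of just that S => (S, A) shape to make the idea concrete (an illustration only, not Cats' implementation):

// a hand-rolled sketch of the S => (S, A) idea, not Cats' encoding:
// flatMap threads the state from one computation into the next
case class SimpleState[S, A](run: S => (S, A)) {
  def flatMap[B](f: A => SimpleState[S, B]): SimpleState[S, B] =
    SimpleState { s0 =>
      val (s1, a) = run(s0)   // run this step
      f(a).run(s1)            // feed the resulting state into the next step
    }
  def map[B](f: A => B): SimpleState[S, B] =
    flatMap { a => SimpleState { s => (s, f(a)) } }
}

Cats' State generalizes this shape by wrapping the function in an effect, as we'll see next.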

Cody Allen (@ceedubs) had a pull request open for State/StateT on Cats #302, which was merged recently. (Thanks, Erik) As it happens, State is just a type alias:

package object data {
  ....
  type State[S, A] = StateT[Eval, S, A]
  object State extends StateFunctions
}

StateT is a monad transformer, a type constructor for other datatypes. State partially applies StateT with Eval, which emulates a call stack with heap memory to prevent overflow. Here’s the definition of StateT:

final class StateT[F[_], S, A](val runF: F[S => F[(S, A)]]) {

  /**
   * Run with the provided initial state value
   */
  def run(initial: S)(implicit F: FlatMap[F]): F[(S, A)] =
    F.flatMap(runF)(f => f(initial))

  ....
}

object StateT extends StateTInstances {
  def apply[F[_], S, A](f: S => F[(S, A)])(implicit F: Applicative[F]): StateT[F, S, A] =
    new StateT(F.pure(f))

  def applyF[F[_], S, A](runF: F[S => F[(S, A)]]): StateT[F, S, A] =
    new StateT(runF)

  ....
}


To construct a State value, you pass the state transition function to State.apply.

private[data] abstract class StateFunctions {
  def apply[S, A](f: S => (S, A)): State[S, A] =
    StateT.applyF(Now((s: S) => Now(f(s))))
  
  ....
}

As the State implementation is fresh, a few bumps in the road are expected. When I tried using State in the REPL, I ran into an odd behavior where I could create one state but not a second one. @retronym pointed me to SI-7139: Type alias and object with the same name cause type mismatch in REPL, which I was able to work around in #322.

Let’s consider how to implement stack with State:

scala> type Stack = List[Int]
defined type alias Stack

scala> import cats._, cats.data.State, cats.instances.all._
import cats._
import cats.data.State
import cats.instances.all._

scala> val pop = State[Stack, Int] {
         case x :: xs => (xs, x)
         case Nil     => sys.error("stack is empty")
       }
pop: cats.data.State[Stack,Int] = cats.data.StateT@42968f0a

scala> def push(a: Int) = State[Stack, Unit] {
         case xs => (a :: xs, ())
       }
push: (a: Int)cats.data.State[Stack,Unit]

These are the primitive programs. Now we can construct compound programs by composing the monad.

scala> def stackManip: State[Stack, Int] = for {
         _ <- push(3)
         a <- pop
         b <- pop
       } yield(b)
stackManip: cats.data.State[Stack,Int]

scala> stackManip.run(List(5, 8, 2, 1)).value
res0: (Stack, Int) = (List(8, 2, 1),5)

The first call, run, is for StateT, and the second call, value, runs the resulting Eval to the end.
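
To make the two steps explicit, here's a quick sketch using stackManip from above:

// run returns the final (state, value) pair suspended in an Eval;
// value then forces that Eval
val ev: Eval[(Stack, Int)] = stackManip.run(List(5, 8, 2, 1))
val (s, a) = ev.value   // (List(8, 2, 1), 5)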

Both push and pop are still purely functional, and we were able to eliminate explicitly passing the state object (s0, s1, …).

Getting and setting the state 

LYAHFGG:

The Control.Monad.State module provides a type class that’s called MonadState and it features two pretty useful functions, namely get and put.

The State object defines a few helper functions:

private[data] abstract class StateFunctions {

  def apply[S, A](f: S => (S, A)): State[S, A] =
    StateT.applyF(Now((s: S) => Now(f(s))))

  /**
   * Return `a` and maintain the input state.
   */
  def pure[S, A](a: A): State[S, A] = State(s => (s, a))

  /**
   * Modify the input state and return Unit.
   */
  def modify[S](f: S => S): State[S, Unit] = State(s => (f(s), ()))

  /**
   * Inspect a value from the input state, without modifying the state.
   */
  def inspect[S, T](f: S => T): State[S, T] = State(s => (s, f(s)))

  /**
   * Return the input state without modifying it.
   */
  def get[S]: State[S, S] = inspect(identity)

  /**
   * Set the state to `s` and return Unit.
   */
  def set[S](s: S): State[S, Unit] = State(_ => (s, ()))
}

These are confusing at first. But remember that the State monad encapsulates a function that takes a state and returns a pair of the next state and a value. So State.get keeps the state as it is, and returns it as the value.

Similarly, State.set(s) in this context means to overwrite the state with s and return ().

Let’s try using them with the stackyStack example from the book:

scala> import cats.syntax.eq._
import cats.syntax.eq._

scala> def stackyStack: State[Stack, Unit] = for {
         stackNow <- State.get[Stack]
         r <- if (stackNow === List(1, 2, 3)) State.set[Stack](List(8, 3, 1))
              else State.set[Stack](List(9, 2, 1))
       } yield r
stackyStack: cats.data.State[Stack,Unit]

scala> stackyStack.run(List(1, 2, 3)).value
res1: (Stack, Unit) = (List(8, 3, 1),())

We can also implement both pop and push in terms of get and set (the book calls it put):

scala> val pop: State[Stack, Int] = for {
         s <- State.get[Stack]
         (x :: xs) = s
       } yield x
pop: cats.data.State[Stack,Int] = cats.data.StateT@72c7416c

scala> def push(x: Int): State[Stack, Unit] = for {
         xs <- State.get[Stack]
         r <- State.set(x :: xs)
       } yield r
push: (x: Int)cats.data.State[Stack,Unit]

As you can see, the State monad on its own doesn't do much (it just encapsulates a state transition function and a return value), but by chaining these small programs we can remove some of the boilerplate.
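
For example, the stackManip program we wrote earlier is just sugar for the following chain of flatMap/map calls (a sketch of the desugaring):

// the for comprehension desugars into flatMap/map calls on State
def stackManip2: State[Stack, Int] =
  push(3) flatMap { _ =>
    pop flatMap { _ =>
      pop map { b => b }
    }
  }

// stackManip2.run(List(5, 8, 2, 1)).value   // (List(8, 2, 1), 5)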

Extracting and modifying the state 

State.inspect(f) and State.modify(f) are slightly more advanced variants of State.get and State.set(s).

State.inspect(f) applies the function f: S => T to the state and returns the result, without modifying the state itself.

State.modify(f) applies the function f: S => S to the state, saves the result as the new state, and returns ().
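
Here's a quick sketch of using them with the Stack example (peek and doubleAll are made-up helpers):

// peek reads the top of the stack without changing it;
// doubleAll rewrites the whole stack
def peek: State[Stack, Int] =
  State.inspect[Stack, Int] {
    case x :: _ => x
    case Nil    => sys.error("stack is empty")
  }
def doubleAll: State[Stack, Unit] =
  State.modify[Stack] { xs => xs map { _ * 2 } }

// (for { x <- peek; _ <- doubleAll } yield x).run(List(5, 8, 2, 1)).value
// would yield (List(10, 16, 4, 2), 5)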

Xor datatype 

LYAHFGG:

The Either e a type on the other hand, allows us to incorporate a context of possible failure to our values while also being able to attach values to the failure, so that they can describe what went wrong or provide some other useful info regarding the failure.

We know Either[A, B] from the standard library, and we’ve covered that Cats implements a right-biased functor for it too. Cats also implements its own Either equivalent datatype named Xor:

/** Represents a right-biased disjunction that is either an `A` or a `B`.
 *
 * An instance of `A [[Xor]] B` is either a `[[Xor.Left Left]][A]` or a `[[Xor.Right Right]][B]`.
 *
 * A common use of [[Xor]] is to explicitly represent the possibility of failure in a result as opposed to
 * throwing an exception.  By convention, [[Xor.Left Left]] is used for errors and [[Xor.Right Right]] is reserved for successes.
 * For example, a function that attempts to parse an integer from a string may have a return type of
 * `NumberFormatException [[Xor]] Int`. However, since there is no need to actually throw an exception, the type (`A`)
 * chosen for the "left" could be any type representing an error and has no need to actually extend `Exception`.
 *
 * `A [[Xor]] B` is isomorphic to `scala.Either[A, B]`, but [[Xor]] is right-biased, so methods such as `map` and
 * `flatMap` apply only in the context of the "right" case. This right bias makes [[Xor]] more convenient to use
 * than `scala.Either` in a monadic context. Methods such as `swap`, and `leftMap` provide functionality
 * that `scala.Either` exposes through left projections.
 */
sealed abstract class Xor[+A, +B] extends Product with Serializable {

  def fold[C](fa: A => C, fb: B => C): C = this match {
    case Xor.Left(a) => fa(a)
    case Xor.Right(b) => fb(b)
  }

  def isLeft: Boolean = fold(_ => true, _ => false)

  def isRight: Boolean = fold(_ => false, _ => true)

  def swap: B Xor A = fold(Xor.right, Xor.left)

  ....
}

object Xor extends XorInstances with XorFunctions {
  final case class Left[+A](a: A) extends (A Xor Nothing)
  final case class Right[+B](b: B) extends (Nothing Xor B)
}

These values are created using the right and left methods on Xor:

scala> import cats._, cats.data.Xor, cats.instances.all._
import cats._
import cats.data.Xor
import cats.instances.all._

scala> Xor.right[String, Int](1)
res0: cats.data.Xor[String,Int] = Right(1)

scala> Xor.left[String, Int]("error")
res1: cats.data.Xor[String,Int] = Left(error)

Unlike standard library’s Either[A, B], Cats’ Xor assumes that you’d want to mostly want a right projection:

scala> import cats.syntax.flatMap._
import cats.syntax.flatMap._

scala> Xor.left[String, Int]("boom") >>=
         { x => Xor.right[String, Int](x + 1) }
res2: cats.data.Xor[String,Int] = Left(boom)

Let's try using it in a for comprehension:

scala> import cats.syntax.semigroup._
import cats.syntax.semigroup._

scala> for {
         e1 <- Xor.right[String, String]("event 1 ok")
         e2 <- Xor.left[String, String]("event 2 failed!")
         e3 <- Xor.left[String, String]("event 3 failed!")
       } yield (e1 |+| e2 |+| e3)
res3: cats.data.Xor[String,String] = Left(event 2 failed!)

As you can see, the first failure rolls up as the final result. How do we get the value out of Xor?

First there’s fold:

scala> val e1 = Xor.right[String, String]("event 1 ok")
e1: cats.data.Xor[String,String] = Right(event 1 ok)

scala> e1.fold(
         { l => l },
         { r => r + "!" })
res4: String = event 1 ok!

We can also use the isRight and isLeft methods to check which side we are on:

scala> e1.isRight
res5: Boolean = true

scala> e1.isLeft
res6: Boolean = false

To extract the value from the right, use getOrElse(x):

scala> e1.getOrElse("something good")
res7: String = event 1 ok

To extract the value from the left, use swap to make it right first:

scala> val e2 = Xor.left[String, String]("event 2 failed!")
e2: cats.data.Xor[String,String] = Left(event 2 failed!)

scala> e2.swap.getOrElse("something good")
res8: String = event 2 failed!

As expected, map is also right-biased:

scala> e1 map { _ + "!" }
res9: cats.data.Xor[String,String] = Right(event 1 ok!)

scala> e2 map { _ + "!" }
res10: cats.data.Xor[String,String] = Left(event 2 failed!)

To chain on the left side, there’s orElse, which accepts => AA Xor BB where [AA >: A, BB >: B]:

scala> e2 orElse Xor.right[String, String]("event 2 retry ok")
res11: cats.data.Xor[String,String] = Right(event 2 retry ok)

The Xor datatype has more methods, such as toEither.
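
For instance, toEither converts back to the standard library's Either (a quick sketch using e1 from above):

// converting back to the standard library's Either
val std: Either[String, String] = e1.toEither   // Right(event 1 ok)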

Validated datatype 

There’s another datatype in Cats that we can use in place of Either called Validated:

sealed abstract class Validated[+E, +A] extends Product with Serializable {

  def fold[B](fe: E => B, fa: A => B): B =
    this match {
      case Invalid(e) => fe(e)
      case Valid(a) => fa(a)
    }

  def isValid: Boolean = fold(_ => false, _ => true)
  def isInvalid: Boolean = fold(_ => true, _ => false)

  ....
}

object Validated extends ValidatedInstances with ValidatedFunctions{
  final case class Valid[+A](a: A) extends Validated[Nothing, A]
  final case class Invalid[+E](e: E) extends Validated[E, Nothing]
}

Here’s how to create the values:

scala> import cats._, cats.data.Validated, cats.instances.all._
import cats._
import cats.data.Validated
import cats.instances.all._

scala> import Validated.{ valid, invalid }
import Validated.{valid, invalid}

scala> valid[String, String]("event 1 ok")
res0: cats.data.Validated[String,String] = Valid(event 1 ok)

scala> invalid[String, String]("event 1 failed!")
res1: cats.data.Validated[String,String] = Invalid(event 1 failed!)

What's different about Validated is that it does not form a monad; it forms an applicative functor. Instead of chaining the result from the first event to the next, Validated validates all events:

scala> import cats.syntax.cartesian._
import cats.syntax.cartesian._

scala> val result = (valid[String, String]("event 1 ok") |@|
        invalid[String, String]("event 2 failed!") |@|
        invalid[String, String]("event 3 failed!")) map {_ + _ + _}
result: cats.data.Validated[String,String] = Invalid(event 2 failed!event 3 failed!)

The final result is Invalid(event 2 failed!event 3 failed!). Unlike Xor's monad, which cuts the calculation short, Validated keeps going to report back all failures. This would be useful for validating user input on an online bacon shop.

The problem, however, is that the error messages are mushed together into one string. Shouldn’t it be something like a list?

Using NonEmptyList to accumulate failures 

This is where the NonEmptyList datatype comes in handy. For now, think of it as a list that's guaranteed to have at least one element.

scala> import cats.data.{ NonEmptyList => NEL }
import cats.data.{NonEmptyList=>NEL}

scala> NEL.of(1)
res2: cats.data.NonEmptyList[Int] = NonEmptyList(1)

We can use NEL[A] on the invalid side to accumulate the errors:

scala> val result =
         (valid[NEL[String], String]("event 1 ok") |@|
           invalid[NEL[String], String](NEL.of("event 2 failed!")) |@|
           invalid[NEL[String], String](NEL.of("event 3 failed!"))) map {_ + _ + _}
result: cats.data.Validated[cats.data.NonEmptyList[String],String] = Invalid(NonEmptyList(event 2 failed!, event 3 failed!))

Inside Invalid, we were able to accumulate all failed messages.

We can use the fold method to extract the values:

scala> val errs: NEL[String] = result.fold(
         { l => l },
         { r => sys.error("invalid is expected") }
       )
errs: cats.data.NonEmptyList[String] = NonEmptyList(event 2 failed!, event 3 failed!)

Ior datatype 

In Cats there is yet another datatype that represents an A-B pair called Ior.

/** Represents a right-biased disjunction that is either an `A`, or a `B`, or both an `A` and a `B`.
 *
 * An instance of `A [[Ior]] B` is one of:
 *  - `[[Ior.Left Left]][A]`
 *  - `[[Ior.Right Right]][B]`
 *  - `[[Ior.Both Both]][A, B]`
 *
 * `A [[Ior]] B` is similar to `A [[Xor]] B`, except that it can represent the simultaneous presence of
 * an `A` and a `B`. It is right-biased like [[Xor]], so methods such as `map` and `flatMap` operate on the
 * `B` value. Some methods, like `flatMap`, handle the presence of two [[Ior.Both Both]] values using a
 * `[[Semigroup]][A]`, while other methods, like [[toXor]], ignore the `A` value in a [[Ior.Both Both]].
 *
 * `A [[Ior]] B` is isomorphic to `(A [[Xor]] B) [[Xor]] (A, B)`, but provides methods biased toward `B`
 * values, regardless of whether the `B` values appear in a [[Ior.Right Right]] or a [[Ior.Both Both]].
 * The isomorphic [[Xor]] form can be accessed via the [[unwrap]] method.
 */
sealed abstract class Ior[+A, +B] extends Product with Serializable {

  final def fold[C](fa: A => C, fb: B => C, fab: (A, B) => C): C = this match {
    case Ior.Left(a) => fa(a)
    case Ior.Right(b) => fb(b)
    case Ior.Both(a, b) => fab(a, b)
  }

  final def isLeft: Boolean = fold(_ => true, _ => false, (_, _) => false)
  final def isRight: Boolean = fold(_ => false, _ => true, (_, _) => false)
  final def isBoth: Boolean = fold(_ => false, _ => false, (_, _) => true)

  ....
}

object Ior extends IorInstances with IorFunctions {
  final case class Left[+A](a: A) extends (A Ior Nothing)
  final case class Right[+B](b: B) extends (Nothing Ior B)
  final case class Both[+A, +B](a: A, b: B) extends (A Ior B)
}

These values are created using the left, right, and both methods on Ior:

scala> import cats._, cats.data.{ Ior, NonEmptyList => NEL }, cats.instances.all._
import cats._
import cats.data.{Ior, NonEmptyList=>NEL}
import cats.instances.all._

scala> Ior.right[NEL[String], Int](1)
res0: cats.data.Ior[cats.data.NonEmptyList[String],Int] = Right(1)

scala> Ior.left[NEL[String], Int](NEL.of("error"))
res1: cats.data.Ior[cats.data.NonEmptyList[String],Int] = Left(NonEmptyList(error))

scala> Ior.both[NEL[String], Int](NEL.of("warning"), 1)
res2: cats.data.Ior[cats.data.NonEmptyList[String],Int] = Both(NonEmptyList(warning),1)

As noted in the scaladoc comment, Ior’s flatMap uses Semigroup[A] to accumulate failures when it sees an Ior.both(...) value. So we could probably use this as a hybrid of Xor and Validated.

Here’s how flatMap behaves for all nine combinations:

scala> import cats.syntax.flatMap._
import cats.syntax.flatMap._

scala> Ior.right[NEL[String], Int](1) >>=
         { x => Ior.right[NEL[String], Int](x + 1) }
res3: cats.data.Ior[cats.data.NonEmptyList[String],Int] = Right(2)

scala> Ior.left[NEL[String], Int](NEL.of("error 1")) >>=
         { x => Ior.right[NEL[String], Int](x + 1) }
res4: cats.data.Ior[cats.data.NonEmptyList[String],Int] = Left(NonEmptyList(error 1))

scala> Ior.both[NEL[String], Int](NEL.of("warning 1"), 1) >>=
         { x => Ior.right[NEL[String], Int](x + 1) }
res5: cats.data.Ior[cats.data.NonEmptyList[String],Int] = Both(NonEmptyList(warning 1),2)

scala> Ior.right[NEL[String], Int](1) >>=
         { x => Ior.left[NEL[String], Int](NEL.of("error 2")) }
res6: cats.data.Ior[cats.data.NonEmptyList[String],Int] = Left(NonEmptyList(error 2))

scala> Ior.left[NEL[String], Int](NEL.of("error 1")) >>=
         { x => Ior.left[NEL[String], Int](NEL.of("error 2")) }
res7: cats.data.Ior[cats.data.NonEmptyList[String],Int] = Left(NonEmptyList(error 1))

scala> Ior.both[NEL[String], Int](NEL.of("warning 1"), 1) >>=
         { x => Ior.left[NEL[String], Int](NEL.of("error 2")) }
res8: cats.data.Ior[cats.data.NonEmptyList[String],Int] = Left(NonEmptyList(warning 1, error 2))

scala> Ior.right[NEL[String], Int](1) >>=
         { x => Ior.both[NEL[String], Int](NEL.of("warning 2"), x + 1) }
res9: cats.data.Ior[cats.data.NonEmptyList[String],Int] = Both(NonEmptyList(warning 2),2)

scala> Ior.left[NEL[String], Int](NEL.of("error 1")) >>=
         { x => Ior.both[NEL[String], Int](NEL.of("warning 2"), x + 1) }
res10: cats.data.Ior[cats.data.NonEmptyList[String],Int] = Left(NonEmptyList(error 1))

scala> Ior.both[NEL[String], Int](NEL.of("warning 1"), 1) >>=
         { x => Ior.both[NEL[String], Int](NEL.of("warning 2"), x + 1) }
res11: cats.data.Ior[cats.data.NonEmptyList[String],Int] = Both(NonEmptyList(warning 1, warning 2),2)

Let's try using it in a for comprehension:

scala> import cats.syntax.semigroup._
import cats.syntax.semigroup._

scala> for {
         e1 <- Ior.right[NEL[String], Int](1)
         e2 <- Ior.both[NEL[String], Int](NEL.of("event 2 warning"), e1 + 1)
         e3 <- Ior.both[NEL[String], Int](NEL.of("event 3 warning"), e2 + 1)
       } yield (e1 |+| e2 |+| e3)
res12: cats.data.Ior[cats.data.NonEmptyList[String],Int] = Both(NonEmptyList(event 2 warning, event 3 warning),6)

So Ior.left short-circuits like the failure values in Xor[A, B] and Either[A, B], but Ior.both accumulates the failure values like Validated[A, B].

That’s it for today! We’ll pick it up from here later.

day 8 

On day 7 we looked at the State datatype, which encapsulates stateful computation. We also looked at alternatives to Either[A, B]: the Xor datatype, the Validated datatype, and the Ior datatype.

Free monoids 

I’m going to diverge from Learn You a Haskell for Great Good a bit, and explore free objects.

Let’s first look into free monoids. Given a set of characters:

A = { 'a', 'b', 'c', ... }

We can form the free monoid on A called A* as follows:

A* = String

Here, the binary operation is the String concatenation operator +. We can show that this satisfies the monoid laws, using the empty string "" as the identity.
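
Here's a quick ScalaCheck sketch of the associativity and identity laws for (String, +, ""):

import org.scalacheck.Prop.forAll

// associativity and identity for String concatenation
val assocLaw = forAll { (a: String, b: String, c: String) =>
  ((a + b) + c) == (a + (b + c))
}
val identityLaw = forAll { (a: String) =>
  ("" + a) == a && (a + "") == a
}

// assocLaw.check
// identityLaw.check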

Furthermore, a free monoid can be formed from any arbitrary set A by forming lists of its elements:

A* = List[A]

Here, the binary operation is :::, and the identity is Nil. The definition of the free monoid M(A) is given as follows:

Awodey:

Universal Mapping Property of M(A)
There is a function i: A => |M(A)|, and given any monoid N and any function f: A => |N|, there is a unique monoid homomorphism f_hom: M(A) => N such that |f_hom| ∘ i = f, all as indicated in the following diagram:

Instead of A, I’ll use X here. Also |N| means Set[N]:

free monoids

If we think in terms of Scala,

def i(x: X): Set[M[X]] = ???
def f(x: X): Set[N] = ???

// there exists a unique
def f_hom(mx: M[X]): N

// such that
def f_hom_set(smx: Set[M[X]]): Set[N] = smx map {f_hom}
f == f_hom_set compose i

Suppose A is Char and N is (Int, +). We can write a property test to see if String is a free monoid.

scala> def i(x: Char): Set[String] = Set(x.toString)
i: (x: Char)Set[String]

scala> def f(x: Char): Set[Int] = Set(x.toInt) // example
f: (x: Char)Set[Int]

scala> val f_hom: PartialFunction[String, Int] = 
         { case mx: String if mx.size == 1 => mx.charAt(0).toInt }
f_hom: PartialFunction[String,Int] = <function1>

scala> def f_hom_set(smx: Set[String]): Set[Int] = smx map {f_hom}
f_hom_set: (smx: Set[String])Set[Int]

scala> val g = (f_hom_set _) compose (i _)
g: Char => Set[Int] = <function1>

scala> import org.scalacheck.Prop.forAll
import org.scalacheck.Prop.forAll

scala> val propMAFree = forAll { c: Char => f(c) == g(c) }
propMAFree: org.scalacheck.Prop = Prop

scala> propMAFree.check
+ OK, passed 100 tests.

At least for this implementation of f, we were able to show that String is free.

Injective 

This intuitively shows that Set[M[X]] needs to be lossless for X to allow any f, meaning no two values of X can map to the same value in M[X]. In algebra, this is expressed by saying that i is injective for arrows from Char.

Definition: An arrow f satisfying the property ‘for any pair of arrows x1: T => A and x2: T => A, if f ∘ x1 = f ∘ x2 then x1 = x2‘ is said to be injective for arrows from T.

injective

Uniqueness 

The UMP also stipulates that f_hom is unique, which requires Set[M[A]] to contain zero or more combinations of A’s and nothing more. Because M[A] is unique for A, conceptually there is one and only one free monoid for a set A. We can, however, express the free monoid in different ways, like String and List[Char], so it ends up being more like a free monoid.

Free objects 

It turns out that the free monoid is an example of a free object, which we can define using a functor Set[A]: C[A] => Set[A].

free objects

Comparing the diagrams, we see that they are mostly similar.

Free monads 

We said that free monoids are examples of free objects. Similarly, free monads are examples of free objects.

I’m not going to get into the details, but a monad is a monoid in the category of endofunctors F: C => C, using functor composition F × F => F as the binary operation. Similar to the way we could derive A* from A, we can derive the free monad F* for a given endofunctor F.

Here’s how Haskell does it:

data Free f a = Pure a | Free (f (Free f a))

instance Functor f => Monad (Free f) where
  return = Pure
  Pure a >>= f = f a
  Free m >>= f = Free ((>>= f) <$> m)

Wikipedia on Monad:

Unlike List, which stores a list of values, Free stores a list of functors, wrapped around an initial value. Accordingly, the Functor and Monad instances of Free do nothing other than handing a given function down that list with fmap.

Also note that Free itself is a datatype, but a different free monad gets formed for each Functor.

Why free monads matter 

In practice, we can view Free as a clever way of forming a Monad out of a Functor. This is particularly useful for the interpreter pattern, which is explained in Gabriel Gonzalez (@gabrielg439)’s Why free monads matter:

Let’s try to come up with some sort of abstraction that represents the essence of a syntax tree. …

Our toy language will only have three commands:

output b -- prints a "b" to the console
bell     -- rings the computer's bell
done     -- end of execution

So we represent it as a syntax tree where subsequent commands are leaves of prior commands:

data Toy b next =
    Output b next
  | Bell next
  | Done

Here’s Toy translated into Scala as is:

scala> :paste
// Entering paste mode (ctrl-D to finish)
sealed trait Toy[+A, +Next]
object Toy {
  case class Output[A, Next](a: A, next: Next) extends Toy[A, Next]
  case class Bell[Next](next: Next) extends Toy[Nothing, Next]
  case class Done() extends Toy[Nothing, Nothing]
}

// Exiting paste mode, now interpreting.

defined trait Toy
defined object Toy

scala> Toy.Output('A', Toy.Done())
res0: Toy.Output[Char,Toy.Done] = Output(A,Done())

scala> Toy.Bell(Toy.Output('A', Toy.Done()))
res1: Toy.Bell[Toy.Output[Char,Toy.Done]] = Bell(Output(A,Done()))

CharToy 

In WFMM’s original DSL, Output b next has two type parameters b and next. This allows Output to be generic about the output type. As demonstrated above as Toy, Scala can do this too. But doing so unnecessarily complicates the demonstration of Free because of Scala’s handling of partially applied types. So we’ll first hardcode the datatype to Char as follows:

scala> :paste
// Entering paste mode (ctrl-D to finish)
sealed trait CharToy[+Next]
object CharToy {
  case class CharOutput[Next](a: Char, next: Next) extends CharToy[Next]
  case class CharBell[Next](next: Next) extends CharToy[Next]
  case class CharDone() extends CharToy[Nothing]

  def output[Next](a: Char, next: Next): CharToy[Next] = CharOutput(a, next)
  def bell[Next](next: Next): CharToy[Next] = CharBell(next)
  def done: CharToy[Nothing] = CharDone()
}

// Exiting paste mode, now interpreting.

defined trait CharToy
defined object CharToy

scala> { import CharToy._
         output('A', done)
       }
res2: CharToy[CharToy[Nothing]] = CharOutput(A,CharDone())

scala> { import CharToy._
         bell(output('A', done))
       }
res3: CharToy[CharToy[CharToy[Nothing]]] = CharBell(CharOutput(A,CharDone()))

I’ve added the lowercase helper functions output, bell, and done to unify the types to CharToy.

Fix 

WFMM:

but unfortunately this doesn’t work because every time I want to add a command, it changes the type.

Let’s define Fix:

scala> :paste
// Entering paste mode (ctrl-D to finish)
case class Fix[F[_]](f: F[Fix[F]])
object Fix {
  def fix(toy: CharToy[Fix[CharToy]]) = Fix[CharToy](toy)
}

// Exiting paste mode, now interpreting.

defined class Fix
defined object Fix

scala> { import Fix._, CharToy._
         fix(output('A', fix(done)))
       }
res4: Fix[CharToy] = Fix(CharOutput(A,Fix(CharDone())))

scala> { import Fix._, CharToy._
         fix(bell(fix(output('A', fix(done)))))
       }
res5: Fix[CharToy] = Fix(CharBell(Fix(CharOutput(A,Fix(CharDone())))))

Again, fix is provided so that the type inference works.

FixE 

We are also going to try implementing FixE, which adds exception handling to this. Since throw and catch are reserved words, I am renaming them to throwy and catchy:

scala> import cats._, cats.instances.all._
import cats._
import cats.instances.all._

scala> :paste
// Entering paste mode (ctrl-D to finish)
sealed trait FixE[F[_], E]
object FixE {
  case class Fix[F[_], E](f: F[FixE[F, E]]) extends FixE[F, E]
  case class Throwy[F[_], E](e: E) extends FixE[F, E]

  def fix[E](toy: CharToy[FixE[CharToy, E]]): FixE[CharToy, E] =
    Fix[CharToy, E](toy)
  def throwy[F[_], E](e: E): FixE[F, E] = Throwy(e)
  def catchy[F[_]: Functor, E1, E2](ex: => FixE[F, E1])
      (f: E1 => FixE[F, E2]): FixE[F, E2] = ex match {
    case Fix(x)    => Fix[F, E2](Functor[F].map(x) {catchy(_)(f)})
    case Throwy(e) => f(e)
  }
}

// Exiting paste mode, now interpreting.

defined trait FixE
defined object FixE

We can only use this if Toy b is a functor, so we muddle around until we find something that type-checks (and satisfies the Functor laws).

Let’s define Functor for CharToy:

scala> implicit val charToyFunctor: Functor[CharToy] = new Functor[CharToy] {
         def map[A, B](fa: CharToy[A])(f: A => B): CharToy[B] = fa match {
           case o: CharToy.CharOutput[A] => CharToy.CharOutput(o.a, f(o.next))
           case b: CharToy.CharBell[A]   => CharToy.CharBell(f(b.next))
           case CharToy.CharDone()       => CharToy.CharDone()
         }
       }
charToyFunctor: cats.Functor[CharToy] = $anon$1@44565991

Here’s a sample usage:

scala> :paste
// Entering paste mode (ctrl-D to finish)
{
  import FixE._, CharToy._
  case class IncompleteException()
  def subroutine = fix[IncompleteException](
    output('A',
      throwy[CharToy, IncompleteException](IncompleteException())))
  def program = catchy[CharToy, IncompleteException, Nothing](subroutine) { _ =>
    fix[Nothing](bell(fix[Nothing](done)))
  }
}

// Exiting paste mode, now interpreting.

The fact that we need to supply type parameters everywhere is a bit unfortunate.

Free datatype 

WFMM:

our FixE already exists, too, and it’s called the Free monad:

data Free f r = Free (f (Free f r)) | Pure r

As the name suggests, it is automatically a monad (if f is a functor):

instance (Functor f) => Monad (Free f) where
    return = Pure
    (Free x) >>= f = Free (fmap (>>= f) x)
    (Pure r) >>= f = f r

The return was our Throw, and (>>=) was our catch.

The datatype in Cats is called Free:

/**
 * A free operational monad for some functor `S`. Binding is done
 * using the heap instead of the stack, allowing tail-call
 * elimination.
 */
sealed abstract class Free[S[_], A] extends Product with Serializable {

  final def map[B](f: A => B): Free[S, B] =
    flatMap(a => Pure(f(a)))

  /**
   * Bind the given continuation to the result of this computation.
   * All left-associated binds are reassociated to the right.
   */
  final def flatMap[B](f: A => Free[S, B]): Free[S, B] =
    Gosub(this, f)

  ....
}

object Free {
  /**
   * Return from the computation with the given value.
   */
  private final case class Pure[S[_], A](a: A) extends Free[S, A]

  /** Suspend the computation with the given suspension. */
  private final case class Suspend[S[_], A](a: S[A]) extends Free[S, A]

  /** Call a subroutine and continue with the given function. */
  private final case class Gosub[S[_], B, C](c: Free[S, C], f: C => Free[S, B]) extends Free[S, B]

  /**
   * Suspend a value within a functor lifting it to a Free.
   */
  def liftF[F[_], A](value: F[A]): Free[F, A] = Suspend(value)

  /** Suspend the Free with the Applicative */
  def suspend[F[_], A](value: => Free[F, A])(implicit F: Applicative[F]): Free[F, A] =
    liftF(F.pure(())).flatMap(_ => value)

  /** Lift a pure value into Free */
  def pure[S[_], A](a: A): Free[S, A] = Pure(a)

  final class FreeInjectPartiallyApplied[F[_], G[_]] private[free] {
    def apply[A](fa: F[A])(implicit I : Inject[F, G]): Free[G, A] =
      Free.liftF(I.inj(fa))
  }

  def inject[F[_], G[_]]: FreeInjectPartiallyApplied[F, G] = new FreeInjectPartiallyApplied

  ....
}

To use these datatypes in Cats, use Free.liftF:

scala> import cats.free.Free
import cats.free.Free

scala> :paste
// Entering paste mode (ctrl-D to finish)
sealed trait CharToy[+Next]
object CharToy {
  case class CharOutput[Next](a: Char, next: Next) extends CharToy[Next]
  case class CharBell[Next](next: Next) extends CharToy[Next]
  case class CharDone() extends CharToy[Nothing]

  implicit val charToyFunctor: Functor[CharToy] = new Functor[CharToy] {
    def map[A, B](fa: CharToy[A])(f: A => B): CharToy[B] = fa match {
        case o: CharOutput[A] => CharOutput(o.a, f(o.next))
        case b: CharBell[A]   => CharBell(f(b.next))
        case CharDone()       => CharDone()
      }
    }
  def output(a: Char): Free[CharToy, Unit] =
    Free.liftF[CharToy, Unit](CharOutput(a, ()))
  def bell: Free[CharToy, Unit] = Free.liftF[CharToy, Unit](CharBell(()))
  def done: Free[CharToy, Unit] = Free.liftF[CharToy, Unit](CharDone())
  def pure[A](a: A): Free[CharToy, A] = Free.pure[CharToy, A](a)
}

// Exiting paste mode, now interpreting.

defined trait CharToy
defined object CharToy

Here’s the command sequence:

scala> import CharToy._
import CharToy._

scala> val subroutine = output('A')
subroutine: cats.free.Free[CharToy,Unit] = Free(...)

scala> val program = for {
         _ <- subroutine
         _ <- bell
         _ <- done
       } yield ()
program: cats.free.Free[CharToy,Unit] = Free(...)

This is where things get magical. We now have do notation for something that hasn’t even been interpreted yet: it’s pure data.

Next we’d like to define showProgram to prove that what we have is just data:

scala> def showProgram[R: Show](p: Free[CharToy, R]): String =
         p.fold({ r: R => "return " + Show[R].show(r) + "\n" },
           {
             case CharOutput(a, next) =>
               "output " + Show[Char].show(a) + "\n" + showProgram(next)
             case CharBell(next) =>
               "bell " + "\n" + showProgram(next)
             case CharDone() =>
               "done\n"
           })
showProgram: [R](p: cats.free.Free[CharToy,R])(implicit evidence$1: cats.Show[R])String

scala> showProgram(program)
res7: String =
"output A
bell
done
"

We can manually check that the monad generated using Free satisfies the monad laws:

scala> showProgram(output('A'))
res8: String =
"output A
return ()
"

scala> showProgram(pure('A') flatMap output)
res9: String =
"output A
return ()
"

scala> showProgram(output('A') flatMap pure)
res10: String =
"output A
return ()
"

scala> showProgram((output('A') flatMap { _ => done }) flatMap { _ => output('C') })
res11: String =
"output A
done
"

scala> showProgram(output('A') flatMap { _ => (done flatMap { _ => output('C') }) })
res12: String =
"output A
done
"

Looking good. Notice the abort-like semantics of done. Also, due to a type inference limitation, I was not able to use >>= and >> here.

WFMM:

The free monad is the interpreter’s best friend. Free monads “free the interpreter” as much as possible while still maintaining the bare minimum necessary to form a monad.

Another way of looking at it is that the Free datatype provides a way of building a syntax tree given a container.

One of the reasons the Free datatype is gaining popularity, I think, is that people are running into the limitations of combining different monads. It’s not impossible with monad transformers, but the type signatures get hairy quickly, and the stacking leaks into various places in the code. On the other hand, Free essentially gives up on encoding meaning into the monad. You gain flexibility because you can do whatever you want in the interpreter function, for instance running sequentially during testing but in parallel in production.
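
For instance, sticking with the CharToy program from earlier, here's a sketch of a second interpreter over the very same program value, this time collecting the output characters instead of pretty-printing the tree (collectOutput is a made-up helper):

// another interpreter for the same Free[CharToy, R] value:
// walk the tree and keep only the output characters
def collectOutput[R](p: Free[CharToy, R]): List[Char] =
  p.fold(
    { (_: R) => List.empty[Char] },
    {
      case CharOutput(a, next) => a :: collectOutput(next)
      case CharBell(next)      => collectOutput(next)
      case CharDone()          => List.empty[Char]
    })

// collectOutput(program)   // List(A)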

Stackless Scala with Free Monads 

The notion of free monad goes beyond just the interpreter pattern. I think people are still discovering new ways of harnessing its power.

Rúnar (@runarorama) has been a proponent of using Free in Scala. His talk that we covered on day 6, Dead-Simple Dependency Injection, uses Free to implement a mini language for a key-value store. The same year, Rúnar also gave a talk at Scala Days 2012 called Stackless Scala With Free Monads. I recommend watching the talk before reading the paper, but it’s easier to quote from the paper version, Stackless Scala With Free Monads.

Rúnar starts out with code that uses an implementation of the State monad to zip a list with its indices. It blows the stack when the list is larger than the stack limit. Then he introduces trampoline, which is a single loop that drives the entire program.

sealed trait Trampoline[+A] {
  final def runT: A =
    this match {
      case More(k) => k().runT
      case Done(v) => v
    }
}
case class More[+A](k: () => Trampoline[A])
  extends Trampoline[A]
case class Done[+A](result: A)
  extends Trampoline[A]

In the above code, Function0 k is used as a thunk for the next step.
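
Here's a tiny sketch of using it (countDown is a made-up example):

// each More suspends the rest of the computation behind a thunk;
// runT then drives the steps one at a time
def countDown(n: Int): Trampoline[Int] =
  if (n <= 0) Done(0) else More(() => countDown(n - 1))

// countDown(100000).runT   // 0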

To extend its usage to the State monad, he then reifies flatMap into a data structure called FlatMap:

case class FlatMap[A, +B](
  sub: Trampoline[A],
  k: A => Trampoline[B]) extends Trampoline[B]

Next, it is revealed that Trampoline is a free monad of Function0. Here’s how it’s defined in Cats:

  type Trampoline[A] = Free[Function0, A]

Trampoline 

Using Trampoline, any program can be transformed into a stackless one. The Trampoline object defines a few useful functions for trampolining:

object Trampoline {
  def done[A](a: A): Trampoline[A] =
    Free.Pure[Function0,A](a)

  def suspend[A](a: => Trampoline[A]): Trampoline[A] =
    Free.Suspend[Function0, A](() => a)

  def delay[A](a: => A): Trampoline[A] =
    suspend(done(a))
}

Let’s try implementing even and odd from the talk:

scala> import cats._, cats.instances.all._, cats.free.{ Free, Trampoline }
import cats._
import cats.instances.all._
import cats.free.{Free, Trampoline}

scala> import Trampoline._
import Trampoline._

scala> :paste
// Entering paste mode (ctrl-D to finish)
def even[A](ns: List[A]): Trampoline[Boolean] =
  ns match {
    case Nil => done(true)
    case x :: xs => suspend(odd(xs))
  }
def odd[A](ns: List[A]): Trampoline[Boolean] =
  ns match {
    case Nil => done(false)
    case x :: xs => suspend(even(xs))
  }

// Exiting paste mode, now interpreting.

even: [A](ns: List[A])cats.free.Trampoline[Boolean]
odd: [A](ns: List[A])cats.free.Trampoline[Boolean]

scala> even(List(1, 2, 3)).run
res0: Boolean = false

scala> even((0 to 3000).toList).run
res1: Boolean = false

While implementing the above, I ran into SI-7139 again, so I had to tweak Cats’ code (#322).

Free monads 

In addition, Rúnar introduces several datatypes that can be derived using Free:

type Pair[+A] = (A, A)
type BinTree[+A] = Free[Pair, A]

type Tree[+A] = Free[List, A]

type FreeMonoid[+A] = Free[({type λ[+α] = (A,α)})#λ, Unit]

type Trivial[+A] = Unit
type Option[+A] = Free[Trivial, A]

There’s also an iteratees implementation based on free monads. Finally, he summarizes free monads in nice bullet points:

  • A model for any recursive data type with data at the leaves.
  • A free monad is an expression tree with variables at the leaves and flatMap is variable substitution.

Free monoid using Free 

Let’s try defining “List” using Free.

scala> type FreeMonoid[A] = Free[(A, +?), Unit]
defined type alias FreeMonoid

scala> def cons[A](a: A): FreeMonoid[A] =
         Free.liftF[(A, +?), Unit]((a, ()))
cons: [A](a: A)FreeMonoid[A]

scala> val x = cons(1)
x: FreeMonoid[Int] = Free(...)

scala> val xs = cons(1) flatMap {_ => cons(2)}
xs: cats.free.Free[[+β](Int, β),Unit] = Free(...)

As a way of interpreting the result, let’s try converting this to a standard List:

scala> implicit def tuple2Functor[A]: Functor[(A, ?)] =
         new Functor[(A, ?)] {
           def map[B, C](fa: (A, B))(f: B => C) =
             (fa._1, f(fa._2))
         }
tuple2Functor: [A]=> cats.Functor[[β](A, β)]

scala> def toList[A](list: FreeMonoid[A]): List[A] =
         list.fold(
           { _ => Nil },
           { case (x: A @unchecked, xs: FreeMonoid[A]) => x :: toList(xs) })
toList: [A](list: FreeMonoid[A])List[A]

scala> toList(xs)
res2: List[Int] = List(1, 2)

Tail Recursive Monads (FlatMap) 

In 2015, Phil Freeman (@paf31), working in PureScript, which is hosted on JavaScript (a strict language like Java), wrote Stack Safety for Free:

The paper gives a hat tip to Rúnar (@runarorama)’s Stackless Scala With Free Monads, but presents a more drastic solution to the stack safety problem.

The stack problem 

As background, in Scala the compiler is able to optimize self-recursive tail calls. For example, here’s a self-recursive tail call:

scala> import scala.annotation.tailrec
import scala.annotation.tailrec

scala> :paste
// Entering paste mode (ctrl-D to finish)
def pow(n: Long, exp: Long): Long =
  {
    @tailrec def go(acc: Long, p: Long): Long =
      (acc, p) match {
        case (acc, 0) => acc
        case (acc, p) => go(acc * n, p - 1)
      }
    go(1, exp)
  }

// Exiting paste mode, now interpreting.

pow: (n: Long, exp: Long)Long

scala> pow(2, 3)
res3: Long = 8

Here’s an example that’s not self-recursive; it blows up the stack.

scala> :paste
// Entering paste mode (ctrl-D to finish)
object OddEven0 {
  def odd(n: Int): String = even(n - 1)
  def even(n: Int): String = if (n <= 0) "done" else odd(n - 1)
}

// Exiting paste mode, now interpreting.

defined object OddEven0

scala> OddEven0.even(200000)
java.lang.StackOverflowError
  at OddEven0$.even(<console>:15)
  at OddEven0$.odd(<console>:14)
  at OddEven0$.even(<console>:15)
  at OddEven0$.odd(<console>:14)
  at OddEven0$.even(<console>:15)
  ....

Next, we’ll try adding the Writer datatype to do the pow calculation using a LongProduct monoid.

scala> import cats._, cats.instances.all._, cats.data.Writer
import cats._
import cats.instances.all._
import cats.data.Writer

scala> import cats.syntax.flatMap._
import cats.syntax.flatMap._

scala> :paste
// Entering paste mode (ctrl-D to finish)
case class LongProduct(value: Long)
implicit val longProdMonoid: Monoid[LongProduct] = new Monoid[LongProduct] {
  def empty: LongProduct = LongProduct(1)
  def combine(x: LongProduct, y: LongProduct): LongProduct = LongProduct(x.value * y.value)
}
def powWriter(x: Long, exp: Long): Writer[LongProduct, Unit] =
  exp match {
    case 0 => Writer(LongProduct(1L), ())
    case m =>
      Writer(LongProduct(x), ()) >>= { _ => powWriter(x, exp - 1) }
  }

// Exiting paste mode, now interpreting.

defined class LongProduct
longProdMonoid: cats.Monoid[LongProduct] = $anon$1@538028a
powWriter: (x: Long, exp: Long)cats.data.Writer[LongProduct,Unit]

scala> powWriter(2, 3).run
res4: cats.Id[(LongProduct, Unit)] = (LongProduct(8),())

This is no longer self-recursive, so it will blow the stack with large exp.

scala> powWriter(1, 10000).run
java.lang.StackOverflowError
  at $anonfun$powWriter$1.apply(<console>:35)
  at $anonfun$powWriter$1.apply(<console>:35)
  at cats.data.WriterT$$anonfun$flatMap$1.apply(WriterT.scala:37)
  at cats.data.WriterT$$anonfun$flatMap$1.apply(WriterT.scala:36)
  at cats.package$$anon$1.flatMap(package.scala:34)
  at cats.data.WriterT.flatMap(WriterT.scala:36)
  at cats.data.WriterTFlatMap1$class.flatMap(WriterT.scala:249)
  at cats.data.WriterTInstances2$$anon$4.flatMap(WriterT.scala:130)
  at cats.data.WriterTInstances2$$anon$4.flatMap(WriterT.scala:130)
  at cats.FlatMap$class.$greater$greater$eq(FlatMap.scala:26)
  at cats.data.WriterTInstances2$$anon$4.$greater$greater$eq(WriterT.scala:130)
  at cats.FlatMap$Ops$class.$greater$greater$eq(FlatMap.scala:20)
  at cats.syntax.FlatMapSyntax1$$anon$1.$greater$greater$eq(flatMap.scala:6)
  at .powWriter1(<console>:35)
  at $anonfun$powWriter$1.apply(<console>:35)

This characteristic of Scala limits the usefulness of monadic composition, where flatMap can call a monadic function f, which can then call flatMap again, and so on.

FlatMap (MonadRec) 

Our solution is to reduce the candidates for the target monad m from an arbitrary monad, to the class of so-called tail-recursive monads.

class (Monad m) <= MonadRec m where
  tailRecM :: forall a b. (a -> m (Either a b)) -> a -> m b

Here’s the same function in Scala:

  /**
   * Keeps calling `f` until a `scala.util.Right[B]` is returned.
   */
  def tailRecM[A, B](a: A)(f: A => F[Either[A, B]]): F[B]

As it turns out, Oscar Boykin (@posco) brought tailRecM into FlatMap in #1280 (Remove FlatMapRec make all FlatMap implement tailRecM), and it’s now part of Cats 0.7.0. In other words, all FlatMap/Monads in Cats 0.7.0 are tail-recursive.

We can, for example, obtain the tailRecM for Writer:

scala> def tailRecM[A, B] = FlatMap[Writer[Vector[String], ?]].tailRecM[A, B] _
tailRecM: [A, B]=> A => ((A => cats.data.WriterT[[A]A,scala.collection.immutable.Vector[String],Either[A,B]]) => cats.data.WriterT[[A]A,scala.collection.immutable.Vector[String],B])

Here’s how we can make a stack-safe powWriter:

scala> :paste
// Entering paste mode (ctrl-D to finish)
def powWriter2(x: Long, exp: Long): Writer[LongProduct, Unit] =
  FlatMap[Writer[LongProduct, ?]].tailRecM(exp) {
    case 0L      => Writer.value[LongProduct, Either[Long, Unit]](Right(()))
    case m: Long => Writer.tell(LongProduct(x)) >>= { _ => Writer.value(Left(m - 1)) }
  }

// Exiting paste mode, now interpreting.

powWriter2: (x: Long, exp: Long)cats.data.Writer[LongProduct,Unit]

scala> powWriter2(2, 3).run
res5: cats.Id[(LongProduct, Unit)] = (LongProduct(8),())

scala> powWriter2(1, 10000).run
res6: cats.Id[(LongProduct, Unit)] = (LongProduct(1),())

This guarantees greater safety for users of the FlatMap typeclass, but it means that each implementer of an instance needs to provide a stack-safe tailRecM.

Here’s the one for Option for example:

@tailrec
def tailRecM[A, B](a: A)(f: A => Option[Either[A, B]]): Option[B] =
  f(a) match {
    case None => None
    case Some(Left(a1)) => tailRecM(a1)(f)
    case Some(Right(b)) => Some(b)
  }
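
Here's a quick sketch of calling it through the typeclass (the counting loop is a made-up example):

// looping 100000 times without growing the stack,
// because tailRecM itself does all the recursion
val r: Option[Int] =
  FlatMap[Option].tailRecM[Int, Int](0) { n =>
    if (n < 100000) Some(Left(n + 1))
    else Some(Right(n))
  }
// r == Some(100000)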

That’s it for today!

day 9 

On day 8 we reviewed the Ior datatype, free monoids, free monads, and the Trampoline datatype. Let’s continue on.

Some useful monadic functions 

Learn You a Haskell for Great Good says:

In this section, we’re going to explore a few functions that either operate on monadic values or return monadic values as their results (or both!). Such functions are usually referred to as monadic functions.

Unlike Haskell’s standard Monad, Cats’ Monad is designed more granularly, with the hindsight of weaker typeclasses.

Here there’s no question that all monads are applicative functors as well as functors. This means we can use the ap or map operators on the datatypes that form a monad.

flatten method 

LYAHFGG:

It turns out that any nested monadic value can be flattened and that this is actually a property unique to monads. For this, the join function exists.

In Cats, the equivalent function is called flatten and is defined on FlatMap. Thanks to simulacrum, flatten can also be injected as a method.

@typeclass trait FlatMap[F[_]] extends Apply[F] {
  def flatMap[A, B](fa: F[A])(f: A => F[B]): F[B]

  /**
   * also commonly called join
   */
  def flatten[A](ffa: F[F[A]]): F[A] =
    flatMap(ffa)(fa => fa)

  ....
}

Since Option[A] already implements a flatten of its own, we’ll write a generic function that abstracts over the type constructor, so that Cats’ flatten is the one being exercised.

scala> import cats._, cats.instances.all._, cats.syntax.flatMap._
import cats._
import cats.instances.all._
import cats.syntax.flatMap._

scala> :paste
// Entering paste mode (ctrl-D to finish)
object Catnip {
  implicit class IdOp[A](val a: A) extends AnyVal {
    def some: Option[A] = Some(a)
  }
  def none[A]: Option[A] = None
}
import Catnip._

// Exiting paste mode, now interpreting.

defined object Catnip
import Catnip._

scala> def join[F[_]: FlatMap, A](fa: F[F[A]]): F[A] =
         fa.flatten
join: [F[_], A](fa: F[F[A]])(implicit evidence$1: cats.FlatMap[F])F[A]

scala> join(1.some.some)
res0: Option[Int] = Some(1)

If I’m going to wrap it in a function anyway, I could’ve just used the function syntax:

scala> FlatMap[Option].flatten(1.some.some)
res1: Option[Int] = Some(1)

The flatten method also works on an Xor of an Xor value:

scala> import cats.data.Xor
import cats.data.Xor

scala> val xorOfXor = Xor.right[String, Xor[String, Int]](Xor.right[String, Int](1))
xorOfXor: cats.data.Xor[String,cats.data.Xor[String,Int]] = Right(Right(1))

scala> xorOfXor.flatten
res2: cats.data.Xor[String,Int] = Right(1)

filterM method 

LYAHFGG:

The filterM function from Control.Monad does just what we want! … The predicate returns a monadic value whose result is a Bool.

Cats does not have filterM, but there’s filterA on TraverseFilter.

foldM function 

LYAHFGG:

The monadic counterpart to foldl is foldM.

I did not find foldM in Cats, so I implemented it myself, but it wasn’t stack-safe. Tomas Mikula added a better implementation, which got merged as #925.

Here’s how it’s defined on Foldable:

  /**
   * Left associative monadic folding on `F`.
   */
  def foldM[G[_], A, B](fa: F[A], z: B)(f: (B, A) => G[B])(implicit G: Monad[G]): G[B] =
    foldLeft(fa, G.pure(z))((gb, a) => G.flatMap(gb)(f(_, a)))

Let’s try using this.

scala> def binSmalls(acc: Int, x: Int): Option[Int] =
         if (x > 9) none[Int] else (acc + x).some
binSmalls: (acc: Int, x: Int)Option[Int]

scala> (Foldable[List].foldM(List(2, 8, 3, 1), 0) {binSmalls})
res3: Option[Int] = Some(14)

scala> (Foldable[List].foldM(List(2, 11, 3, 1), 0) {binSmalls})
res4: Option[Int] = None

In the above, binSmalls returns None when it finds a number larger than 9.

Making a safe RPN calculator 

LYAHFGG:

When we were solving the problem of implementing a RPN calculator, we noted that it worked fine as long as the input that it got made sense.

We have not covered the chapter on the RPN calculator, so let’s translate it into Scala.

scala> def foldingFunction(list: List[Double], next: String): List[Double] =
         (list, next) match {
           case (x :: y :: ys, "*") => (y * x) :: ys
           case (x :: y :: ys, "+") => (y + x) :: ys
           case (x :: y :: ys, "-") => (y - x) :: ys
           case (xs, numString) => numString.toInt :: xs
         }
foldingFunction: (list: List[Double], next: String)List[Double]

scala> def solveRPN(s: String): Double =
         (s.split(' ').toList.
         foldLeft(Nil: List[Double]) {foldingFunction}).head
solveRPN: (s: String)Double

scala> solveRPN("10 4 3 + 2 * -")
res0: Double = -4.0

Looks like it’s working.

The next step is to change the folding function to handle errors gracefully. We can implement parseInt as follows:

scala> import scala.util.Try
import scala.util.Try

scala> def parseInt(x: String): Option[Int] =
         (scala.util.Try(x.toInt) map { Some(_) }
         recover { case _: NumberFormatException => None }).get
parseInt: (x: String)Option[Int]

scala> parseInt("1")
res1: Option[Int] = Some(1)

scala> parseInt("foo")
res2: Option[Int] = None

Here’s the updated folding function:

scala> :paste
// Entering paste mode (ctrl-D to finish)
object Catnip {
  implicit class IdOp[A](val a: A) extends AnyVal {
    def some: Option[A] = Some(a)
  }
  def none[A]: Option[A] = None
}
import Catnip._

// Exiting paste mode, now interpreting.

defined object Catnip
import Catnip._

scala> def foldingFunction(list: List[Double], next: String): Option[List[Double]] =
         (list, next) match {
           case (x :: y :: ys, "*") => ((y * x) :: ys).some
           case (x :: y :: ys, "+") => ((y + x) :: ys).some
           case (x :: y :: ys, "-") => ((y - x) :: ys).some
           case (xs, numString) => parseInt(numString) map {_ :: xs}
         }
foldingFunction: (list: List[Double], next: String)Option[List[Double]]

scala> foldingFunction(List(3, 2), "*")
res3: Option[List[Double]] = Some(List(6.0))

scala> foldingFunction(Nil, "*")
res4: Option[List[Double]] = None

scala> foldingFunction(Nil, "wawa")
res5: Option[List[Double]] = None

Finally, here’s the updated solveRPN using foldM:

scala> import cats._, cats.instances.all._
import cats._
import cats.instances.all._

scala> def solveRPN(s: String): Option[Double] =
         for {
           List(x) <- (Foldable[List].foldM(s.split(' ').toList, Nil: List[Double]) {foldingFunction})
         } yield x
solveRPN: (s: String)Option[Double]

scala> solveRPN("1 2 * 4 +")
res6: Option[Double] = Some(6.0)

scala> solveRPN("1 2 * 4")
res7: Option[Double] = None

scala> solveRPN("1 8 garbage")
res8: Option[Double] = None

Composing monadic functions 

LYAHFGG:

When we were learning about the monad laws, we said that the <=< function is just like composition, only instead of working for normal functions like a -> b, it works for monadic functions like a -> m b.

In Cats there’s a special wrapper for a function of type A => F[B] called Kleisli:

/**
 * Represents a function `A => F[B]`.
 */
final case class Kleisli[F[_], A, B](run: A => F[B]) { self =>

  ....
}

object Kleisli extends KleisliInstances with KleisliFunctions

private[data] sealed trait KleisliFunctions {

  def pure[F[_], A, B](x: B)(implicit F: Applicative[F]): Kleisli[F, A, B] =
    Kleisli(_ => F.pure(x))

  def ask[F[_], A](implicit F: Applicative[F]): Kleisli[F, A, A] =
    Kleisli(F.pure)

  def local[M[_], A, R](f: R => R)(fa: Kleisli[M, R, A]): Kleisli[M, R, A] =
    Kleisli(f andThen fa.run)
}

We can use the Kleisli() constructor to construct a Kleisli value:

scala> :paste
// Entering paste mode (ctrl-D to finish)
object Catnip {
  implicit class IdOp[A](val a: A) extends AnyVal {
    def some: Option[A] = Some(a)
  }
  def none[A]: Option[A] = None
}
import Catnip._

// Exiting paste mode, now interpreting.

defined object Catnip
import Catnip._

scala> import cats._, cats.instances.all._
import cats._
import cats.instances.all._

scala> import cats.data.Kleisli
import cats.data.Kleisli

scala> val f = Kleisli { (x: Int) => (x + 1).some }
f: cats.data.Kleisli[Option,Int,Int] = Kleisli(<function1>)

scala> val g = Kleisli { (x: Int) => (x * 100).some }
g: cats.data.Kleisli[Option,Int,Int] = Kleisli(<function1>)

We can then compose the functions using compose, which runs the right-hand side first:

scala> import cats.syntax.flatMap._
import cats.syntax.flatMap._

scala> 4.some >>= (f compose g).run
res0: Option[Int] = Some(401)

There’s also andThen, which runs the left-hand side first:

scala> 4.some >>= (f andThen g).run
res1: Option[Int] = Some(500)

Both compose and andThen work like function composition, but note that they retain the monadic context.
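
To spell that out, here's a small sketch showing that andThen composes through the Option context, just like chaining run with >>= by hand:

// composing via andThen is the same as flatMapping the results manually
val composed = (f andThen g).run
val byHand   = (x: Int) => f.run(x) >>= g.run

// composed(4)   // Some(500)
// byHand(4)     // Some(500)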

lift method 

Kleisli also has some interesting methods like lift, which allows you to lift a monadic function into another applicative functor. When I tried using it, I realized it was broken, so here’s the fixed version (#354):

  def lift[G[_]](implicit G: Applicative[G]): Kleisli[λ[α => G[F[α]]], A, B] =
    Kleisli[λ[α => G[F[α]]], A, B](a => Applicative[G].pure(run(a)))

Here’s how we can use it:

scala> val l = f.lift[List]
l: cats.data.Kleisli[[α]List[Option[α]],Int,Int] = Kleisli(<function1>)

scala> List(1, 2, 3) >>= l.run
res0: List[Option[Int]] = List(Some(2), Some(3), Some(4))

Making monads 

LYAHFGG:

In this section, we’re going to look at an example of how a type gets made, identified as a monad and then given the appropriate Monad instance. … What if we wanted to model a non-deterministic value like [3,5,9], but we wanted to express that 3 has a 50% chance of happening and 5 and 9 both have a 25% chance of happening?

Since Scala doesn’t have a built-in rational, let’s just use Double. Here’s the case class:

scala> import cats._, cats.instances.all._
import cats._
import cats.instances.all._

scala> :paste
// Entering paste mode (ctrl-D to finish)
case class Prob[A](list: List[(A, Double)])

trait ProbInstances {
  implicit def probShow[A]: Show[Prob[A]] = Show.fromToString
}

case object Prob extends ProbInstances

// Exiting paste mode, now interpreting.

defined class Prob
defined trait ProbInstances
defined object Prob

Is this a functor? Well, the list is a functor, so this should probably be a functor as well, because we just added some stuff to the list.

scala> :paste
// Entering paste mode (ctrl-D to finish)
case class Prob[A](list: List[(A, Double)])

trait ProbInstances {
  implicit val probInstance: Functor[Prob] = new Functor[Prob] {
    def map[A, B](fa: Prob[A])(f: A => B): Prob[B] =
      Prob(fa.list map { case (x, p) => (f(x), p) })
  }
  implicit def probShow[A]: Show[Prob[A]] = Show.fromToString
}
case object Prob extends ProbInstances

// Exiting paste mode, now interpreting.

defined class Prob
defined trait ProbInstances
defined object Prob

scala> import cats.syntax.functor._
import cats.syntax.functor._

scala> Prob((3, 0.5) :: (5, 0.25) :: (9, 0.25) :: Nil) map {-_}
res0: Prob[Int] = Prob(List((-3,0.5), (-5,0.25), (-9,0.25)))

Just like the book, we are going to implement flatten first.

scala> :paste
// Entering paste mode (ctrl-D to finish)
case class Prob[A](list: List[(A, Double)])

trait ProbInstances {
  def flatten[B](xs: Prob[Prob[B]]): Prob[B] = {
    def multall(innerxs: Prob[B], p: Double) =
      innerxs.list map { case (x, r) => (x, p * r) }
    Prob((xs.list map { case (innerxs, p) => multall(innerxs, p) }).flatten)
  }

  implicit val probInstance: Functor[Prob] = new Functor[Prob] {
    def map[A, B](fa: Prob[A])(f: A => B): Prob[B] =
      Prob(fa.list map { case (x, p) => (f(x), p) })
  }
  implicit def probShow[A]: Show[Prob[A]] = Show.fromToString
}

case object Prob extends ProbInstances

// Exiting paste mode, now interpreting.

defined class Prob
defined trait ProbInstances
defined object Prob

This should be enough prep work for monad:

scala> :paste
// Entering paste mode (ctrl-D to finish)
import scala.annotation.tailrec
case class Prob[A](list: List[(A, Double)])

trait ProbInstances { self =>
  def flatten[B](xs: Prob[Prob[B]]): Prob[B] = {
    def multall(innerxs: Prob[B], p: Double) =
      innerxs.list map { case (x, r) => (x, p * r) }
    Prob((xs.list map { case (innerxs, p) => multall(innerxs, p) }).flatten)
  }

  implicit val probInstance: Monad[Prob] = new Monad[Prob] {
    def pure[A](a: A): Prob[A] = Prob((a, 1.0) :: Nil)
    def flatMap[A, B](fa: Prob[A])(f: A => Prob[B]): Prob[B] = self.flatten(map(fa)(f))
    override def map[A, B](fa: Prob[A])(f: A => B): Prob[B] =
      Prob(fa.list map { case (x, p) => (f(x), p) })
    def tailRecM[A, B](a: A)(f: A => Prob[Either[A, B]]): Prob[B] = {
      val buf = List.newBuilder[(B, Double)]
      @tailrec def go(lists: List[List[(Either[A, B], Double)]]): Unit =
        lists match {
          case (ab :: abs) :: tail => ab match {
            case (Right(b), p) =>
              buf += ((b, p))
              go(abs :: tail)
            case (Left(a), p) =>
              go(f(a).list :: abs :: tail)
          }
          case Nil :: tail => go(tail)
          case Nil => ()
        }
      go(f(a).list :: Nil)
      Prob(buf.result)
    }
  }
  implicit def probShow[A]: Show[Prob[A]] = Show.fromToString
}

case object Prob extends ProbInstances

// Exiting paste mode, now interpreting.

import scala.annotation.tailrec
defined class Prob
defined trait ProbInstances
defined object Prob

The book says it satisfies the monad laws. Let’s implement the Coin example:

scala> :paste
// Entering paste mode (ctrl-D to finish)
sealed trait Coin
object Coin {
  case object Heads extends Coin
  case object Tails extends Coin
  implicit val coinEq: Eq[Coin] = new Eq[Coin] {
    def eqv(a1: Coin, a2: Coin): Boolean = a1 == a2
  }
  def heads: Coin = Heads
  def tails: Coin = Tails
}
import Coin.{heads, tails}
def coin: Prob[Coin] = Prob(heads -> 0.5 :: tails -> 0.5 :: Nil)
def loadedCoin: Prob[Coin] = Prob(heads -> 0.1 :: tails -> 0.9 :: Nil)

// Exiting paste mode, now interpreting.

defined trait Coin
defined object Coin
import Coin.{heads, tails}
coin: Prob[Coin]
loadedCoin: Prob[Coin]

Here’s how we can implement flipThree:

scala> import cats.syntax.flatMap._
import cats.syntax.flatMap._

scala> import cats.syntax.eq._
import cats.syntax.eq._

scala> def flipThree: Prob[Boolean] = for {
  a <- coin
  b <- coin
  c <- loadedCoin
} yield { List(a, b, c) forall {_ === tails} }
flipThree: Prob[Boolean]

scala> flipThree
res1: Prob[Boolean] = Prob(List((false,0.025), (false,0.225), (false,0.025), (false,0.225), (false,0.025), (false,0.225), (false,0.025), (true,0.225)))

So the probability of having all three coins on Tails even with a loaded coin is pretty low.
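As a quick sanity check on the arithmetic, the only all-Tails outcome requires Tails on both fair coins and Tails on the loaded coin:

0.5 * 0.5 * 0.9
// => 0.225, matching the single (true,0.225) entry above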

Monads are fractals 

The intuition for FlatMap and Monad we built on day 5 via the tightrope walking example is that a monadic chaining >>= can carry context from one operation to the next. A single None in an intermediate value banishes the entire chain.

The context that monad instances carry along differs from one monad to another. The State datatype we saw on day 7, for example, automates the explicit passing of the state object with >>=.

This is a useful intuition of monads to have in comparison to Functor, Apply, and Applicative, but it doesn’t tell the whole story.

sierpinski triangle

Another intuition about monads (technically FlatMap) is that they are fractals, like the above Sierpinski triangle. Each part of a fractal is self-similar to the whole shape.

Take List for example. A List of Lists can be treated as a flat List.

scala> val xss = List(List(1), List(2, 3), List(4))
xss: List[List[Int]] = List(List(1), List(2, 3), List(4))

scala> xss.flatten
res0: List[Int] = List(1, 2, 3, 4)

The flatten function embodies the crunching of the List data structure. If we think in terms of the type signature, it would be F[F[A]] => F[A].
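In Cats that signature shows up on FlatMap, so a quick sketch of the same flattening using the typeclass looks like this:

import cats._, cats.instances.all._

FlatMap[List].flatten(xss)
// => List(1, 2, 3, 4)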

List forms a monad under ++ 

We can get a better idea of the flattening by reimplementing it using foldLeft:

scala> xss.foldLeft(List(): List[Int]) { _ ++ _ }
res1: List[Int] = List(1, 2, 3, 4)

We can say that List forms a monad under ++.

Option forms a monad under? 

Now let’s try to figure out under what operation Option forms a monad:

scala> val o1 = Some(None: Option[Int]): Option[Option[Int]]
o1: Option[Option[Int]] = Some(None)

scala> val o2 = Some(Some(1): Option[Int]): Option[Option[Int]]
o2: Option[Option[Int]] = Some(Some(1))

scala> val o3 = None: Option[Option[Int]]
o3: Option[Option[Int]] = None

And here’s the foldLeft:

scala> o1.foldLeft(None: Option[Int]) { (_, _)._2 }
res2: Option[Int] = None

scala> o2.foldLeft(None: Option[Int]) { (_, _)._2 }
res3: Option[Int] = Some(1)

scala> o3.foldLeft(None: Option[Int]) { (_, _)._2 }
res4: Option[Int] = None

It seems like Option forms a monad under (_, _)._2.

State as a fractal 

If we come back to the State datatype from the point of view of fractals, it becomes clear that a State of State is also a State. This property allows us to create mini-programs like pop and push, and compose them into a larger State using a for comprehension:

def stackManip: State[Stack, Int] = for {
  _ <- push(3)
  a <- pop
  b <- pop
} yield(b)

We also saw similar composition with the Free monad.

In short, monadic values compose within the same monad instance.

Look out for fractals 

When you want to find your own monad, keep a lookout for the fractal structure. From there, see if you can implement the flatten function F[F[A]] => F[A].

day 10 

On day 9 we reviewed monadic functions flatten, filterM, and the implementation of foldM. Next we made a safe RPN calculator using foldM, looked at Kleisli composition, implemented our own monad Prob, and thought about fractals.

Since Cats does not provide Zippers, we are now done with Learn You a Haskell for Great Good, and we need to find our own topic.

One topic we have been dancing around but haven’t gotten into is the notion of the monad transformer. Luckily there’s another good Haskell book that I’ve read that’s also available online.

Monad transformers 

Real World Haskell says:

It would be ideal if we could somehow take the standard State monad and add failure handling to it, without resorting to the wholesale construction of custom monads by hand. The standard monads in the mtl library don’t allow us to combine them. Instead, the library provides a set of monad transformers to achieve the same result.

A monad transformer is similar to a regular monad, but it’s not a standalone entity: instead, it modifies the behaviour of an underlying monad.

Dependency injection again 

Let’s look into the idea of using the Reader datatype (Function1) for dependency injection, which we saw on day 6.

scala> :paste
// Entering paste mode (ctrl-D to finish)
case class User(id: Long, parentId: Long, name: String, email: String)
trait UserRepo {
  def get(id: Long): User
  def find(name: String): User
}

// Exiting paste mode, now interpreting.

defined class User
defined trait UserRepo

Jason Arhart’s Scrap Your Cake Pattern Boilerplate: Dependency Injection Using the Reader Monad generalizes the notion of Reader datatype for supporting multiple services by creating a Config object:

scala> import java.net.URI
import java.net.URI

scala> :paste
// Entering paste mode (ctrl-D to finish)
trait HttpService {
  def get(uri: URI): String
}
trait Config {
  def userRepo: UserRepo
  def httpService: HttpService
}

// Exiting paste mode, now interpreting.

defined trait HttpService
defined trait Config

To use this, we would construct mini-programs of type Config => A, and compose them.
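For illustration, here’s a minimal sketch of what such mini-programs could look like (readUser and readUserEmail are hypothetical helpers), using the Config and UserRepo defined above:

// a mini-program that reads a user out of the passed-in Config
def readUser(id: Long): Config => User =
  config => config.userRepo.get(id)

// composing two mini-programs: both read from the same Config
def readUserEmail(id: Long): Config => String =
  config => readUser(id)(config).email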

Suppose we want to also encode the notion of failure using Option.

Kleisli as ReaderT 

We can use the Kleisli datatype we saw yesterday as ReaderT, a monad transformer version of the Reader datatype, and stack it on top of Option like this:

scala> import cats._, cats.instances.all._
import cats._
import cats.instances.all._

scala> import cats.data.Kleisli
import cats.data.Kleisli

scala> :paste
// Entering paste mode (ctrl-D to finish)
type ReaderTOption[A, B] = Kleisli[Option, A, B]
object ReaderTOption {
  def ro[A, B](f: A => Option[B]): ReaderTOption[A, B] = Kleisli(f)
}

// Exiting paste mode, now interpreting.

defined type alias ReaderTOption
defined object ReaderTOption

We can modify the Config to make httpService optional:

scala> :paste
// Entering paste mode (ctrl-D to finish)
trait UserRepo {
  def get(id: Long): Option[User]
  def find(name: String): Option[User]
}
trait Config {
  def userRepo: UserRepo
  def httpService: Option[HttpService]
}

// Exiting paste mode, now interpreting.

defined trait UserRepo
defined trait Config

Next we can rewrite the “primitive” readers to return ReaderTOption[Config, A]:

scala> :paste
// Entering paste mode (ctrl-D to finish)
trait Users {
  def getUser(id: Long): ReaderTOption[Config, User] =
    ReaderTOption.ro {
      case config => config.userRepo.get(id)
    }
  def findUser(name: String): ReaderTOption[Config, User] =
    ReaderTOption.ro {
      case config => config.userRepo.find(name)
    }
}
trait Https {
  def getHttp(uri: URI): ReaderTOption[Config, String] =
    ReaderTOption.ro {
      case config => config.httpService map {_.get(uri)}
    }
}

// Exiting paste mode, now interpreting.

defined trait Users
defined trait Https

We can compose these mini-programs into compound programs:

scala> :paste
// Entering paste mode (ctrl-D to finish)
import cats.syntax.eq._
trait Program extends Users with Https {
  def userSearch(id: Long): ReaderTOption[Config, String] =
    for {
      u <- getUser(id)
      r <- getHttp(new URI(s"http://www.google.com/?q=${u.name}"))
    } yield r
}
object Main extends Program {
  def run(config: Config): Option[String] =
    userSearch(2).run(config)
}
val dummyConfig: Config = new Config {
  val testUsers = List(User(0, 0, "Vito", "vito@example.com"),
    User(1, 0, "Michael", "michael@example.com"),
    User(2, 0, "Fredo", "fredo@example.com"))
  def userRepo: UserRepo = new UserRepo {
    def get(id: Long): Option[User] =
      testUsers find { _.id === id }
    def find(name: String): Option[User] =
      testUsers find { _.name === name }
  }
  def httpService: Option[HttpService] = None
}

// Exiting paste mode, now interpreting.

import cats.syntax.eq._
defined trait Program
defined object Main
dummyConfig: Config = $anon$1@45b7ef23

The above ReaderTOption datatype combines Reader’s ability to read from some configuration once, and the Option’s ability to express failure.
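As a quick sketch of running it: since dummyConfig’s httpService is None, the getHttp step fails and the whole program returns None.

Main.run(dummyConfig)
// => None: getUser(2) succeeds, but getHttp fails because httpService is None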

Stacking multiple monad transformers 

RWH:

When we stack a monad transformer on a normal monad, the result is another monad. This suggests the possibility that we can again stack a monad transformer on top of our combined monad, to give a new monad, and in fact this is a common thing to do.

We can stack StateT on top of ReaderTOption to represent state transfer.

scala> import cats.data.StateT
import cats.data.StateT

scala> :paste
// Entering paste mode (ctrl-D to finish)
type StateTReaderTOption[C, S, A] = StateT[({type l[X] = ReaderTOption[C, X]})#l, S, A]
object StateTReaderTOption {
  def state[C, S, A](f: S => (S, A)): StateTReaderTOption[C, S, A] =
    StateT[({type l[X] = ReaderTOption[C, X]})#l, S, A] {
      s: S => Monad[({type l[X] = ReaderTOption[C, X]})#l].pure(f(s))
    }
  def get[C, S]: StateTReaderTOption[C, S, S] =
    state { s => (s, s) }
  def put[C, S](s: S): StateTReaderTOption[C, S, Unit] =
    state { _ => (s, ()) }
  def ro[C, S, A](f: C => Option[A]): StateTReaderTOption[C, S, A] =
    StateT[({type l[X] = ReaderTOption[C, X]})#l, S, A] {
      s: S =>
        ReaderTOption.ro[C, (S, A)]{
          c: C => f(c) map {(s, _)}
        }
    }
}

// Exiting paste mode, now interpreting.

defined type alias StateTReaderTOption
defined object StateTReaderTOption

This is a bit confusing, so let’s break it down. Ultimately the point of the State datatype is to wrap S => (S, A), so I kept those parameter names for state. Next, I needed to modify the kind of ReaderTOption to * -> * (a type constructor that takes exactly one type as its parameter).

Similarly, we need a way of using this datatype as a ReaderTOption, which is expressed as C => Option[A] in ro.

We can also implement a Stack again. This time let’s use String instead.

scala> type Stack = List[String]
defined type alias Stack

scala> val pop = StateTReaderTOption.state[Config, Stack, String] {
         case x :: xs => (xs, x)
         case _       => ???
       }
pop: StateTReaderTOption[Config,Stack,String] = cats.data.StateT@616b9787

Here’s a version of pop and push using the get and put primitives:

scala> import StateTReaderTOption.{get, put}
import StateTReaderTOption.{get, put}

scala> val pop: StateTReaderTOption[Config, Stack, String] =
         for {
           s <- get[Config, Stack]
           (x :: xs) = s
           _ <- put(xs)
         } yield x
pop: StateTReaderTOption[Config,Stack,String] = cats.data.StateT@2905c98a

scala> def push(x: String): StateTReaderTOption[Config, Stack, Unit] =
         for {
           xs <- get[Config, Stack]
           r <- put(x :: xs)
         } yield r
push: (x: String)StateTReaderTOption[Config,Stack,Unit]

We can also port stackManip:

scala> def stackManip: StateTReaderTOption[Config, Stack, String] =
         for {
           _ <- push("Fredo")
           a <- pop
           b <- pop
         } yield(b)
stackManip: StateTReaderTOption[Config,Stack,String]

Here’s how we can use this:

scala> stackManip.run(List("Hyman Roth")).run(dummyConfig)
res0: Option[(Stack, String)] = Some((List(),Hyman Roth))

So far we have the same feature as the State version. We can modify Users to use StateTReaderTOption.ro:

scala> :paste
// Entering paste mode (ctrl-D to finish)
trait Users {
  def getUser[S](id: Long): StateTReaderTOption[Config, S, User] =
    StateTReaderTOption.ro[Config, S, User] {
      case config => config.userRepo.get(id)
    }
  def findUser[S](name: String): StateTReaderTOption[Config, S, User] =
    StateTReaderTOption.ro[Config, S, User] {
      case config => config.userRepo.find(name)
    }
}

// Exiting paste mode, now interpreting.

defined trait Users

Using this we can now manipulate the stack using the read-only configuration:

scala> :paste
// Entering paste mode (ctrl-D to finish)
trait Program extends Users {
  def stackManip: StateTReaderTOption[Config, Stack, Unit] =
    for {
      u <- getUser(2)
      a <- push(u.name)
    } yield(a)
}
object Main extends Program {
  def run(s: Stack, config: Config): Option[(Stack, Unit)] =
    stackManip.run(s).run(config)
}

// Exiting paste mode, now interpreting.

defined trait Program
defined object Main

We can run this program like this:

scala> Main.run(List("Hyman Roth"), dummyConfig)
res1: Option[(Stack, Unit)] = Some((List(Fredo, Hyman Roth),()))

Now we have StateT, ReaderT and Option working all at the same time. Maybe I am not doing it right, but setting up the monad constructor functions like state and ro for StateTReaderTOption is a rather mind-bending exercise.

Once the primitive monadic values are constructed, the usage code (like stackManip) looks relatively clean. It sure does avoid the cake pattern, but the stacked monad type StateTReaderTOption is sprinkled all over the code base.

If all we wanted was being able to use getUser(id: Long) and push etc., an alternative is to construct a DSL with those commands using Free monads we saw on day 8.

Stacking Future and Either 

One use of monad transformers that seems to come up often is stacking the Future datatype with Either. There’s a blog post by Yoshida-san (@xuwei_k) in Japanese called How to combine Future and Either nicely in Scala.

A little-known fact about Yoshida-san outside of Tokyo is that he majored in Chinese calligraphy. Apparently he spent his college days writing ancient seal scripts and carving seals:

His namesake Xu Wei was a Ming-era painter, poet, writer and dramatist famed for his free-flowing style. Here’s Yoshida-san writing functional programming language.

In any case, why would one want to stack Future and Either together? The blog post explains it like this:

  1. Future[A] comes up a lot in Scala.
  2. You don’t want to block future, so you end up with Future everywhere.
  3. Since Future is asynchronous, any errors need to be captured in there.
  4. Future is designed to handle Throwable, but that’s all you get.
  5. When the program becomes complex enough, you want to type your own error states.
  6. How can we make Future of Either?

Here’s the preparation step:

scala> :paste
// Entering paste mode (ctrl-D to finish)
case class User(id: Long, name: String)

// In actual code, probably more than 2 errors
sealed trait Error
object Error {
  final case class UserNotFound(userId: Long) extends Error
  final case class ConnectionError(message: String) extends Error
}
object UserRepo {
  def followers(userId: Long): Either[Error, List[User]] = ???
}
import UserRepo.followers

// Exiting paste mode, now interpreting.

defined class User
defined trait Error
defined object Error
defined object UserRepo
import UserRepo.followers

Suppose our application allows the users to follow each other like Twitter. The followers function returns the list of followers.

Now let’s try writing a function that checks if two users follow each other.

scala> def isFriends0(user1: Long, user2: Long): Either[Error, Boolean] =
         for {
           a <- followers(user1).right
           b <- followers(user2).right
         } yield a.exists(_.id == user2) && b.exists(_.id == user1)
isFriends0: (user1: Long, user2: Long)Either[Error,Boolean]

Now suppose we want to make the database access asynchronous, so we change followers to return a Future like this:

scala> :paste
// Entering paste mode (ctrl-D to finish)
import scala.concurrent.{ Future, ExecutionContext }
object UserRepo {
  def followers(userId: Long): Future[Either[Error, List[User]]] = ???
}
import UserRepo.followers

// Exiting paste mode, now interpreting.

import scala.concurrent.{Future, ExecutionContext}
defined object UserRepo
import UserRepo.followers

Now, what would isFriends0 look like? Here’s one way of writing it:

scala> def isFriends1(user1: Long, user2: Long)
         (implicit ec: ExecutionContext): Future[Either[Error, Boolean]] =
         for {
           a <- followers(user1)
           b <- followers(user2)
         } yield for {
           x <- a.right
           y <- b.right
         } yield x.exists(_.id == user2) && y.exists(_.id == user1)
isFriends1: (user1: Long, user2: Long)(implicit ec: scala.concurrent.ExecutionContext)scala.concurrent.Future[Either[Error,Boolean]]

And here’s another version:

scala> def isFriends2(user1: Long, user2: Long)
         (implicit ec: ExecutionContext): Future[Either[Error, Boolean]] =
         followers(user1) flatMap {
           case Right(a) =>
             followers(user2) map {
               case Right(b) =>
                 Right(a.exists(_.id == user2) && b.exists(_.id == user1))
               case Left(e) =>
                 Left(e)
             }
           case Left(e) =>
             Future.successful(Left(e))
         }
isFriends2: (user1: Long, user2: Long)(implicit ec: scala.concurrent.ExecutionContext)scala.concurrent.Future[Either[Error,Boolean]]

What is the difference between the two versions? They’ll both behave the same if followers returns normally, but suppose followers(user1) returns an Error state. isFriends1 would still call followers(user2), whereas isFriends2 would short-circuit and return the error.

Regardless, both functions became convoluted compared to the original, and it’s mostly boilerplate to satisfy the types. I don’t want to imagine doing this for every function that uses Future[Either[Error, A]].

XorT datatype 

Cats comes with XorT datatype, which is a monad transformer for Xor that we saw on day 7.

/**
 * Transformer for `Xor`, allowing the effect of an arbitrary type constructor `F` to be combined with the
 * fail-fast effect of `Xor`.
 *
 * `XorT[F, A, B]` wraps a value of type `F[A Xor B]`. An `F[C]` can be lifted in to `XorT[F, A, C]` via `XorT.right`,
 * and lifted in to a `XorT[F, C, B]` via `XorT.left`.
 */
case class XorT[F[_], A, B](value: F[A Xor B]) {
  ....
}

Here’s UserRepo.followers with a dummy implementation:

scala> :paste
// Entering paste mode (ctrl-D to finish)
import cats._, cats.instances.all._
import cats.data.XorT
object UserRepo {
  def followers(userId: Long)
    (implicit ec: ExecutionContext): XorT[Future, Error, List[User]] =
    userId match {
      case 0L =>
        XorT.right(Future { List(User(1, "Michael")) })
      case 1L =>
        XorT.right(Future { List(User(0, "Vito")) })
      case x =>
        println("not found")
        XorT.left(Future.successful { Error.UserNotFound(x) })
    }
}
import UserRepo.followers

// Exiting paste mode, now interpreting.

import cats._
import cats.instances.all._
import cats.data.XorT
defined object UserRepo
import UserRepo.followers

Now let’s try writing isFriends0 function again.

scala> def isFriends3(user1: Long, user2: Long)
         (implicit ec: ExecutionContext): XorT[Future, Error, Boolean] =
         for{
           a <- followers(user1)
           b <- followers(user2)
         } yield a.exists(_.id == user2) && b.exists(_.id == user1)
isFriends3: (user1: Long, user2: Long)(implicit ec: scala.concurrent.ExecutionContext)cats.data.XorT[scala.concurrent.Future,Error,Boolean]

Isn’t this great? Except for the type signature and the ExecutionContext, isFriends3 is identical to isFriends0.

Now let’s try using this.

scala> implicit val ec = scala.concurrent.ExecutionContext.global
ec: scala.concurrent.ExecutionContextExecutor = scala.concurrent.impl.ExecutionContextImpl@71087c73

scala> import scala.concurrent.Await
import scala.concurrent.Await

scala> import scala.concurrent.duration._
import scala.concurrent.duration._

scala> Await.result(isFriends3(0, 1).value, 1 second)
res0: cats.data.Xor[Error,Boolean] = Right(true)

When the first user is not found, XorT will short circuit.

scala> Await.result(isFriends3(2, 3).value, 1 second)
not found
res34: cats.data.Xor[Error,Boolean] = Left(UserNotFound(2))

Note that "not found" is printed only once.

Unlike the StateTReaderTOption example, XorT seems usable in many situations.

That’s it for today!

day 11 

On day 10 we looked at the idea of monad transformers, first using Kleisli as ReaderT, and also stacking Future and Either (XorT).

Genericity 

In the grand scheme of things, functional programming is about abstracting things out. Skimming over Jeremy Gibbons’s 2006 book Datatype-Generic Programming, I found a nice overview.

Generic programming is about making programming languages more flexible without compromising safety.

Genericity by value 

One of the first and most fundamental techniques that any programmer learns is how to parametrize computations by values

scala> def triangle4: Unit = {
         println("*")
         println("**")
         println("***")
         println("****")
       }
triangle4: Unit

We can abstract out 4 into a parameter:

scala> def triangle(side: Int): Unit = {
         (1 to side) foreach { row =>
           (1 to row) foreach { col =>
             print("*")
           }
           println()
         }
       }
triangle: (side: Int)Unit

Genericity by type 

List[A] is a polymorphic datatype parametrized by another type, the type of list elements. This enables parametric polymorphism.

scala> def head[A](xs: List[A]): A = xs(0)
head: [A](xs: List[A])A

The above function would work for all proper types.
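For example, as a quick sketch:

head(List(1, 2, 3))        // => 1
head(List("a", "b", "c"))  // => "a"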

Genericity by function 

Higher-order programs are programs parametrized by other programs.

For example, foldRight can be used to append two lists:

scala> def append[A](list: List[A], ys: List[A]): List[A] =
         list.foldRight(ys) { (x, acc) => x :: acc }
append: [A](list: List[A], ys: List[A])List[A]

scala> append(List(1, 2, 3), List(4, 5, 6))
res0: List[Int] = List(1, 2, 3, 4, 5, 6)

Or foldLeft can be used to add numbers:

scala> def sum(list: List[Int]): Int =
        list.foldLeft(0) { _ + _ }
sum: (list: List[Int])Int

Genericity by structure 

“Generic programming” in this sense is embodied by collection libraries, like Scala Collections. In the case of C++’s Standard Template Library, the parametric datatypes are called containers, and various abstractions are provided via iterators, such as input iterators and forward iterators.

The notion of the typeclass fits in here too.

scala> :paste
// Entering paste mode (ctrl-D to finish)
trait Read[A] {
  def reads(s: String): Option[A]
}
object Read extends ReadInstances {
  def read[A](f: String => Option[A]): Read[A] = new Read[A] {
    def reads(s: String): Option[A] = f(s)
  }
  def apply[A: Read]: Read[A] = implicitly[Read[A]]
}
trait ReadInstances {
  implicit val stringRead: Read[String] =
    Read.read[String] { Some(_) }
  implicit val intRead: Read[Int] =
    Read.read[Int] { s =>
      try {
        Some(s.toInt)
      } catch {
        case e: NumberFormatException => None
      }
    }
}

// Exiting paste mode, now interpreting.

defined trait Read
defined object Read
defined trait ReadInstances

scala> Read[Int].reads("1")
res1: Option[Int] = Some(1)

The typeclass captures the requirements placed on types, called the typeclass contract. It also lets us list the types providing these requirements by defining typeclass instances. This enables ad-hoc polymorphism because A in Read[A] is not universal.
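For example, here’s a sketch of opting a new type into Read by defining an instance (UserId is a hypothetical type, not part of the example above):

case class UserId(value: Int)

implicit val userIdRead: Read[UserId] =
  Read.read[UserId] { s => Read[Int].reads(s) map { UserId(_) } }

Read[UserId].reads("42")   // => Some(UserId(42))
Read[UserId].reads("foo")  // => None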

Genericity by property 

In the Scala Collections library, some of the concepts promised are more elaborate than the list of operations covered by the types.

as well as signatures of operations, the concept might specify the laws these operations satisfy, and non-functional characteristics such as the asymptotic complexities of the operations in terms of time and space.

Typeclasses with laws fit in here too. For example Monoid[A] comes with the monoid laws. The laws need to be validated for each instance using property-based testing tools.
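For instance, here’s a minimal ScalaCheck sketch of the associativity law for the Int addition monoid (this is just the idea; it’s not how Cats organizes its own law-checking modules):

import org.scalacheck.Prop.forAll
import cats._, cats.instances.all._, cats.syntax.semigroup._

// associativity: (a |+| b) |+| c must equal a |+| (b |+| c)
val associativity = forAll { (a: Int, b: Int, c: Int) =>
  ((a |+| b) |+| c) == (a |+| (b |+| c))
}
// associativity.check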

Genericity by stage 

Various flavors of metaprogramming can be thought of as the development of programs that construct or manipulate other programs. This could include code generation and macros.

Genericity by shape 

Let’s say there’s a polymorphic datatype of binary trees:

scala> :paste
// Entering paste mode (ctrl-D to finish)
sealed trait Btree[A]
object Btree {
  case class Tip[A](a: A) extends Btree[A]
  case class Bin[A](left: Btree[A], right: Btree[A]) extends Btree[A]
}

// Exiting paste mode, now interpreting.

defined trait Btree
defined object Btree

Let’s write foldB as a way of abstracting similar programs.

scala> def foldB[A, B](tree: Btree[A], b: (B, B) => B)(t: A => B): B =
         tree match {
           case Btree.Tip(a)      => t(a)
           case Btree.Bin(xs, ys) => b(foldB(xs, b)(t), foldB(ys, b)(t))
         }
foldB: [A, B](tree: Btree[A], b: (B, B) => B)(t: A => B)B

The next goal is to abstract foldB and foldLeft.

In fact, what differs between the two fold operators is the shape of the data on which they operate, and hence the shape of the programs themselves. The kind of parametrization required is by this shape; that is, by the datatype or type constructor (such as List or Tree) concerned. We call this datatype genericity.

For example, fold apparently could be expressed as

scala> import cats._, cats.functor._, cats.instances.all._
import cats._
import cats.functor._
import cats.instances.all._

scala> :paste
// Entering paste mode (ctrl-D to finish)
trait Fix[F[_,_], A]
def cata[S[_,_]: Bifunctor, A, B](t: Fix[S, A])(f: S[A, B] => B): B = ???

// Exiting paste mode, now interpreting.

defined trait Fix
cata: [S[_, _], A, B](t: Fix[S,A])(f: S[A,B] => B)(implicit evidence$1: cats.functor.Bifunctor[S])B

In the above, S represents the shape of the datatype. By abstracting out the shapes, we can construct parametrically datatype-generic programs. We’ll come back to this later.

Alternatively, such programs might be ad-hoc datatype-generic, when the behaviour exploits that shape in some essential manner. Typical examples of the latter are pretty printers and marshallers.

The example that fits in this category might be Scala Pickling. Pickling defines picklers for common types, and it derives pickler instances for different shapes using macros.

This approach to datatype genericity has been variously called polytypism, structural polymorphism or typecase , and is the meaning given to ‘generic programming’ by the Generic Haskell team. Whatever the name, functions are defined inductively by case analysis on the structure of datatypes ….

We consider parametric datatype genericity to be the ‘gold standard’, and in the remainder of these lecture notes, we concentrate on parametric datatype-generic definitions where possible.

In Scala, shapeless is focused on abstracting out the shape.

Datatype-generic programming with Bifunctor 

Moving on to section 3.6 of Datatype-Generic Programming, called “Datatype genericity.” Gibbons tries to call this Origami programming, but I don’t think the name stuck, so we’ll go with datatype-generic programming.

As we have already argued, data structure determines program structure. It therefore makes sense to abstract from the determining shape, leaving only what programs of different shape have in common. What datatypes such as List and Tree have in common is the fact that they are recursive — which is to say, a datatype Fix

data Fix s a = In {out :: s a (Fix s a)}

Here are three instances of Fix using different shapes: lists and internally labelled binary trees as seen before, and also a datatype of externally labelled binary trees.

data ListF a b = NilF | ConsF a b
type List a = Fix ListF a
data TreeF a b = EmptyF | NodeF a b b
type Tree a = Fix TreeF a
data BtreeF a b = TipF a | BinF b b
type Btree a = Fix BtreeF a

From Why free monads matter, which we saw on day 8, we know this is similar to the Free datatype, but the semantics around Functor etc. are going to be different, so let’s implement it from scratch:

scala> :paste
// Entering paste mode (ctrl-D to finish)
sealed abstract class Fix[S[_], A] extends Serializable {
  def out: S[Fix[S, A]]
}
object Fix {
  case class In[S[_], A](out: S[Fix[S, A]]) extends Fix[S, A]
}

// Exiting paste mode, now interpreting.

defined class Fix
defined object Fix

Following Free, I am putting S[_] on the left, and A on the right.

Let’s try implementing the List first.

scala> :paste
// Entering paste mode (ctrl-D to finish)
sealed trait ListF[+Next, +A]
object ListF {
  case class NilF() extends ListF[Nothing, Nothing]
  case class ConsF[A, Next](a: A, n: Next) extends ListF[Next, A]
}
type GenericList[A] = Fix[ListF[+?, A], A]
object GenericList {
  def nil[A]: GenericList[A] = Fix.In[ListF[+?, A], A](ListF.NilF())
  def cons[A](a: A, xs: GenericList[A]): GenericList[A] =
    Fix.In[ListF[+?, A], A](ListF.ConsF(a, xs))
}

// Exiting paste mode, now interpreting.

defined trait ListF
defined object ListF
defined type alias GenericList
defined object GenericList

scala> import GenericList.{ cons, nil }
import GenericList.{cons, nil}

Here’s how we can use it:

scala> cons(1, nil)
res0: GenericList[Int] = In(ConsF(1,In(NilF())))

So far this is similar to what we saw with the free monad.

Bifunctor 

Not all valid binary type constructors s are suitable for Fixing; for example, function types with the parameter appearing in contravariant (source) positions cause problems. It turns out that we should restrict attention to (covariant) bifunctors, which support a bimap operation ‘locating’ all the elements.

Cats ships with Bifunctor:

/**
 * A typeclass of types which give rise to two independent, covariant
 * functors.
 */
trait Bifunctor[F[_, _]] extends Serializable { self =>

  /**
   * The quintessential method of the Bifunctor trait, it applies a
   * function to each "side" of the bifunctor.
   */
  def bimap[A, B, C, D](fab: F[A, B])(f: A => C, g: B => D): F[C, D]

  ....
}

Here is the Bifunctor instance for GenericList.

scala> import cats._, cats.instances.all._
import cats._
import cats.instances.all._

scala> import cats.functor.Bifunctor
import cats.functor.Bifunctor

scala> :paste
// Entering paste mode (ctrl-D to finish)
implicit val listFBifunctor: Bifunctor[ListF] = new Bifunctor[ListF] {
  def bimap[S1, A1, S2, A2](fab: ListF[S1, A1])(f: S1 => S2, g: A1 => A2): ListF[S2, A2] =
    fab match {
      case ListF.NilF()         => ListF.NilF()
      case ListF.ConsF(a, next) => ListF.ConsF(g(a), f(next))
    }
}

// Exiting paste mode, now interpreting.

listFBifunctor: cats.functor.Bifunctor[ListF] = $anon$1@1fe8f7e7

Deriving map from Bifunctor 

It turns out that the class Bifunctor provides sufficient flexibility to capture a wide variety of recursion patterns as datatype-generic programs.

First, we can implement map in terms of bimap.

scala> :paste
// Entering paste mode (ctrl-D to finish)
object DGP {
  def map[F[_, _]: Bifunctor, A1, A2](fa: Fix[F[?, A1], A1])(f: A1 => A2): Fix[F[?, A2], A2] =
    Fix.In[F[?, A2], A2](Bifunctor[F].bimap(fa.out)(map(_)(f), f))
}

// Exiting paste mode, now interpreting.

defined object DGP

scala> DGP.map(cons(1, nil)) { _ + 1 }
res1: Fix[[α]ListF[α,Int],Int] = In(ConsF(2,In(NilF())))

The above definition of map is independent from GenericList, abstracted by Bifunctor and Fix. Another way of looking at it is that we can get Functor for free out of Bifunctor and Fix.

scala> :paste
// Entering paste mode (ctrl-D to finish)
trait FixInstances {
  implicit def fixFunctor[F[_, _]: Bifunctor]: Functor[Lambda[L => Fix[F[?, L], L]]] =
    new Functor[Lambda[L => Fix[F[?, L], L]]] {
      def map[A1, A2](fa: Fix[F[?, A1], A1])(f: A1 => A2): Fix[F[?, A2], A2] =
        Fix.In[F[?, A2], A2](Bifunctor[F].bimap(fa.out)(map(_)(f), f))
    }
}

// Exiting paste mode, now interpreting.

defined trait FixInstances

scala> {
  val instances = new FixInstances {}
  import instances._
  import cats.syntax.functor._
  cons(1, nil) map { _ + 1 }
}
res2: GenericList[Int] = In(ConsF(2,In(NilF())))

That’s an intense amount of type lambdas, but I think it’s clear that I translated DGP.map into a Functor instance.

Deriving fold from Bifunctor 

We can also implement fold, also known as cata from catamorphism:

scala> :paste
// Entering paste mode (ctrl-D to finish)
object DGP {
  // catamorphism
  def fold[F[_, _]: Bifunctor, A1, A2](fa: Fix[F[?, A1], A1])(f: F[A2, A1] => A2): A2 =
    {
      val g = (fa1: F[Fix[F[?, A1], A1], A1]) =>
        Bifunctor[F].leftMap(fa1) { (fold(_)(f)) }
      f(g(fa.out))
    }
}

// Exiting paste mode, now interpreting.

defined object DGP

scala> DGP.fold[ListF, Int, Int](cons(2, cons(1, nil))) {
         case ListF.NilF()      => 0
         case ListF.ConsF(x, n) => x + n
       }
res3: Int = 3

Deriving unfold from Bifunctor 

The unfold operator for a datatype grows a data structure from a value. In a precise technical sense, it is the dual of the fold operator.

The unfold is also called ana from anamorphism:

scala> :paste
// Entering paste mode (ctrl-D to finish)
object DGP {
  // catamorphism
  def fold[F[_, _]: Bifunctor, A1, A2](fa: Fix[F[?, A1], A1])(f: F[A2, A1] => A2): A2 =
    {
      val g = (fa1: F[Fix[F[?, A1], A1], A1]) =>
        Bifunctor[F].leftMap(fa1) { (fold(_)(f)) }
      f(g(fa.out))
    }
  // anamorphism
  def unfold[F[_, _]: Bifunctor, A1, A2](x: A2)(f: A2 => F[A2, A1]): Fix[F[?, A1], A1] =
    Fix.In[F[?, A1], A1](Bifunctor[F].leftMap(f(x))(unfold[F, A1, A2](_)(f)))
}

// Exiting paste mode, now interpreting.

defined object DGP

Here’s how we can construct list of numbers counting down:

scala> def pred(n: Int): GenericList[Int] =
         DGP.unfold[ListF, Int, Int](n) {
           case 0 => ListF.NilF()
           case n => ListF.ConsF(n, n - 1)
         }
pred: (n: Int)GenericList[Int]

scala> pred(4)
res4: GenericList[Int] = In(ConsF(4,In(ConsF(3,In(ConsF(2,In(ConsF(1,In(NilF())))))))))

There are several more we can derive, too.
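For example, here’s a sketch of a hylomorphism (an unfold followed immediately by a fold) built out of the DGP.fold and DGP.unfold above; sumTo is a made-up example that sums n down to 1:

// hylomorphism: grow a structure from a seed, then fold it away
def hylo[F[_, _]: Bifunctor, A, B, C](seed: A)(g: A => F[A, B])(f: F[C, B] => C): C =
  DGP.fold[F, B, C](DGP.unfold[F, B, A](seed)(g))(f)

def sumTo(n: Int): Int =
  hylo[ListF, Int, Int, Int](n) {
    case 0 => ListF.NilF()
    case m => ListF.ConsF(m, m - 1)
  } {
    case ListF.NilF()        => 0
    case ListF.ConsF(x, acc) => x + acc
  }

// sumTo(4) == 10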

Tree 

The point of the datatype-generic programming is to abstract out the shape. Let’s introduce some other datatype, like a binary Tree:

scala> :paste
// Entering paste mode (ctrl-D to finish)
sealed trait TreeF[+Next, +A]
object TreeF {
  case class EmptyF() extends TreeF[Nothing, Nothing]
  case class NodeF[Next, A](a: A, left: Next, right: Next) extends TreeF[Next, A]
}
type Tree[A] = Fix[TreeF[?, A], A]
object Tree {
  def empty[A]: Tree[A] =
    Fix.In[TreeF[+?, A], A](TreeF.EmptyF())
  def node[A, Next](a: A, left: Tree[A], right: Tree[A]): Tree[A] =
    Fix.In[TreeF[+?, A], A](TreeF.NodeF(a, left, right))
}

// Exiting paste mode, now interpreting.

defined trait TreeF
defined object TreeF
defined type alias Tree
defined object Tree

Here’s how to create this tree:

scala> import Tree.{empty,node}
import Tree.{empty, node}

scala> node(2, node(1, empty, empty), empty)
res5: Tree[Int] = In(NodeF(2,In(NodeF(1,In(EmptyF()),In(EmptyF()))),In(EmptyF())))

Now, all we have to do should be to define a Bifunctor instance:

scala> :paste
// Entering paste mode (ctrl-D to finish)
implicit val treeFBifunctor: Bifunctor[TreeF] = new Bifunctor[TreeF] {
  def bimap[A, B, C, D](fab: TreeF[A, B])(f: A => C, g: B => D): TreeF[C, D] =
    fab match {
      case TreeF.EmptyF() => TreeF.EmptyF()
      case TreeF.NodeF(a, left, right) =>
        TreeF.NodeF(g(a), f(left), f(right))
    }
}

// Exiting paste mode, now interpreting.

treeFBifunctor: cats.functor.Bifunctor[TreeF] = $anon$1@f3862bc

First, let’s try Functor:

scala> {
  val instances = new FixInstances {}
  import instances._
  import cats.syntax.functor._
  node(2, node(1, empty, empty), empty) map { _ + 1 }
}
res6: Tree[Int] = In(NodeF(3,In(NodeF(2,In(EmptyF()),In(EmptyF()))),In(EmptyF())))

Looking good. Next, let’s try folding.

scala> def sum(tree: Tree[Int]): Int =
         DGP.fold[TreeF, Int, Int](tree) {
           case TreeF.EmptyF()       => 0
           case TreeF.NodeF(a, l, r) => a + l + r
         }
sum: (tree: Tree[Int])Int

scala> sum(node(2, node(1, empty, empty), empty))
res7: Int = 3

We got the fold.

Here’s a function named grow that generates a binary search tree from a list.

scala> def grow[A: PartialOrder](xs: List[A]): Tree[A] =
          DGP.unfold[TreeF, A, List[A]](xs) {
            case Nil => TreeF.EmptyF()
            case x :: xs =>
              import cats.syntax.partialOrder._
              TreeF.NodeF(x, xs filter {_ <= x}, xs filter {_ > x})
          }
grow: [A](xs: List[A])(implicit evidence$1: cats.PartialOrder[A])Tree[A]

scala> grow(List(3, 1, 4, 2))
res8: Tree[Int] = In(NodeF(3,In(NodeF(1,In(EmptyF()),In(NodeF(2,In(EmptyF()),In(EmptyF()))))),In(NodeF(4,In(EmptyF()),In(EmptyF())))))

Looks like unfold is working too.

For more details on DGP in Scala, Oliveira and Gibbons wrote a paper translating these ideas and many more, called Scala for Generic Programmers (2008), along with its updated version Scala for Generic Programmers (2010).

The datatype-generic programming described here has the limitation that the datatype must be recursive, so I don’t think it’s very usable as is, but there are some interesting concepts that can be exploited.

The Origami patterns 

Next, Gibbons claims that design patterns are “evidence of a lack of expressivity in those mainstream programming languages.” He then sets out to replace the patterns using higher-order datatype-generic programming.

Const datatype 

Chapter 5 of Datatype-Generic Programming, the last one before Conclusions, is called “The Essence of the Iterator pattern,” the same name as the paper Gibbons and Oliveira wrote in 2006. The one available online as The Essence of the Iterator Pattern is from 2009. Reading this paper as a continuation of DGP gives it a better context.

Here’s the example given at the beginning of this paper, translated to Java.

public static <E> int loop(Collection<E> coll) {
  int n = 0;
  for (E elem: coll) {
    n = n + 1;
    doSomething(elem);
  }
  return n;
}

EIP:

We emphasize that we want to capture both aspects of the method loop and iterations like it: mapping over the elements, and simultaneously accumulating some measure of those elements.

The first half of the paper reviews functional iterations and applicative style. For applicative functors, it brings up the fact that there are three kinds of applicatives:

  1. Monadic applicative functors
  2. Naperian applicative functors
  3. Monoidal applicative functors

We’ve brought up the fact that all monads are applicatives many times. A Naperian applicative functor zips together data structures that are fixed in shape.

Applicative functors were originally named “idioms” by McBride and Paterson, so Gibbons uses the term idiomatic interchangeably with applicative throughout this paper, even though McBride and Paterson renamed them to applicative functors.

Monoidal applicative functors using Const datatype 

A second family of applicative functors, this time non-monadic, arises from constant functors with monoidal targets.

We can derive an applicative functor from any Monoid, by using empty for pure, and |+| for ap. The Const datatype is also called Const in Cats:

/**
 * [[Const]] is a phantom type, it does not contain a value of its second type parameter `B`
 * [[Const]] can be seen as a type level version of `Function.const[A, B]: A => B => A`
 */
final case class Const[A, B](getConst: A) {
  /**
   * changes the type of the second type parameter
   */
  def retag[C]: Const[A, C] =
    this.asInstanceOf[Const[A, C]]

  ....
}

In the above, the type parameter A represents the value, but B is a phantom type used to make Functor happy.

scala> import cats._, cats.instances.all._, cats.data.Const
import cats._
import cats.instances.all._
import cats.data.Const

scala> import cats.syntax.functor._
import cats.syntax.functor._

scala> Const(1) map { (_: String) + "!" }
res0: cats.data.Const[Int,String] = Const(1)

When A forms a Semigroup an Apply is derived, and when A forms a Monoid an Applicative is derived automatically.

Computations within this applicative functor accumulate some measure: for the monoid of integers with addition, they count or sum…

scala> import cats.syntax.apply._
import cats.syntax.apply._

scala> Const(2).retag[String => String] ap Const(1).retag[String]
res1: cats.data.Const[Int,String] = Const(3)

Combining applicative functors 

EIP:

Like monads, applicative functors are closed under products; so two independent idiomatic effects can generally be fused into one, their product.

Cats seems to be missing the functor products altogether.

Product of functors 

Let's try implementing one.

(The implementation I wrote here got merged into Cats in #388)

/**
 * [[Prod]] is a product to two independent functor values.
 *
 * See: [[https://www.cs.ox.ac.uk/jeremy.gibbons/publications/iterator.pdf The Essence of the Iterator Pattern]]
 */
sealed trait Prod[F[_], G[_], A] {
  def first: F[A]
  def second: G[A]
}
object Prod extends ProdInstances {
  def apply[F[_], G[_], A](first0: => F[A], second0: => G[A]): Prod[F, G, A] = new Prod[F, G, A] {
    val firstThunk: Eval[F[A]] = Later(first0)
    val secondThunk: Eval[G[A]] = Later(second0)
    def first: F[A] = firstThunk.value
    def second: G[A] = secondThunk.value
  }
  def unapply[F[_], G[_], A](x: Prod[F, G, A]): Option[(F[A], G[A])] =
    Some((x.first, x.second))
}

First we start with the product of Functor:

private[data] sealed abstract class ProdInstances4 {
  implicit def prodFunctor[F[_], G[_]](implicit FF: Functor[F], GG: Functor[G]): Functor[Lambda[X => Prod[F, G, X]]] = new ProdFunctor[F, G] {
    def F: Functor[F] = FF
    def G: Functor[G] = GG
  }
}

sealed trait ProdFunctor[F[_], G[_]] extends Functor[Lambda[X => Prod[F, G, X]]] {
  def F: Functor[F]
  def G: Functor[G]
  def map[A, B](fa: Prod[F, G, A])(f: A => B): Prod[F, G, B] = Prod(F.map(fa.first)(f), G.map(fa.second)(f))
}

Here’s how to use it:

scala> import cats._, cats.instances.all._
import cats._
import cats.instances.all._

scala> import cats.data.Prod
import cats.data.Prod

scala> val x = Prod(List(1), (Some(1): Option[Int]))
x: cats.data.Prod[List,Option,Int] = Prod(List(1),Some(1))

scala> Functor[Lambda[X => Prod[List, Option, X]]].map(x) { _ + 1 }
res0: cats.data.Prod[[+A]List[A],Option,Int] = Prod(List(2),Some(2))

First, we are defining a pair-like datatype called Prod, which represents a pair of F[A] and G[A]. By simply passing the function f to both sides, we can form a Functor for Prod[F, G] where F and G are Functors.

To see if it worked, we are mapping over x and adding 1. We could make the usage code a bit nicer if we wanted, but it’s ok for now.

Product of apply functors 

Next up is Apply:

private[data] sealed abstract class ProdInstances3 extends ProdInstances4 {
  implicit def prodApply[F[_], G[_]](implicit FF: Apply[F], GG: Apply[G]): Apply[Lambda[X => Prod[F, G, X]]] = new ProdApply[F, G] {
    def F: Apply[F] = FF
    def G: Apply[G] = GG
  }
}

sealed trait ProdApply[F[_], G[_]] extends Apply[Lambda[X => Prod[F, G, X]]] with ProdFunctor[F, G] {
  def F: Apply[F]
  def G: Apply[G]
  def ap[A, B](fa: Prod[F, G, A])(f: Prod[F, G, A => B]): Prod[F, G, B] =
    Prod(F.ap(f.first)(fa.first), G.ap(f.second)(fa.second))
  def product[A, B](fa: Prod[F, G, A], fb: Prod[F, G, B]): Prod[F, G, (A, B)] =
    Prod(F.product(fa.first, fb.first), G.product(fa.second, fb.second))
}

Here’s the usage:

scala> val x = Prod(List(1), (Some(1): Option[Int]))
x: cats.data.Prod[List,Option,Int] = Prod(List(1),Some(1))

scala> val f = Prod(List((_: Int) + 1), (Some((_: Int) * 3): Option[Int => Int]))
f: cats.data.Prod[List,Option,Int => Int] = Prod(List(<function1>),Some(<function1>))

scala> Apply[Lambda[X => Prod[List, Option, X]]].ap(f)(x)
res1: cats.data.Prod[[+A]List[A],Option,Int] = Prod(List(2),Some(3))

The product of Apply passes a separate function to each side.

Product of applicative functors 

Finally we can implement the product of Applicative:

private[data] sealed abstract class ProdInstances2 extends ProdInstances3 {
  implicit def prodApplicative[F[_], G[_]](implicit FF: Applicative[F], GG: Applicative[G]): Applicative[Lambda[X => Prod[F, G, X]]] = new ProdApplicative[F, G] {
    def F: Applicative[F] = FF
    def G: Applicative[G] = GG
  }
}

sealed trait ProdApplicative[F[_], G[_]] extends Applicative[Lambda[X => Prod[F, G, X]]] with ProdApply[F, G] {
  def F: Applicative[F]
  def G: Applicative[G]
  def pure[A](a: A): Prod[F, G, A] = Prod(F.pure(a), G.pure(a))
}

Here’s a simple usage:

scala> Applicative[Lambda[X => Prod[List, Option, X]]].pure(1)
res2: cats.data.Prod[[+A]List[A],Option,Int] = Prod(List(1),Some(1))

We were able to create Prod(List(1), Some(1)) by calling pure(1).

Composition of Applicative 

Unlike monads in general, applicative functors are also closed under composition; so two sequentially-dependent idiomatic effects can generally be fused into one, their composition.

Thankfully Cats ships with the composition of Applicatives. There’s compose method in the typeclass instance:

@typeclass trait Applicative[F[_]] extends Apply[F] { self =>
  /**
   * `pure` lifts any value into the Applicative Functor
   *
   * Applicative[Option].pure(10) = Some(10)
   */
  def pure[A](x: A): F[A]

  /**
   * Two sequentially dependent Applicatives can be composed.
   *
   * The composition of Applicatives `F` and `G`, `F[G[x]]`, is also an Applicative
   *
   * Applicative[Option].compose[List].pure(10) = Some(List(10))
   */
  def compose[G[_]](implicit GG : Applicative[G]): Applicative[λ[α => F[G[α]]]] =
    new CompositeApplicative[F,G] {
      implicit def F: Applicative[F] = self
      implicit def G: Applicative[G] = GG
    }

  ....
}

Let’s try this out.

scala> Applicative[List].compose[Option].pure(1)
res3: List[Option[Int]] = List(Some(1))

So much nicer.

Product of applicative functions 

Something people seem to overlook is that Gibbons also introduces composition operators for applicative functions. An applicative function is a function in the form of A => F[B] where F forms an Applicative. This is similar to Kleisli composition of monadic functions, but better.

Here’s why. Kleisli composition will let you compose A => F[B] and B => F[C] using andThen, but note that F stays the same. On the other hand, AppFunc composes A => F[B] and B => G[C].

/**
 * [[Func]] is a function `A => F[B]`.
 *
 * See: [[https://www.cs.ox.ac.uk/jeremy.gibbons/publications/iterator.pdf The Essence of the Iterator Pattern]]
 */
sealed abstract class Func[F[_], A, B] { self =>
  def run: A => F[B]
  def map[C](f: B => C)(implicit FF: Functor[F]): Func[F, A, C] =
    Func.func(a => FF.map(self.run(a))(f))
}

object Func extends FuncInstances {
  /** function `A => F[B]. */
  def func[F[_], A, B](run0: A => F[B]): Func[F, A, B] =
    new Func[F, A, B] {
      def run: A => F[B] = run0
    }

  /** applicative function. */
  def appFunc[F[_], A, B](run0: A => F[B])(implicit FF: Applicative[F]): AppFunc[F, A, B] =
    new AppFunc[F, A, B] {
      def F: Applicative[F] = FF
      def run: A => F[B] = run0
    }

  /** applicative function using [[Unapply]]. */
  def appFuncU[A, R](f: A => R)(implicit RR: Unapply[Applicative, R]): AppFunc[RR.M, A, RR.A] =
    appFunc({ a: A => RR.subst(f(a)) })(RR.TC)
}

....

/**
 * An implementation of [[Func]] that's specialized to [[Applicative]].
 */
sealed abstract class AppFunc[F[_], A, B] extends Func[F, A, B] { self =>
  def F: Applicative[F]

  def product[G[_]](g: AppFunc[G, A, B]): AppFunc[Lambda[X => Prod[F, G, X]], A, B] =
    {
      implicit val FF: Applicative[F] = self.F
      implicit val GG: Applicative[G] = g.F
      Func.appFunc[Lambda[X => Prod[F, G, X]], A, B]{
        a: A => Prod(self.run(a), g.run(a))
      }
    }

  ....
}

Here’s how we can use it:

scala> import cats.data.Func
import cats.data.Func

scala> val f = Func.appFunc { x: Int => List(x.toString + "!") }
f: cats.data.AppFunc[List,Int,String] = cats.data.Func$$anon$6@4d93e6e1

scala> val g = Func.appFunc { x: Int => (Some(x.toString + "?"): Option[String]) }
g: cats.data.AppFunc[Option,Int,String] = cats.data.Func$$anon$6@17730e3

scala> val h = f product g
h: cats.data.AppFunc[[α]cats.data.Prod[List,Option,α],Int,String] = cats.data.Func$$anon$6@2292daee

scala> h.run(1)
res4: cats.data.Prod[List,Option,String] = Prod(List(1!),Some(1?))

As you can see, the two applicative functions are running side by side.

Composition of applicative functions 

Here’s andThen and compose:

  def compose[G[_], C](g: AppFunc[G, C, A]): AppFunc[Lambda[X => G[F[X]]], C, B] =
    {
      implicit val FF: Applicative[F] = self.F
      implicit val GG: Applicative[G] = g.F
      implicit val GGFF: Applicative[Lambda[X => G[F[X]]]] = GG.compose(FF)
      Func.appFunc[Lambda[X => G[F[X]]], C, B]({
        c: C => GG.map(g.run(c))(self.run)
      })
    }

  def andThen[G[_], C](g: AppFunc[G, B, C]): AppFunc[Lambda[X => F[G[X]]], A, C] =
    g.compose(self)
scala> val f = Func.appFunc { x: Int => List(x.toString + "!") }
f: cats.data.AppFunc[List,Int,String] = cats.data.Func$$anon$6@77e750e9

scala> val g = Func.appFunc { x: String => (Some(x + "?"): Option[String]) }
g: cats.data.AppFunc[Option,String,String] = cats.data.Func$$anon$6@2ec976ae

scala> val h = f andThen g
h: cats.data.AppFunc[[γ]cats.data.Nested[List,Option,γ],Int,String] = cats.data.Func$$anon$6@4865785

scala> h.run(1)
res5: cats.data.Nested[List,Option,String] = Nested(List(Some(1!?)))

EIP:

The two operators [snip] allow us to combine idiomatic computations in two different ways; we call them parallel and sequential composition, respectively.

Even though we had to introduce a new datatype Prod, combining applicative computations is an abstract concept that works for any Applicative.

We’ll continue from here.

day 12 

On day 11 we started reading Jeremy Gibbons’s Datatype-Generic Programming. We saw the creative use of Fix and Bifunctor.

When we moved on to The Essence of the Iterator Pattern, we found that Cats uses Const to represent monoidal applicatives like Int, but it’s currently missing a way to compose applicative functions.

We started with a custom datatype called Prod that represents a pair of F[A] and G[A], and then we also added AppFunc, which represents applicative functions. #388

Traverse 

The Essence of the Iterator Pattern:

Two of the three motivating examples McBride and Paterson provide for idiomatic computations — sequencing a list of monadic effects and transposing a matrix — are instances of a general scheme they call traversal. This involves iterating over the elements of a data structure, in the style of a map, but interpreting certain function applications idiomatically. … We capture this via a type class of Traversable data structures.

In Cats, this typeclass is called Traverse:

@typeclass trait Traverse[F[_]] extends Functor[F] with Foldable[F] { self =>

  /**
   * given a function which returns a G effect, thread this effect
   * through the running of this function on all the values in F,
   * returning an F[A] in a G context
   */
  def traverse[G[_]: Applicative, A, B](fa: F[A])(f: A => G[B]): G[F[B]]

  /**
   * thread all the G effects through the F structure to invert the
   * structure from F[G[_]] to G[F[_]]
   */
  def sequence[G[_]: Applicative, A](fga: F[G[A]]): G[F[A]] =
    traverse(fga)(ga => ga)
  ....
}

Note that the f takes the shape of A => G[B].

When m is specialised to the identity applicative functor, traversal reduces precisely (modulo the wrapper) to the functorial map over lists.

Cats’ identity applicative functor is defined as follows:

  type Id[A] = A
  implicit val Id: Bimonad[Id] =
    new Bimonad[Id] {
      def pure[A](a: A): A = a
      def extract[A](a: A): A = a
      def flatMap[A, B](a: A)(f: A => B): B = f(a)
      def coflatMap[A, B](a: A)(f: A => B): B = f(a)
      override def map[A, B](fa: A)(f: A => B): B = f(fa)
      override def ap[A, B](fa: A)(ff: A => B): B = ff(fa)
      override def flatten[A](ffa: A): A = ffa
      override def map2[A, B, Z](fa: A, fb: B)(f: (A, B) => Z): Z = f(fa, fb)
      override def lift[A, B](f: A => B): A => B = f
      override def imap[A, B](fa: A)(f: A => B)(fi: B => A): B = f(fa)
  }

Here’s how we can traverse over List(1, 2, 3) using Id.

scala> import cats._, cats.instances.all._
import cats._
import cats.instances.all._

scala> import cats.syntax.traverse._
import cats.syntax.traverse._

scala> List(1, 2, 3) traverse[Id, Int] { (x: Int) => x + 1 }
res0: cats.Id[List[Int]] = List(2, 3, 4)

In the case of a monadic applicative functor, traversal specialises to monadic map, and has the same uses. In fact, traversal is really just a slight generalisation of monadic map.

Let’s try using this for List:

scala> List(1, 2, 3) traverse { (x: Int) => (Some(x + 1): Option[Int]) }
res1: Option[List[Int]] = Some(List(2, 3, 4))

scala> List(1, 2, 3) traverse { (x: Int) => None }
res2: Option[List[Nothing]] = None

For a Naperian applicative functor, traversal transposes results.

We’re going to skip this one.

For a monoidal applicative functor, traversal accumulates values. The function reduce performs that accumulation, given an argument that assigns a value to each element

scala> import cats.data.Const
import cats.data.Const

scala> def reduce[A, B, F[_]](fa: F[A])(f: A => B)
         (implicit FF: Traverse[F], BB: Monoid[B]): B =
         {
           val g: A => Const[B, Unit] = { (a: A) => Const((f(a))) }
           val x = FF.traverse[Const[B, ?], A, Unit](fa)(g)
           x.getConst
         }
reduce: [A, B, F[_]](fa: F[A])(f: A => B)(implicit FF: cats.Traverse[F], implicit BB: cats.Monoid[B])B

Here’s how we can use this:

scala> reduce(List('a', 'b', 'c')) { c: Char => c.toInt }
res3: Int = 294

It sort of works, but it would be nicer if we could cut down the type annotations a bit. The problem is that, as it stands, the Scala 2.11 compiler cannot infer Const[B, ?]. There’s actually a trick called Unapply that nudges the compiler into trying several shapes, and we can use traverseU instead of traverse to take advantage of it:

scala> def reduce[A, B, F[_]](fa: F[A])(f: A => B)
         (implicit FF: Traverse[F], BB: Monoid[B]): B =
         {
           val x = fa traverseU { (a: A) => Const((f(a))) }
           x.getConst
         }
reduce: [A, B, F[_]](fa: F[A])(f: A => B)(implicit FF: cats.Traverse[F], implicit BB: cats.Monoid[B])B

We’ll find out what this is about later.
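As a quick sanity check (my own snippet, not from EIP), the traverseU-based reduce should now work without us spelling out Const[B, ?] at the call site:

// a sketch reusing the reduce defined just above; assumes the same REPL session
// with cats.instances.all._ already imported
reduce(List(1, 2, 3)) { i: Int => i }              // should give 6 under the Int addition Monoid
reduce(List('a', 'b', 'c')) { c: Char => c.toInt } // should give 294, same as before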

sequence function 

Applicative and Traverse are mentioned together by McBride and Paterson in Applicative programming with effects.

As background, until a few months ago (March 2015), the sequence function in the Control.Monad package looked like this:

-- | Evaluate each action in the sequence from left to right,
-- and collect the results.
sequence :: Monad m => [m a] -> m [a]

If I translate this into Scala, it would look like:

def sequence[G[_]: Monad, A](gas: List[G[A]]): G[List[A]]

It takes a list of monadic values and returns a monadic value of a list. This already looks pretty useful on its own, but whenever we find a hardcoded List like this, we should suspect that it could be replaced with a more general typeclass.

McBride and Paterson first generalized the sequence function to dist, by replacing Monad with Applicative:

def dist[G[_]: Applicative, A](gas: List[G[A]]): G[List[A]]

Next, they realized that dist is often called together with map, so they added a parameter for the applicative function and called the result traverse:

def traverse[G[_]: Applicative, A, B](as: List[A])(f: A => G[B]): G[List[B]]

Finally, they generalized the above signature into a typeclass, which in Cats is called Traverse:

@typeclass trait Traverse[F[_]] extends Functor[F] with Foldable[F] { self =>

  /**
   * given a function which returns a G effect, thread this effect
   * through the running of this function on all the values in F,
   * returning an F[A] in a G context
   */
  def traverse[G[_]: Applicative, A, B](fa: F[A])(f: A => G[B]): G[F[B]]

  /**
   * thread all the G effects through the F structure to invert the
   * structure from F[G[_]] to G[F[_]]
   */
  def sequence[G[_]: Applicative, A](fga: F[G[A]]): G[F[A]] =
    traverse(fga)(ga => ga)
  ....
}

Thus, as a matter of course, Traverse implements a datatype-generic sequence function, which is just traverse with the identity function. It’s conceptually easy to remember, because it simply flips F[G[A]] into G[F[A]]. You might have seen this function for Future in the standard library.

scala> import scala.concurrent.{ Future, ExecutionContext, Await }
import scala.concurrent.{Future, ExecutionContext, Await}

scala> import scala.concurrent.duration._
import scala.concurrent.duration._

scala> val x = {
         implicit val ec = scala.concurrent.ExecutionContext.global
         List(Future { 1 }, Future { 2 }).sequence
       }
x: scala.concurrent.Future[List[Int]] = List()

scala> Await.result(x, 1 second)
res4: List[Int] = List(1, 2)

Another useful thing might be to turn a List of Either into an Either.

scala> List(Right(1): Either[String, Int]).sequenceU
res5: scala.util.Either[String,List[Int]] = Right(List(1))

scala> List(Right(1): Either[String, Int], Left("boom"): Either[String, Int]).sequenceU
res6: scala.util.Either[String,List[Int]] = Left(boom)

Note that we just used sequenceU, which is the Unapply variant of sequence. We’ll look into Unapply shortly.

TraverseFilter 

Cats 0.7.0 adds a new typeclass called TraverseFilter.

/**
 * `TraverseFilter`, also known as `Witherable`, represents list-like structures
 * that can essentially have a [[traverse]] and a [[filter]] applied as a single
 * combined operation ([[traverseFilter]]).
 *
 * Must obey the laws defined in cats.laws.TraverseFilterLaws.
 *
 * Based on Haskell's [[https://hackage.haskell.org/package/witherable-0.1.3.3/docs/Data-Witherable.html Data.Witherable]]
 */
@typeclass trait TraverseFilter[F[_]] extends Traverse[F] with FunctorFilter[F] { self =>

  /**
   * A combined [[traverse]] and [[filter]]. Filtering is handled via `Option`
   * instead of `Boolean` such that the output type `B` can be different than
   * the input type `A`.
   */
  def traverseFilter[G[_]: Applicative, A, B](fa: F[A])(f: A => G[Option[B]]): G[F[B]]


  /**
   *
   * Filter values inside a `G` context.
   *
   * This is a generalized version of Haskell's [[http://hackage.haskell.org/package/base-4.9.0.0/docs/Control-Monad.html#v:filterM filterM]].
   * [[http://stackoverflow.com/questions/28872396/haskells-filterm-with-filterm-x-true-false-1-2-3 This StackOverflow question]] about `filterM` may be helpful in understanding how it behaves.
   */
  def filterA[G[_], A](fa: F[A])(f: A => G[Boolean])(implicit G: Applicative[G]): G[F[A]] =
    traverseFilter(fa)(a => G.map(f(a))(if (_) Some(a) else None))

}

filterA 

As noted in the comment, filterA is a more generalized (or weaker) version of filterM. Instead of requiring a Monad[G] it needs Applicative[G].

Here’s how we can use this:

scala> import cats._, cats.instances.all._, cats.syntax.traverseFilter._
import cats._
import cats.instances.all._
import cats.syntax.traverseFilter._

scala> List(1, 2, 3) filterA { x => List(true, false) }
res7: List[List[Int]] = List(List(1, 2, 3), List(1, 2), List(1, 3), List(1), List(2, 3), List(2), List(3), List())

scala> Vector(1, 2, 3) filterA { x => Vector(true, false) }
res8: scala.collection.immutable.Vector[scala.collection.immutable.Vector[Int]] = Vector(Vector(1, 2, 3), Vector(1, 2), Vector(1, 3), Vector(1), Vector(2, 3), Vector(2), Vector(3), Vector())
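The List-of-Booleans examples above are a bit mind-bending, so here’s a sketch of my own using Option as the effect: if the predicate effect fails for any element, the whole result collapses to None.

// a sketch, assuming the same imports as above
List(1, 2, 3) filterA { x => if (x < 4) Some(x % 2 == 1) else None }
// expected: Some(List(1, 3)) -- every effect succeeded, so the odd elements are kept

List(1, 2, 3) filterA { x => if (x < 3) Some(true) else None }
// expected: None -- the effect failed for 3, so the whole computation fails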

Coercing type inference using Unapply 

EIP:

We will have a number of new datatypes with coercion functions like Id, unId, Const and unConst. To reduce clutter, we introduce a common notation for such coercions.

In Scala, implicits and type inference take us pretty far. But when we deal with typeclasses a lot, we also run into some of the weaknesses of Scala’s type inference. One of the issues we come across frequently is its inability to infer partially applied parameterized types, also known as SI-2712. If you’re reading this, please jump to the issue page, upvote the bug, or help solve the problem.

Unapply 

To work around this issue, Cats uses a typeclass called Unapply:

/**
 * A typeclass that is used to help guide scala's type inference to
 * find typeclass instances for types which have shapes which differ
 * from what their typeclasses are looking for.
 *
 * For example, [[Functor]] is defined for types in the shape
 * F[_]. Scala has no problem finding instance of Functor which match
 * this shape, such as Functor[Option], Functor[List], etc. There is
 * also a functor defined for some types which have the Shape F[_,_]
 * when one of the two 'holes' is fixed. For example. there is a
 * Functor for Map[A,?] for any A, and for Either[A,?] for any A,
 * however the scala compiler will not find them without some coercing.
 */
trait Unapply[TC[_[_]], MA] {
  // a type constructor which is properly kinded for the typeclass
  type M[_]
  // the type applied to the type constructor to make an MA
  type A

  // the actual typeclass instance found
  def TC: TC[M]

  // a function which will coerce the MA value into one of type M[A]
  // this will end up being the identity function, but we can't supply
  // it until we have proven that MA and M[A] are the same type
  def subst: MA => M[A]
}

I think it’s easier to demonstrate this using an example.

scala> import cats._, cats.instances.all._
import cats._
import cats.instances.all._

scala> def foo[F[_]: Applicative](fa: F[Int]): F[Int] = fa
foo: [F[_]](fa: F[Int])(implicit evidence$1: cats.Applicative[F])F[Int]

In the above, foo is a simple function that returns the passed-in value fa: F[Int], where F forms an Applicative. Since Either[String, Int] is an applicative, it should qualify.

scala> foo(Right(1): Either[String, Int])
<console>:19: error: no type parameters for method foo: (fa: F[Int])(implicit evidence$1: cats.Applicative[F])F[Int] exist so that it can be applied to arguments (Either[String,Int])
 --- because ---
argument expression's type is not compatible with formal parameter type;
 found   : Either[String,Int]
 required: ?F[Int]
       foo(Right(1): Either[String, Int])
       ^
<console>:19: error: type mismatch;
 found   : Either[String,Int]
 required: F[Int]
       foo(Right(1): Either[String, Int])
                   ^
<console>:19: error: could not find implicit value for evidence parameter of type cats.Applicative[F]
       foo(Right(1): Either[String, Int])
          ^

We got the error “argument expression’s type is not compatible with formal parameter type.” We can make an Unapply version of foo as follows:

scala> def fooU[FA](fa: FA)(implicit U: Unapply[Applicative, FA]): U.M[U.A] =
         U.subst(fa)
fooU: [FA](fa: FA)(implicit U: cats.Unapply[cats.Applicative,FA])U.M[U.A]

Now let’s try passing in exactly the same argument as before:

scala> fooU(Right(1): Either[String, Int])
res1: scala.util.Either[String,Int] = Right(1)

It works. Let’s look into how this is implemented. For Either, a monad is formed for Either[AA, ?], which means that during map the right-hand type parameter can change (for example, Either[String, Int] becoming Either[String, Double]), but the left side stays put.

sealed abstract class Unapply2Instances extends Unapply3Instances {
  type Aux2Right[TC[_[_]], MA, F[_,_], AA, B] = Unapply[TC, MA] {
    type M[X] = F[AA,X]
    type A = B
  }

  implicit def unapply2right[TC[_[_]], F[_,_], AA, B](implicit tc: TC[F[AA,?]]): Aux2Right[TC,F[AA,B], F, AA, B] = new Unapply[TC, F[AA,B]] {
     type M[X] = F[AA, X]
     type A = B
     def TC: TC[F[AA, ?]] = tc
     def subst: F[AA, B] => M[A] = identity
   }

   ....
}

First, Cats defines Aux2Right as a type alias that pins down the abstract types M[_] and A. Next, it defines an implicit converter from an arbitrary typeclass instance TC[F[AA,?]] to Aux2Right[TC, F[AA,B], F, AA, B].
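To see what the resolved instance looks like, we can summon it directly with implicitly (a sketch of my own; U is just a local name, and M, A, and subst are the members of the Unapply trait quoted above):

// a sketch: summoning the instance produced by unapply2right
val U = implicitly[Unapply[Applicative, Either[String, Int]]]
// the instance found has M[X] = Either[String, X] and A = Int,
// and subst lets us treat the value as a properly kinded M[A]
val ma: U.M[U.A] = U.subst(Right(1): Either[String, Int])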

Bedazzle my instance 

One area where Unapply is used in Cats is where infix operators, also known as “syntax”, are injected. These implicit converters are called “bedazzlers” in cecc3a.

package cats
package syntax

trait FunctorSyntax1 {
  implicit def functorSyntaxU[FA](fa: FA)(implicit U: Unapply[Functor,FA]): Functor.Ops[U.M, U.A] =
    new Functor.Ops[U.M, U.A]{
      val self = U.subst(fa)
      val typeClassInstance = U.TC
    }
}

trait FunctorSyntax extends Functor.ToFunctorOps with FunctorSyntax1

Let’s try using *> operator from Apply:

scala> import cats.syntax.cartesian._
import cats.syntax.cartesian._

scala> (Right(1): Either[String, Int]) *> Right(2)
res2: scala.util.Either[String,Int] = Right(2)

This works, likely thanks to the Unapply.

The limitations 

The AppFunc we looked at on day 11 lets us create ever more complex compositions of applicative functor instances. It would be half as useful without the help of Unapply, since part of the benefit is that it can derive the instances automatically. At the same time, Unapply cannot possibly be the ultimate solution, because it requires all shapes to be spelled out up front, and the most complex type it currently handles is F[AA, B, ?]. We could easily go beyond that, for example by taking a product of compositions of two-parameter monad instances.

I can only guess that this type inference problem ultimately needs to be solved by the Scala compiler itself, by treating sequential composition and parallel composition (product) as first-class constructs in the type system.

Shape and contents 

EIP:

In addition to being parametrically polymorphic in the collection elements, the generic traverse operation is parametrised along two further dimensions: the datatype being traversed, and the applicative functor in which the traversal is interpreted. Specialising the latter to lists as a monoid yields a generic contents operation:

Here’s how we can implement this with Cats:

scala> import cats._, cats.instances.all._
import cats._
import cats.instances.all._

scala> import cats.data.Const
import cats.data.Const

scala> def contents[F[_], A](fa: F[A])(implicit FF: Traverse[F]): Const[List[A], F[Unit]] =
         {
           val contentsBody: A => Const[List[A], Unit] = { (a: A) => Const(List(a)) }
           FF.traverseU(fa)(contentsBody)
         }
contents: [F[_], A](fa: F[A])(implicit FF: cats.Traverse[F])cats.data.Const[List[A],F[Unit]]

Now we can take any datatype that supports Traverse and turn it into a List.

scala> contents(Vector(1, 2, 3)).getConst
res0: List[Int] = List(1, 2, 3)

I’m actually not sure whether the result is supposed to come out in reverse order here.

The other half of the decomposition is obtained simply by a map, which is to say, a traversal interpreted in the identity idiom.

When Gibbons says identity idiom, he means the identity applicative functor, Id[_].

scala> def shape[F[_], A](fa: F[A])(implicit FF: Traverse[F]): Id[F[Unit]] =
         {
           val shapeBody: A => Id[Unit] = { (a: A) => () }
           FF.traverseU(fa)(shapeBody)
         }
shape: [F[_], A](fa: F[A])(implicit FF: cats.Traverse[F])cats.Id[F[Unit]]

Here’s the shape for Vector(1, 2, 3):

scala> shape(Vector(1, 2, 3))
res1: cats.Id[scala.collection.immutable.Vector[Unit]] = Vector((), (), ())

EIP:

This pair of traversals nicely illustrates the two aspects of iterations that we are focussing on, namely mapping and accumulation.

Next EIP demonstrates applicative composition by first combining shape and contents together like this:

scala> import cats.data.Prod
import cats.data.Prod

scala> def decompose[F[_], A](fa: F[A])(implicit FF: Traverse[F]) =
         Prod[Const[List[A], ?], Id, F[Unit]](contents(fa), shape(fa))
decompose: [F[_], A](fa: F[A])(implicit FF: cats.Traverse[F])cats.data.Prod[[β]cats.data.Const[List[A],β],cats.Id,F[Unit]]

scala> val d = decompose(Vector(1, 2, 3))
d: cats.data.Prod[[β]cats.data.Const[List[Int],β],cats.Id,scala.collection.immutable.Vector[Unit]] = Prod(Const(List(1, 2, 3)),Vector((), (), ()))

scala> d.first
res2: cats.data.Const[List[Int],scala.collection.immutable.Vector[Unit]] = Const(List(1, 2, 3))

scala> d.second
res3: cats.Id[scala.collection.immutable.Vector[Unit]] = Vector((), (), ())

The problem here is that we are running traverse twice.

Is it possible to fuse the two traversals into one? The product of applicative functors allows exactly this.

Let’s try rewriting this using AppFunc.

scala> import cats.data.AppFunc
import cats.data.AppFunc

scala> import cats.data.Func.appFunc
import cats.data.Func.appFunc

scala> def contentsBody[A]: AppFunc[Const[List[A], ?], A, Unit] =
         appFunc[Const[List[A], ?], A, Unit] { (a: A) => Const(List(a)) }
contentsBody: [A]=> cats.data.AppFunc[[β]cats.data.Const[List[A],β],A,Unit]

scala> def shapeBody[A]: AppFunc[Id, A, Unit] =
         appFunc { (a: A) => ((): Id[Unit]) }
shapeBody: [A]=> cats.data.AppFunc[cats.Id,A,Unit]

scala> def decompose[F[_], A](fa: F[A])(implicit FF: Traverse[F]) =
         (contentsBody[A] product shapeBody[A]).traverse(fa)
decompose: [F[_], A](fa: F[A])(implicit FF: cats.Traverse[F])cats.data.Prod[[β]cats.data.Const[List[A],β],cats.Id,F[Unit]]

scala> val d = decompose(Vector(1, 2, 3))
d: cats.data.Prod[[β]cats.data.Const[List[Int],β],cats.Id,scala.collection.immutable.Vector[Unit]] = Prod(Const(List(1, 2, 3)),Vector((), (), ()))

scala> d.first
res4: cats.data.Const[List[Int],scala.collection.immutable.Vector[Unit]] = Const(List(1, 2, 3))

scala> d.second
res5: cats.Id[scala.collection.immutable.Vector[Unit]] = Vector((), (), ())

The return type of decompose is a bit messy, but it’s inferred by AppFunc: Prod[[X_kp1]Const[List[A], X_kp1], Id, F[Unit]].

Applicative wordcount 

Skipping over to section 6 of EIP “Modular programming with applicative functors.”

EIP:

There is an additional benefit of applicative functors over monads, which concerns the modular development of complex iterations from simpler aspects. ….

As an illustration, we consider the Unix word-counting utility wc, which computes the numbers of characters, words and lines in a text file.

We can translate the full example using applicative function composition, which is only available on my personal branch. (PR #388 is pending)

Modular iterations, applicatively 

scala> import cats._, cats.instances.all._
import cats._
import cats.instances.all._

scala> import cats.data.{ Func, AppFunc, Const }
import cats.data.{Func, AppFunc, Const}

scala> import Func.{ appFunc, appFuncU }
import Func.{appFunc, appFuncU}

The character-counting slice of the wc program accumulates a result in the integers-as-monoid applicative functor:

Here’s a type alias to treat Int as a monoidal applicative:

scala> type Count[A] = Const[Int, A]
defined type alias Count

In the above, A is a phantom type we don’t need, so let’s just hardcode it to Unit:

scala> def liftInt(i: Int): Count[Unit] = Const(i)
liftInt: (i: Int)Count[Unit]

scala> def count[A](a: A): Count[Unit] = liftInt(1)
count: [A](a: A)Count[Unit]

The body of the iteration simply yields 1 for every element:

scala> val countChar: AppFunc[Count, Char, Unit] = appFunc(count)
countChar: cats.data.AppFunc[Count,Char,Unit] = cats.data.Func$$anon$6@666c81aa

To use this AppFunc, we would call traverse with a List[Char]. Here’s a quote I found from Hamlet:

scala> val text = ("Faith, I must leave thee, love, and shortly too.\n" +
                  "My operant powers their functions leave to do.\n").toList
text: List[Char] =
List(F, a, i, t, h, ,,  , I,  , m, u, s, t,  , l, e, a, v, e,  , t, h, e, e, ,,  , l, o, v, e, ,,  , a, n, d,  , s, h, o, r, t, l, y,  , t, o, o, .,
, M, y,  , o, p, e, r, a, n, t,  , p, o, w, e, r, s,  , t, h, e, i, r,  , f, u, n, c, t, i, o, n, s,  , l, e, a, v, e,  , t, o,  , d, o, .,
)

scala> countChar traverse text
res0: Count[List[Unit]] = Const(96)

This looks ok.

Counting the lines (in fact, the newline characters, thereby ignoring a final ‘line’ that is not terminated with a newline character) is similar: the difference is simply what number to use for each element, namely 1 for a newline and 0 for anything else.

scala> import cats.syntax.eq._
import cats.syntax.eq._

scala> def testIf(b: Boolean): Int = if (b) 1 else 0
testIf: (b: Boolean)Int

scala> val countLine: AppFunc[Count, Char, Unit] =
         appFunc { (c: Char) => liftInt(testIf(c === '\n')) }
countLine: cats.data.AppFunc[Count,Char,Unit] = cats.data.Func$$anon$6@2506c434

Again, to use this we’ll just call traverse:

scala> countLine traverse text
res1: Count[List[Unit]] = Const(2)

Counting the words is trickier, because it necessarily involves state. Here, we use the State monad with a boolean state, indicating whether we are currently within a word, and compose this with the applicative functor for counting.

scala> def isSpace(c: Char): Boolean = (c === ' ' || c === '\n' || c === '\t')
isSpace: (c: Char)Boolean

scala> val countWord =
         appFuncU { (c: Char) =>
           import cats.data.State.{ get, set }
           for {
             x <- get[Boolean]
             y = !isSpace(c)
             _ <- set(y)
           } yield testIf(y && !x)
         } andThen appFunc(liftInt)
countWord: cats.data.AppFunc[[γ]cats.data.Nested[[X]cats.data.StateT[cats.Eval,Boolean,X],Count,γ],Char,Unit] = cats.data.Func$$anon$6@4063d8a4

Traversing this AppFunc returns a State datatype:

scala> val x = countWord traverse text
x: cats.data.Nested[[X]cats.data.StateT[cats.Eval,Boolean,X],Count,List[Unit]] = Nested(cats.data.StateT@3b411e74)

We then need to run this state machine with an initial value false to get the result:

scala> x.value.runA(false).value
res2: Count[List[Unit]] = Const(17)

17 words.

Like we did with shape and contents, we can fuse the traversals into a single pass by combining the applicative functions.

scala> val countAll = countWord product countLine product countChar
countAll: cats.data.AppFunc[[α]cats.data.Prod[[α]cats.data.Prod[[γ]cats.data.Nested[[X]cats.data.StateT[cats.Eval,Boolean,X],Count,γ],Count,α],Count,α],Char,Unit] = cats.data.Func$$anon$6@2f9df938

scala> val allResults = countAll traverse text
allResults: cats.data.Prod[[α]cats.data.Prod[[γ]cats.data.Nested[[X]cats.data.StateT[cats.Eval,Boolean,X],Count,γ],Count,α],Count,List[Unit]] = Prod(Prod(Nested(cats.data.StateT@6cb709d0),Const(2)),Const(96))

scala> val charCount = allResults.second
charCount: Count[List[Unit]] = Const(96)

scala> val lineCount = allResults.first.second
lineCount: Count[List[Unit]] = Const(2)

scala> val wordCountState = allResults.first.first
wordCountState: cats.data.Nested[[X]cats.data.StateT[cats.Eval,Boolean,X],Count,List[Unit]] = Nested(cats.data.StateT@6cb709d0)

scala> val wordCount = wordCountState.value.runA(false).value
wordCount: Count[List[Unit]] = Const(17)

EIP:

Applicative functors have a richer algebra of composition operators, which can often replace the use of monad transformers; there is the added advantage of being able to compose applicative but non-monadic computations.

That’s it for today.

day 13 

On day 12 we continued exploring The Essence of the Iterator Pattern, and looked at Traverse, Unapply, shape and contents, and the applicative wordcount.

Id datatype 

We saw Id in passing while reading EIP, but it’s an interesting tool, so we should revisit it. This is also called Identity, Identity functor, or Identity monad, depending on the context. The definition of the datatype is quite simple:

  type Id[A] = A

Here it is with the documentation and the typeclass instances:

/**
 * Identity, encoded as `type Id[A] = A`, a convenient alias to make
 * identity instances well-kinded.
 *
 * The identity monad can be seen as the ambient monad that encodes
 * the effect of having no effect. It is ambient in the sense that
 * plain pure values are values of `Id`.
 *
 * For instance, the [[cats.Functor]] instance for `[[cats.Id]]`
 * allows us to apply a function `A => B` to an `Id[A]` and get an
 * `Id[B]`. However, an `Id[A]` is the same as `A`, so all we're doing
 * is applying a pure function of type `A => B` to a pure value  of
 * type `A` to get a pure value of type `B`. That is, the instance
 * encodes pure unary function application.
 */
  type Id[A] = A
  implicit val Id: Bimonad[Id] with Traverse[Id] =
    new Bimonad[Id] with Traverse[Id] {
      def pure[A](a: A): A = a
      def extract[A](a: A): A = a
      def flatMap[A, B](a: A)(f: A => B): B = f(a)
      def coflatMap[A, B](a: A)(f: A => B): B = f(a)
      override def map[A, B](fa: A)(f: A => B): B = f(fa)
      override def ap[A, B](fa: A)(ff: A => B): B = ff(fa)
      override def flatten[A](ffa: A): A = ffa
      override def map2[A, B, Z](fa: A, fb: B)(f: (A, B) => Z): Z = f(fa, fb)
      override def lift[A, B](f: A => B): A => B = f
      override def imap[A, B](fa: A)(f: A => B)(fi: B => A): B = f(fa)
      def foldLeft[A, B](a: A, b: B)(f: (B, A) => B) = f(b, a)
      def foldRight[A, B](a: A, lb: Eval[B])(f: (A, Eval[B]) => Eval[B]): Eval[B] =
        f(a, lb)
      def traverse[G[_], A, B](a: A)(f: A => G[B])(implicit G: Applicative[G]): G[B] =
        f(a)
  }

Here’s how to create a value of Id:

scala> import cats._
import cats._

scala> val one: Id[Int] = 1
one: cats.Id[Int] = 1

Id as Functor 

The functor instance for Id is the same as function application:

scala> Functor[Id].map(one) { _ + 1 }
res0: cats.Id[Int] = 2

Id as Apply 

Apply’s ap method takes Id[A => B], which in reality is just A => B, and it too is implemented as function application:

scala> Apply[Id].ap({ _ + 1 }: Id[Int => Int])(one)
res1: cats.Id[Int] = 2

Id as FlatMap 

FlatMap’s flatMap method, which takes A => Id[B], is the same story. It’s implemented as function application:

scala> FlatMap[Id].flatMap(one) { _ + 1 }
res2: cats.Id[Int] = 2

What’s the point of Id? 

At a glance, the Id datatype does not look very useful. The hint is in the Scaladoc of the definition: a convenient alias to make identity instances well-kinded. In other words, there are situations where we need to lift some type A into F[A], and Id can be used to do just that without introducing any effects. We’ll see a fuller example of that later.
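Before that, here’s a tiny sketch of my own of the basic idea: a function written against an abstract Monad[F] can be run on plain values by picking F to be Id.

import cats._, cats.instances.all._

// a sketch: twice is written against any Monad[F] (the name is made up for this example)
def twice[F[_]: Monad](fa: F[Int]): F[Int] =
  Monad[F].map(fa) { _ * 2 }

// choosing F = Id runs it on a plain value, with no effect involved
val n: Id[Int] = twice[Id](21) // 42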

Eval datatype 

Cats also comes with the Eval datatype, which controls evaluation.

sealed abstract class Eval[+A] extends Serializable { self =>

  /**
   * Evaluate the computation and return an A value.
   *
   * For lazy instances (Later, Always), any necessary computation
   * will be performed at this point. For eager instances (Now), a
   * value will be immediately returned.
   */
  def value: A

  /**
   * Ensure that the result of the computation (if any) will be
   * memoized.
   *
   * Practically, this means that when called on an Always[A] a
   * Later[A] with an equivalent computation will be returned.
   */
  def memoize: Eval[A]
}

There are several ways to create an Eval value:

object Eval extends EvalInstances {

  /**
   * Construct an eager Eval[A] value (i.e. Now[A]).
   */
  def now[A](a: A): Eval[A] = Now(a)

  /**
   * Construct a lazy Eval[A] value with caching (i.e. Later[A]).
   */
  def later[A](a: => A): Eval[A] = new Later(a _)

  /**
   * Construct a lazy Eval[A] value without caching (i.e. Always[A]).
   */
  def always[A](a: => A): Eval[A] = new Always(a _)

  /**
   * Defer a computation which produces an Eval[A] value.
   *
   * This is useful when you want to delay execution of an expression
   * which produces an Eval[A] value. Like .flatMap, it is stack-safe.
   */
  def defer[A](a: => Eval[A]): Eval[A] =
    new Eval.Call[A](a _) {}

  /**
   * Static Eval instances for some common values.
   *
   * These can be useful in cases where the same values may be needed
   * many times.
   */
  val Unit: Eval[Unit] = Now(())
  val True: Eval[Boolean] = Now(true)
  val False: Eval[Boolean] = Now(false)
  val Zero: Eval[Int] = Now(0)
  val One: Eval[Int] = Now(1)

  ....
}

Eval.later 

The most useful one is Eval.later, which captures a by-name parameter in a lazy val.

scala> import cats._
import cats._

scala> var g: Int = 0
g: Int = 0

scala> val x = Eval.later {
  g = g + 1
  g
}
x: cats.Eval[Int] = cats.Later@7ec6279a

scala> g = 2
g: Int = 2

scala> x.value
res0: Int = 3

scala> x.value
res1: Int = 3

The value is cached, so the second evaluation doesn’t happen.

Eval.now 

Eval.now evaluates eagerly, and then captures the result in a field, so the second evaluation doesn’t happen.

scala> val y = Eval.now {
  g = g + 1
  g
}
y: cats.Eval[Int] = Now(4)

scala> y.value
res2: Int = 4

scala> y.value
res3: Int = 4

Eval.always 

Eval.always doesn’t cache.

scala> val z = Eval.always {
  g = g + 1
  g
}
z: cats.Eval[Int] = cats.Always@22764a85

scala> z.value
res4: Int = 5

scala> z.value
res5: Int = 6

stack-safe lazy computation 

One useful feature of Eval is that it supports stack-safe lazy computation via map and flatMap methods, which use an internal trampoline to avoid stack overflow.

You can also defer a computation that produces an Eval[A] value using Eval.defer. Here’s how foldRight is implemented for List, for example:

def foldRight[A, B](fa: List[A], lb: Eval[B])(f: (A, Eval[B]) => Eval[B]): Eval[B] = {
  def loop(as: List[A]): Eval[B] =
    as match {
      case Nil => lb
      case h :: t => f(h, Eval.defer(loop(t)))
    }
  Eval.defer(loop(fa))
}
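As a quick sketch of my own, we can use this foldRight via Foldable to fold a fairly long list without blowing the stack, since every step is suspended in Eval:

import cats._, cats.instances.all._

// a sketch: nothing is evaluated until .value, and the work is trampolined
val total: Eval[Long] =
  Foldable[List].foldRight((1 to 100000).toList, Eval.now(0L)) { (x, acc) =>
    acc map { _ + x }
  }
total.value // should be 5000050000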

Let’s try blowing up the stack on purpose:

scala> :paste
object OddEven0 {
  def odd(n: Int): String = even(n - 1)
  def even(n: Int): String = if (n <= 0) "done" else odd(n - 1)
}

// Exiting paste mode, now interpreting.

defined object OddEven0

scala> OddEven0.even(200000)
java.lang.StackOverflowError
  at OddEven0$.even(<console>:15)
  at OddEven0$.odd(<console>:14)
  at OddEven0$.even(<console>:15)
  at OddEven0$.odd(<console>:14)
  at OddEven0$.even(<console>:15)
  ....

Here’s my attempt at making a safer version:

scala> :paste
// Entering paste mode (ctrl-D to finish)
object OddEven1 {
  def odd(n: Int): Eval[String] = Eval.defer {even(n - 1)}
  def even(n: Int): Eval[String] =
    Eval.now { n <= 0 } flatMap {
      case true => Eval.now {"done"}
      case _    => Eval.defer { odd(n - 1) }
    }
}

// Exiting paste mode, now interpreting.

defined object OddEven1

scala> OddEven1.even(200000).value
res6: String = done

In earlier versions of Cats the above caused a stack overflow, but as BryanM let me know in the comments, David Gregory fixed it in #769, so it works now.

The Abstract Future 

One blog post that I occasionally see mentioned as a powerful application of monads, especially in the context of building large applications, is The Abstract Future. It was originally posted to the precog.com engineering blog on November 27, 2012 by Kris Nuttycombe (@nuttycom).

At Precog, we use Futures extensively, both in a direct fashion and to allow us a composable way to interact with subsystems that are implemented atop Akka’s actor framework. Futures are arguably one of the best tools we have for reining in the complexity of asynchronous programming, and so our many of our early versions of APIs in our codebase exposed Futures directly. ….

What this means is that from the perspective of the consumer of the DatasetModule interface, the only aspect of Future that we’re relying upon is the ability to order operations in a statically checked fashion; the sequencing, rather than any particular semantics related to Future’s asynchrony, is the relevant piece of information provided by the type. So, the following generalization becomes natural.

Here I’ll use a similar example to Yoshida-san’s.

scala> import cats._, cats.instances.all._
import cats._
import cats.instances.all._

scala> :paste
// Entering paste mode (ctrl-D to finish)
case class User(id: Long, name: String)

// In actual code, probably more than 2 errors
sealed trait Error
object Error {
  final case class UserNotFound(userId: Long) extends Error
  final case class ConnectionError(message: String) extends Error
}
trait UserRepos[F[_]] {
  implicit def F: Monad[F]
  def userRepo: UserRepo
  trait UserRepo {
    def followers(userId: Long): F[List[User]]
  }
}

// Exiting paste mode, now interpreting.

defined class User
defined trait Error
defined object Error
defined trait UserRepos

UserRepos with Future 

Let’s start implementing the UserRepos module using Future.

scala> :paste
// Entering paste mode (ctrl-D to finish)
import scala.concurrent.{ Future, ExecutionContext, Await }
import scala.concurrent.duration.Duration

class UserRepos0(implicit ec: ExecutionContext) extends UserRepos[Future] {
  override val F = implicitly[Monad[Future]]
  override val userRepo: UserRepo = new UserRepo0 {}
  trait UserRepo0 extends UserRepo {
    def followers(userId: Long): Future[List[User]] = Future.successful { Nil }
  }
}

// Exiting paste mode, now interpreting.

import scala.concurrent.{Future, ExecutionContext, Await}
import scala.concurrent.duration.Duration
defined class UserRepos0

Here’s how to use it:

scala> val service = new UserRepos0()(ExecutionContext.global)
service: UserRepos0 = UserRepos0@5ecf0c1d

scala> val xs = service.userRepo.followers(1L)
xs: scala.concurrent.Future[List[User]] = scala.concurrent.impl.Promise$KeptPromise@484fc3b6

Now we have an asynchronous result. Let’s say during testing we would like it to be synchronous.

UserRepos with Id 

In a test, we probably don’t want to worry about the fact that the computation is being performed asynchronously; all that we care about is that we obtain a correct result. ….

For most cases, we’ll use the identity monad for testing. Suppose that we’re testing the piece of functionality described earlier, which has computed a result from the combination of a load, a sort, take and reduce. The test framework need never consider the monad that it’s operating in.

This is where Id datatype can be used.

scala> :paste
// Entering paste mode (ctrl-D to finish)
class TestUserRepos extends UserRepos[Id] {
  override val F = implicitly[Monad[Id]]
  override val userRepo: UserRepo = new UserRepo0 {}
  trait UserRepo0 extends UserRepo {
    def followers(userId: Long): List[User] =
      userId match {
        case 0L => List(User(1, "Michael"))
        case 1L => List(User(0, "Vito"))
        case x =>  sys.error("not found")
      }
  }
}

// Exiting paste mode, now interpreting.

defined class TestUserRepos

Here’s how to use it:

scala> val testRepo = new TestUserRepos {}
testRepo: TestUserRepos = $anon$1@49b99d49

scala> val ys = testRepo.userRepo.followers(1L)
ys: cats.Id[List[User]] = List(User(0,Vito))

Coding in abstract 

Now that we’re able to abstract over the type constructor of followers, let’s try implementing isFriends from day 10, which checks whether two users follow each other.

scala> :paste
// Entering paste mode (ctrl-D to finish)
trait UserServices[F[_]] { this: UserRepos[F] =>
  def userService: UserService = new UserService
  class UserService {
    def isFriends(user1: Long, user2: Long): F[Boolean] =
      F.flatMap(userRepo.followers(user1)) { a =>
        F.map(userRepo.followers(user2)) { b =>
          a.exists(_.id == user2) && b.exists(_.id == user1)
        }
      }
  }
}

// Exiting paste mode, now interpreting.

defined trait UserServices

Here’s how to use it:

scala> val testService = new TestUserRepos with UserServices[Id] {}
testService: TestUserRepos with UserServices[cats.Id] = $anon$1@297a3171

scala> testService.userService.isFriends(0L, 1L)
res0: cats.Id[Boolean] = true

The above demonstrates that isFriends can be written without knowing anything about F[_] apart from the fact that it forms a Monad. It would be nice if I could use the infix flatMap and map methods while keeping F abstract. I tried creating FlatMapOps(fa) manually, but that resulted in an abstract method error at runtime. The actM macro that we implemented on day 6 seems to work ok:

scala> :paste
// Entering paste mode (ctrl-D to finish)
trait UserServices[F[_]] { this: UserRepos[F] =>
  def userService: UserService = new UserService
  class UserService {
    import example.MonadSyntax._
    def isFriends(user1: Long, user2: Long): F[Boolean] =
      actM[F, Boolean] {
        val a = userRepo.followers(user1).next
        val b = userRepo.followers(user2).next
        a.exists(_.id == user2) && b.exists(_.id == user1)
      }
  }
}

// Exiting paste mode, now interpreting.

defined trait UserServices

scala> val testService = new TestUserRepos with UserServices[Id] {}
testService: TestUserRepos with UserServices[cats.Id] = $anon$1@1b971bb9

scala> testService.userService.isFriends(0L, 1L)
res1: cats.Id[Boolean] = true

UserRepos with XorT 

We can also combine this with XorT (a.k.a. EitherT) over Future to carry a custom error type.

scala> :paste
// Entering paste mode (ctrl-D to finish)
import cats.data.XorT
class UserRepos1(implicit ec: ExecutionContext) extends UserRepos[XorT[Future, Error, ?]] {
  override val F = implicitly[Monad[XorT[Future, Error, ?]]]
  override val userRepo: UserRepo = new UserRepo1 {}
  trait UserRepo1 extends UserRepo {
    def followers(userId: Long): XorT[Future, Error, List[User]] =
      userId match {
        case 0L => XorT.right(Future { List(User(1, "Michael")) })
        case 1L => XorT.right(Future { List(User(0, "Vito")) })
        case x =>
          XorT.left(Future.successful { Error.UserNotFound(x) })
      }
  }
}

// Exiting paste mode, now interpreting.

import cats.data.XorT
defined class UserRepos1

Here’s how to use it:

scala> val service1 = {
  import ExecutionContext.Implicits._
  new UserRepos1 with UserServices[XorT[Future, Error, ?]] {}
}
service1: UserRepos1 with UserServices[[γ]cats.data.XorT[scala.concurrent.Future,Error,γ]] = $anon$1@4da0e55f

scala> {
  import scala.concurrent.duration._
  Await.result(service1.userService.isFriends(0L, 1L).value, 1 second)
}
res2: cats.data.Xor[Error,Boolean] = Right(true)

scala> {
  import scala.concurrent.duration._
  Await.result(service1.userService.isFriends(0L, 2L).value, 1 second)
}
res3: cats.data.Xor[Error,Boolean] = Left(UserNotFound(2))

Note that for all three versions of services, I was able to reuse the UserServices trait without any changes.

That’s it for today.

day 14 

On day 13 we looked at Id datatype, Eval datatype, and The Abstract Future.

SemigroupK 

The Semigroup we saw on day 4 is the bread and butter of functional programming, and it shows up in many places.

scala> import cats._, cats.instances.all._, cats.syntax.semigroup._
import cats._
import cats.instances.all._
import cats.syntax.semigroup._

scala> List(1, 2, 3) |+| List(4, 5, 6)
res0: List[Int] = List(1, 2, 3, 4, 5, 6)

scala> "one" |+| "two"
res1: String = onetwo

There’s a similar typeclass called SemigroupK for type constructors F[_].

@typeclass trait SemigroupK[F[_]] { self =>

  /**
   * Combine two F[A] values.
   */
  @simulacrum.op("<+>", alias = true)
  def combineK[A](x: F[A], y: F[A]): F[A]

  /**
   * Given a type A, create a concrete Semigroup[F[A]].
   */
  def algebra[A]: Semigroup[F[A]] =
    new Semigroup[F[A]] {
      def combine(x: F[A], y: F[A]): F[A] = self.combineK(x, y)
    }
}

This enables the combineK operator and its symbolic alias <+>. Let’s try using it.

scala> import cats.syntax.semigroupk._
import cats.syntax.semigroupk._

scala> List(1, 2, 3) <+> List(4, 5, 6)
res2: List[Int] = List(1, 2, 3, 4, 5, 6)

Unlike Semigroup, SemigroupK works regardless of the type parameter of F[_].

Option as SemigroupK 

Option[A] can form a Semigroup only when the type parameter A forms a Semigroup.

scala> :paste
// Entering paste mode (ctrl-D to finish)
object Catnip {
  implicit class IdOp[A](val a: A) extends AnyVal {
    def some: Option[A] = Some(a)
  }
  def none[A]: Option[A] = None
}
import Catnip._

// Exiting paste mode, now interpreting.

defined object Catnip
import Catnip._

scala> case class Foo(x: String)
defined class Foo

So this won’t work:

scala> Foo("x").some |+| Foo("y").some
<console>:33: error: value |+| is not a member of Option[Foo]
       Foo("x").some |+| Foo("y").some
                     ^

But this works fine:

scala> Foo("x").some <+> Foo("y").some
res3: Option[Foo] = Some(Foo(x))

There’s also a subtle difference in the behavior of the two typeclasses.

scala> 1.some |+| 2.some
res4: Option[Int] = Some(3)

scala> 1.some <+> 2.some
res5: Option[Int] = Some(1)

Semigroup combines the inner values of the Options, whereas SemigroupK just picks the first one.
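The difference also shows up when one of the sides is None (a sketch of my own, using the Catnip helpers defined above):

// a sketch, assuming the |+| and <+> syntax imports from earlier in the session
1.some |+| none[Int] // Some(1): None is the identity of the Option Monoid
none[Int] <+> 2.some // Some(2): combineK on Option effectively behaves like orElse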

SemigroupK laws 

trait SemigroupKLaws[F[_]] {
  implicit def F: SemigroupK[F]

  def semigroupKAssociative[A](a: F[A], b: F[A], c: F[A]): IsEq[F[A]] =
    F.combineK(F.combineK(a, b), c) <-> F.combineK(a, F.combineK(b, c))
}

MonoidK 

There’s also MonoidK.

@typeclass trait MonoidK[F[_]] extends SemigroupK[F] { self =>

  /**
   * Given a type A, create an "empty" F[A] value.
   */
  def empty[A]: F[A]

  /**
   * Given a type A, create a concrete Monoid[F[A]].
   */
  override def algebra[A]: Monoid[F[A]] =
    new Monoid[F[A]] {
      def empty: F[A] = self.empty
      def combine(x: F[A], y: F[A]): F[A] = self.combineK(x, y)
    }

  ....
}

This adds an empty[A] function to the contract. The notion of emptiness here is defined in terms of the left and right identity laws with regard to combineK. Given that combine and combineK behave differently, Monoid[F[A]].empty and MonoidK[F].empty[A] could also be different.

scala> import cats._, cats.instances.all._
import cats._
import cats.instances.all._

scala> Monoid[Option[Int]].empty
res0: Option[Int] = None

scala> MonoidK[Option].empty[Int]
res1: Option[Int] = None

In the case of Option[Int], they both happen to be None.

MonoidK laws 

trait MonoidKLaws[F[_]] extends SemigroupKLaws[F] {
  override implicit def F: MonoidK[F]

  def monoidKLeftIdentity[A](a: F[A]): IsEq[F[A]] =
    F.combineK(F.empty, a) <-> a

  def monoidKRightIdentity[A](a: F[A]): IsEq[F[A]] =
    F.combineK(a, F.empty) <-> a
}

Alternative 

There’s a typeclass that combines Applicative and MonoidK called Alternative:

@typeclass trait Alternative[F[_]] extends Applicative[F] with MonoidK[F] { self =>
   ....
}

Alternative on its own does not introduce any new methods or operators.

It’s more of a weaker (thus better) Applicative version of MonadPlus, which adds filtering on top of Monad. See day 3 to review the applicative style of coding.
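Since Alternative just combines what we already have, here’s a small sketch of my own exercising both of the inherited sides on List:

import cats._, cats.instances.all._

val F = Alternative[List]
F.combineK(F.pure(1), F.empty[Int])        // List(1): the MonoidK side
F.map2(List(1, 2), List(10, 20)) { _ + _ } // List(11, 21, 12, 22): the Applicative side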

Alternative laws 

trait AlternativeLaws[F[_]] extends ApplicativeLaws[F] with MonoidKLaws[F] {
  implicit override def F: Alternative[F]
  implicit def algebra[A]: Monoid[F[A]] = F.algebra[A]

  def alternativeRightAbsorption[A, B](ff: F[A => B]): IsEq[F[B]] =
    (ff ap F.empty[A]) <-> F.empty[B]

  def alternativeLeftDistributivity[A, B](fa: F[A], fa2: F[A], f: A => B): IsEq[F[B]] =
    ((fa |+| fa2) map f) <-> ((fa map f) |+| (fa2 map f))

  def alternativeRightDistributivity[A, B](fa: F[A], ff: F[A => B], fg: F[A => B]): IsEq[F[B]] =
    ((ff |+| fg) ap fa) <-> ((ff ap fa) |+| (fg ap fa))
}

There’s an open question by Yoshida-san on whether the last law is necessary or not.

Wolf, Goat, Cabbage 

Here’s Justin Le (@mstk)’s 2013 post ‘Wolf, Goat, Cabbage: The List MonadPlus & Logic Problems’.

We can try implementing this using Alternative.

A farmer has a wolf, a goat, and a cabbage that he wishes to transport across a river. Unfortunately, his boat can carry only one thing at a time with him. He can’t leave the wolf alone with the goat, or the wolf will eat the goat. He can’t leave the goat alone with the cabbage, or the goat will eat the cabbage. How can he properly transport his belongings to the other side one at a time, without any disasters?

scala> import cats._, cats.instances.all._, cats.syntax.show._
import cats._
import cats.instances.all._
import cats.syntax.show._

scala> import cats.syntax.cartesian._, cats.syntax.semigroupk._
import cats.syntax.cartesian._
import cats.syntax.semigroupk._

scala> :paste
// Entering paste mode (ctrl-D to finish)
sealed trait Character
case object Farmer extends Character
case object Wolf extends Character
case object Goat extends Character
case object Cabbage extends Character

case class Move(x: Character)

case class Plan(moves: List[Move])

sealed trait Position
case object West extends Position
case object East extends Position

implicit val moveShow = Show.show[Move](_ match {
  case Move(Farmer)  => "F"
  case Move(Wolf)    => "W"
  case Move(Goat)    => "G"
  case Move(Cabbage) => "C"
})

// Exiting paste mode, now interpreting.

defined trait Character
defined object Farmer
defined object Wolf
defined object Goat
defined object Cabbage
defined class Move
defined class Plan
defined trait Position
defined object West
defined object East
moveShow: cats.Show[Move] = cats.Show$$anon$2@6646f44c

makeNMoves 

Here’s how to make n moves:

scala> val possibleMoves = List(Farmer, Wolf, Goat, Cabbage) map {Move(_)}
possibleMoves: List[Move] = List(Move(Farmer), Move(Wolf), Move(Goat), Move(Cabbage))

scala> :paste
// Entering paste mode (ctrl-D to finish)
def makeMove(ps: List[List[Move]]): List[List[Move]] =
  (ps |@| possibleMoves) map { (p, m) =>  List(m) <+> p }
def makeNMoves(n: Int): List[List[Move]] =
  n match {
    case 0 => Nil
    case 1 => makeMove(List(Nil))
    case n => makeMove(makeNMoves(n - 1))
  }

// Exiting paste mode, now interpreting.

makeMove: (ps: List[List[Move]])List[List[Move]]
makeNMoves: (n: Int)List[List[Move]]

We can test this as follows:

scala> makeNMoves(1)
res0: List[List[Move]] = List(List(Move(Farmer)), List(Move(Wolf)), List(Move(Goat)), List(Move(Cabbage)))

scala> makeNMoves(2)
res1: List[List[Move]] = List(List(Move(Farmer), Move(Farmer)), List(Move(Wolf), Move(Farmer)), List(Move(Goat), Move(Farmer)), List(Move(Cabbage), Move(Farmer)), List(Move(Farmer), Move(Wolf)), List(Move(Wolf), Move(Wolf)), List(Move(Goat), Move(Wolf)), List(Move(Cabbage), Move(Wolf)), List(Move(Farmer), Move(Goat)), List(Move(Wolf), Move(Goat)), List(Move(Goat), Move(Goat)), List(Move(Cabbage), Move(Goat)), List(Move(Farmer), Move(Cabbage)), List(Move(Wolf), Move(Cabbage)), List(Move(Goat), Move(Cabbage)), List(Move(Cabbage), Move(Cabbage)))

isSolution 

Let’s define our helper function isSolution :: Plan -> Bool. Basically, we want to check if the positions of all of the characters are East.

We can define filter using just what’s available in Alternative:

scala> :paste
// Entering paste mode (ctrl-D to finish)
def filterA[F[_]: Alternative, A](fa: F[A])(cond: A => Boolean): F[A] =
  {
    var acc = Alternative[F].empty[A]
    Alternative[F].map(fa) { x =>
      if (cond(x)) acc = Alternative[F].combineK(acc, Alternative[F].pure(x))
      else ()
    }
    acc
  }
def positionOf(p: List[Move], c: Character): Position =
  {
    def positionFromCount(n: Int): Position = {
      if (n % 2 == 0) West
      else East
    }
    c match {
      case Farmer => positionFromCount(p.size)
      case x      => positionFromCount(filterA(p)(_ == Move(c)).size)
    }
  }


// Exiting paste mode, now interpreting.

filterA: [F[_], A](fa: F[A])(cond: A => Boolean)(implicit evidence$1: cats.Alternative[F])F[A]
positionOf: (p: List[Move], c: Character)Position

scala> val p = List(Move(Goat), Move(Farmer), Move(Wolf), Move(Goat))
p: List[Move] = List(Move(Goat), Move(Farmer), Move(Wolf), Move(Goat))

scala> positionOf(p, Farmer)
res2: Position = West

scala> positionOf(p, Wolf)
res3: Position = East

Here’s how we can check that all the positions are East:

scala> :paste
// Entering paste mode (ctrl-D to finish)
def isSolution(p: List[Move]) =
  {
    val pos = (List(p) |@| possibleMoves) map { (p, m) => positionOf(p, m.x) }
    (filterA(pos)(_ == West)).isEmpty
  }

// Exiting paste mode, now interpreting.

isSolution: (p: List[Move])Boolean

makeMove 

What makes a move legal? Well, the farmer has to be on the same side as whatever is being moved.

scala> def moveLegal(p: List[Move], m: Move): Boolean =
         positionOf(p, Farmer) == positionOf(p, m.x)
moveLegal: (p: List[Move], m: Move)Boolean

scala> moveLegal(p, Move(Wolf))
res4: Boolean = false

A plan is safe if nothing can eat anything else. That means that if the wolf and the goat, or the goat and the cabbage, sit on the same bank, so too must the farmer.

scala> :paste
// Entering paste mode (ctrl-D to finish)
def safePlan(p: List[Move]): Boolean =
  {
    val posGoat = positionOf(p, Goat)
    val posFarmer = positionOf(p, Farmer)
    val safeGoat = posGoat != positionOf(p, Wolf)
    val safeCabbage = positionOf(p, Cabbage) != posGoat
    (posFarmer == posGoat) || (safeGoat && safeCabbage)
  }

// Exiting paste mode, now interpreting.

safePlan: (p: List[Move])Boolean

Using these functions we can now re-implement makeMove:

scala> :paste
// Entering paste mode (ctrl-D to finish)
def makeMove(ps: List[List[Move]]): List[List[Move]] =
  (ps |@| possibleMoves) map { (p, m) =>
  if (!moveLegal(p, m)) Nil
  else if (!safePlan(List(m) <+> p)) Nil
  else List(m) <+> p
}
def makeNMoves(n: Int): List[List[Move]] =
  n match {
    case 0 => Nil
    case 1 => makeMove(List(Nil))
    case n => makeMove(makeNMoves(n - 1))
  }
def findSolution(n: Int): Unit =
  filterA(makeNMoves(n))(isSolution) map { p =>
    println(p map {_.show})
  }

// Exiting paste mode, now interpreting.

makeMove: (ps: List[List[Move]])List[List[Move]]
makeNMoves: (n: Int)List[List[Move]]
findSolution: (n: Int)Unit

Let’s try solving the puzzle:

scala> findSolution(6)

scala> findSolution(7)
List(G, F, C, G, W, F, G)
List(G, F, W, G, C, F, G)

scala> findSolution(8)
List(G, F, C, G, W, F, G)
List(G, F, W, G, C, F, G)
List(G, F, C, G, W, F, G)
List(G, F, W, G, C, F, G)
List(G, F, C, G, W, F, G)
List(G, F, W, G, C, F, G)

It worked. That’s all for today.