
6 posts tagged with "Scala"


· 8 min read

Hello, class. Today we're going to use sbt to publish artifacts to GitHub packages via GitHub Actions when we tag/release our codebase, and we're not going to use any sbt plugins to do it!

It's not that scary

If you check the official SBT Documentation, you can see that the main things you need to do are to specify where you are going to publish

publishTo := Some("Sonatype Snapshots Nexus" at "https://oss.sonatype.org/content/repositories/snapshots")

and how to authenticate with that repository

credentials += Credentials("Sonatype Nexus Repository Manager", "my.artifact.repo.net", "admin", "admin123")

that's it!™️

Lead by example

I'm working on a (very new) project called ursula, a slim framework for building ZIO-based CLI apps, so I will use it as an example and talk through the build.sbt file and the important "gotchas". The general plan is:

  1. Show the full build.sbt file
  2. Discuss parsing tags to artifact versions using default environment variables
  3. Configure SBT to publish to our repository's package endpoint
  4. Cover some SBT gotchas

1 The full build.sbt

The general structure of this project is that the main library lives in a project/folder named ursula, and there is an example project that depends on it. We'll cover this in the "gotchas", but there is not a root project.

val tagWithQualifier: String => String => String =
  qualifier =>
    tagVersion => s"%s.%s.%s-${qualifier}%s".format(tagVersion.split("\\."): _*)

val tagAlpha: String => String = tagWithQualifier("a")
val tagBeta: String => String = tagWithQualifier("b")
val tagMilestone: String => String = tagWithQualifier("m")
val tagRC: String => String = tagWithQualifier("rc")

val defaultVersion: String = "0.0.0-a0"
val versionFromTag: String = sys.env
  .get("GITHUB_REF_TYPE")
  .filter(_ == "tag")
  .flatMap(_ => sys.env.get("GITHUB_REF_NAME"))
  .flatMap { t =>
    t.headOption.map {
      case 'a' => tagAlpha(t.tail)     // Alpha build, a1.2.3.4
      case 'b' => tagBeta(t.tail)      // Beta build, b1.2.3.4
      case 'm' => tagMilestone(t.tail) // Milestone build, m1.2.3.4
      case 'r' => tagRC(t.tail)        // RC build, r1.2.3.4
      case 'v' => t.tail               // Production build, should be v1.2.3
      case _   => defaultVersion
    }
  }
  .getOrElse(defaultVersion)

ThisBuild / organization := "com.alterationx10"
ThisBuild / version := versionFromTag
ThisBuild / scalaVersion := "2.13.8"
ThisBuild / publish / skip := true
ThisBuild / publishMavenStyle := true
ThisBuild / versionScheme := Some("early-semver")
ThisBuild / publishTo := Some(
  "GitHub Package Registry" at "https://maven.pkg.github.com/alterationx10/ursula"
)
ThisBuild / credentials += Credentials(
  "GitHub Package Registry",                  // realm
  "maven.pkg.github.com",                     // host
  "alterationx10",                            // user
  sys.env.getOrElse("GITHUB_TOKEN", "abc123") // password
)

lazy val ursula = project
  .in(file("ursula"))
  .settings(
    name := "ursula",
    libraryDependencies ++= Seq(
      "dev.zio" %% "zio" % "2.0.0-RC6"
    ),
    fork := true,
    publish / skip := false
  )

lazy val example = project
  .in(file("example"))
  .settings(
    publishArtifact := false,
    fork := true
  )
  .dependsOn(ursula)

2 Setting the package version

Note that this section is more about how I am deploying versions for packages. You likely already have a versioning scheme, and are handling that mapping, but here you go anyway 😆

Maven has a version ordering specification that we'll use for non-numeric qualifiers, which has this ordering:

"alpha" < "beta" < "milestone" < "rc" = "cr" < "snapshot" < "" = "final" = "ga" < "sp"

In all honesty, for simple projects this many qualifiers is probably overkill! I've mapped out alpha, beta, milestone, rc and "" (which is no qualifier, or "final"/"ga").

A note about GitHub Packages that was true the last time I tried publishing SNAPSHOTs (I'm not sure if this is still the case): it does not allow you to overwrite a package. To publish over top of an existing SNAPSHOT, you'd need to delete it first and upload the new one. That's more work than it's worth, so I've designated alphas as my "snapshots".

With that in mind, I want to use git tags to map to these. For example, I've designated that the tag a1.2.3.4 should build with version 1.2.3-a4. So by providing a different initial character (a/b/m/r), I can control what qualifier it's released as.

With that outlined, I can achieve this with the tagWithQualifier function below (and its helpers).

val tagWithQualifier: String => String => String =
  qualifier =>
    tagVersion => s"%s.%s.%s-${qualifier}%s".format(tagVersion.split("\\."): _*)

val tagAlpha: String => String = tagWithQualifier("a")
val tagBeta: String => String = tagWithQualifier("b")
val tagMilestone: String => String = tagWithQualifier("m")
val tagRC: String => String = tagWithQualifier("rc")
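As a quick sanity check, here is how those helpers map tag strings to versions, as a standalone snippet using the same functions as above (the `.tail` mirrors how the build strips the leading qualifier character from the git tag):

```scala
// Same helpers as in the build.sbt above
val tagWithQualifier: String => String => String =
  qualifier =>
    tagVersion => s"%s.%s.%s-${qualifier}%s".format(tagVersion.split("\\."): _*)

val tagAlpha: String => String = tagWithQualifier("a")
val tagRC: String => String    = tagWithQualifier("rc")

// t.tail strips the leading qualifier character from the git tag
println(tagAlpha("a1.2.3.4".tail)) // 1.2.3-a4
println(tagRC("r1.2.3.4".tail))    // 1.2.3-rc4
```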

And when I want to do a "production release", I just use the common v1.2.3 tag.

We will use default environment variables to read the git tags, so we can parse them.

We will check + filter for GITHUB_REF_TYPE; this can be branch or tag (we want tag). If we made it this far, we will then check GITHUB_REF_NAME - which at this point should be the value of our git tag.

val defaultVersion: String = "0.0.0-a0"
val versionFromTag: String = sys.env
  .get("GITHUB_REF_TYPE")
  .filter(_ == "tag")
  .flatMap(_ => sys.env.get("GITHUB_REF_NAME"))
  .flatMap { t =>
    t.headOption.map {
      case 'a' => tagAlpha(t.tail)     // Alpha build, a1.2.3.4
      case 'b' => tagBeta(t.tail)      // Beta build, b1.2.3.4
      case 'm' => tagMilestone(t.tail) // Milestone build, m1.2.3.4
      case 'r' => tagRC(t.tail)        // RC build, r1.2.3.4
      case 'v' => t.tail               // Production build, should be v1.2.3
      case _   => defaultVersion
    }
  }
  .getOrElse(defaultVersion)

Now we have a way to dynamically set the version published based on git tagging!

ThisBuild / version := versionFromTag
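If you want to check the parsing logic without setting environment variables, the same decision logic can be factored into a pure function over explicit inputs. This is a hypothetical refactor for illustration (versionFor is not part of the actual build file):

```scala
val tagWithQualifier: String => String => String =
  qualifier =>
    tagVersion => s"%s.%s.%s-${qualifier}%s".format(tagVersion.split("\\."): _*)

val defaultVersion = "0.0.0-a0"

// Hypothetical refactor: same logic as versionFromTag, but over explicit inputs
def versionFor(refType: Option[String], refName: Option[String]): String =
  refType
    .filter(_ == "tag")
    .flatMap(_ => refName)
    .flatMap { t =>
      t.headOption.map {
        case 'a' => tagWithQualifier("a")(t.tail)
        case 'b' => tagWithQualifier("b")(t.tail)
        case 'm' => tagWithQualifier("m")(t.tail)
        case 'r' => tagWithQualifier("rc")(t.tail)
        case 'v' => t.tail
        case _   => defaultVersion
      }
    }
    .getOrElse(defaultVersion)

println(versionFor(Some("tag"), Some("v1.2.3")))  // 1.2.3
println(versionFor(Some("branch"), Some("main"))) // 0.0.0-a0
```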

3 Where to publish

We need to set our publishTo and credentials. For publishTo, GitHub uses the structure "https://maven.pkg.github.com/USER/REPO", so just update it with your information. This pattern should hold for orgs as well. An important thing to note is the realm, "GitHub Package Registry". This is handled automatically: when publishing hits the repository, it'll give back a 401 and tell you how you should authenticate and what the realm is. The significant thing is that the value of the realm is fixed, determined by the hosting server. sbt will use this realm to find the matching set of credentials.

ThisBuild / publishTo := Some(
  "GitHub Package Registry" at "https://maven.pkg.github.com/alterationx10/ursula"
)
ThisBuild / credentials += Credentials(
  "GitHub Package Registry",                  // realm
  "maven.pkg.github.com",                     // host
  "alterationx10",                            // user
  sys.env.getOrElse("GITHUB_TOKEN", "abc123") // password
)

We will use an environment variable GITHUB_TOKEN to provide our password. Note that you could do the same thing for the user value.

4 SBT gotchas

This isn't an all-inclusive list, but just a couple of things to keep in mind.

GitHub Packages only supports Maven structure, so we need to set publishMavenStyle to true. We will set our versionScheme to "early-semver", which maintains binary compatibility across patch updates within 0.Y.z until you hit 1.0.0.

The most important "gotcha" here is ThisBuild / publish / skip := true. Since I do not have a root project here, sbt will make a default one and aggregate the projects into it. This means that it will also try to publish a package named default! We can either define a root project as a placeholder and configure it accordingly, or globally set the default to skip publishing and then re-enable it in the project we're looking to deploy. The latter is shown here.

ThisBuild / publish / skip := true
ThisBuild / publishMavenStyle := true
ThisBuild / versionScheme := Some("early-semver")

lazy val ursula = project
  .in(file("ursula"))
  .settings(
    name := "ursula",
    libraryDependencies ++= Seq(
      "dev.zio" %% "zio" % "2.0.0-RC6"
    ),
    fork := true,
    publish / skip := false
  )

Lights! Camera! GitHub Action!

Now that sbt is included in the environment loaded by the setup-java action, this is easier than it's ever been. For any action, you can use that and just run sbt <your task>.

For our case, we only want this to run when we create a release (which is a git tag action), so note the on: block.

We've set up our build.sbt file to use environment variables that are automatically provided, but we also use the auto-generated CI token GITHUB_TOKEN, which is available automatically - that should be set in the env: block. If you wanted to use a personal access token, you could store and access the secret in the same way!

name: Publish Artifact on Release
on:
  release:
    types: [ created ]
env:
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Set up JDK
        uses: actions/setup-java@v3
        with:
          distribution: 'temurin'
          java-version: '17'
          cache: 'sbt'
      - name: Publish
        run: sbt publish

To kick it off, you just need to create a release with a structured git tag. The hardest part is not mistyping the tag 🤣 The packages will start to show up on your repository page, right below the "Releases" section.

Wrapping up

Now, you too can publish your Scala artifacts to GitHub Packages without relying on a pre-made sbt plugin! How exciting.

· 3 min read
Scala 2 => 3 Series

This is part of an ongoing series dealing with migrating old ways of doing things from Scala 2 to Scala 3. It will cover the What's New in Scala 3 from the official site.

Check the Scala 2 => 3 tag for others in the series! For the repo containing all the code, visit GitHub. There are code samples for both Scala 2 and Scala 3 together, that are easy to run via scala-cli.

This post is centered around retroactively extending classes.

In Scala 2, extension methods had to be encoded using implicit conversions or implicit classes. In contrast, in Scala 3 extension methods are now directly built into the language, leading to better error messages and improved type inference.

Extensions are one of my favorite things to use in Scala. Personally, I like the ability to add functionality to "upstream" resources implicitly, but call that functionality explicitly. To me, it makes it less likely to break things during a refactor when you don't have to unravel a mysterious series of implicit def methods / conversions that you might not realize are being called.

The preface

For this example, let's say that we have some upstream domain model from a service we use but don't control.

case class UpstreamUser(id: Long, created: Instant, lastSeen: Instant)

In our service, we have a concept of when a user goes "stale" based on usage - but other services also have this notion, and differing beliefs about what conditions make a user stale - so we can't ask the upstream service to implement this for us on our model. Perhaps our model of what a stale user is changes over time as well.

Our conditions for a user going stale are:

  • A user was created over a year ago
  • A user hasn't been seen in the last week.

With that in mind, we could write some logic such as

import java.time.Instant
import java.time.temporal.ChronoUnit._
def isStale(created: Instant, lastSeen: Instant): Boolean = {
  lastSeen.plus(7, DAYS).isBefore(Instant.now) &&
  created.plus(365, DAYS).isBefore(Instant.now)
}
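A quick check of that predicate with some made-up timestamps confirms both conditions have to hold:

```scala
import java.time.Instant
import java.time.temporal.ChronoUnit._

def isStale(created: Instant, lastSeen: Instant): Boolean = {
  lastSeen.plus(7, DAYS).isBefore(Instant.now) &&
  created.plus(365, DAYS).isBefore(Instant.now)
}

val twoYearsAgo = Instant.now.minus(730, DAYS)
val yesterday   = Instant.now.minus(1, DAYS)

println(isStale(twoYearsAgo, twoYearsAgo)) // true: old account, not seen lately
println(isStale(twoYearsAgo, yesterday))   // false: seen within the last week
println(isStale(yesterday, twoYearsAgo))   // false: account is under a year old
```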

but calling that everywhere becomes a bit cumbersome, and it would be great if we could attach that functionality directly on UpstreamUser.

Scala 2

In Scala 2, we can use an implicit class to achieve our goal. An implicit class should have only one constructor argument, of the Type that is being extended. It also needs to be housed in something, typically an outer object. This can make setting up implicit classes feel a bit "boilerplate-y".

object UpstreamUserExtensions {
  implicit class ExtendedUpstreamUser(u: UpstreamUser) {
    def isStale: Boolean = {
      u.lastSeen.plus(7, DAYS).isBefore(Instant.now) &&
      u.created.plus(365, DAYS).isBefore(Instant.now)
    }
  }
}

Now, with ExtendedUpstreamUser in scope to implicitly add our new functionality, we can (explicitly) call upstreamUserInstance.isStale as if it were on the model directly.

Scala 3

In Scala 3, it works much the same, but with less boilerplate. Instead of declaring an implicit class, you declare an extension: extension (u: UpstreamUser) where the argument matches the Type you're adding functionality to. This doesn't need to be housed in an object either!

The corresponding Scala 3 code would look like:

extension (u: UpstreamUser) {
  def isStale: Boolean = {
    u.lastSeen.plus(7, DAYS).isBefore(Instant.now) &&
    u.created.plus(365, DAYS).isBefore(Instant.now)
  }
}

and then we'll get the same upstreamUserInstance.isStale functionality as before.

Final Thoughts

Although the looks of the code have changed, if you're used to Scala 2 implicit classes, Scala 3 extensions will probably be a welcome ergonomics change, with a familiar feel for usage.

· 4 min read
Scala 2 => 3 Series

This is part of an ongoing series dealing with migrating old ways of doing things from Scala 2 to Scala 3. It will cover the What's New in Scala 3 from the official site.

Check the Scala 2 => 3 tag for others in the series! For the repo containing all the code, visit GitHub. There are code samples for both Scala 2 and Scala 3 together, that are easy to run via scala-cli.

This post is centered around the new way of passing implicit arguments to methods via using-clauses.

Abstracting over contextual information. Using clauses allow programmers to abstract over information that is available in the calling context and should be passed implicitly. As an improvement over Scala 2 implicits, using clauses can be specified by type, freeing function signatures from term variable names that are never explicitly referred to.

The preface

For this example, let's say that we have some interface that we're going to be passing around a lot, and that it could have multiple implementations.

trait BaseLogger {
  def log[T](t: T): Unit
}

case class PrintLogger() extends BaseLogger {
  def log[T](t: T): Unit = println(s"Logger result: ${t.toString}")
}

case class FancyLogger() extends BaseLogger {
  def log[T](t: T): Unit = println(s"Ye Olde Logger result: ${t.toString}")
}

Scala 2

In Scala 2, we could write a method, and have our trait's implementation passed in as a separate implicit argument.

def loggingOp[A, B](a: A, b: B)(implicit logger: BaseLogger): Int = {
  val result = a.toString.map(_.toInt).sum + b.toString.map(_.toInt).sum
  logger.log(result)
  result
}
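The arithmetic inside loggingOp isn't important to the implicits discussion; it just turns each argument into a string and sums the character codes:

```scala
// For loggingOp(40, 2): "40" -> '4' (52) + '0' (48) = 100, and "2" -> '2' (50)
val result = "40".map(_.toInt).sum + "2".map(_.toInt).sum
println(result) // 150
```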

At this point, we could call our method by still passing the argument in explicitly

object Using_2 extends App {

  val printLogger: PrintLogger = PrintLogger()
  val fancyLogger: FancyLogger = FancyLogger()

  loggingOp(40, 2)(printLogger)
  loggingOp(40, 2)(fancyLogger)

}

However, if we define an instance of type BaseLogger in scope implicitly, then we don't need to pass it in as an argument every time! Of course, we still have the option to pass something in explicitly, if we don't want to use the instance that is in scope implicitly.


object Using_2 extends App {

  val printLogger: PrintLogger = PrintLogger()
  val fancyLogger: FancyLogger = FancyLogger()

  loggingOp(40, 2)(printLogger)
  loggingOp(40, 2)(fancyLogger)

  // With an implicit of type BaseLogger in scope...
  implicit val defaultLogger = printLogger

  // ... I no longer need to pass it as an argument
  loggingOp(true, false)
  loggingOp(17, "purple")
  // ... but I can still call implicit arguments explicitly!
  loggingOp("car", printLogger)(fancyLogger)

}

Scala 3

In Scala 3, we don't use the implicit keyword when defining a method - we now use using. A faithful port of the Scala 2 code above would look something like:

// You can specify the name logger, but don't have to
def loggingOp_withParamName[A, B](a: A, b: B)(using logger: BaseLogger): Int = {
  val result = a.toString.map(_.toInt).sum + b.toString.map(_.toInt).sum
  logger.log(result)
  result
}

The awesomeness of Scala 3 doesn't stop there, though, because you can define your methods by just declaring the type! In this case, we just summon an instance internally, and use a reference to it.

There are only two hard things in Computer Science: cache invalidation and naming things.

Guess it's just invalidating caches now!

def loggingOp[A, B](a: A, b: B)(using BaseLogger): Int = {
  val logger = summon[BaseLogger]
  val result = a.toString.map(_.toInt).sum + b.toString.map(_.toInt).sum
  logger.log(result)
  result
}

From here, our code works mostly the same - one caveat being that when explicitly passing arguments, you need to use the using keyword, whereas previously you didn't need to declare that the values you were passing in were implicit. We're also declaring our BaseLogger in scope using alias givens.

object Using_3 {

  val printLogger: PrintLogger = PrintLogger()
  val fancyLogger: FancyLogger = FancyLogger()

  @main
  def main = {

    // We can still call things explicitly...
    loggingOp(40, 2)(using printLogger)
    loggingOp(40, 2)(using fancyLogger)

    // ... but we have a new way of defining what type is in scope implicitly
    // implicit val defaultLogger = printLogger // <- this would still work
    given defaultLogger: BaseLogger = printLogger // <- but probably use this

    loggingOp(true, false)
    loggingOp(17, "purple")
    loggingOp("car", printLogger)(using fancyLogger)
  }

}

Final Thoughts

Using clauses can be a bit more complex, but with the simple example outlined above, we have one less scary new thing that we can mentally map back to our years of Scala 2 use!

· 11 min read

How to deploy Kubernetes meme

Premise

The example in this post is about using a Kubernetes CustomResourceDefinition and Operator implemented with ZIO to simplify our lives as someone who may need to do a lot of infrastructure setup (dare I even say Dev/Ops).

The example is complete/functioning, but isn't the most robust solution for what it does. It is meant to be enough to work, and illustrate the concept with a solution to a made-up problem - but not exactly a model code base 👼

Let's dig in!

Hey, can you set me up a database?

Perhaps you're the one with the password/access to the database, or the only person nearby on the team that "knows SQL", but it's part of your daily life to set up databases for people. In between your coding work, you run a lot of the following type of code for people who need to access their own database from a kubernetes cluster:

CREATE DATABASE stuff;
CREATE USER stuff PASSWORD 'abc123';
GRANT ALL ON DATABASE stuff TO stuff;

Your hard work is then rewarded by remembering to set up a Secret for each database as well, so the user can easily mount it to their pods for access.

But, wait a minute - you've just picked up a nifty framework called ZIO, and have decided to automate a bit of your daily todos.

Enter ZIO

Let's create a SQLService that will set up a matching database and user:

trait SQLService {
  def createDatabaseWithRole(db: String): Task[String]
}

// We're going to be lazy, and not use a Logger
case class SQLServiceLive(cnsl: Console.Service) extends SQLService {
  override def createDatabaseWithRole(db: String): Task[String] = ???
}

We aren't running this so often that we need a dedicated connection pool, so let's just grab a connection from the driver, and use this neat new thing we've learned about called ZManaged.

private val acquireConnection =
  ZIO.effect {
    val url = {
      sys.env.getOrElse(
        "PG_CONN_URL", // If this environment variable isn't set...
        "jdbc:postgresql://localhost:5432/?user=postgres&password=password" // ... use this default one.
      )
    }
    DriverManager.getConnection(url)
  }

private val managedConnection: ZManaged[Any, Throwable, Connection] =
  ZManaged.fromAutoCloseable(acquireConnection)

// We'll use a ZManaged for Statements too!
private def acquireStatement(conn: Connection): Task[Statement] =
  Task.effect {
    conn.createStatement
  }

def managedStatement(conn: Connection): ZManaged[Any, Throwable, Statement] =
  ZManaged.fromAutoCloseable(acquireStatement(conn))

What's a ZManaged?

ZManaged is a data structure that encapsulates the acquisition and the release of a resource, which may be used by invoking the use method of the resource. The resource will be automatically acquired before the resource is used and automatically released after the resource is used.

So a ZManaged is like a try/catch/finally that handles your resources - but you don't have to set up a lot of boilerplate. A common pattern I've used in the past would be to use a thunk to do something similar. The (very unsafe, with no error handling) example below handles the acquisition and release of the connection + statement, and you just need to pass in a function that takes a statement and produces a result.

def sqlAction[T](thunk: Statement => T): T = {
  val url: String =
    "jdbc:postgresql://localhost:5432/?user=postgres&password=password"
  val connection = DriverManager.getConnection(url)
  val statement: Statement = connection.createStatement()
  val result: T = thunk(statement)
  statement.close()
  connection.close()
  result
}

def someSql = sqlAction { statement =>
  // do something with statement
  ???
}
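For what it's worth, the standard library's scala.util.Using (Scala 2.13+) provides this same loan-pattern safety without ZIO. Here it's shown with a stand-in resource (FakeStatement is purely illustrative) instead of a live JDBC connection, so the snippet runs on its own:

```scala
import scala.util.Using

// A stand-in for java.sql.Statement, so the example runs without a database
class FakeStatement extends AutoCloseable {
  var closed = false
  def execute(sql: String): Boolean = true
  def close(): Unit = closed = true
}

val st = new FakeStatement
// Using.resource runs the body, then closes the resource even if the body throws
val ok = Using.resource(st)(s => s.execute("SELECT 1"))

println(ok)        // true
println(st.closed) // true
```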

In the spirit of our thunk, we'll write a ZIO function that takes a Statement, a String (some SQL), and will execute it. We'll print the SQL we run, or log the error that falls out.

val executeSql: Statement => String => ZIO[Any, Throwable, Unit] =
  st =>
    sql =>
      ZIO
        .effect(st.execute(sql))
        .unit
        .tapBoth(
          err => cnsl.putStrLnErr(err.getMessage),
          _ => cnsl.putStrLn(sql)
        )

Now with all of our pieces in place, we can implement our createDatabaseWithRole that will safely grab a Connection + Statement, run our SQL, and then automatically close those resources when done. It'll even hand back the random password generated:

override def createDatabaseWithRole(db: String): Task[String] = {
  managedConnection.use { conn =>
    managedStatement(conn).use { st =>
      for {
        pw <- ZIO.effect(scala.util.Random.alphanumeric.take(6).mkString)
        _  <- executeSql(st)(s"CREATE DATABASE $db")
        _  <- executeSql(st)(s"CREATE USER $db PASSWORD '$pw'")
        _  <- executeSql(st)(s"GRANT ALL ON DATABASE $db TO $db")
      } yield pw
    }
  }
}
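The generated password here is just six alphanumeric characters from scala.util.Random; the generation itself is plain Scala:

```scala
// Six random characters drawn from [A-Za-z0-9]
val pw = scala.util.Random.alphanumeric.take(6).mkString
println(pw.length) // 6
```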

😍 A thing of beauty! Now we can just make a simple ZIO program to call our new service, and call it a day!

val simpleProgram: ZIO[Has[SQLService], Nothing, Unit] =
  SQLService(_.createDatabaseWithRole("someUser"))
    .unit
    .catchAll(_ => ZIO.unit)

Automate the Automation

j/k you still have to stop what you're doing to run this for people, and you still need to make the Secret! Wouldn't it be neat if we could have some sort of Kubernetes resource that allowed anyone to just update a straightforward file? What if we had something like:

apiVersion: alterationx10.com/v1
kind: Database
metadata:
  name: databases
spec:
  databases:
    - mark
    - joanie
    - oliver

Well, it turns out we can have nice things! We can create a CustomResourceDefinition that will use that exact file as shown above! The following yaml sets up our own Kind called Database that has a spec of databases, which is just an array of String.

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must match the spec fields below, and be in the form: <plural>.<group>
  name: databases.alterationx10.com
spec:
  # group name to use for REST API: /apis/<group>/<version>
  group: alterationx10.com
  # list of versions supported by this CustomResourceDefinition
  versions:
    - name: v1
      # Each version can be enabled/disabled by Served flag.
      served: true
      # One and only one version must be marked as the storage version.
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                databases:
                  type: array
                  items:
                    type: string
  # either Namespaced or Cluster
  scope: Namespaced
  names:
    # plural name to be used in the URL: /apis/<group>/<version>/<plural>
    plural: databases
    # singular name to be used as an alias on the CLI and for display
    singular: database
    # kind is normally the CamelCased singular type. Your resource manifests use this.
    kind: Database
    # shortNames allow shorter string to match your resource on the CLI
    shortNames:
      - db

Since we don't want to run jobs manually, we can create an Operator that will watch for our CustomResourceDefinition, and take action automatically! With the zio-k8s library, these can be fairly straightforward to implement.

val eventProcessor: EventProcessor[Clock, Throwable, Database] =
  (ctx, event) =>
    event match {
      case Reseted() =>
        cnsl.putStrLn(s"Reseted - will (re) add any existing").ignore
      case Added(item) =>
        processItem(item)
      case Modified(item) =>
        processItem(item)
      case Deleted(item) =>
        cnsl.putStrLn(s"Deleted - but not performing action").ignore
    }

For our example program, we will always try and create the databases listed in the resources, and log/ignore the error if a database already exists on Added and Modified. We will also take the auto-generated password, and create a secret for it as well! We won't tear anything down on Deleted.

def processItem(item: Database): URIO[Clock, Unit] =
  (for {
    // Get all of our databases
    dbs <- ZIO.fromOption(item.spec.flatMap(_.databases).toOption)
    // For each database
    _ <- ZIO.foreach(dbs) { db =>
      (for {
        _ <- cnsl.putStrLn(s"Processing $db...")
        // Create things
        pw <- sqlService.createDatabaseWithRole(db)
        _  <- cnsl.putStrLn(s"... $db created ...")
        // Put the generated PW in a k8s secret
        _ <- upsertSecret(
          Secret(
            metadata = Some(
              ObjectMeta(
                name = Option(db),
                namespace = item.metadata
                  .map(_.namespace)
                  .getOrElse(Option("default"))
              )
            ),
            data = Map(
              "POSTGRES_PASSWORD" -> Chunk.fromArray(
                pw.getBytes()
              )
            )
          )
        ).tapError(e => cnsl.putStrLnErr(s"Couldn't make secret:\n $e"))
        _ <- cnsl.putStrLn(s"... Secret created for $db")
      } yield ()).ignore
    }
  } yield ()).ignore

def upsertSecret(
    secret: Secret
): ZIO[Clock, K8sFailure, Secret] = {
  for {
    nm       <- secret.getName
    ns       <- secret.getMetadata.flatMap(_.getNamespace)
    existing <- secrets.get(nm, K8sNamespace(ns)).option
    sec <- existing match {
      case Some(_) => secrets.replace(nm, secret, K8sNamespace(ns))
      case None    => secrets.create(secret, K8sNamespace(ns))
    }
  } yield sec
}

That's about it! We now have the code we need to automate our daily drudgery!

Deploying

This example is targeted at deploying to the instance of Kubernetes provided by Docker, mainly so we can use our locally published docker image.

Auto generation of our CRD client

We will need the zio-k8s-crd SBT plugin to auto generate the client needed to work with our CRD. Once added, we can update our build.sbt file with the following, which points to the new CRD. With this in place, a compile step will generate the code for us.

externalCustomResourceDefinitions := Seq(
  file("crds/databases.yaml")
)

enablePlugins(K8sCustomResourceCodegenPlugin)

Building a Docker image of our service

We'll use the sbt-native-packager SBT plugin to build the docker image for us. We'll need a more recent version of Java than the default, so we'll set dockerBaseImage := "openjdk:17.0.2-slim-buster" and set our project to .enablePlugins(JavaServerAppPackaging). Now, when we run sbt docker:publishLocal, it will build and tag an image with the version specified in our build.sbt file that we can use in our kubernetes deployment yaml.

REPOSITORY      TAG            IMAGE ID       CREATED         SIZE
smooth-operator 0.1.0-SNAPSHOT a4e2c2025cba 2 days ago 447MB

Who doesn't love more YAML?

This section will go over the kubernetes yaml needed to deploy everything we need for our app.

We will create a standard Deployment of postgres, configured to have the super secure password of password 🤫. We will also create a Service to route traffic to it.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres
          env:
            - name: POSTGRES_PASSWORD
              value: "password"
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
      protocol: TCP

For deploying our Operator, we ultimately are going to set up a Deployment for it, but we're going to need a few more bells and whistles first. Our app will need the right permissions to be able to watch our CustomResourceDefinitions, as well as accessing Secrets - these actions are done by the ServiceAccount our pod runs under. We will create a ClusterRole that has the required permissions, and use a ClusterRoleBinding to assign the ClusterRole to our ServiceAccount.

A very useful way to check and make sure your permissions are correct is the kubectl auth can-i ... command.

kubectl auth can-i create secrets --as=system:serviceaccount:default:db-operator-service-account -n default
kubectl auth can-i watch databases --as=system:serviceaccount:default:db-operator-service-account -n default

With all that in mind, we can use the following yaml to get our app up and running.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: db-operator-service-account
automountServiceAccountToken: true
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: db-operator-cluster-role
rules:
  - apiGroups: [ "alterationx10.com" ]
    resources: [ "databases" ]
    verbs: [ "get", "watch", "list" ]
  - apiGroups: [ "" ]
    resources: [ "secrets" ]
    verbs: [ "*" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: db-operator-cluster-role-binding
subjects:
  - kind: ServiceAccount
    name: db-operator-service-account
    namespace: default
roleRef:
  kind: ClusterRole
  name: db-operator-cluster-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db-operator
  labels:
    app: db-operator
spec:
  selector:
    matchLabels:
      app: db-operator
  template:
    metadata:
      labels:
        app: db-operator
    spec:
      serviceAccountName: db-operator-service-account
      containers:
        - name: db-operator
          image: smooth-operator:0.1.0-SNAPSHOT
          env:
            - name: PG_CONN_URL
              value: "jdbc:postgresql://postgres:5432/?user=postgres&password=password"

Note: When deploying an operator "for real", you want to take care that only one instance is running/working at a time. This is not covered here, but you should look into Leader Election.

Running the Example

You can view the source code on GitHub, tagged at v0.0.3 at the time of this blog post.

Assuming you have Docker/Kubernetes set up, you should be able to run the following commands to get an example up and running:

# Build/publish our App to the local Docker repo
sbt docker:publishLocal
# Deploy our CustomResourceDefinition
kubectl apply -f crds/databases.yaml
# Deploy postgres
kubectl apply -f yaml/postgres.yaml
# Deploy our app
kubectl apply -f yaml/db_operator.yaml
# Create Database Resource
kubectl apply -f yaml/databases.yaml

If you check the logs of the running pod, you should hopefully see that the SQL ran successfully, and you can also use kubectl to check for new Secrets!

➜ smooth-operator (main) ✗ kubectl logs db-operator-74f756c89c-x5f5b 
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Reseted - will (re) add any existing
Processing mark...
CREATE DATABASE mark
CREATE USER mark PASSWORD 'VCaHar'
GRANT ALL ON DATABASE mark TO mark
... mark created ...
... Secret created for mark
Processing joanie...
CREATE DATABASE joanie
CREATE USER joanie PASSWORD 'mdlQKB'
GRANT ALL ON DATABASE joanie TO joanie
... joanie created ...
... Secret created for joanie
Processing oliver...
CREATE DATABASE oliver
CREATE USER oliver PASSWORD 'vYODSt'
GRANT ALL ON DATABASE oliver TO oliver
... oliver created ...
... Secret created for oliver

Nice.

There you have it! After a day or two of set up, now you too can save tens of minutes every day!

· 10 min read

Bender do it myself meme

We're going to build a ZIO App, with our own dependencies.

In my previous post, I covered some highlights about working with ZIO, so this time I thought I would go through actually writing some code to illustrate some patterns of what you would actually do when developing in the framework, and then how to inject your resource into a program.

Some important notes about this walk through:

  • We're using scala-cli 🥽
  • We're targeting Scala 3 💪
  • We're using ZIO 2.0 🎉

It's a new year, so we should all eat healthier, exercise, and write more things in Scala 3. Since we're using ZIO 2.0 (RC), the syntax might be a little different from what you've seen before, but it all generally behaves the same.

What we are building

We are going to build a simple CLI app that will do hashing. If given one argument (a message), it will calculate an HMAC hash and print it. If given two arguments (a message and a hash), it will compute the hash of the message and compare it against the provided hash. If provided < 1 or > 2 arguments, it will be grumpy at you.

For example:

./ax10 "Scala is the best"
LIbqLrEYGyr2LkOxlyV7J-6eO4Rvv4odvo6XdjJJlnQ9Tz32LR2raz1U6t-ztHPjjKGPqUu2NIME0mkWM4VixQ
./ax10 "Scala is the best" LIbqLrEYGyr2LkOxlyV7J-6eO4Rvv4odvo6XdjJJlnQ9Tz32LR2raz1U6t-ztHPjjKGPqUu2NIME0mkWM4VixQ
valid
./ax10 "Scala is the best" pd9t4XbrVM-9UtwzJ-O3i5AWxDw_XDKs1bfVstgD2oEdeheL9y82oEfRM9e_YVy1KA93tHjGmjl9l2elNedK1Q
invalid
./ax10 a b c
This app requires 1 argument to hash, and 2 to validate
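Under the hood, the hashing shown above is just HMAC-SHA512 with a URL-safe, unpadded Base64 encoding (and `abc123` as the hard-coded key). As a condensed, ZIO-free sketch of what the finished app computes, using only the JDK:

```scala
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec
import java.util.Base64

// HMAC-SHA512 over the message, encoded as URL-safe Base64 without padding.
def hmacSha512(message: String, key: String): String = {
  val mac = Mac.getInstance("HmacSHA512")
  mac.init(new SecretKeySpec(key.getBytes("UTF-8"), "HmacSHA512"))
  val bytes = mac.doFinal(message.getBytes("UTF-8"))
  Base64.getUrlEncoder.withoutPadding().encodeToString(bytes)
}

// Validating is just hashing again and comparing.
def validate(message: String, key: String, hash: String): Boolean =
  hmacSha512(message, key) == hash
```

The rest of the post is about wrapping exactly this logic in ZIO services and layers.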

Service Module Pattern 2

When writing services, you generally follow 3 steps:

  1. Define your trait (This is the Type that the zio Runtime will know about)
  2. Implement your trait (This is what you'll provide to the Runtime via a ZLayer)
  3. Add a companion object to your trait with accessor methods (This is just general ergonomics for using your service)

The Service Trait

As described above, our app is going to hash a message, and validate a message against a hash. This would be a sensible description of what we would want to implement:

trait Hasher {
def hash(message: String, key: String): Task[String]
def validate(message: String, key: String, hash: String): Task[Boolean]
}

Note that our return types are Tasks. You'll likely want to return ZIOs with Any in the R channel here; otherwise you are leaking an implementation detail into your generic trait!

The Companion Object

The companion object holds some accessor methods, which basically cut out the boilerplate of you needing to use ZIO.serviceWith[MyType](_.myMethod) everywhere. For example, now we can just call Hasher.hash(a, b) in a for-comprehension.

Note that the type signature on the accessor methods are the same as your trait, but with its type in the R channel.

object Hasher {

def hash(message: String, key: String): RIO[Hasher, String] =
ZIO.serviceWithZIO[Hasher](_.hash(message, key))

def validate(
message: String,
key: String,
hash: String
): RIO[Hasher, Boolean] =
ZIO.serviceWithZIO[Hasher](_.validate(message, key, hash))

}

Writing a program before we've implemented it

I'm actually going to jump the gun here, and write out the logic for our entire program. I think that's a very powerful message to convey - because with our trait and companion objects defined, we actually have enough information to do it!

  // The overall flow of our program
val program: ZIO[ZIOAppArgs & (Hasher & Console), Throwable, ExitCode] = for {
// Read the arguments
args <- ZIOAppArgs.getArgs
// Make sure we've been passed only 1 or 2 args
_ <- ZIO.cond(
args.size == 1 || args.size == 2,
(),
new Exception(
"This app requires 1 argument to hash, and 2 to validate"
)
)
// When we've been passed 1 arg, hash it
_ <- ZIO.when(args.size == 1) {
Hasher.hash(args.head, superSecretKey).flatMap(h => printLine(h))
}
// When we've been passed 2 args, verify it.
_ <- ZIO.when(args.size == 2) {
ZIO.ifM(Hasher.validate(args.head, superSecretKey, args.last))(
onTrue = printLine("valid"),
onFalse = printLine("invalid")
)
}
} yield ExitCode.success

Our program is just a series of effects to run, so we can describe it solely with service/type traits. val program: ZIO[ZIOAppArgs & (Hasher & Console), Throwable, ExitCode] says, "Give me a ZIOAppArgs, a Hasher, and a Console, and I will produce for you an ExitCode". This means all you have to do is provide it your dependencies, and run it. It also means that you can test the actual logic of program by providing test implementations of services! We can also easily swap out one implementation of a service for another, without changing the flow/logic of how our program runs at all.
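To make that swap-ability concrete without pulling in ZIO, here's a plain-Scala sketch (Either stands in for Task, and every name below is made up for illustration) of program logic written against the trait, with a stub implementation substituted for tests:

```scala
// Either stands in for Task in this illustrative stand-in.
trait SimpleHasher {
  def hash(message: String, key: String): Either[Throwable, String]
}

// A stub we'd never ship, but which makes surrounding logic testable.
object StubHasher extends SimpleHasher {
  def hash(message: String, key: String): Either[Throwable, String] =
    Right(s"stub-${message.length}")
}

// The logic depends only on the trait, so any implementation can be injected.
def describe(hasher: SimpleHasher, message: String): String =
  hasher.hash(message, "test-key") match {
    case Right(h) => s"hashed to $h"
    case Left(e)  => s"failed: ${e.getMessage}"
  }
```

ZLayer does the same kind of substitution for you, at the level of the whole dependency graph.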

I think that's a very powerful system.

Implementing our Service Module

Ok, now for the fun part of writing our very own code. We will write a case class that extends our trait, and takes some dependencies via the constructor arguments. Hint: these arguments are going to be other dependencies your runtime needs via a ZLayer at some point!

Our logic is pretty straightforward: we just use a Mac to compute a hash, and Base64 encode it.

// The live, default implementation of our Hasher Service.
case class HasherLive(mac: Mac) extends Hasher {

override def hash(message: String, key: String): Task[String] =
for {
hash <- ZIO.attempt(mac.doFinal(message.getBytes("UTF-8")))
encoded <- HashHelper.base64Encode(hash)
} yield encoded

override def validate(
message: String,
key: String,
msgHash: String
): Task[Boolean] =
for {
hash <- ZIO.attempt(mac.doFinal(message.getBytes("UTF-8")))
encoded <- HashHelper.base64Encode(hash)
} yield encoded == msgHash

}

You may have noticed the HashHelper.base64Encode(hash), and that it wasn't a dependency passed to the case class... Very astute of you, and that leads me to my next point:

Not everything has to be a Service Module

To a hammer, everything looks like a nail. If you are new to ZIO, and have learned that the service module pattern is "the way" to inject implementations into your applications, you will sooner or later build some awkward code trying to force a pattern you don't need. I usually find this happens when working with Java and non-ZIO Scala libraries. For example, I need a Mac for my Hasher, but to build a Mac I need a SecretKeySpec. But I don't want to implement a SecretKeySpec, I just want a SecretKeySpec. Enter my HashHelper object below...

object HashHelper {

def hmac512: ZLayer[SecretKeySpec, Throwable, Mac] = {
(
for {
mac <- ZIO.effect(Mac.getInstance("HmacSHA512"))
keySpec <- ZIO.service[SecretKeySpec]
_ <- ZIO.effect(mac.init(keySpec))
} yield mac
).toLayer
}

def specForKey512(key: String): ZLayer[Any, Throwable, SecretKeySpec] = {
ZIO.effect(new SecretKeySpec(key.getBytes("UTF-8"), "HmacSHA512")).toLayer
}

def base64Encode(bytes: Array[Byte]): Task[String] =
ZIO.attempt(Base64.getUrlEncoder.withoutPadding().encodeToString(bytes))

}

Sometimes it's useful to put some helper functionality in an object, and save yourself some ceremony.

Putting it all together

Ok, we've implemented our trait, and built out all the resources we need to instantiate it with our helper object!

Wiring up our layer

For the sake of keeping the example app somewhat simple, I've just hard-coded the secret key. So we know our Hasher implementation needs a Mac: ZLayer[Mac, Nothing, Hasher]. A Mac needs a SecretKeySpec: ZLayer[SecretKeySpec, Throwable, Mac]. We can make a SecretKeySpec without any dependencies. Let's line up the [R, A] channels to better see this visually.

[Any, SecretKeySpec] >>> [SecretKeySpec, Mac] >>> [Mac, Hasher]

So, we just match up the output A from one ZLayer into the R of the next and combine them vertically! Then, our resulting combined layer is just a ZLayer[Any, Throwable, Hasher].
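If you squint, >>> behaves like plain function composition over the "R => A" shape. Here's a toy, ZIO-free analogue of the chain above (the case classes are made-up stand-ins for the real types):

```scala
// Toy stand-ins for SecretKeySpec, Mac, and Hasher, just to show the shapes.
final case class KeySpec(key: String)
final case class TestMac(spec: KeySpec)
final case class TestHasher(mac: TestMac)

// Reading each "layer" as roughly R => A, >>> is just andThen.
val specLayer: Unit => KeySpec         = _    => KeySpec("abc123")
val macLayer: KeySpec => TestMac       = spec => TestMac(spec)
val hasherLayer: TestMac => TestHasher = mac  => TestHasher(mac)

// [Any, KeySpec] >>> [KeySpec, TestMac] >>> [TestMac, TestHasher]
val appLayer: Unit => TestHasher = specLayer andThen macLayer andThen hasherLayer
```

The real ZLayer version also tracks the error channel and memoizes construction, but the wiring intuition is the same.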

  // Shhh! 🤫
val superSecretKey: String = "abc123"

// We call .orDie here to give up, instead of having something in the error channel,
// because if we can't construct our dependencies, our app isn't going to
// work anyway.
val appLayer: ZLayer[Any, Nothing, Hasher] = {
(HashHelper.specForKey512(
superSecretKey
) >>> HashHelper.hmac512) >>> Hasher.layer
}.orDie

Some things in life are free

Our program is a ZIO[ZIOAppArgs & (Hasher & Console), Throwable, ExitCode], but we only build a ZLayer[Any, Nothing, Hasher]. Luckily, the ZIO Environment (ZEnv) comes with some things already built in. Those things are Clock, Console, System, and Random. We're going to extend ZIOAppDefault, so we'll get that and ZIOAppArgs for free.

Since the other parts are provided, we only need to use provideSomeLayer to inject the remaining dependencies.

Running our program

object HashApp extends ZIOAppDefault {

// all the stuff from above...

def run = program
.catchAll(err => printLine(err.getMessage))
.provideSomeLayer(appLayer)

}

With our use of catchAll here, we will catch any Throwable, and recover by printing it to the console.

The Code

The complete Scala code can be found on GitHub at https://github.com/alterationx10/ax10. I've also pasted it below.

scala-cli

To run it, and pass args, you need a --: scala-cli run ax10.scala -- arg1 arg2. To build an executable, just run scala-cli package ax10.scala -f, which should make an ax10 you can run and start using. If you wanted to play with the code, you can easily use VSCode + Metals after running scala-cli setup-ide ..

Full code, for posterity

//> using scala "3.1.1"
//> using lib "dev.zio::zio:2.0.0-RC2"

import zio._
import zio.Console._
import javax.crypto.Mac
import java.util.Base64
import javax.crypto.spec.SecretKeySpec
import javax.crypto.SecretKey

// Hash-based message authentication code
trait Hasher {
def hash(message: String, key: String): Task[String]
def validate(message: String, key: String, hash: String): Task[Boolean]
}

// The live, default implementation of our Hasher Service.
case class HasherLive(mac: Mac) extends Hasher {

override def hash(message: String, key: String): Task[String] =
for {
hash <- ZIO.attempt(mac.doFinal(message.getBytes("UTF-8")))
encoded <- HashHelper.base64Encode(hash)
} yield encoded

override def validate(
message: String,
key: String,
msgHash: String
): Task[Boolean] =
for {
hash <- ZIO.attempt(mac.doFinal(message.getBytes("UTF-8")))
encoded <- HashHelper.base64Encode(hash)
} yield encoded == msgHash

}

// Companion object with accessors
object Hasher {

def hash(message: String, key: String): RIO[Hasher, String] =
ZIO.serviceWithZIO[Hasher](_.hash(message, key))

def validate(
message: String,
key: String,
hash: String
): RIO[Hasher, Boolean] =
ZIO.serviceWithZIO[Hasher](_.validate(message, key, hash))

// Reference implementation layer
val layer: URLayer[Mac, Hasher] = (HasherLive(_)).toLayer

}

// Not everything needs to be/fit a Service Module pattern
object HashHelper {

def hmac512: ZLayer[SecretKeySpec, Throwable, Mac] = {
(
for {
mac <- ZIO.effect(Mac.getInstance("HmacSHA512"))
keySpec <- ZIO.service[SecretKeySpec]
_ <- ZIO.effect(mac.init(keySpec))
} yield mac
).toLayer
}

def specForKey512(key: String): ZLayer[Any, Throwable, SecretKeySpec] = {
ZIO.effect(new SecretKeySpec(key.getBytes("UTF-8"), "HmacSHA512")).toLayer
}

def base64Encode(bytes: Array[Byte]): Task[String] =
ZIO.attempt(Base64.getUrlEncoder.withoutPadding().encodeToString(bytes))

}

object HashApp extends ZIOAppDefault {

val superSecretKey: String = "abc123"

// The overall flow of our program
val program: ZIO[ZIOAppArgs & (Hasher & Console), Throwable, ExitCode] = for {
// Read the arguments
args <- ZIOAppArgs.getArgs
// Make sure we've been passed only 1 or 2 args
_ <- ZIO.cond(
args.size == 1 || args.size == 2,
(),
new Exception(
"This app requires 1 argument to hash, and 2 to validate"
)
)
// When we've been passed 1 arg, hash it
_ <- ZIO.when(args.size == 1) {
Hasher.hash(args.head, superSecretKey).flatMap(h => printLine(h))
}
// When we've been passed 2 args, verify it.
_ <- ZIO.when(args.size == 2) {
ZIO.ifM(Hasher.validate(args.head, superSecretKey, args.last))(
onTrue = printLine("valid"),
onFalse = printLine("invalid")
)
}
} yield ExitCode.success

// We call .orDie here to give up, instead of having something in the error channel,
// because if we can't construct our dependencies, our app isn't going to
// work anyway.
val appLayer: ZLayer[Any, Nothing, Hasher] = {
(HashHelper.specForKey512(
superSecretKey
) >>> HashHelper.hmac512) >>> Hasher.layer
}.orDie

def run = program
.catchAll(err => printLine(err.getMessage))
.provideSomeLayer(appLayer)

}

· 8 min read

I've been using ZIO at work for about a year now, and thought I would share some of my learnings. On a couple of occasions, I've helped bring people up to speed on using ZIO in our code bases, so this could be thought of as a getting-started highlight for Scala developers who are familiar with the language, but not necessarily with functional effect-based systems - in this case, ZIO.

Anatomy of a ZIO Application

The main components to start discussing are ZIO[R, E, A] (the computational effects you want to run), ZLayer[R, E, A] (the dependencies you need to run your effects), and the Runtime[R] (ZIO - the platform / effect system).

ZIO[R, E, A]

If I were to try to explain what a ZIO/effect is, in as few words as possible, I would say

A ZIO[R, E, A] will compute a result of type A, and will need resources of type R to do it. If it recoverably fails, it will fail with an exception of type E.

Let's dig into that.

ZWhatNow?

There are type aliases and companion objects that simplify common cases:

  • Task[A] == ZIO[Any, Throwable, A] - Doesn't need any dependencies to compute A, and can recover from a failure that is Throwable.
  • UIO[A] == ZIO[Any, Nothing, A] - Doesn't need any dependencies to compute A, and won't fail with something you could recover from.
  • IO[E, A] == ZIO[Any, E, A] - Doesn't need any dependencies to compute A, and can recover from a failure that is E.
  • RIO[R, A] == ZIO[R, Throwable, A] - Requires R to compute A, and can recover from a failure that is Throwable.
  • URIO[R, A] == ZIO[R, Nothing, A] - Requires R to compute A, and won't fail with something you could recover from.

The abbreviations may seem daunting at first, but if you feel like they're too much at the start - just don't use them! They're just aliases, and ZIO[Any, Throwable, A] is just as valid as Task[A]. You'll get used to them pretty quickly though, and if you use IntelliJ IDEA + the ZIO Plugin, it'll likely even suggest the shorter version for you to help out.
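Because they are aliases, an alias and its expansion are the exact same type, and values flow between them with no conversion. A toy analogue (Result is a made-up alias here, standing in for Task and friends):

```scala
// An alias is purely cosmetic: Result[A] *is* Either[Throwable, A].
type Result[A] = Either[Throwable, A]

val viaAlias: Result[Int]            = Right(1)
val expanded: Either[Throwable, Int] = viaAlias // compiles: same type
```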

The E is Not Silent

The U in UIO, or URIO above is sometimes said to be for "Un-failing" (i.e. it can't fail), but it's important to state right off that your FP/Effect system application can still absolutely crash! Errors are not magically handled. This is more of a conceptual thing, and realizing that the E error channel is about exceptions you can and want to recover from. If, for example, you were reading numbers from a database to perform math on and your error channel was an ArithmeticException (e.g. ZIO[Any, ArithmeticException, Int]), you could still crash from an un-checked SQLException - because you said "I'm only concerned with recovering from ArithmeticExceptions". This also isn't Akka, so don't "let it crash" - you still need to catch your exceptions!

For example: This is still going to crash your application if you pass it zero:

def danger(denom: Int): ZIO[Console, Throwable, Int] = for {
result <- ZIO.attempt(42 / denom)
_ <- printLine(s"Computed $result")
} yield result

so, you should be sure to handle the exceptions you want to recover from, e.g.:

def lessDanger(denom: Int) = danger(denom).catchSome {
case _: ArithmeticException => ZIO.succeed(0)
}

The R: ZLayer[R, E, A]

A lot of people seem to struggle with ZLayers at first, but I think they aren't that complicated once you get used to them. A ZLayer provides the R resources for a ZIO. Sometimes those resources need dependencies themselves, so just like with a ZIO, a ZLayer[R, E, A] will give you a dependency resource A you can inject into your application, and will need dependencies of type R to do it. If it recoverably fails, it will fail with an exception of type E. Also, like with ZIO, there are corresponding type aliases matching those above.

The tricky part is combining all the layers for your application. For example, if you have a ZLayer[A, Throwable, B] and a ZLayer[B, Throwable, C], depending on how you combine them, you can get a ZLayer[A, Throwable, C], ZLayer[A with B, Throwable, B with C], or even a ZLayer[A, Throwable, B with C]. This is because layers can be combined both horizontally and vertically.

For example, let's look at some type signatures:

val l1: ZLayer[Console, Throwable, Random] = ???
val l2: ZLayer[Random, Throwable, Clock] = ???
val l3: ZLayer[Console, Throwable, Clock] = l1 >>> l2 // Vertically
val l4: ZLayer[Console with Random, Throwable, Random with Clock] = l1 ++ l2 // Horizontally
val l5: ZLayer[Console, Throwable, Random with Clock] = l1 >+> l2 // A bit of both

For l3, we have combined the layers vertically. This means we used the output of l1 and fed it into l2 - so in this example we now have a layer where "If you give me a Console, I will produce a Clock for you".

For l4, we have combined them horizontally, which mainly just means we stack the Rs and the As - here, you end up with a layer that when given a Console and a Random, it will produce a Random and a Clock.

In the case of l5, it's a bit of both. With >+> it just stacks the As - so we end up with a layer that says "Give me a Console, and I'll give you a Random and a Clock".
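The three operators can be mimicked with plain functions, reading a ZLayer[R, E, A] as roughly R => A. A toy, ZIO-free sketch (the case classes are made-up stand-ins for the real service types):

```scala
// Toy stand-ins for the Console, Random, and Clock services.
final case class ConsoleSvc()
final case class RandomSvc(seed: Long)
final case class ClockSvc(time: Long)

val l1: ConsoleSvc => RandomSvc = _ => RandomSvc(42L)
val l2: RandomSvc => ClockSvc   = r => ClockSvc(r.seed + 1)

// >>> feeds the output of l1 into l2
val l3: ConsoleSvc => ClockSvc = l1 andThen l2

// ++ stacks the Rs and the As side by side
val l4: ((ConsoleSvc, RandomSvc)) => (RandomSvc, ClockSvc) = {
  case (c, r) => (l1(c), l2(r))
}

// >+> is like >>>, but keeps l1's output around too
val l5: ConsoleSvc => (RandomSvc, ClockSvc) = c => {
  val r = l1(c)
  (r, l2(r))
}
```

The real operators also merge error channels and share construction, but the shapes line up exactly as in the signatures above.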

So which of these you need really just depends on whether you are going to use the resulting layer to build any other layers - and whether you want/need to easily re-use the dependencies in the R channel. A really nice thing is that your overall program is a collection of ZIOs, and as you combine them, all of their resources stack up - so you know exactly what dependencies your program needs to run, and then you can build a layer to provide them all! For example:

// Get the current time
def currentTime: URIO[Clock, OffsetDateTime] = Clock.currentDateTime
//Log something
def log(msg: String): ZIO[Console, IOException, Unit] = printLine(msg)
// Log the current time
def logTime: ZIO[Console with Clock, IOException, Unit] = for {
time <- currentTime
_ <- log(s"The current time is ${time}")
} yield ()

We can see that if I want to run logTime, I need to provide Console with Clock, which is the combined set of dependencies of the individual ZIOs used to build that method.

The Runtime[Env]

The awesome follow up to the concepts of ZLayers, and knowing what resources your application needs to run - is that they're just there. By that, I mean the Runtime which is running our application has to know about all the resources needed. For logTime above, that means I have at least a Runtime[Clock with Console]. Whatever the ultimate layer provided to the application (call it AppEnv, where type AppEnv = This with That with Other...), you have a Runtime[AppEnv] - and that means you can access any of those dependencies! For example, logTime could be written as

val fromEnv: ZIO[Console with Clock, IOException, Unit] = for {
clock <- ZIO.service[Clock]
time <- clock.currentDateTime
console <- ZIO.service[Console]
_ <- console.printLine(s"The current time is $time")
} yield ()

Looking at clock <- ZIO.service[Clock] - that's basically saying "from the runtime environment, grab a Clock for me to use". So anywhere in your program's logic, if you're writing a line in a ZIO for-comprehension, and you know there's a service of type S provided, you could quickly grab a reference to it with s <- ZIO.service[S] - even if a companion/helper object hasn't been set up to provide it "nicely" via something like Clock.currentDateTime.

Why use an effect system?

Ok, cool. You can do dependency injection and exception handling without an effect system - so what? Well, in addition to the powerful, tightly integrated ergonomics above - this is all run on a performant fiber-based system, which means it takes near zero effort to take any of your code and add retry logic, scheduling, and async operations. What if I wanted to print the current time, 30 seconds in the future? logTime.delay(30.seconds). Do that 5 times? logTime.delay(30.seconds).repeatN(5). Log forever in the background while moving ahead in the application? logTime.repeat(Schedule.spaced(1.second)).forkDaemon

What if you're asking for user input, and you want to retry some number of times in case of mistyping?

val fromUser = (for {
_ <- Console.printLine("Enter a number")
input <- Console.readLine
number <- ZIO.attempt(input.toInt) // This could blow up!
_ <- Console.printLine(s"You entered number $number")
} yield ()).retryN(5)

These are of course silly examples, but in a real-world application, if you're making a REST call and get an error code with a Retry-After header set, you can recursively call yourself with the appropriate timeout with ease!
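Without an effect system, you'd write that retry plumbing by hand. A blocking, plain-Scala sketch (names are mine) of the kind of machinery that combinators like .retryN and .delay give you for free:

```scala
import scala.annotation.tailrec
import scala.util.{Try, Success, Failure}

// Hand-rolled retry with a fixed delay between attempts.
@tailrec
def retryWithDelay[A](times: Int, delayMs: Long)(op: () => Try[A]): Try[A] =
  op() match {
    case s @ Success(_)               => s
    case f @ Failure(_) if times <= 0 => f
    case Failure(_) =>
      Thread.sleep(delayMs)
      retryWithDelay(times - 1, delayMs)(op)
  }
```

Note that this blocks a thread while sleeping; the fiber-based versions above suspend instead, which is a large part of the appeal.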

Wrapping up

I hope that helped hit some highlights of ZIO, and perhaps make it less scary to jump into!