What Is Mob Programming?

Simply put, mob programming means getting together with at least three developers and coding on a single keyboard. At any given time one developer, the ‘Driver’, is actually typing and narrates what they are doing. All the other developers take the ‘Navigator’ role: they review, discuss and describe what the Driver should be doing. The roles are swapped frequently to keep everyone fresh and engaged. It’s the ultimate form of collaboration and peer review.

Mobbing During Lockdown

So now you know that mob programming is about live coding together on the same piece of code. But how do you do this when everyone is working remotely during a pandemic? With my current team we decided to give it a go regardless. There are excellent online collaboration tools available these days, so it must be possible to practice mob programming fully online. We mobbed for three days in a row in a mob programming hackathon. …



using Scala and Akka Streams

Cloudflow is a relatively new framework that helps you build distributed streaming applications and deploy them on Kubernetes. Its powerful abstractions allow you to easily split your application into independent stream processing components, called streamlets. Streamlets can be developed using several runtimes, such as Akka Streams and Flink. A streamlet can have one or more input streams (inlets) and one or more output streams (outlets). You deploy your application as a whole, while Cloudflow deploys the streamlets individually. …
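To make the streamlet idea concrete, here’s a rough sketch of an Akka Streams based streamlet with one inlet and one outlet. Everything specific in it is assumed for illustration: the SensorData Avro class (with a deviceId and a temperature field), the streamlet name and the filter logic.

import cloudflow.akkastream._
import cloudflow.akkastream.scaladsl._
import cloudflow.streamlets._
import cloudflow.streamlets.avro._

// Hypothetical streamlet: one inlet, one outlet. SensorData is assumed
// to be a class generated from an Avro schema in the project.
class SensorFilter extends AkkaStreamlet {
  val in  = AvroInlet[SensorData]("sensor-in")
  val out = AvroOutlet[SensorData]("sensor-out", _.deviceId)

  val shape = StreamletShape(in).withOutlets(out)

  override def createLogic = new RunnableGraphStreamletLogic() {
    // Consume from the inlet, keep only hot readings, emit on the outlet.
    def runnableGraph =
      plainSource(in)
        .filter(_.temperature > 40.0)
        .to(plainSink(out))
  }
}

At deployment time Cloudflow wires the inlets and outlets of your streamlets together through Kafka, which is what makes them independently deployable.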



with Pivotal’s Reactor

I’ve been doing Scala projects with Akka Streams for quite a few years now, and I have a reasonably good feel for things to watch out for. On my current project we’re doing Java, using a different implementation of the Reactive Streams specification: Reactor. While learning the library I stumbled upon many common mistakes and bad practices, which I’ll list here. Credits to Enric Sala for pointing out these bad practices.

Reactive Streams

First, let’s have a look at the Reactive Streams specification and see how Reactor maps to it. The spec is pretty straightforward.

There’s a Publisher, which is a potential source of data. One subscribes to a Publisher with a Subscriber. The Publisher then hands the Subscriber a Subscription, which the Subscriber uses to demand elements from the Publisher. This demand signalling is the core principle of Reactive Streams. …
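To make the demand mechanism tangible, here’s a tiny hand-rolled Subscriber against the bare org.reactivestreams interfaces, written in Scala. In practice Reactor implements these interfaces for you; this is purely illustrative.

import org.reactivestreams.{Subscriber, Subscription}

// A Subscriber that requests elements one at a time, so the Publisher
// can never outpace the consumer.
class OneAtATimeSubscriber[T] extends Subscriber[T] {
  private var subscription: Subscription = _

  override def onSubscribe(s: Subscription): Unit = {
    subscription = s
    subscription.request(1) // signal initial demand
  }

  override def onNext(element: T): Unit = {
    println(s"Received: $element")
    subscription.request(1) // ask for the next element
  }

  override def onError(t: Throwable): Unit = t.printStackTrace()
  override def onComplete(): Unit = println("Stream completed")
}

Since Reactor’s Flux implements Publisher, something like Flux.just(1, 2, 3).subscribe(new OneAtATimeSubscriber[Integer]) would drive this Subscriber element by element.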



Scraping Consumer and Producer Metrics from any Scala or Java App

TL;DR

This post focuses on monitoring your Kafka deployment in Kubernetes with Prometheus. Kafka exposes its metrics through JMX, and so do applications built with its Java SDK. To have those metrics pulled in by Prometheus, we need a way to extract them via the JMX protocol and expose them over HTTP. This is where the JMX Exporter comes in handy. It’s pretty effective to run it as a sidecar in your Kafka client application pods and have Prometheus scrape them using scrape annotations. For the impatient: all sample code is available here.
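To sketch what that looks like, here’s a trimmed-down pod spec with the JMX Exporter as a sidecar and the Prometheus scrape annotations in place. Image names, ports and JMX flags are assumptions on my part; the exporter’s own config file (pointing it at the app’s JMX port) would be mounted separately.

apiVersion: v1
kind: Pod
metadata:
  name: kafka-client-app
  annotations:
    prometheus.io/scrape: "true"   # tell Prometheus to scrape this pod
    prometheus.io/port: "5556"     # the port the JMX Exporter listens on
spec:
  containers:
  - name: app
    image: my-kafka-app:latest     # hypothetical application image
    env:
    - name: JAVA_TOOL_OPTIONS      # open up JMX for the sidecar
      value: >-
        -Dcom.sun.management.jmxremote.port=9999
        -Dcom.sun.management.jmxremote.authenticate=false
        -Dcom.sun.management.jmxremote.ssl=false
  - name: jmx-exporter
    image: sscaling/jmx-prometheus-exporter  # one of several community images
    ports:
    - containerPort: 5556

Because the containers share the pod’s network namespace, the sidecar can reach the app’s JMX port on localhost.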

In my previous article, “Monitoring Kafka in Kubernetes”, I mainly focused on monitoring the server side of Kafka; in this post we’re going to have a look at gathering and plotting its client-side metrics. …



Deploying the Confluent Platform Helm chart without Tiller

Say you want to deploy Kafka to your company’s Kubernetes cluster. You search online for how other people are doing this and quickly stumble upon existing Helm charts that do EXACTLY what you need. Awesome, this is going to save you a lot of time and error-prone manual hassle with YAML configuration. You just need to install and initialise Helm on the cluster and…

… you are stopped by your company’s security officer, because he has serious security concerns about Helm’s server-side component, Tiller. Now what?

In this post I’ll show you how you can still use practically any available Helm chart AND please your security guy by avoiding Tiller. As an example I’ll show how to deploy the Confluent Platform on GKE using its official Helm chart. Before we do that, let’s first look at the security concerns one might have with regard to Tiller. …
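The essence of the trick, for the record: Helm can render a chart entirely on the client with helm template, and the resulting plain YAML can be applied with kubectl, so nothing ever needs to run inside the cluster. Roughly like this (chart location, release and namespace names are just examples):

$ helm repo add confluentinc https://confluentinc.github.io/cp-helm-charts/
$ helm fetch confluentinc/cp-helm-charts --untar
$ helm template cp-helm-charts --name my-confluent --namespace kafka > confluent.yaml
$ kubectl apply -n kafka -f confluent.yaml

(That’s Helm 2 syntax; Helm 3 later dropped Tiller altogether.)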


Monitoring Kafka in Kubernetes without Prometheus

TL;DR

This post focuses on monitoring your Kafka deployment in Kubernetes if you can’t or won’t use Prometheus. Kafka exposes its metrics through JMX. To collect metrics in your favourite reporting backend (e.g. InfluxDB or Graphite) you need a way to query them over the JMX protocol and transport them there. This is where jmxtrans comes in handy. With a few small tweaks it turns out to be pretty effective to run jmxtrans as a sidecar in your Kafka pods, have it query for metrics and ship them to your reporting backend. …
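For a flavour of the sidecar’s configuration, here’s a minimal jmxtrans query definition. The MBean, attributes and InfluxDB settings are illustrative, not a recommendation:

{
  "servers": [{
    "host": "localhost",
    "port": 9999,
    "queries": [{
      "obj": "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec",
      "attr": ["Count", "OneMinuteRate"],
      "outputWriters": [{
        "@class": "com.googlecode.jmxtrans.model.output.InfluxDbWriterFactory",
        "url": "http://influxdb:8086/",
        "username": "admin",
        "password": "admin",
        "database": "kafka"
      }]
    }]
  }]
}

Each query selects an MBean pattern and the attributes to read; the output writer determines where the values end up.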


In this blog series we’ll discuss our journey at Cupenya of migrating our monolithic application to a microservice architecture running on Kubernetes. In the previous parts of the series we’ve seen how the core components of the infrastructure, the Api Gateway and the Authentication Service, were built, how we converted our main application to a microservice running in Kubernetes and how we dealt with logging, monitoring & tracing. In this post we’re going to see how we automated deployment by setting up our CI/CD pipeline.




Using the SBT Native Packager it’s quite easy to dockerize your Scala apps, so you don’t have to maintain custom Dockerfiles anymore. Let’s start with a minimal example application. I’m going to create a new Scala project from a simple giter8 template using the sbt new command. For this tutorial you’ll need:

  • JDK 8
  • sbt 0.13.13 or higher
  • docker console client 1.10 or higher

Bootstrapping

Let’s first generate a basic Akka HTTP web application based on the official giter8 template.

$ sbt new https://github.com/akka/akka-http-scala-seed.g8

This will prompt for a few parameters. For name we will use “hello-world” and we will leave the defaults for the other parameters. …
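Once the project is generated, dockerizing it is mostly a matter of enabling the right plugins. A minimal sketch; the plugin version, base image and exposed port here are assumptions:

// project/plugins.sbt
addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.3.15")

// build.sbt
enablePlugins(JavaAppPackaging, DockerPlugin)
dockerBaseImage    := "openjdk:8-jre-alpine"
dockerExposedPorts := Seq(8080)

After that, a single command builds the image and publishes it to your local Docker daemon:

$ sbt docker:publishLocal
$ docker run -p 8080:8080 hello-world:0.1.0-SNAPSHOT

(the exact tag depends on the version setting in your build)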


In this blog series we’ll discuss our journey at Cupenya of migrating our monolithic application to a microservice architecture running on Kubernetes. In the previous parts of the series we’ve seen how the core components of the infrastructure, the Api Gateway and the Authentication Service, were built and how we converted our main application to a microservice running in Kubernetes.


As mentioned, there are a number of gains to splitting our monolith up into microservices. For us it was mainly…


In this blog series we’ll discuss our journey at Cupenya of migrating our monolithic application to a microservice architecture running on Kubernetes. In the previous parts of the series we’ve seen how the core components of the infrastructure, the Api Gateway and the Authentication Service, were built. In this post we’re going to see how we converted our main application to a microservice and get to a fully working setup in Kubernetes which we could go live with.


About

Jeroen Rosenberg

Dev of the Ops. Founder of Amsterdam.scala. Passionate about Agile, Continuous Delivery. Proud father of three.
