Survival of the fittest

Be faster than your competitors

Innovation =
Idea + Product + Customer Usage

DIN SPEC 77555-1:2013-09 Innovation Management - Part 1: Innovation Management System

Microservice

  • fits in one brain,
  • designed for replaceability,
  • autonomy (organisation & technology)

Microservice Taxonomy

Microservice Architecture

The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.

Martin Fowler - http://martinfowler.com/articles/microservices.html
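
As a minimal illustration of such a lightweight HTTP resource API, a service like the product service could be queried directly over HTTP. This is only a sketch: the /products path is an assumption, the port 18080 is taken from the product service used later in this talk.

curl http://localhost:18080/products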

Online Shop - Use cases

Online Shop - Deployment

How do we get the sh*t to bare metal?

Processes

bz@cc:~/$ ps aux
www-data  1699  0.5  7.7 2453320 158076 ?      Sl   10:49   0:26 java -cp cart.jar
www-data  1834  0.3  5.0 2435400 102684 ?      Sl   10:49   0:17 java -cp navigation.jar
www-data  1972  0.0  0.1  90792  3124 ?        Ss   10:49   0:00 nginx: master process
www-data  1973  0.0  0.1  91148  3820 ?        S    10:49   0:00 nginx: worker process
www-data  1974  0.0  0.1  91148  3820 ?        S    10:49   0:00 nginx: worker process
www-data  1975  0.0  0.1  91148  3820 ?        S    10:49   0:00 nginx: worker process
www-data  1976  0.0  0.1  91148  3820 ?        S    10:49   0:00 nginx: worker process
www-data  1980  1.5  7.0 2456532 143688 ?      Sl   10:49   1:20 java -cp product.jar

Linux Packages

bz@cc:~/$ ar t cart_0.6.20.deb
debian-binary
control.tar.gz
data.tar.gz
bz@cc:~/$ ar x cart_0.6.20.deb
bz@cc:~/$ tar tzf data.tar.gz
./etc/default/cart
./etc/init.d/cart
./usr/share/shop/cart/bin/cart
./usr/share/shop/cart/bin/cart.bat
./usr/share/shop/cart/lib/cart-microservice-0.6.20.jar
bz@cc:~/$ tar tzf control.tar.gz
./postinst
./control
./md5sums
bz@cc:~/$ cat debian-binary
2.0
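
The same information can be read without unpacking the archive by hand; dpkg-deb lists the payload and prints the control metadata directly:

dpkg-deb --contents cart_0.6.20.deb   # lists the files from data.tar.gz
dpkg-deb --info cart_0.6.20.deb       # prints the control file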

Control File

Source: shop-cart-service
Section: web
Priority: optional
Version: 4.2.42
Maintainer: Bernd Zuther 
Homepage: http://www.bernd-zuther.de/
Vcs-Git: https://github.com/zutherb/AppStash.git
Vcs-Browser: https://github.com/zutherb/AppStash
Package: shop-cart-service
Architecture: amd64
Depends: redis-server (>= 2.8.13)
Description: Cart Service
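
A package with this layout can be produced, for example, with dpkg-deb. A minimal sketch, assuming the file tree from data.tar.gz has been staged in an (assumed) ./rootfs directory and the control file is the binary-package variant (Package, Version, Architecture, Maintainer, Description):

mkdir -p pkg/DEBIAN
cp control pkg/DEBIAN/control   # binary control file for the package
cp -r rootfs/. pkg/             # assumed staging directory containing etc/ and usr/
dpkg-deb --build pkg .          # writes shop-cart-service_<version>_amd64.deb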

Package Source

Buildserver

# add the freshly built package to the "shop" distribution
reprepro -Vb /var/packages/debian includedeb shop /tmp/*.deb
# regenerate the repository index files
reprepro -b /var/packages/debian/ export
# list the packages currently in the "shop" distribution
reprepro -b /var/packages/debian/ list shop

Provision

---
- apt_repository: repo='deb http://ci-repo/debian/ shop main' state=present
- apt: update_cache=yes force=yes
- apt: pkg={{item}} state=present force=yes
  with_items:
  - shop-cart-service
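
Such a task list is applied to the shop nodes with ansible-playbook once it is wrapped in a play; the inventory and playbook file names here are only placeholders:

ansible-playbook -i hosts shop.yml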

Linux Packages

Pro:
  • Service Repository
  • Dependency Management
  • Technologies are battle-tested

Contra:
  • No Service Discovery
  • Runtime environment must be created on every single node
  • Depends on the Linux distribution

How can we avoid this environment creation time?

Docker

Build, Ship, Run

Docker Workflow

bz@cc $ docker build -t zutherb/product-service .
bz@cc $ docker push zutherb/product-service
bz@cc $ docker pull zutherb/product-service
bz@cc $ docker run zutherb/product-service
bz@cc $ docker ps
CONTAINER ID        IMAGE                               COMMAND                CREATED
87bb5524067d        zutherb/product-service:latest      "/product-0.6/bin/pr   14 seconds

Dockerfile

FROM relateiq/oracle-java8
MAINTAINER Bernd Zuther <bernd.zuther@codecentric.de>
EXPOSE 18080
# the local tar archive is extracted automatically into /
ADD product-0.6.tar /
ENTRYPOINT ["/product-0.6/bin/product"]

Deployment Scenario

Linking

bz@cc ~$ docker run -d --name mongodb mongo
705084daa3f852ec796c8d6b13bac882d56d95c261b4a4f8993b43c5fb2f846c
bz@cc ~$ docker run -d --name redis redis
784ebde0e867adb18663e3011b3c1cabe990a0c906396fc306eac669345628cf
bz@cc ~$ docker run -d -P --name cart --link redis:redis zutherb/cart-service
438b2657c7a5c733787fb32b7d28e1a0b84ba9e10d19a8a015c6f24085455011
bz@cc ~$ docker run -d -P -p 8080:8080 --name shop --link cart:cart \
              --link mongodb:mongodb zutherb/monolithic-shop
9926e187faa215ac9044603d51adbd8d679d8076b4a349ebbc9917dade6d560e
bz@cc $ docker exec 9926e187faa215ac9044603d51adbd8d679d8076b4a349ebbc9917dade6d560e env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=87bb5524067d
MONGODB_PORT_27017_TCP=tcp://172.17.0.28:27017
MONGODB_PORT_27017_TCP_ADDR=172.17.0.28
MONGODB_PORT_27017_TCP_PORT=27017
MONGODB_PORT_27017_TCP_PROTO=tcp

How can we describe our microservice application?

Docker Compose

redis:
  image: dockerfile/redis
  ports:
    - "6379:6379"
cart:
  image: eshop/cart-service
  ports:
    - "18100:18100"
  links:
    - redis
catalog:
  image: eshop/catalog-frontend
  ports:
    - "80:80"
  links:
    - product
    - cart
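
With such a compose file in place, the whole stack is started and inspected with a couple of commands. A sketch, assuming the file is saved as docker-compose.yml and the remaining linked services (product, etc.) are defined in it as well:

docker-compose up -d      # start the whole stack in the background
docker-compose ps         # show the state of the containers
docker-compose logs cart  # show the output of a single service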

Docker

Pro:
  • Applications are isolated
  • Images are built once and run anywhere
  • Docker Repository
  • Big ecosystem (Kubernetes, Mesos, Vamp)

Contra:
  • Daemon runs as root on the host
  • No Process Supervisor
  • No Service Discovery

Blue Green Deployment

Canary Release

Microservice - Canary Release

I don't care

Distributed System

Kubernetes

Pod

apiVersion: v1
kind: Pod
metadata:
  labels:
    name: cart
    role: backend
  name: cart
spec:
  containers:
    - name: cart
      image: zutherb/cart-service
      ports:
        - containerPort: 18100
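
A pod definition like this is submitted to the cluster with kubectl; a minimal sketch, assuming the manifest is stored as cart-pod.yaml:

kubectl create -f cart-pod.yaml
kubectl get pods -l name=cart   # select the pod via its label
kubectl logs cart               # read the container output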

Replication Controller

apiVersion: v1
kind: ReplicationController
metadata:
  ... (labels)
spec:
  replicas: 2
  selector:
    name: cart
  template:
    metadata:
      ... (labels)
    spec:
      containers:
      - name: cart
        image: zutherb/cart-service
        ports:
        - containerPort: 18100
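
The replication controller keeps the desired number of cart pods running and can be resized at runtime; a sketch, assuming the manifest is stored as cart-rc.yaml and the controller is named cart:

kubectl create -f cart-rc.yaml
kubectl scale --replicas=4 rc/cart   # grow from 2 to 4 cart pods
kubectl get pods -l name=cart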

Service

kind: Service
apiVersion: v1
metadata:
  labels:
    name: cart
    role: backend
  name: cart
spec:
  ports:
    - name: cart
      port: 18100
  selector:
    name: cart
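
Once the service object exists, other pods reach the cart pods through a stable address: inside the cluster the service is resolvable by its name (with the DNS add-on) and is also exposed via the CART_SERVICE_HOST/CART_SERVICE_PORT environment variables. A short sketch, assuming the manifest is stored as cart-service.yaml:

kubectl create -f cart-service.yaml
kubectl get svc cart        # shows the cluster IP and port 18100
kubectl describe svc cart   # shows the endpoints behind the service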

Kubernetes

Pro:
  • You needn't care where work is executed
  • You needn't care about dependencies
  • Service Discovery
  • Process Supervisor

Contra:
  • No description of the whole application or the deployment scenario, as Docker Compose provides
  • Analysis of failures gets harder
  • Master is a single point of failure
  • Few tools and little documentation available yet

Mesos

Marathon Deployment

{"id": "basic-3",
  "cmd": "python3 -m http.server 8080",
  "cpus": 0.5,
  "mem": 32.0,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "python:3",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 8080, "hostPort": 0 }]}}}
curl -X POST http://10.141.141.10:8080/v2/apps -d @basic-3.json \
  -H "Content-type: application/json"

Marathon + Mesos

Pro:
  • You needn't care where work is executed
  • High Availability
  • Process Supervisor
  • Many tools and much documentation are available

Contra:
  • No description of the whole application or the deployment scenario
  • No Service Discovery
  • Analysis of failures gets harder

Very Awesome Microservices Platform (Vamp)

Vamp + Marathon + Mesos

Pro:
  • Description of the whole application and the deployment scenario
  • You needn't care where work is executed
  • High Availability
  • Process Supervisor
  • Service Discovery

Contra:
  • Many components that have to be understood
  • Analysis of failures gets harder

Summary

Feature comparison of Linux Packages, Docker Daemon, Kubernetes, and Vamp + Marathon + Mesos across these criteria:

  • Service Repository
  • Orchestration
  • Service Discovery
  • Process Supervisor
  • High Availability
  • Application Description
  • Routing Definition

<Thank You!>

Demo Application

Links
