Jenkins vs Azure DevOps (formerly known as VSTS)

Jenkins is probably the number one Continuous Integration (and Continuous Delivery) tool for Java developers. It is very flexible and has a lot of supported plugins: more than 3,000, so plenty to choose from. This flexibility, however, comes at a price. If you want to use Jenkins in a serious pipeline, you need a scripted workflow, fully integrated with other tools, secure from beginning to end, and so on. This is doable, but it is frustrating to implement and it certainly costs a lot of time.

For example, a lot of plugins depend on other plugins, often in very specific versions, which results in a "plugin dependency hell". The use of some plugins is also not very straightforward, certainly if you want to use them in a scripted pipeline. The examples on the Internet are not always clear, and some descriptions assume that you already know the ins and outs of Jenkins. In addition, not all plugins support a multibranch pipeline, which basically makes them useless in modern CI/CD.
Statistics and metrics are non-existent in Jenkins; if you are into these things, you need a couple of plugins plus Grafana for visualization, and you have to set all of this up from scratch.

Lately I have been working with Azure DevOps, formerly known as VSTS (Visual Studio Team Services). Although I am a bit biased when it concerns Jenkins (I do love it, really) and the combination of Microsoft and Java has not always been a good marriage, Azure DevOps grows on you. And it can be used for Java development: it supports Maven and Gradle tasks, and the agents run on Windows (of course), Mac, and Linux. It has a lot of built-in tasks from Microsoft itself, plus downloadable tasks from the Azure DevOps Marketplace. And if a task is not available, you can create it yourself (using TypeScript).

I managed to set up a small pipeline in a day. This included a Maven build, SonarQube and Fortify scans, and a deployment to PCF (Pivotal Cloud Foundry). Some Azure DevOps tasks are very powerful. I needed something to replace secure credentials in a manifest.yml file. No problem: I added a Tokenizer task from the Marketplace, entered the secure credentials as variables, and that was it. I have done this in Jenkins before, but that took me way more time.
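
For comparison, here is roughly what that looks like in a scripted Jenkins pipeline; a minimal sketch, assuming a #{...}# placeholder convention in manifest.yml and a 'pcf-user' username/password credential (both names are made up for this example):

// Replace placeholder tokens in manifest.yml with credentials stored in Jenkins
// (requires the credentials-binding plugin; IDs and token syntax are examples)
withCredentials([usernamePassword(credentialsId: 'pcf-user',
                                  usernameVariable: 'CF_USER',
                                  passwordVariable: 'CF_PASS')]) {
    def manifest = readFile 'manifest.yml'
    manifest = manifest.replace('#{CF_USER}#', env.CF_USER)
                       .replace('#{CF_PASS}#', env.CF_PASS)
    writeFile file: 'manifest.yml', text: manifest
}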

Does this mean that I abandon Jenkins? No, certainly not. Azure DevOps is not a silver bullet and some things are not very well supported or need to be improved. A small comparison:

Jenkins | Azure DevOps
Very flexible, scriptable workflow | Workflow is limited and straightforward (no real if-then-else or switch-case constructs; only some simple filtering and conditions can be set), which makes it more difficult to develop complex workflows
Pipelines must be programmed (in Groovy) | Construction by means of a GUI is much easier and faster
User interface is OK, but limited | User interface is confusing
A lot of plugins, although some are not very useful, not straightforward, or unstable | A lot of powerful tasks out of the box; easy to use
Plugin dependency hell | N/A
Redundant code in a pipeline script is not needed | Duplication of tasks (because of the limited workflow), although this can be solved by means of task groups
Secret credentials and files support | Secret credentials and files support
Open source, supported by CloudBees | Closed source, supported by Microsoft; a team of about 400 developers works on the product
Not provided as a SaaS solution | Provided as a SaaS solution
Does not include workitem management, Git, or functionality to deploy to the Cloud | Full solution, including workitem management, Git, and easy deployment to the Cloud (not only Azure, but also AWS)
Metrics/statistics are non-existent; no configurable dashboard | All steps in the process are registered; custom dashboards can be created, although the number of widgets is still limited
Changes in the pipeline are only tracked if the pipeline is in Git; traceability of a build and release has to be programmed in the pipeline | Changes in the pipeline are registered (every change is a commit); all steps in a build and release are traceable out of the box
Secure, but some things can be bypassed (e.g. the "Replay" function makes it possible to change the pipeline or even extract credentials) | Secure
N/A | No support for webhooks if you use an external Git repository
Notification possible | Notification out of the box

So, the question is: who is the winner? I think it is a tie. Jenkins is very flexible when it comes to creating complex workflows, but it is faster to get something working in Azure DevOps. In fact, I have a CI/CD pipeline that combines both: I use Azure DevOps for the main build and deployment (release) tasks, and Jenkins as an API to expose automated tests (Jenkins is called from an Azure DevOps release flow). In addition, we still have Jenkins jobs that are not (yet) ported to Azure DevOps because of time constraints; e.g. we use the Swagger-Confluence CLI (https://cloud.slkdev.net/swagger-confluence/) plus some Groovy script and Confluence REST calls to generate documentation. There are no Azure DevOps Marketplace tasks for this yet, so porting it is not straightforward.

Workflow

Workflow is an important feature of a CI/CD pipeline. Some people advocate that only one workflow option is the correct one: "CI running against a shared mainline". Your branch strategy then looks something like this:

This means that all changes are committed into one Master branch and every commit is a potential release candidate. The CI/CD actions are:

Branch: Master
  • Perform full build and Unit tests
  • Quality checks / static code analysis
  • Deployment and testing
  • Production deployment

Promotion and deployment to Production can be a fully automated process, but it may be convenient to insert a manual step.
The workflow is simple, but it may not be suitable for all teams. Sometimes it is not easy, or even possible, to split up a feature into small changes. Frontend applications often involve small GUI changes, so a workflow that consists of only a Master branch may be sufficient; but in a lot of cases a feature must be implemented as one change, and a branch-per-feature workflow may be more suitable.
Some literature states that other workflows are considered 'CI/CD theatre' (see also https://www.thoughtworks.com/radar/techniques/ci-theatre). I find this a bit debatable. You must choose the workflow that the team feels comfortable with and that makes sense for the type of changes. If your workflow has some sort of mainline, you are still able to "run CI against a shared mainline". In our case, for example, we have a Java/Maven project using the Atlassian workflow, which means a branch-per-feature. The workflow is as follows:

The Master branch is the mainline. Each user story or epic results in a Feature branch, and each successfully built Feature branch is merged with the Master. A failed Feature (Epic 1 in the figure) is not merged with the Master until the build and unit tests are successful. Each Release is a branch from a commit on the Master. In CI/CD pipeline terms, this means the following actions for each branch type:

Branch: Master (= snapshot)
  • Perform full build and Unit tests
  • Quality checks / static code analysis
  • Deployment and testing
Branch: Feature
  • Perform full build and Unit tests
Branch: Release
  • Perform full build and Unit tests
  • Quality checks / static code analysis
  • Deployment and testing
  • Production deployment
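
In a Jenkins multibranch pipeline, this table translates into branch conditions in the Jenkinsfile. A minimal declarative sketch, assuming the branch naming used above and a Maven build (the stage contents are placeholders):

pipeline {
    agent any
    stages {
        stage('Build and unit tests') {
            steps {
                sh 'mvn -B clean verify'  // runs for every branch type
            }
        }
        stage('Quality checks') {
            when { anyOf { branch 'master'; branch 'release/*' } }
            steps {
                echo 'Static code analysis (SonarQube, Fortify)'
            }
        }
        stage('Deployment and testing') {
            when { anyOf { branch 'master'; branch 'release/*' } }
            steps {
                echo 'Deploy to a test environment and run automated tests'
            }
        }
        stage('Production deployment') {
            when { branch 'release/*' }
            steps {
                echo 'Deploy the release artefact to production'
            }
        }
    }
}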

Compared with the first workflow, the branch-per-feature workflow has one distinct difference: the commit frequency on the Master branch is lower. That does not have to be an issue; the commit frequency on the Feature branch may be high instead. A merge with Master also does not mean that you easily break the Master branch: it is the responsibility of the developer to keep the Feature branch in sync with the Master.
The branch-per-feature workflow also does not put a heavy burden on the build and test environments. In a one-branch workflow, even the smallest change may result in a long build and test cycle (a Fortify scan can easily take 1 to 2 hours). Making a lot of small changes per day in a one-branch workflow results in queued jobs and queued test runs, unless your build and test environments are scaled accordingly, which becomes costly.

Jenkins, friend or foe 2/2

Although installation of Jenkins is relatively easy, configuration is not if you have never done this before. The challenge is that 'everything' is code, and I wanted to make use of multibranch pipelines. However, a lot of Jenkins examples on the Internet refer to using the Jenkins GUI. I do not want to use the GUI: it must be possible to reinstall everything in one click, without any manual task.

Configuration

Most of the configuration is done by means of 'hook' scripts. There are different types of hooks and methods (see Groovy Hook Script), but for most configuration items the 'init hook', using .groovy scripts, is sufficient.
In the Ansible installation mentioned in the previous post, the installation directory of Jenkins is /var/lib/jenkins/, so the init scripts are located in:

/var/lib/jenkins/init.groovy.d/*.groovy

The scripts are executed in lexical order, so make sure to use a file naming convention. I use [A..Z]_init*.groovy, but you could also use a numbering scheme.

A_init_git.groovy

Jenkins must be able to connect to a Git repository and does this by means of SSH. This means you need to generate an SSH key pair; I won't go into the details of how to generate one. Just Google it and you will find some examples.
The private key must be stored as a credential in Jenkins and is used later in the script that creates the Jenkins jobs. Of course, you can insert the private key manually in Jenkins, but we want to automate everything, so this step is also done by means of scripting. The first step in the process is to generate the key pair on the Jenkins server (not in scope of this blog post). The private key in this example is located at '/var/lib/jenkins/ssh_git'. As part of the Jenkins (re)start, the A_init_git.groovy script is executed and the key is installed. The script looks like this:

import jenkins.model.*
import hudson.security.*
import hudson.tasks.*
import jenkins.plugins.git.*
import com.cloudbees.plugins.credentials.*;
import com.cloudbees.plugins.credentials.common.*
import com.cloudbees.plugins.credentials.domains.*
import com.cloudbees.plugins.credentials.impl.*
import com.cloudbees.jenkins.plugins.sshcredentials.impl.*
import hudson.plugins.sshslaves.*;

println "==> Executing A_init_git.groovy"

String keyFile = "/var/lib/jenkins/ssh_git"
String keyID = "ssh_git"
String keyPassphrase = ""
String keyUsername = "ssh_git"
String keyDescription = "Read access to repositories"

// Get the credentials provider
def global_domain = Domain.global()
def credentials_store = Jenkins.instance.getExtensionList('com.cloudbees.plugins.credentials.SystemCredentialsProvider')[0].getStore()

println "--> Insert SSH credentials for accessing Git via SSH"
credentials = new BasicSSHUserPrivateKey(
  CredentialsScope.GLOBAL,
  keyID,
  keyUsername,
  new BasicSSHUserPrivateKey.FileOnMasterPrivateKeySource(keyFile),
  keyPassphrase,
  keyDescription
)
credentials_store.addCredentials(global_domain, credentials)

println "==> End A_init_git.groovy"

B_init_credentials.groovy

One of the other things that needs to be done first is setting up the credentials (usernames + passwords) in Jenkins, so they can be used in the other init scripts. To reduce manual work, this can be automated in a secure way. One approach is:
  • Create a yml file with username + password; this is a plain file, maintained in a secure environment (e.g. KeePass). The format (used in this example) is:
- env: all
  username_1: password_1
  username_2: password_2
  • Encrypt it manually with an AES 128-bit algorithm, using a passphrase; the encrypted file is stored in Git (there is example code on the Internet that shows how to perform the encryption/decryption; a minimal Groovy sketch of the decryption side is given after the script below)
  • The passphrase is given as an 'extra variable' during startup of the playbook, or, even better, it is configured by means of Ansible Vault
  • One of the steps in the Ansible playbook is to copy the encrypted yml file from Git to the Jenkins server
  • The second Ansible step is to decrypt the yml file using the passphrase
  • The third Ansible step is to restart Jenkins; this processes the *.groovy files, including B_init_credentials.groovy
  • The step after a Jenkins restart is to delete the decrypted file; even if something went wrong, the decrypted file must always be deleted
  • The groovy file itself looks something like this:
import jenkins.model.*
import hudson.security.*
import hudson.tasks.*
import com.cloudbees.plugins.credentials.*;
import com.cloudbees.plugins.credentials.common.*
import com.cloudbees.plugins.credentials.domains.*
import com.cloudbees.plugins.credentials.impl.*

println "==> Executing B_init_credentials.groovy"

// Read the credentials file
CredentialsUtils utils = new CredentialsUtils()
def map = [:]
// Read the credentials file, decrypted by Ansible
String propertyFile = 'credentials_decrypted.yml'
map = utils.getPropertyMap ('all', propertyFile)

// Get the credentials provider
def global_domain = Domain.global()
def credentials_store = Jenkins.instance.getExtensionList('com.cloudbees.plugins.credentials.SystemCredentialsProvider')[0].getStore()

if (map['username_1']) {
  Credentials credentialsUser_1 = new UsernamePasswordCredentialsImpl (CredentialsScope.GLOBAL, "user_1", "", map['username_1'], map['password_1'])
  credentials_store.addCredentials(global_domain, credentialsUser_1)
}

if (map['username_2']) {
  Credentials credentialsUser_2 = new UsernamePasswordCredentialsImpl (CredentialsScope.GLOBAL, "user_2", "", map['username_2'], map['password_2'])
  credentials_store.addCredentials(global_domain, credentialsUser_2)
}

println "==> End B_init_credentials.groovy"

class CredentialsUtils {
  /* Returns a map of all properties in the file 'fileName'
  */
  def getPropertyMap (def target, def fileName) {
    // Read the properties and build a property map
    def propertyfile = new File(fileName)
    String propertyContent = propertyfile.text
    boolean sectionFound = false
    def propertyMap = [:]
    String key = ""
    String value = ""
    def lines = propertyContent.readLines()
    lines.each { 
      line ->
      key = getKeyValueFromPropertyLine(line, true)
      if (key == "- env")
      {
        value = getKeyValueFromPropertyLine(line, false)
        if (value.toUpperCase() == target.toUpperCase())
          sectionFound = true
        else
          sectionFound = false
      }
      else if (sectionFound) {
        key = getKeyValueFromPropertyLine(line, true)
        value = getKeyValueFromPropertyLine(line, false)
        propertyMap[key] = value
      }
    }
   return propertyMap
  }

  /* Returns the key- or value part of a line from a property file
  */
  String getKeyValueFromPropertyLine (String line, boolean k) {
    String kv = ""
    try {
      if (k)
        kv = line.substring(0, line.lastIndexOf(":")).trim()
      else
        kv = line.substring(line.lastIndexOf(":") + 1, line.length()).trim()
    }
    catch (e) {
      // Ignore exceptions and just continue
    }
    return kv
  }
}
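
As promised above, a minimal Groovy sketch of the decryption side of the credentials file. It assumes the file was encrypted with the layout [16-byte salt][16-byte IV][ciphertext] and an AES-128 key derived via PBKDF2; the layout, salt, and iteration count are assumptions and must match whatever you used on the encryption side (in our setup the actual decryption is done by an Ansible step):

import javax.crypto.Cipher
import javax.crypto.SecretKeyFactory
import javax.crypto.spec.IvParameterSpec
import javax.crypto.spec.PBEKeySpec
import javax.crypto.spec.SecretKeySpec

// Derive a 128-bit AES key from the passphrase; salt and iteration count
// must be identical to the ones used during encryption
def deriveKey(String passphrase, byte[] salt) {
    def factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1")
    def spec = new PBEKeySpec(passphrase.toCharArray(), salt, 65536, 128)
    return new SecretKeySpec(factory.generateSecret(spec).encoded, "AES")
}

// Decrypt a file with the assumed layout [16-byte salt][16-byte IV][ciphertext]
byte[] decryptFile(String fileName, String passphrase) {
    byte[] content = new File(fileName).bytes
    byte[] salt = content[0..15] as byte[]
    byte[] iv = content[16..31] as byte[]
    byte[] data = content[32..-1] as byte[]
    Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding")
    cipher.init(Cipher.DECRYPT_MODE, deriveKey(passphrase, salt), new IvParameterSpec(iv))
    return cipher.doFinal(data)
}

// Example: produce the plain yml file that B_init_credentials.groovy reads
new File('credentials_decrypted.yml').bytes = decryptFile('credentials_encrypted.yml', 'the-passphrase')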

C_init_ldap.groovy

This script configures the Jenkins LDAP plugin; for convenience the password is in plain text (never do this in a real script). The 'tricky' part is that a user search sometimes returns more than one user from the LDAP. The LDAP plugin cannot handle this and it results in an exception; basically this means that you cannot log in to Jenkins. The 'userSearch' property defined in the script below takes care of returning only one user.

import jenkins.model.*
import hudson.security.*
import org.jenkinsci.plugins.*
import java.util.logging.Logger
import java.util.logging.Level

println "==> Executing C_init_ldap.groovy"

Logger logger = Logger.getLogger("")

String server = 'ldaps://ldap-p.mycompany.com:636'
String rootDN = 'o=mycompany,c=fr'
String userSearchBase = ''
String userSearch = '(&(objectClass=person)(uid={0}))'
String groupSearchBase = ''
String managerDN = 'CN=username,OU=Users,DC=example,DC=org'
String managerPassword = 'secret'
boolean inhibitInferRootDN = false

SecurityRealm ldap_realm = new LDAPSecurityRealm(server, rootDN, userSearchBase, userSearch, groupSearchBase, managerDN, managerPassword, inhibitInferRootDN)
Jenkins.instance.setSecurityRealm(ldap_realm)

Jenkins.instance.save()
logger.info("--> Security set to LDAP " + server)
println "==> End C_init_ldap.groovy"

H_init_jobs.groovy

The Jenkins jobs are also created by means of a script. In this case, a multibranch pipeline job is created. The beauty of multibranch pipeline jobs is that you are able to script the CI/CD workflow (or at least part of it), which is of course very flexible. The private SSH key, stored as a Jenkins credential in A_init_git.groovy, is used again in this script.

Note that the script also creates lockable resources. A lockable resource is a resource, often a test environment, that is allocated during execution of a job. If another job is started, it allocates another resource until all resources are busy; new jobs are then queued until a resource is released again. A fuller example of lockable resources will be given in another blog post that covers the multibranch pipeline job itself.

import jenkins.model.*
import jenkins.plugins.git.*
import hudson.tasks.*
import jenkins.branch.*
import org.jenkinsci.plugins.workflow.multibranch.*
import org.jenkins.plugins.lockableresources.LockableResourcesManager;

println "==> Executing H_init_jobs.groovy"

def instance = Jenkins.getInstance()
String keyID = "ssh_git"

try {
  // Configures Multi Branch project using the ssh key credentials
  WorkflowMultiBranchProject mp = instance.createProject(WorkflowMultiBranchProject.class, "MyProject")
  mp.getSourcesList().add(new BranchSource(
    new GitSCMSource(null, "ssh://git.mycompany.com/myproject.git", keyID, "*", "", false),
    new DefaultBranchPropertyStrategy(new BranchProperty[0])))
}
catch(err) {
  println 'Job already exists; continue...'
}

// Setup test environments
LockableResourcesManager mgr = LockableResourcesManager.class.get();
mgr.createResourceWithLabel("T1", "TestEnvironment")
mgr.createResourceWithLabel("T2", "TestEnvironment")

println "==> End H_init_jobs.groovy"


These are just a few examples of init scripts. Of course you can do much more, but that is beyond the scope of this post.

Jenkins, friend or foe 1/2

Jenkins is probably the most used CI/CD tool and it is indeed very powerful. On the other hand, setting up a CI/CD pipeline with Jenkins from scratch is a big and complex job. The road to the level where we are now has been a bit bumpy; lacking or unclear documentation was a big part of the problem.

Installation

The first thing to do was installing Jenkins. In itself this is a very easy task, so no problem, but we made it more difficult than needed. The decision was to run Jenkins in a Docker container on a managed Docker platform. The main reason was that this would also run in the Cloud. This decision looked OK at first, but it also meant that:
  • We had to know Docker
  • We needed to gain knowledge of a managed platform to run the Docker containers (e.g. Kubernetes)
  • Everything else we used, such as Maven, the Fortify SCA client, etc., also had to run on Docker
It was a hassle to set things up initially. This improved over time, and although at some point everything worked fine, it remained a complex stack to manage, while I only wanted to manage my Jenkins instance.
After some maintenance issues I decided to change the setup and installed Jenkins on Linux on a VM server. Plain and simple, and I was glad that I took this decision.

One of our CI/CD requirements is that everything is code. This applies to both the Jenkins installation and the configuration; nothing is done manually. The Jenkins installation is defined in an Ansible playbook. This includes downloading and installing the plugins, installing the initialization scripts, and installing Maven and other tools that are used in the CI/CD pipeline. Everything is stored in either Git (the Ansible playbook, the scripts, ...) or Nexus (the Jenkins plugins, tools). An example of this playbook, including vars and tasks, is given below. This is a stripped version of an installation on RHEL.

Ansible file structure

The files mentioned below are organized in the following Git file structure:

files
    |__ jenkins
              |__ jenkins.io.key
              |__ jenkins-2.127-1.1.noarch.rpm
              |__ init.groovy.d
                              |__ A_init_git.groovy
                              |__ B_init_credentials.groovy
                              |__ C_init_ldap.groovy
                              |__ D_init_matrix_auth.groovy
                              |__ E_init_library.groovy
                              |__ F_init_mail.groovy
                              |__ H_init_jobs.groovy
tasks
    |__ install_jenkins.yml
    |__ install_jenkins_plugins.yml
vars
    |__ variables.yml
main.yml

main.yml (playbook)

The main playbook to be used in Ansible.

- hosts: jenkins
  become: yes
  become_method: sudo
  become_user: root
  tasks:

  - include_vars:  vars/variables.yml

  - include_tasks: tasks/install_jenkins.yml
  - include_tasks: tasks/install_jenkins_plugins.yml

variables.yml

As the name says: the variables used in the Jenkins tasks. Note that it also refers to an rpm file which is stored in Git, together with the other Ansible files. Usually binary files are not stored in Git, but this was out of laziness.

jenkins:
  key: "jenkins.io.key"
  file: "jenkins-2.127-1.1.noarch.rpm"
  sysconfig: /etc/sysconfig/jenkins
  plugins: /var/lib/jenkins/plugins
  hook_init: /var/lib/jenkins/init.groovy.d
  
nexus:
  url: "https://nexus3.mycompany.nl/repository/jenkins-plugins/download/plugins"

install_jenkins.yml

The main task that installs Jenkins. Note that installation normally involves the Jenkins GUI. To circumvent this, the file /etc/sysconfig/jenkins is changed in such a way that it disables the setup wizard.

- name: Copy Jenkins rpm package and key
  copy:
    src: "{{ item }}"
    dest: /tmp
    owner: root
    group: root
  with_items:
    - jenkins/{{ jenkins.key }}
    - jenkins/{{ jenkins.file }}

- name: Import key
  rpm_key:
    state: present
    key: /tmp/{{ jenkins.key }}

- name: Install Jenkins
  yum:
     name: /tmp/{{ jenkins.file }}
     state: present

- name: Set Jenkins JENKINS_JAVA_OPTIONS; disable initial setup by means of the GUI
  replace:
    path: "{{ jenkins.sysconfig }}"
    regexp: "-Djava.awt.headless=true"
    replace: "-Djava.awt.headless=true -Djenkins.install.runSetupWizard=false"

- name: Creates directory (if not existing)
  file:
    path: "{{ jenkins.hook_init }}"
    state: directory
    owner: jenkins
    group: jenkins

- name: Install init .groovy scripts
  template:
    src: "{{ item }}"
    dest: "{{ jenkins.hook_init }}"
  with_fileglob:
    - files/jenkins/init.groovy.d/*

install_jenkins_plugins.yml

The task that installs the plugins.

- name: Copy the plugins
  get_url:
    url: "{{ nexus.url }}/{{ item.plugin }}/{{ item.version }}/{{ item.plugin }}.hpi"
    dest: "{{ jenkins.plugins }}/{{ item.plugin }}.hpi"
    owner: jenkins
    group: jenkins

  with_items:
    - { plugin: apache-httpcomponents-client-4-api, version: '4.5.3-2.0' }
    - { plugin: authentication-tokens, version: '1.3' }
    - { plugin: blueocean-autofavorite, version: '1.0.0' }
    - { plugin: cloudbees-bitbucket-branch-source, version: '2.2.5' }
    - { plugin: blueocean-bitbucket-pipeline, version: '1.3.1' }
    - { plugin: blueocean, version: '1.3.1' }
    - { plugin: blueocean-pipeline-editor, version: '1.3.1' }
    - { plugin: branch-api, version: '2.0.15' }
    - { plugin: blueocean-commons, version: '1.3.1' }
    - { plugin: blueocean-config, version: '1.3.1' }
    - { plugin: credentials-binding, version: '1.13' }
    - { plugin: credentials, version: '2.1.16' }
    - { plugin: blueocean-dashboard, version: '1.3.1' }
    - { plugin: display-url-api, version: '2.1.0' }
    - { plugin: blueocean-display-url, version: '2.1.1' }
    - { plugin: docker-java-api, version: '3.0.14' }
    - { plugin: docker-commons, version: '1.9' }
    - { plugin: docker-workflow, version: '1.14' }
    - { plugin: docker-plugin, version: '1.0.4' }
    - { plugin: durable-task, version: '1.15' }
    - { plugin: blueocean-events, version: '1.3.1' }
    - { plugin: favorite, version: '2.3.1' }
    - { plugin: cloudbees-folder, version: '6.2.1' }
    - { plugin: git-server, version: '1.7' }
    - { plugin: blueocean-git-pipeline, version: '1.3.1' }
    - { plugin: git-client, version: '2.6.0' }
    - { plugin: git, version: '3.6.4' }
    - { plugin: github-api, version: '1.90' }
    - { plugin: github-branch-source, version: '2.2.6' }
    - { plugin: blueocean-github-pipeline, version: '1.3.1' }
    - { plugin: github, version: '1.28.1' }
    - { plugin: htmlpublisher, version: '1.14' }
    - { plugin: blueocean-jira, version: '1.3.1' }
    - { plugin: jira, version: '2.4.2' }
    - { plugin: jsch, version: '0.1.54.1' }
    - { plugin: junit, version: '1.21' }
    - { plugin: blueocean-jwt, version: '1.3.1' }
    - { plugin: jackson2-api, version: '2.8.7.0' }
    - { plugin: ace-editor, version: '1.1' }
    - { plugin: handlebars, version: '1.1.1' }
    - { plugin: momentjs, version: '1.1.1' }
    - { plugin: jquery-detached, version: '1.2.1' }
    - { plugin: mailer, version: '1.20' }
    - { plugin: matrix-auth, version: '2.1.1' }
    - { plugin: matrix-project, version: '1.12' }
    - { plugin: mercurial, version: '2.2' }
    - { plugin: blueocean-personalization, version: '1.3.1' }
    - { plugin: workflow-aggregator, version: '2.5' }
    - { plugin: pipeline-graph-analysis, version: '1.5' }
    - { plugin: blueocean-pipeline-scm-api, version: '1.3.1' }
    - { plugin: blueocean-pipeline-api-impl, version: '1.3.1' }
    - { plugin: workflow-api, version: '2.23.1' }
    - { plugin: workflow-basic-steps, version: '2.6' }
    - { plugin: pipeline-build-step, version: '2.5.1' }
    - { plugin: pipeline-model-definition, version: '1.2.3' }
    - { plugin: pipeline-model-declarative-agent, version: '1.1.1' }
    - { plugin: pipeline-model-extensions, version: '1.2.3' }
    - { plugin: workflow-cps, version: '2.41' }
    - { plugin: pipeline-input-step, version: '2.8' }
    - { plugin: workflow-job, version: '2.15' }
    - { plugin: pipeline-milestone-step, version: '1.3.1' }
    - { plugin: pipeline-model-api, version: '1.2.3' }
    - { plugin: workflow-multibranch, version: '2.16' }
    - { plugin: workflow-durable-task-step, version: '2.17' }
    - { plugin: pipeline-rest-api, version: '2.9' }
    - { plugin: workflow-scm-step, version: '2.6' }
    - { plugin: workflow-cps-global-lib, version: '2.9' }
    - { plugin: pipeline-stage-step, version: '2.3' }
    - { plugin: pipeline-stage-tags-metadata, version: '1.2.3' }
    - { plugin: pipeline-stage-view, version: '2.9' }
    - { plugin: workflow-step-api, version: '2.13' }
    - { plugin: workflow-support, version: '2.16' }
    - { plugin: plain-credentials, version: '1.4' }
    - { plugin: pubsub-light, version: '1.12' }
    - { plugin: blueocean-rest, version: '1.3.1' }
    - { plugin: blueocean-rest-impl, version: '1.3.1' }
    - { plugin: scm-api, version: '2.2.5' }
    - { plugin: ssh-credentials, version: '1.13' }
    - { plugin: ssh-slaves, version: '1.22' }
    - { plugin: script-security, version: '1.35' }
    - { plugin: swarm, version: '3.6' }
    - { plugin: sse-gateway, version: '1.15' }
    - { plugin: sonar, version: '2.6.1' }
    - { plugin: structs, version: '1.10' }
    - { plugin: token-macro, version: '2.3' }
    - { plugin: variant, version: '1.1' }
    - { plugin: blueocean-web, version: '1.3.1' }
    - { plugin: deployit-plugin, version: '6.1.1' }
    - { plugin: bouncycastle-api, version: '2.16.2' }
    - { plugin: blueocean-i18n, version: '1.3.1' }
    - { plugin: jquery, version: '1.12.4-0' }
    - { plugin: http_request, version: '1.8.21' }
    - { plugin: lockable-resources, version: '2.0' }
    - { plugin: email-ext, version: '2.61' }
    - { plugin: ldap, version: '1.17' }
    
- name: Restart Jenkins
  service: name=jenkins state=restarted

- name: Wait for Jenkins to become available
  wait_for: port=8080

The Ansible playbook we are using includes many more steps: it creates a logical volume, copies and decrypts credentials and uploads them into Jenkins, installs and configures specific plugins, and installs tools (Maven, Fortify, ...), but the structure is similar.

The next blog post explains how to configure Jenkins by means of the /init.groovy.d/ groovy scripts.

It starts with requirements

Martin Fowler explains the concepts of Continuous Integration (CI) and Continuous Delivery (CD) in two articles, Continuous Integration and Continuous Delivery (*). These articles already contain some hints towards requirements, and with this blog post I want to provide an overview of the requirements I have identified for our CI/CD pipeline.
If you visualize the CI/CD process, you will see that it involves several main topics. You will also notice that a CI/CD pipeline is not a straight line with a beginning and an end; it is a continuous and cyclic process in which all aspects have a 'continuous' character.

(*)
Do not confuse Continuous Delivery with Continuous Deployment. Continuous Deployment goes one step further: all changes that pass the (automated) tests are also automatically deployed to production.



The requirements of a CI/CD pipeline cover all of these aspects. It is up to you which requirements you find important. Articles on the Internet usually highlight the most obvious ones.

In my pursuit of a 'complete' set of requirements, I came up with the list below. Some of the topics are still underexposed. For example, we looked into automating 'Design' and came up with tools such as CA Agile Requirements Designer, but ultimately decided to focus on other automation topics. 'Operate' also deserves a bit more attention. We already implemented the ITIL processes and supporting tools, but it was difficult to define more day-to-day requirements for 'Continuous Operation' (think, for example, of automated server restarts after an incident).

The requirements list so far:

General requirements

  • Make the CI/CD pipeline the only way to deploy to Production
  • CI/CD pipeline as code: your code, your automation, and your orchestration are committed to source code management (e.g. Git). The pipeline has the exact same versioned lifecycle, helping you to ensure long-term maintainability
  • The CI/CD pipeline must be highly available, because it must be possible to create and deploy fixes 24x7
  • Use the same mechanism to deploy to every environment, incl. production
  • Use short feedback loops - break the process as soon as a certain quality threshold is not met
  • Make small releases
  • Release often

Design

  • Designs are versioned
  • Create workitems that refer to the design (e.g. using Jira, VSTS, ...)

Code

  • Keep source code in a Git repository (e.g. Bitbucket)

Build

  • Webhooks are used to automatically start a build process after a commit/push
  • Keep binaries (dependencies) and artefacts in an artefact repository (e.g. Nexus, Artifactory)
  • Build artefacts are automatically uploaded into an artefact repository
  • Release artefacts are built only once
  • Builds are made by means of a CI tool (e.g. Jenkins)

Build Automation QA

  • Perform static code analysis using Sonar, Fortify, and Sonatype CLM
  • Unit tests (JUnit, Cucumber) are performed as part of the build automation
  • Integration tests are (partly) performed as part of the build automation (e.g. using Docker)
  • A code coverage threshold must be defined; set up a Quality Gate to fail builds if test coverage drops below that threshold (see the sketch after this list)
  • Sonar and Fortify checks must not contain high/critical defects
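
For the Quality Gate requirement, a sketch of how this can look in a Jenkins pipeline using the SonarQube scanner plugin; the server name 'MySonarQube' is an example, and SonarQube must be configured to call Jenkins back via a webhook:

// Scripted pipeline fragment: break the build when the SonarQube
// quality gate (which can include a coverage threshold) is not met
stage('SonarQube analysis') {
    withSonarQubeEnv('MySonarQube') {   // server name as configured in Jenkins
        sh 'mvn -B sonar:sonar'
    }
}
stage('Quality Gate') {
    timeout(time: 10, unit: 'MINUTES') {
        def qg = waitForQualityGate()   // waits for the SonarQube webhook
        if (qg.status != 'OK') {
            error "Quality gate failed with status: ${qg.status}"
        }
    }
}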

Provision

  • The CI/CD pipeline is stored as code; the CI/CD pipeline can be recreated in an acceptable timeframe
  • Test environments are created on-demand (e.g. by means of Ansible); the infrastructure is configured as code (infrastructure-as-code)
  • Requests for a test environment, including middleware and connections, are done by means of REST APIs
  • (Test) environments can be used for both (very) short and longer terms

Deploy

  • Source of a deployment is an artefact from the artefact repository
  • Deploy an application by means of a deployment tool (e.g. XL Deploy, Cloud Foundry CLI, UrbanCode Deploy)
  • Deploy database scripts by means of a deployment tool
  • Stubs and drivers are deployed by means of a deployment tool
  • Automatically roll back a deployment gone bad
  • Implement blue/green deployment to support zero downtime deployment (ZDD)

Test

  • System and regression tests are automated
  • User acceptance tests are automated (e.g. UFT, Cucumber, Postman/Newman, SoapUI)
  • Load and stress tests are automated
  • Penetration tests are automated (e.g. OWASP ZAP)
  • Automated tests are triggered/initiated from the CI/CD pipeline
  • Drivers and stubs, needed for different test environments, are developed in parallel with the development of a new feature
  • Tests are reusable (for next test cycles/regression)
  • Tests can be executed remotely
  • Tests, test data, drivers, and stubs are versioned (e.g. in Git)
  • Deployment to production is only possible when all OTA tests have successfully passed; automatically check that the version in Production is not higher than the highest version in OTA

Release

  • Release management is automated (e.g. XL Release, VSTS)
  • Release builds can be promoted; this depends on the workflow. If a workflow only builds release artefacts, the artefact repository should have a staging function
  • Releases are started and registered in an orchestration tool (e.g. XL Release, VSTS)

Operate

  • Maintenance scripts are versioned in Git

Monitor

  • The CI/CD pipeline is monitored (e.g. insufficient disk space is automatically detected and the team is informed)
  • The CI/CD pipeline automatically restarts in case the server restarts
  • Standard monitoring tools are used (e.g. Splunk)
  • Monitor the application in production (e.g. use Dynatrace, Splunk, Oracle Enterprise Manager). Monitor:
    • performance
    • number of request per second
    • throughput
    • CPU usage (application server, database)
    • Memory usage
  • Monitoring results are shared with the devops team, so they can judge what to improve (e.g. by means of a dashboard)

Orchestration

  • All steps in the CI/CD pipeline are orchestrated by the appropriate tool (e.g. overall orchestration by VSTS or XL Release in combination with Jenkins)

Security

  • Make sure two or more reviewers have approved a Pull Request, and that there are no failing builds or test runs associated with that commit
  • Do not allow manual upload of artefacts in the deployment tool
  • Uploading artefacts in the artefact repository is done by means of a non-personal account
  • Deployments are done by means of a non-personal account
  • Refine access to specific features by setting permissions for a user or group:
    • Release OTA
    • Release Prod
    • Approve critical steps
    • SCM; arrange permissions in such a way that only certain people can make changes in specific repos and/or branches, and such that nobody can make changes directly in production repos
    • Dashboards
    • Define team admin role
  • Store sensitive information (such as database credentials) in a vault or an encrypted dictionary (e.g. CyberArk, XL Deploy dictionary, CredHub, HashiCorp Vault)
  • LDAP access to tools (e.g. Jenkins) is mandatory
  • Tools must have fine-grained/matrix authorization
  • User communication with tools is by means of HTTPS (TLS with server-side authentication)
  • Non-personal accounts, SSH key pairs and TLS with mutual authentication are used to communicate between tools
  • Only grant accounts temporary (high) access to start deployment to a Production environment
  • Use a 4-eyes principle to grant accounts higher access
  • Higher authorizations are delegated to selected members of a devops team, who issue underlying rights to team members
  • Audit trail: automatically log build, test, and deploy results. Then complete the circle by linking in commits, reviews, and issues
  • Traceability: make sure code changes, code reviews, build results, deploys, and issues are all linked together (as in, you can get from one to the other with a single mouse click). Track:
    • Designs - a design must be linked to a workitem
    • Workitems: epic, story, or task (e.g. Jira workitems) - a workitem is associated with a Git feature branch / code commit hash
    • Git commit - associated with the committer (developer)
    • Pull request and approvals by reviewers - associated with a Git feature branch / commit hash
    • Build job and build number - associated with a Git feature branch / commit hash
    • Artefact - associated with a Git feature branch / commit hash
    • Release version/identification (the release version preferably is the Git tag), including approval - associated with an artefact in the artefact repository
    • Artefact in production - associated with an artefact in the artefact repository

Metrics, reporting, aggregation and notification

  • Metrics must be made available; provide a dashboard to see the throughput of each feature and queues or bottlenecks in the process
  • Service descriptions are automatically generated from code and uploaded to a service repository; this can also be something like Confluence
  • Test reports are automatically generated
  • Test reports are aggregated in one dashboard (e.g. Hygieia)
  • Test results are automatically fed into an issue tracker (e.g. Jira)
  • Developers are informed (by means of email):
    • of a breaking feature branch build (only the committer receives the mail, to prevent email overload)
    • of a successful feature branch build (only the committer receives the mail, to prevent email overload)
    • of any breaking release or snapshot build (all developers)
    • of a successful release or snapshot build (all developers)
    • of a deployment to production (stakeholders to be determined by the team)

Governance

  • The devops team keeps track of CI/CD development by means of a scorecard
  • The devops team actively shares successes and knowledge with other project teams

My first blog

A few years ago I started with CI/CD. I had some basic knowledge of Jenkins, XL Deploy/XL Release, and Nexus, but no good understanding of the CI/CD concepts, the workflow, and how to glue everything together. Yes, you can find some information on the Internet, but these are only tiny pieces, and when you don't know the whole picture, it becomes a complex puzzle!
After some time, things began to fall into place, and with the help of all these snippets in blogs, on Stack Overflow and elsewhere, I managed to define a CI/CD pipeline and implement various parts. Of course there are other people struggling with CI/CD, so I wanted to do something in return, in the form of a few blogs.
I don't want to be pretentious and claim that I know everything about CI/CD, but I picked up a few things along the way, so I hope that I can help at least a few people.
