Thursday 29 October 2020

Azure DevOps Release Pipeline fails with "The current operating system is not capable of running this task. That typically means that the task was written for Windows only." while trying to deploy an Azure DevOps Lab

I was following this lab guide when I came up against this error while studying for the AZ-400 certification.

The hands-on labs really are great and should be used if you are learning Azure DevOps.

https://www.azuredevopslabs.com/labs/vstsextend/kubernetes/

Once I had followed through the lab I was faced with "The current operating system is not capable of running this task. That typically means that the task was written for Windows only." when I tried to run the Release Pipeline. The Build Pipeline completed without issue, so I decided to dig in a bit to find the cause; for once, the error message was pretty descriptive.


It turns out I had misconfigured the Release Pipeline stage which runs a Windows script to use an Ubuntu DevOps agent, which obviously caused the code to fail to execute.

The fix here was to change this to a Windows-based agent pool so that the code could execute.
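
The lab uses the classic Release editor, but for anyone defining pipelines in YAML the agent pool is set per job. A minimal sketch, with job and step names of my own invention:

jobs:
- job: DeployStage
  pool:
    vmImage: 'windows-latest'   # a Windows agent, so Windows-only tasks can run
  steps:
  - script: echo "this step now executes on a Windows agent"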



 

Monday 25 May 2020

Study Resources for Learning Docker and Passing the Docker Certified Associate (DCA) Certification

On the 23rd of March, the UK Government announced we would be entering “lockdown” for the foreseeable future, with no real end date given. My role within Microsoft usually means that I’m travelling at least 3-4 times per month; with no trips possible due to lockdown, I decided it was an opportunity to really study hard and skill up in some of my weaker areas.

This helped me define my study plan for the rest of 2020. For me, having a study plan aligned to getting certifications makes sense: the formality of having an exam booked, with a syllabus of content to learn, has always helped me keep on track. This is my justification for chasing certifications instead of “actual skills”, even though the two overlap.

For the rest of 2020 I have set myself the goal of getting the following certifications:

  •     Docker Certified Associate (DCA)
  •     Linux Foundation Certified Systems Administrator (LFCS)
  •     Certified Kubernetes Administrator (CKA)
  •     Microsoft Azure Certified DevOps Expert (AZ-400)
The entire Containers and DevOps ecosystem has always interested me. It is also a critical area for my role as a Cloud Solution Architect with a focus on applications & infrastructure.

The first step was to get the Docker Certified Associate out of the way, which is what I am going to cover in this post. Docker was the obvious choice to start with in the journey to get deeper skills in the Container and DevOps ecosystem: it has become the industry-leading container engine and is the default engine shipped with Kubernetes.

The Docker Certified Associate (DCA) certification is the only professional certification offered that covers Docker Inc and their technology. It is a multiple-choice exam with some newer DOMC questions, delivered remotely, consisting of 55 questions which must be answered in a 90-minute period. The DOMC questions are weird; I would use the simulator linked at the bottom of this page (Ref 1) to get an idea of what to expect before you go into the exam. Full details can be found here: https://success.docker.com/certification

The following resources are what I used to get up to speed enough with Docker to pass the certification exam.

Video Training

The courses listed below are the ones I used before the exam. I took them in this order and learned something new from each of them. Many of the courses have overlapping content, but that is not a bad thing when learning something new.

Pluralsight (Nigel Poulton)
Good intro course which covers many of the basics around containers and container orchestration. If you have some experience with containers you can probably skip this one, but I found it useful.

Pluralsight (Nigel Poulton)
Great course which was key for me to build that first level of formal knowledge around Docker. Not to be missed unless you have some production experience with Docker.

Pluralsight (Nigel Poulton)
If you only have the choice of one course from the Pluralsight library, do this one. It is the most complete and well-rounded of the courses and will put you in a good place to go deep with Docker. However, it does not cover all the areas of the DCA and will not make you exam ready on its own.

Pluralsight (Nigel Poulton)
Short and to the point, and covers loads of good detail on how networking in Docker works.

Pluralsight (Elton Stoneman)

Excellent course if you have some time before the exam to go deeper with Swarm. I did it to bridge some gaps but after passing the exam I do not plan to do much with Swarm as Kubernetes is the orchestrator of choice at work.

Udemy (Brett Fisher)

This is also an excellent course. I only did half of it, which covered the Docker content, but I intend to go through the Kubernetes sections as well. This course does assume some knowledge but will help massively in getting prepared for the DCA. Brett does state this is not an exam prep course, so other study is required to round off the areas it brushes over.

Linux Academy (William Boyd)
This is unmissable in the weeks before the DCA exam. It is very exam focused, which none of the courses above are, and it covers all the points on the DCA syllabus.

Linux Academy (Travis Thomsen)
Again, don’t go into the exam without having watched this course. I skipped some of the earlier videos and went to my weak areas to ensure I filled the gaps. Highly recommended. It is very fast paced, so if you are building your own study plan do this course towards the end.

Linux Academy (Travis Thomsen)
Very good resource to help build the hands-on skills needed to be confident with Docker. I did this two days before the exam.



All the video content listed above is absolutely worth your time if you plan to sit the Docker Certified Associate (DCA) exam. I must say that Linux Academy stood out from the rest, probably because they provide hands-on labs.

Reading Material

To supplement the video training, I also used the following resources:

The Docker Deep Dive Book – Nigel Poulton

This is an excellent book, not only for study but for general reference as well. Just buy it and read it; it is on Amazon for less than a tenner. I also printed the exam blueprint and used it to cross-check exam topics with the contents of this book. Not to be missed. The technical diagrams in this book are what stand out; they could even make it into design documents in some cases.


Docker Reference Architecture – Docker Inc

I read most of the relevant architectures a couple of times and did one last scan the day of the exam.


Docker Study Guide – Evgeny Shmarnev

This is a collection of Docker documentation which links to the exam content. I used it extensively and it was helpful.


Practice Exams

Practice exams are mandatory before sitting a certification in my experience. I used the following ones.

Linux Academy – offers practice exams as part of its courses.

Whiz Labs – Docker Certified Associate Practice Tests

Example Questions for DCA

Ref 1: (DOMC questions)

https://sei.caveon.com/take/?launch_token=.eJwNy8ERwCAIBMBefIcZQYxQSyYP4bD_EpL979PuFHEs0PY8pIuTfBdIB2N12eam7Wo1PcKgAoC39akDfOpwZMW_2vsB-YAUaA.Ea20GQ.enU9ox51hKShTHna0DCRpHdhi30

Sunday 24 May 2020

Understanding Azure Linux VM Authentication with SSH Key Pairs

The best and most secure option for authenticating to an Azure Linux VM is with a private & public key pair. It is possible to configure password authentication during the deployment of a VM, but this could be subject to a brute-force attack. A private & public key pair is used to secure the authentication: the Azure Linux VM holds the public key and the administrator’s workstation stores the private key. It is possible to use the same key pair to authenticate to multiple Azure Linux VM instances, and most do.

I am going to start by deploying a new Ubuntu instance as an Azure VM; during the deployment I am going to select SSH Public Key for the authentication type. For certificate-based authentication you must still specify a username; in this example I went with AzureUser. I have also selected the option to generate a new key pair as part of this deployment. This key pair will become my primary set of keys to authenticate to all my Azure Linux VM instances, so I have given it a descriptive name.
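
If you prefer the CLI to the portal wizard, something like the following should achieve the same result; the resource group and VM names here are placeholders of my own, and --generate-ssh-keys writes the key pair to ~/.ssh on your workstation rather than offering a download:

az vm create \
  --resource-group rg-linux-lab \
  --name ubu-vm-1 \
  --image UbuntuLTS \
  --admin-username AzureUser \
  --generate-ssh-keys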


Continue through the rest of the wizard to deploy a new Azure Virtual Machine. Once you reach the end and push the configuration to Azure Resource Manager in the form of a deployment, you will be faced with this prompt. You must select the option to download the private key, as Azure does not store it for you. If you fail to download the private key at this stage, the key pair will be unusable.


When you generate a key pair directly from Azure, the default format is PEM, and this is the format the private key is downloaded in. If you want to authenticate to Azure Linux VMs using Putty you will have to convert the PEM file to PPK, or authentication will fail. The PuttyGen tool can be used to do this. The first step is to import the private key PEM file from Azure.


Once it has been successfully imported, use the Save Private Key option to ensure you end up with a PPK file which Putty can parse for authentication.
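
If you have the putty-tools package on a Linux machine, the same conversion can be scripted with the puttygen command line tool; the file names here are just examples:

puttygen azure-key.pem -O private -o azure-key.ppk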


Once you have a PPK version of the private key, you can use Putty to authenticate to the Azure Linux VM. Do not forget you must point to the PPK file from inside Putty under SSH > Auth.


Ok, so we have covered how to deploy a new Azure Linux VM with a new key pair, convert the private key into a form Putty can use, and authenticate to the VM.

The next step is to configure other Azure Linux VMs to use this key pair for authentication. A new resource was created in the Azure Resource Group in which the original Azure Linux VM was deployed; this resource is an SSH Key. It holds the public half of the key pair, which can be configured on other VM instances. If you open the SSH Key resource, you can copy and paste the key itself.


Now, if we want to update existing VMs to use the key pair, we can go to the VM and under Reset Password select the Reset SSH Public Key option. From here we paste in the new public key which was created as part of the original VM deployment.


Once this has been committed, we will be able to use our master private key to authenticate to this Azure Linux VM.
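
The same reset can be done from the Azure CLI with az vm user update; the resource group, VM name and key path below are placeholders of my own:

az vm user update \
  --resource-group rg-linux-lab \
  --name ubu-vm-2 \
  --username AzureUser \
  --ssh-key-value "$(cat ~/.ssh/azure-master-key.pub)"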

Configure Azure Linux VM for Certificate Authentication

In this example I am going to reconfigure an existing Azure Linux VM to use certificate authentication instead of passwords. This VM was deployed using password authentication, which will still work after this configuration change. When key pairs are used for authentication they are placed in the following way:
·         Public Key - this is placed on the Azure Linux VM
·         Private Key - this is kept on the administrative workstation
The key pair can be used to authenticate to many Azure Linux VMs; the important factor is keeping the private key secure.

PuttyGen is a handy tool which is installed as part of Putty and allows you to generate key pairs. The first step is to click Generate, which uses mouse movement to generate a random key pair. Once this has completed, you will see a Public Key displayed in the Key window.

The next step is to click Save Public Key. You will be prompted to enter a passphrase, which is entirely optional; if a passphrase is entered here, you will be required to enter it when you authenticate to your Azure Linux VMs using this key pair. Once this has been done, we must do the same for the private key, so click Save Private Key and choose a suitable location for PuttyGen to write it.

All going well we should be left with two files, one holding the public key and one holding the private key.
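
As an aside, if you are working from a Linux or macOS workstation, ssh-keygen produces an equivalent pair without PuttyGen; the file name is just an example:

ssh-keygen -t rsa -b 4096 -f ~/.ssh/azure-master-key

This writes the private key to azure-master-key and the public key to azure-master-key.pub.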

Open the public key file; this is the key we must configure the Azure Linux VM with. Copy the entire contents of the public key file.

From the Azure Portal, find the Azure Linux VM you are looking to reconfigure and go to the Reset Password option. Click on Reset SSH Public Key.
You will be presented with the following fields. You must enter a valid username on the Azure Linux VM; in my case the default AzureUser was still in use. You then must paste in the entire public key. Click Update to commit the changes.

To test the configuration, open Putty and click Connection > SSH > Auth; from here we must point to the private key file so that Putty can present it when asked by the Azure Linux VM.

Now try to connect. You will be presented with a username prompt; I entered AzureUser in my example, and as you can see the connection has been authenticated successfully with the certificates.

Friday 22 May 2020

Building a Hybrid Docker Swarm (Windows & Linux) on Azure

In this post I am going to outline how to build a hybrid Docker Swarm cluster with container nodes running Ubuntu and Windows Server. This will allow Windows and Linux containers to run on the same Swarm cluster.

This lab is all running in Azure and is built up with four servers: two Ubuntu 18.04 and two Windows Server 2019.


You can consult these links for details on how to get Docker CE installed onto each of the host operating systems. It’s pretty straightforward.

Install Docker CE on Ubuntu Server

Install Docker CE on Windows Server 2019

Once we have our Docker hosts built, the first step is to initialise the Swarm. In Docker Swarm, hosts can be either “managers” or “workers”. To get a Swarm going we must first create a manager, which can be done using the following command:

docker swarm init

I’ve chosen to make my first Ubuntu Server (ubu-docker-1) the cluster's only manager node. In a production deployment you would spend time planning and designing the placement of manager nodes; it is not uncommon to see Swarm clusters spanned across Azure Availability Zones, which obviously adds complexity but also adds another design factor for placing manager and worker nodes.

Once you run the docker swarm init command on the first server, it will output a connection token to the console which can be used to join other nodes to the Swarm cluster.
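
The output includes a ready-made join command to run on each additional node; it looks roughly like this (the token and manager IP below are placeholders):

docker swarm join --token SWMTKN-1-<token> 10.0.0.4:2377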


I’ve gone ahead and done this on ubu-docker-2; the best way to query a Swarm cluster for nodes is to run docker node ls from one of the managers.


The same command runs successfully on Windows Server 2019 to join it to the Swarm cluster. 



Now that we have a Swarm cluster with Ubuntu and Windows nodes, it is possible for us to run Docker service deployments which use both Windows and Linux containers.
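
If you take this further, placement constraints on node.platform.os stop the scheduler putting a Linux image on a Windows node or vice versa; a minimal sketch, with service names of my own:

docker service create --name web-linux \
  --constraint 'node.platform.os == linux' \
  nginx:latest

docker service create --name ping-windows \
  --constraint 'node.platform.os == windows' \
  mcr.microsoft.com/windows/nanoserver:1809 ping -t localhost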

Thursday 21 May 2020

Installing Docker CE on Windows Server 2019 1809 SAC

It is possible to install Docker CE directly on Windows Server 2019, which gives first-party support for running Docker containers on a Windows host. It's very easy to do using Install-Package.

Install-Module DockerMsftProvider -Force



Install-Package Docker -ProviderName DockerMsftProvider -Force


You will get a daemon connection error if you do not restart the host after the installation.


Restart-Computer


This will run a first party container using the Nano Server image, which will be pulled from Docker Hub. 

docker run hello-world:nanoserver

How to build a Dockerfile into an image, tag it and push it to Docker Hub

We use Dockerfiles to create images; they are a bit like blueprints that outline applications. In this post I am going to walk through how to:

  1. Create a Dockerfile
  2. Add a custom COPY directive
  3. Build a Dockerfile into an image
  4. Query and identify images on a Docker host
  5. Tag a Docker image on the Docker host
  6. Push a Docker image to Docker Hub
Below is an example of a Dockerfile which is basically just an nginx web server. We can tell this because the FROM directive points to the nginx:latest image, which is stored on Docker Hub. This is the base image which will be used to build this Dockerfile. If this image does not exist on the Docker host when we go to build the file, Docker will automatically pull it down locally to the server.

The WORKDIR directive allows us to change the working directory inside the container. As part of this example we are changing directory so that we can copy in a new index.html file to customise the landing page for nginx.

The COPY directive allows us to copy a local index.html to our working directory. The custom index.html file is in the same folder as the Dockerfile, so the process will copy the index.html from the local folder on the host and overwrite what is in /usr/share/nginx/html.


# this shows how we can extend/change an existing official image from Docker Hub

FROM nginx:latest
# highly recommend you always pin versions for anything beyond dev/learn

WORKDIR /usr/share/nginx/html
# change working directory to root of nginx webhost
# using WORKDIR is preferred to using 'RUN cd /some/path'

COPY index.html index.html

# I don't have to specify EXPOSE or CMD because they're in my FROM


Now that we have a basic understanding of how the Dockerfile is structured we can build a new Docker image from the Dockerfile. 

From the Docker host cd to the directory which stores the Dockerfile (ensure the sample index.html is also available, as our Dockerfile copies and overwrites the default installed as part of nginx). 

This command builds the image from the Dockerfile; the . at the end of the line means the working directory is used as the build context. We are also giving the image a new name, rpb-web-custom.

docker build -t rpb-web-custom .     

This command allows us to query the system and find all the images present on this host. 

docker image ls

We can see that the rpb-web-custom image now exists with an IMAGE ID. We need the Image ID for the next command to log a tag against the image.


The following command tags our local image with a tag we can use when pushing the image to Docker Hub. 

docker tag 9b1ac3 ryanbetts/rpb-web-custom

The following command takes our locally tagged image and pushes it up to Docker Hub at ryanbetts/rpb-web-custom.

docker push ryanbetts/rpb-web-custom 

Wednesday 20 May 2020

Docker Management: Clean up all services, containers and images on a Docker host

The following commands can be used to clean up a Docker host. This could be useful if you have been learning about Docker and want to get your lab hosts back to a vanilla state. They could of course work in production but please use the commands with caution.

Remove all services

docker service rm $(docker service ls -q)

This command queries Docker for a list of service IDs and passes them to the docker service rm command. You could of course use it with a single service reference, for example:

docker service rm <service name/id>

Remove all containers

docker ps -aq

Use the docker ps -aq command to get a list of all the containers on a Docker host, both running and stopped.

docker stop $(docker ps -aq)

The first step is to use docker stop, passing it the output of docker ps -aq, to force all running containers on the system to stop.

docker rm $(docker ps -aq)

The last step is to actually rm the containers from the host; again, pass the output of docker ps -aq into the docker rm command to achieve this.

Remove all images

docker rmi $(docker images -q)

This command removes all the images from the Docker host.

docker image prune

This command will remove all images which are not associated with a running or stopped container. Obviously, if you have rm'd all the images, this command is not needed, but it is a good maintenance exercise to complete on a production host when optimising disk space.
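
As an aside, the prune family can do most of this clean-up in one go; both of these are standard Docker CLI commands:

docker system prune

docker image prune -a

The first removes stopped containers, dangling images and unused networks after a confirmation prompt; the second removes every image not used by at least one container.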

Tuesday 19 May 2020

Authenticating to Azure Container Registry with Docker EE on Ubuntu

Azure Container Registry is a PaaS service which allows customers to manage and maintain their own container registry in the cloud. Azure Container Registry can be used with Docker if you do not want to use the default, Docker Hub, to store your container images.

It is very easy to create a new instance of Azure Container Registry. From the portal, search for Azure Container Registry and follow the steps. You will notice the SKU options available for ACR; full details are available here, but in short, throughput, disk storage and private network access are some of the main drivers for customers choosing Premium over Standard.



Once you have an Azure Container Registry, copy your registry server name to the clipboard. We must pass it into Docker using the docker login command, in the following format:

docker login <registry server name>


Next you will be prompted for a username and password. These can be accessed from the Azure Portal: if you go to the Container Registry and look under Access Keys, you will notice an option to Enable or Disable the "Admin User". This must be enabled for Docker to be able to authenticate to ACR. Once enabled, you should be presented with a username and password string. Enter these into Docker and you should be authenticated to the ACR instance. Now that this connection has been made, you will be able to push and pull Docker images to the ACR.
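
If you would rather script this, the Azure CLI can retrieve the admin credentials for you; the registry name below is a placeholder:

az acr credential show --name <registry name>

docker login <registry name>.azurecr.io

The first command prints the admin username and passwords; enter them at the docker login prompt for the registry's .azurecr.io login server.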


Saturday 16 May 2020

Using tasksel on Ubuntu Server 18.xx to install Ubuntu Desktop GUI environment

The Ubuntu and Debian package known as "tasksel" provides an easy and straightforward way to install some of the most well-known components onto an Ubuntu or Debian system. In this example I am going to install the Ubuntu Desktop environment.

sudo apt-get update
sudo apt-get upgrade -y

sudo apt-get install tasksel -y
sudo tasksel

The sudo tasksel command will launch the text-menu interface below with check box options to install packages.



Use the space bar to select the item until a star appears in the column. The installation will then reach out to the Internet to pull down the binaries which make up the installation package to enable the Ubuntu Desktop.


When the installation completes, you will be back at the CLI prompt. If you type startx it should launch the graphical environment.  
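
As an aside, tasksel can also be driven non-interactively if you already know the task name, which is handy for scripting; ubuntu-desktop is the task selected in this walkthrough:

sudo tasksel install ubuntu-desktop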

Friday 15 May 2020

Configuring Azure Firewall with a contiguous public IP Range

The Azure Firewall is a service offering available to customers in Azure. I'm not going to cover the details of the architecture or the basics of deploying Azure Firewall in this article. 

Many enterprise customers are adopting Azure Firewall to help control and manage the traffic flow for their services within Azure and in hybrid locations across their WAN. A common ask from customers deploying edge-facing services is around public IP space, and how this differs in the cloud compared to on-premises with a traditional ISP.

Any customer with an active Azure Subscription can allocate and assign public addresses to their services from the portal. Many services, such as Azure Virtual Machines are provisioned with a public address as part of the automated deployment process.

Enterprise customers are usually looking for a little more control. This is where Public IP Prefixes come into the picture. It is possible for a customer to define a CIDR Block of public addresses directly in their subscription, to be used at their disposal. This is done by creating a new Public IP Prefix, as shown below.

As shown below, Public IP Prefixes can be provisioned with /31, /30, /29 or /28 CIDR blocks, giving a contiguous range of 2, 4, 8 or 16 public addresses. It is possible to bind one of these Public IP Prefixes to your Azure Firewall to ensure the public address range is contiguous.



Once you have your Public IP Prefix created, you must then use the Add IP Address option from the resource. This will create an actual usable address within the prefix range which can in turn be associated with Azure Firewall.


When you provision a new address you must give it a name and a resolveable DNS label.
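
For those scripting the deployment, the same prefix and address can be created with the Azure CLI; the resource group and resource names below are placeholders of my own. Note that Azure Firewall requires Standard SKU public addresses:

az network public-ip prefix create \
  --resource-group rg-firewall \
  --name fw-prefix \
  --length 28

az network public-ip create \
  --resource-group rg-firewall \
  --name fw-ip-1 \
  --public-ip-prefix fw-prefix \
  --sku Standard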


Now head over to Azure Firewall, go to the Public IP Configuration section and click on Add a Public IP Configuration. This will guide you through binding the new public address to your Azure Firewall. It's worth noting that you cannot provision an Azure Firewall with a Public IP Prefix directly; you must first create the Azure Firewall with its default of one random public address, then retrospectively configure the prefix as we are doing here.


From the Add Public IP Configuration window within Azure Firewall, you will notice from the drop-down that the public addresses you provisioned as part of the prefix block are now available to be bound to the outside of the Azure Firewall.