Thursday, 21 May 2020

Installing Docker CE on Windows Server 2019 1809 SAC

It is possible to install Docker CE directly on Windows Server 2019, which gives first-party support for running Docker containers on a Windows host. It is very easy to do, using PowerShell's Install-Module and Install-Package.

Install-Module DockerMsftProvider -Force



Install-Package Docker -ProviderName DockerMsftProvider -Force


You will get a daemon connection error if you do not restart the host after the installation.


Restart-Computer


The following command runs a first-party hello-world container using the Nano Server image, which will be pulled from Docker Hub.

docker run hello-world:nanoserver

How to build a Dockerfile into an image, tag it and push it to Docker Hub

We use Dockerfiles to create images; they act like blueprints that describe applications. In this post I am going to walk through how to:

  1. Create a Dockerfile
  2. Add a custom COPY directive
  3. Build a Dockerfile into an image
  4. Query and identify images on a Docker host
  5. Tag a Docker image on the Docker host
  6. Push a Docker image to Docker Hub
Below is an example of a Dockerfile, which is basically just an nginx web server; we can tell this because the FROM directive points at the nginx:latest image, which is stored on Docker Hub. This is the base image which will be used to build this Dockerfile. If this image does not exist on the Docker host, Docker will automatically pull it down to the server when we build the file.

The WORKDIR directive allows us to change the working directory inside the container. As part of this example we are changing directory so that we can copy in a new index.html file to customise the landing page for nginx.

The COPY directive allows us to copy a local index.html to our working directory. The custom index.html file is in the same folder as the Dockerfile, so the process copies the file from the local folder on the host and overwrites the default in /usr/share/nginx/html.


# this shows how we can extend/change an existing official image from Docker Hub

FROM nginx:latest
# highly recommend you always pin versions for anything beyond dev/learn

WORKDIR /usr/share/nginx/html
# change working directory to root of nginx webhost
# using WORKDIR is preferred to using 'RUN cd /some/path'

COPY index.html index.html

# I don't have to specify EXPOSE or CMD because they're in my FROM
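The COPY step above assumes a custom index.html sitting in the same folder as the Dockerfile. Any small page will do; a hypothetical example:

```html
<!-- sample index.html kept next to the Dockerfile -->
<!DOCTYPE html>
<html>
  <head><title>rpb-web-custom</title></head>
  <body>
    <h1>Hello from a customised nginx image</h1>
  </body>
</html>
```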


Now that we have a basic understanding of how the Dockerfile is structured we can build a new Docker image from the Dockerfile. 

From the Docker host cd to the directory which stores the Dockerfile (ensure the sample index.html is also available, as our Dockerfile copies and overwrites the default installed as part of nginx). 

This command builds the image from the Dockerfile; the . at the end of the line means use the current working directory as the build context. We are also giving the image a new tag, rpb-web-custom.

docker build -t rpb-web-custom .     

This command allows us to query the system and find all the images present on this host. 

docker image ls

We can see that the rpb-web-custom image now exists with an IMAGE ID. We need the Image ID for the next command to log a tag against the image.


The following command tags our local image with a tag we can use when pushing the image to Docker Hub. 

docker tag 9b1ac3 ryanbetts/rpb-web-custom

The following command takes our locally tagged image and pushes it up to Docker Hub at ryanbetts/rpb-web-custom.

docker push ryanbetts/rpb-web-custom 
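As an optional sanity check, you can pull the image straight back from Docker Hub and run it. This is a sketch only (the ryanbetts/rpb-web-custom tag comes from the example above; the guard makes the snippet a no-op on machines without the docker CLI):

```shell
# Verify the pushed image: pull it from Docker Hub and serve it locally.
# Guarded so the snippet does nothing where the docker CLI is absent.
if command -v docker >/dev/null 2>&1; then
    docker pull ryanbetts/rpb-web-custom
    # Map host port 8080 to nginx's port 80 inside the container.
    docker run -d --name rpb-web-test -p 8080:80 ryanbetts/rpb-web-custom
    RESULT=ran
else
    echo "docker CLI not found - commands shown for reference only"
    RESULT=skipped
fi
```

Browsing to http://localhost:8080 should then show the custom landing page.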

Wednesday, 20 May 2020

Docker Management: Clean up all services, containers and images on a Docker host

The following commands can be used to clean up a Docker host. This could be useful if you have been learning about Docker and want to get your lab hosts back to a vanilla state. They could of course work in production but please use the commands with caution.

Remove all services

docker service rm $(docker service ls -q)

This command queries Docker for a list of services and passes the output to the docker service rm command. You could of course use it with a single service reference, for example:

docker service rm <service name or id>

Remove all containers

docker ps -aq

Use the ps -aq command to find a list of all the containers running on a Docker host. 

docker stop $(docker ps -aq)

The first step is to use docker stop, passing in the output of docker ps -aq, to force all running containers on the system to stop.

docker rm $(docker ps -aq)

The last step is to actually rm the containers from the host; again, pass ps -aq into the docker rm command to achieve this.

Remove all images

docker rmi $(docker images -q)

This command removes all the images from the Docker host.

docker image prune

This command will remove all images which are not associated with a running or stopped container. Obviously if you have rm'd all the images this command is not needed, but it's a good maintenance exercise to complete on a production host when optimising disk space.
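For a lab reset, the container and image steps above can be rolled into one script. This is a sketch, guarded so it does nothing on machines without the docker CLI; run it with the same caution as the individual commands:

```shell
# One-shot lab clean-up: stop and remove every container, then remove all
# images. Errors from empty lists are suppressed. Use with care.
if command -v docker >/dev/null 2>&1; then
    docker stop $(docker ps -aq) 2>/dev/null
    docker rm $(docker ps -aq) 2>/dev/null
    docker rmi $(docker images -q) 2>/dev/null
    RESULT=cleaned
else
    echo "docker CLI not found - commands shown for reference only"
    RESULT=skipped
fi
```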

Tuesday, 19 May 2020

Authenticating to Azure Container Registry with Docker EE on Ubuntu

Azure Container Registry is a PaaS service which allows customers to manage and maintain their own container registry in the cloud. Azure Container Registry can be used with Docker if you do not want to use the default, Docker Hub, to store your container images.

It is very easy to create a new instance of Azure Container Registry: from the portal, search for Azure Container Registry and follow the steps. You will notice the SKU options available for ACR; full details are available here, but in short, throughput, disk storage and private network access are some of the main reasons customers choose Premium over Standard.



Once you have an Azure Container Registry, copy your registry server name to the clipboard. We must pass it into Docker using the docker login command, in the following format:

docker login <registry server name>


Next you will be prompted for a username and password. These can be accessed from the Azure Portal: go to the Container Registry and, under Access Keys, you will notice an option to Enable or Disable "Admin User". This must be enabled for Docker to be able to authenticate to ACR. Once enabled, you should be presented with a username and password string. Copy these into Docker and you should be authenticated to the ACR instance. Now that this connection has been made you will be able to push and pull Docker images to the ACR.
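If you prefer the CLI over the portal, the admin credentials can also be pulled with the Azure CLI and fed straight into docker login. This is a sketch; "myregistry" is a placeholder for your registry name, and the guard skips the whole thing where az or docker is unavailable:

```shell
# Fetch ACR admin credentials via the Azure CLI, then log Docker in.
ACR_NAME=myregistry   # placeholder registry name
if command -v az >/dev/null 2>&1 && command -v docker >/dev/null 2>&1; then
    ACR_USER=$(az acr credential show --name "$ACR_NAME" --query username -o tsv)
    ACR_PASS=$(az acr credential show --name "$ACR_NAME" --query "passwords[0].value" -o tsv)
    # --password-stdin keeps the password out of the shell history.
    echo "$ACR_PASS" | docker login "$ACR_NAME.azurecr.io" --username "$ACR_USER" --password-stdin
    RESULT=logged_in
else
    echo "az or docker CLI not found - commands shown for reference only"
    RESULT=skipped
fi
```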


Saturday, 16 May 2020

Using tasksel on Ubuntu Server 18.xx to install Ubuntu Desktop GUI environment

The Ubuntu and Debian package known as "tasksel" provides an easy and straightforward way to install some of the most well-known components onto an Ubuntu or Debian system. In this example I am going to install the Ubuntu Desktop environment.

sudo apt-get update
sudo apt-get upgrade -y

sudo apt-get install tasksel -y
sudo tasksel

The sudo tasksel command will launch the GUI interface below with a check box option to install packages. 



Use the space bar to select the item until a star appears in the column. The installation will then reach out to the Internet to pull down the binaries which make up the Ubuntu Desktop package.


When the installation completes, you will be back at the CLI prompt. If you type startx it should launch the graphical environment.  

Friday, 15 May 2020

Configuring Azure Firewall with a contiguous public IP Range

The Azure Firewall is a service offering available to customers in Azure. I'm not going to cover the details of the architecture or the basics of deploying Azure Firewall in this article. 

Many enterprise customers are adopting Azure Firewall to help control and manage the traffic flow for their services within Azure and in hybrid locations across their WAN. A common ask for customers who are deploying edge-facing services is around public IP space, and how this differs in the cloud compared to on-premises with a traditional ISP.

Any customer with an active Azure Subscription can allocate and assign public addresses to their services from the portal. Many services, such as Azure Virtual Machines are provisioned with a public address as part of the automated deployment process.

Enterprise customers are usually looking for a little more control. This is where Public IP Prefixes come into the picture. It is possible for a customer to define a CIDR block of public addresses directly in their subscription, to use at their disposal. This is done by creating a new Public IP Prefix, as shown below.

As shown below, Public IP Prefixes can be provisioned with /31, /30, /29 or /28 CIDR blocks, giving a contiguous range of 2, 4, 8 or 16 public addresses. It is possible to bind one of these Public IP Prefixes to your Azure Firewall to ensure the public address range is contiguous.



Once you have your Public IP Prefix created, you must then use the Add IP Address option from the resource. This will create an actual usable address within the prefix range, which can in turn be associated to the Azure Firewall.


When you provision a new address you must give it a name and a resolvable DNS label.


Now head over to Azure Firewall, go to the Public IP Configuration section and click Add a Public IP Configuration. This will guide you through binding the new public address to your Azure Firewall. It's worth noting that you cannot provision an Azure Firewall with a Public IP Prefix directly; you must first create the Azure Firewall with its default of one random public address, then retrospectively configure the prefix like we are doing here.


From the Add Public IP Configuration window from within Azure Firewall you will notice from the drop down that the public addresses you provisioned as part of the prefix block are now available to be bound to the outside of the Azure Firewall. 
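For reference, a Public IP Prefix can also be created with the Azure CLI instead of the portal. This is a sketch; the resource group, prefix name and region are placeholders, and the guard skips it where az is unavailable:

```shell
# Create a /28 Public IP Prefix (16 contiguous public addresses).
if command -v az >/dev/null 2>&1; then
    az network public-ip prefix create \
      --name fw-prefix-1 \
      --resource-group rb-core-rg-1 \
      --location uksouth \
      --length 28
    RESULT=created
else
    echo "az CLI not found - command shown for reference only"
    RESULT=skipped
fi
```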


How to Backup and Restore Docker EE UCP

It's possible to back up and restore Docker EE UCP if you are running it in a production environment. One thing to note: if you are taking a backup of UCP you must also have a backup of the Docker Swarm configuration. This can be done by following this guide:

https://blog.ryanbetts.co.uk/2020/05/how-to-backup-and-restore-docker-swarm.html

The first step is to find your UCP instance ID:

docker container run --rm \
  --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:latest \
  id

This command will output a string like 4tkiq0u898qlr0bvehx9g6ek2.


Once you have this id reference we can run the actual backup. Ensure you change any variables which might be different for you, such as the user profile path which is being used as a target for the backup.

docker container run \
  --log-driver none --rm \
  --interactive \
  --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:latest backup \
  --passphrase "Password" \
  --id 4tkiq0u898qlr0bvehx9g6ek2 > /home/it/ucp-backup.tar


Now we can run the restore using this command:

docker container run --rm -i --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock  \
  docker/ucp:latest restore --passphrase "Password" < /home/it/ucp-backup.tar

You might hit errors if you are trying to fix corruption, in which case you have to clean up the old configuration before you attempt to restore the backup. The following command can be used to do this:

docker container run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --name ucp \
  docker/ucp:latest uninstall-ucp --interactive

Deploying the Docker Universal Control Plane on Ubuntu (EE 19.03)

I was recently building up a Docker EE lab to go deeper into some of my weak areas after passing the Docker Certified Associate certification. The following commands walk you through how to install the Universal Control Plane on a newly created Ubuntu 18.04 server.

docker image pull docker/ucp:3.1.5

PRIVATE_IP=172.16.1.212

docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.5 install \
  --host-address $PRIVATE_IP \
  --interactive

In this particular example I set a variable of PRIVATE_IP; for clarity, this was set to the address of my Docker Swarm manager (leader) node. Most of the time the installer picks this up automatically, but I found it more stable to hard-code it.

Please note there is another switch which must be used if you are deploying this on Azure IaaS VMs. 

The following switch should be added: --cloud-provider Azure

You may also want to review this article, for another pre step which should be done before deploying UCP on Azure VMs.

https://blog.ryanbetts.co.uk/2020/05/installing-docker-ucp-fails-with-unable.html

How to install and configure SSH on Ubuntu Server 19.03

If you elect not to install OpenSSH at the installation stage of Ubuntu Server, you must install and configure it once the server is deployed. The following commands can be used to achieve this:

sudo apt-get install openssh-server
sudo systemctl enable ssh
sudo systemctl start ssh

Installing Docker UCP fails with "unable to run install step "Deploy Kubernetes API Server": unable to reconcile state of Kubernetes API"

I was deploying Docker EE with the Universal Control Plane to Azure VMs and was hit with the error "unable to run install step "Deploy Kubernetes API Server": unable to reconcile state of Kubernetes API" late in the process of deploying the UCP.

The first step to fix this was to scrub the half-baked deployment of UCP; this can be done using the following commands:


docker swarm leave --force

docker rm $(docker ps -a -q)
docker system prune -a
docker secret rm ucp-auth-key

The reason the installation fails here is that UCP expects an empty azure.json file in /etc/kubernetes; on a fresh host neither the file nor the /etc/kubernetes directory exists.


Use the following commands to create /etc/kubernetes along with the empty azure.json file, which will resolve the installation error:


sudo mkdir /etc/kubernetes
sudo chmod a+rwx /etc/kubernetes
echo "" > /etc/kubernetes/azure.json

Hopefully this was helpful. 

Docker Universal Control Plane install fails with "unable to verify that no UCP components already exist"

You may hit this error if you have had a failed attempt at deploying the UCP and there are still orphaned references to it on the Docker host. I forgot to remove the old UCP Docker volumes, so when I tried to deploy the latest version of UCP this happened.

"FATA[0001] unable to verify that no UCP components already exist: the following volumes appear to be from an existing UCP installation and must be removed before proceeding with a new in                stallation: ucp-kv ucp-metrics-data ucp-auth-api-certs ucp-auth-store-certs ucp-client-root-c                a ucp-cluster-root-ca ucp-controller-server-certs ucp-auth-worker-certs ucp-auth-worker-data          
    ucp-kv-certs ucp-auth-store-data ucp-controller-client-certs ucp-node-certs"


Simply use the docker volume prune command to get rid of all the old volumes which are not associated with a running container. Tread with caution if you are using a production system.

How to install Docker Enterprise Edition on Ubuntu 18.04 LTS

To get a trial of Docker Enterprise Edition sign into Docker Hub and get your unique URL.

Set the URL as a variable. 

DOCKER_EE_URL=https://storebits.docker.com/ee/trial/sub-aa658aaa-ec3b-489p-8f91-20774175a085
DOCKER_EE_VERSION=18.09

curl -fsSL "${DOCKER_EE_URL}/ubuntu/gpg" | sudo apt-key add -

sudo add-apt-repository \
  "deb [arch=$(dpkg --print-architecture)] $DOCKER_EE_URL/ubuntu \
  $(lsb_release -cs) \
  stable-$DOCKER_EE_VERSION"

sudo apt-get update

sudo apt-get install -y docker-ee=5:18.09.4~3-0~ubuntu-bionic

sudo usermod -a -G docker AzureUser

Wednesday, 13 May 2020

How to Backup and Restore a Docker Swarm configuration set

It is possible to back up the Swarm configuration store, which by default is located at /var/lib/docker/swarm on any Docker host acting as a manager node.

This directory must be exported or stored externally to ensure you can restore the Swarm configuration in the event of a failure. Obviously it would be best to have multiple Swarm managers in the cluster to ensure high availability; however, this will not protect against corruption of the configuration.

sudo systemctl stop docker
First, stop the Docker service so that no new writes are being committed.

sudo tar -zvcf backup.tar.gz /var/lib/docker/swarm
This command archives the directory into a gzipped tar file.

The tar.gz file is stored in the working directory. You could of course set up an automated job with something like cron to take out the manual intervention required to achieve this.
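As a sketch of that automation (the script name, paths and schedule are all assumptions, not part of Docker itself), a small backup script could look like this:

```shell
#!/bin/sh
# swarm-backup.sh - archive the Swarm state directory with a timestamp.
# SWARM_DIR and BACKUP_DIR are overridable so the script can be tested
# against any directory; the defaults match the locations in this post.
SWARM_DIR="${SWARM_DIR:-/var/lib/docker/swarm}"
BACKUP_DIR="${BACKUP_DIR:-/var/backups/swarm}"

if [ -d "$SWARM_DIR" ]; then
    # On a real host, stop Docker first so nothing writes mid-archive:
    #   sudo systemctl stop docker
    mkdir -p "$BACKUP_DIR"
    STAMP=$(date +%Y%m%d-%H%M%S)
    tar -zcf "$BACKUP_DIR/swarm-$STAMP.tar.gz" \
        -C "$(dirname "$SWARM_DIR")" "$(basename "$SWARM_DIR")"
    #   sudo systemctl start docker
    RESULT=wrote
else
    echo "no swarm state at $SWARM_DIR - nothing to back up"
    RESULT=skipped
fi
```

A crontab entry such as 0 2 * * * /usr/local/bin/swarm-backup.sh would then take the backup nightly at 02:00.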

sudo systemctl stop docker
Stops Docker on the host to allow a restore.

sudo rm -rf /var/lib/docker/swarm/*
Deletes the existing data in the Swarm directory.

sudo tar -zxvf backup.tar.gz -C /
Extracts the archive; tar stored the paths relative to root (var/lib/docker/swarm/...), so extracting at / puts the files back in place.

sudo systemctl start docker
Restarts Docker.


The Docker Swarm config has now been restored to the Docker host. 

Configuring Docker Swarm Hosts with Join Tokens

Configuring a new Docker Swarm is remarkably easy: you run the following command on a new Docker host and the Swarm is created:

docker swarm init

A Swarm is of course a cluster of Docker hosts, so the next step after creating a new Swarm is to join manager and worker nodes to the Swarm. Once you run the docker swarm init command the console will display a join-token, which can be used to join a new host to the Swarm. 

It will look something like this. You will see it prepopulates all the details required to successfully join a new Docker host to this Swarm. 

docker swarm join --token SWMTKN-1-26w39jcflglpun070cl0qxwnbqwobwj68e4i1dxdi1w2n3we80-4vftzhty9xqmhmq13tgapnzyq 192.168.1.9:2377

It's important to understand that a reliable network connection is required between all Docker hosts in the Swarm. You will also notice port 2377 is used by the manager and worker nodes to communicate, so this port must be open between the two servers. It is unlikely you will hit any problems here if they are on the same network segment.

From the console of an existing Docker manager node it's possible to generate a new join-token. When you generate a new join-token you state whether the new host is going to join the cluster as a manager or a worker node.

For example, create a new join-token for a manager node by running:

docker swarm join-token manager

or 

docker swarm join-token worker
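For scripted joins, the -q flag prints just the token itself, which is handy in automation. A sketch (guarded so it only does real work when run on an actual Swarm manager):

```shell
# Capture the worker join token on a manager node for use in automation.
if command -v docker >/dev/null 2>&1 && docker node ls >/dev/null 2>&1; then
    WORKER_TOKEN=$(docker swarm join-token worker -q)
    # <manager-ip> is a placeholder for the manager's advertise address.
    echo "Join with: docker swarm join --token $WORKER_TOKEN <manager-ip>:2377"
    RESULT=token
else
    echo "not a reachable swarm manager - commands shown for reference only"
    RESULT=skipped
fi
```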

How to create a new Docker Swarm on Debian-Buster10

A Docker Swarm is a collection of nodes working together in a cluster. A Swarm can be made up of manager and worker nodes. As with most clustering solutions, the recommendation is to avoid having an even number of nodes, to prevent issues with voting rights and cluster quorum.

The diagram depicts how a Docker Swarm looks. 



The command below should be run on the first node which you want to become a Swarm Manager; it's possible (and recommended) to have more than one Swarm Manager. However, when you initialise the cluster the first node becomes the "manager leader", which is basically the node that conducts and manages the orchestration tasks of the cluster.

docker swarm init --advertise-addr <private IP of cluster manager node>

The --advertise-addr switch tells the cluster which address to advertise. In many examples it's possible to leave this out, and the docker swarm command assumes the private address of the host anyway. It's more important to be explicit if the host belongs to multiple networks, perhaps in a public cloud provider where the node might have multiple interfaces on different network segments.

The following command can be run to check to see if the cluster has provisioned correctly. 

docker node ls

After running the docker swarm init command, it will generate a string which can be used to join worker nodes to your Swarm cluster.

Understanding and Configuring the Logging Driver on a Docker Host

Logging Drivers on Docker are pluggable components which allow you to access and pull log data out of containers running on a Docker host. Many Logging Drivers exist for Docker, but the default and most common is known as "json-file"; this is set as the default on a newly installed Docker host.

It is possible to change the default Logging Driver by editing the following file:

/etc/docker/daemon.json

Please note that this file may not already exist; it will be created with your custom settings if it does not.

sudo vi /etc/docker/daemon.json

Add the following code block in JSON format to set the Logging Driver to "json-file" for the entire Docker host. 

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "15m"
  }
}

The "log-opts" setting is another way for us to control the Logging Driver, and the sub-settings available depend on the driver itself. Above we have set "max-size" under "log-opts", which caps each log file at 15 MB.

Once you have made the changes to the daemon.json file, save the file. You must restart the Docker service before the changes will be parsed. To do this run the following command:

sudo systemctl restart docker

The Logging Drivers available to you depend on whether you are running the Docker EE or CE edition.

In Docker EE you have the following drivers available: 
  • syslog
  • gelf
  • fluentd
  • awslogs
  • splunk
  • etwlogs
  • gcplogs
  • Logentries
However in Docker CE edition the options are more limited:
  • local
  • json-file
  • journald
The information above demonstrates how to set the Logging Driver at a global level across the entire host. It is possible with Logging Drivers to use different drivers on a per-container basis.

To do this, use the --log-driver switch with the docker run command. For example:

docker run --log-driver json-file --log-opt max-size=50m hello-world
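To confirm which driver is actually in effect, docker info reports the daemon default and docker inspect reports a specific container's driver. A sketch (the container name is a placeholder, and the guard skips the check where no Docker daemon is reachable):

```shell
# Check the daemon-wide default logging driver, then one container's driver.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    docker info --format '{{.LoggingDriver}}'
    # "mycontainer" is a placeholder container name.
    docker inspect --format '{{.HostConfig.LogConfig.Type}}' mycontainer 2>/dev/null
    RESULT=checked
else
    echo "docker daemon not reachable - commands shown for reference only"
    RESULT=skipped
fi
```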

Tuesday, 12 May 2020

Audit Docker Security with CIS Benchmark Script

The following GitHub repo includes a script which checks against dozens of common best practices related to securing Docker.

https://github.com/docker/docker-bench-security

It is worth running this to get an understanding of your Docker environment's security posture.


Step 1: Clone the repo on your Docker host

git clone https://github.com/docker/docker-bench-security.git

Step 2: cd to the directory

cd docker-bench-security

Step 3: run the script (this runs the entire script)

sudo ./docker-bench-security.sh

Step 4: review the output



It is also possible to target certain aspects of a Docker deployment, such as doing a targeted scan of the Docker host configurations.

To do this run the script with the following switches:

This command runs checks against the Docker host itself.

sudo ./docker-bench-security.sh -c host_configuration

The other targeted tests are shown below. Just substitute the test name into the above command.


Tuesday, 14 January 2020

Windows Virtual Desktop: Create Custom Windows 7 Image for WVD

Windows 7 is still very much alive in the enterprise, even though today is the last day Microsoft officially supports the legacy operating system. Announced in September 2019, Windows Virtual Desktop allows customers to run a fully managed VDI environment from Azure. The underlying technology is similar to the Remote Desktop Services we have had in Windows Server since 2012, but the new Azure service abstracts all of the actual components which make it work. In other words, you no longer have to manage RD Session Hosts etc. with WVD.
Customers who have not yet managed to migrate away from Windows 7 are effectively exposed as new security updates will no longer be released for Windows 7 (unless you have a side agreement with Microsoft). However, when WVD was announced it was made clear that Windows 7 as an operating system would be supported in WVD deployments. In addition to this Microsoft are providing free Extended Security Updates (ESU) to customers running Windows 7 as part of their WVD deployment.
Although it is very straightforward to get a Windows Virtual Desktop deployment up and going, a few steps are required if you would like your WVD Host Pools to spin out VMs running Windows 7. By default, it is only possible to create Host Pools with later operating systems such as Windows 10 Enterprise Multi-Session.
The process to make Windows 7 available to your WVD users is straightforward. You must create a custom "managed" image in Azure to be used as a reference template when creating a new WVD Host Pool.
The easiest way to do this is to deploy a new Windows 7 VM directly into Azure, install your apps and do any customisation, then convert it to an image. It is possible to do something similar if for whatever reason this won't work for you. The process would be to build a reference Windows 7 image on premise, probably on Hyper-V and run through the steps outlined in this guide https://docs.microsoft.com/en-us/azure/virtual-desktop/set-up-customize-master-image
Step 1: Create a new Windows 7 VM
This is very simple: use the portal to provision a new Virtual Machine, selecting Windows 7 Enterprise as the source image.

Windows 7 Enterprise is not in the quick drop list of operating systems, if you search for it though you will find that there is a single image available. 

Step 2: Install Apps and Make Customisations
Once the VM has deployed, login to it and install any applications, or make any customisations. It is worth noting that some applications need further configuration to make them user-ready at first launch, this would be addressed on a per-application basis.

Step 3: Run SysPrep to Generalise the VM
The SysPrep tool has been around in Windows since before WDS; I think it first appeared in Windows Server 2003 with the introduction of Remote Installation Services (RIS). Anyway, it has not changed much since: it is a tool used to generalise a Windows installation (strip away any machine-specific metadata) to aid Windows imaging. If you fail to SysPrep your VM images you will have nothing but boot problems and instability.
Open an elevated Command Prompt and cd to C:\Windows\System32\sysprep

Select the options shown below: Enter System Out-of-Box Experience (OOBE), and ensure Generalize is ticked. You also want to ensure the VM is powered off after the process has completed; if not, it will boot, begin detecting all the hardware and run the OOBE process.
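For reference, the same generalise-and-shutdown run can be started non-interactively from the elevated prompt; these are the standard SysPrep switches (verify against your build before relying on them):

```batch
C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown
```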

The SysPrep process should complete in a few minutes and leave you with a clean, generalised VM ready to be converted into an image. 

Once the process has completed you will see the VM has entered the Stopped (Deallocated) state from the Azure Portal.

Step 4: Convert Windows 7 Image to Azure Image
Once we have the template ready click into the Virtual Machine from the Portal and click the Capture button. This will begin the process of converting the VM to an Azure Managed Image. 

The wizard will walk you through creating a new image; ensure that you give it a descriptive name. The name of the Azure Managed Image will be required when creating your WVD Host Pool, along with its Resource Group. You will notice the option to delete the VM once the template has been created; in this example I have chosen to select this option. However, in production you might want to keep the VM in place so that you can do some online servicing of the image as time goes on.

Once the process completes you will have a new image available from the Portal. You can view all images if you search for Images in the global search bar.

Step 5: Create Windows Virtual Desktop Host Pool from Windows 7 Managed Image
The next step is to create a WVD Host Pool; this can be done by searching for WVD in the portal and selecting Create.

You must have a number of infrastructure components in place before you can deploy a WVD Host Pool. This includes a WVD tenant, AD DS domain, AAD tenant with M365 licenses with all the associated networking. When you create a WVD Host Pool the creation must be able to join your WVD Session Hosts to an Active Directory domain.
From the wizard I have labelled the Host Pool "win7-personal" and selected the type as Personal. This outlines that users will maintain a 1:1 mapping with a dedicated Azure VM running Windows 7, which is created from this template. It is also possible to create Pooled WVD Host Pools, but there is limited value in doing this with Windows 7. By default, Windows 7 can only support 2 concurrent login sessions. This has been true of all desktop operating systems from Microsoft until the release of Windows 10 Enterprise Multi-Session, which is designed to allow pooled desktops to be created from a Windows 10 image.

On the Virtual Machine Settings tab, instead of taking the default of selecting a Gallery image, click on Managed Image. This will then present two new fields for the Azure Managed Image name, along with the Resource Group that the image is in. 

Monday, 13 January 2020

Azure Network Watcher: the default "NetworkWatcherRG" Resource Group is just irritating (how to change it)


If you are like me and insist on keeping your Azure subscriptions nice and tidy, with consistent naming of resource groups, the default "NetworkWatcherRG" resource group is bound to annoy you.

Network Watcher is a region level service which can be used to troubleshoot network connectivity between your Azure resources. If you do not want the “NetworkWatcherRG” resource group making things look untidy, the trick is to create the instance of Network Watcher manually using Azure CLI or PowerShell.

The example below creates a Network Watcher instance for UK South in a designated resource group.

az login

az network watcher configure --resource-group "rb-core-rg-1" --locations uksouth --enabled
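If you prefer PowerShell over the Azure CLI, the Az module offers an equivalent; the watcher name and resource group below are assumptions, not required values:

```powershell
# Create a Network Watcher instance in a resource group of your choosing.
Connect-AzAccount
New-AzNetworkWatcher -Name "rb-core-netwatcher-1" `
  -ResourceGroupName "rb-core-rg-1" -Location "uksouth"
```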