Looking back at the previous parts of this series, we manually set up two hosts, a Windows one and a Linux one, and a simple pipeline to automatically deploy new Azure DevOps/TFS Agents in Docker containers on those hosts and even update them. In this post we will look at how to provision the hosts themselves. For this purpose we will use Terraform and invoke it from Azure Pipelines, so we can automate host creation in Azure.
In the previous instalment we built custom Docker images for Azure Pipelines/TFS Agents. In this post, we will explore the lifecycle of Docker containers running such images. The container deploy pipeline is more complex than the previous one, requiring four actions:

1. check whether the agent (rectius, the container running the agent) is running;
2. if so, stop and remove the container;
3. pull the image from the selected Docker registry;
4. start the container with the proper parameters.
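The four actions can be sketched as a small shell function. This is a minimal sketch of my own, not the pipeline from the series: the container name (`agent-1`) and image reference (`myregistry.azurecr.io/tfs-agent`) are placeholder assumptions.

```shell
#!/usr/bin/env sh
# Hypothetical sketch of the four deploy actions.

redeploy_agent() {
  name="$1"    # container name, e.g. agent-1
  image="$2"   # image reference, e.g. myregistry.azurecr.io/tfs-agent:latest

  # 1. check whether a container with this name is running
  if docker ps --quiet --filter "name=$name" | grep -q .; then
    # 2. stop and remove it
    docker stop "$name"
    docker rm "$name"
  fi

  # 3. pull the requested image version from the registry
  docker pull "$image"

  # 4. start a fresh container with the proper parameters
  docker run --detach --name "$name" "$image"
}
```

A real pipeline would also pass the agent's connection settings (server URL, PAT, pool name) as environment variables on the `docker run` line.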
In the previous instalments we examined a possible Dockerfile for an Azure Pipelines/TFS Agent. In this post, we will explore the pipeline that can automatically build such custom agent images. To automate properly we need a Docker registry in which to store the images we build. There are many advantages to using a registry; in our scenario it enables:

- pulling an image version built years ago;
- distributing images to multiple hosts;
- caching base images locally, allowing air-gapped builds.

For the purpose of this series we will use Azure Container Registry (ACR for short), but there are many options; for example, I have successfully used ProGet.
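The core of a build pipeline targeting such a registry boils down to a build, a tag and a push. A sketch under assumed names (the `myregistry.azurecr.io` login server and the `tfs-agent` repository are placeholders of mine):

```shell
#!/usr/bin/env sh
# Hypothetical sketch: build an agent image and push it to a registry.

publish_image() {
  registry="$1"   # e.g. myregistry.azurecr.io (an ACR login server)
  version="$2"    # e.g. 1.2.3 or a build number

  # build from the Dockerfile in the current directory
  docker build --tag "$registry/tfs-agent:$version" .

  # also move the 'latest' tag so hosts pulling 'latest' get this build
  docker tag "$registry/tfs-agent:$version" "$registry/tfs-agent:latest"

  # push both tags; assumes 'docker login' (or 'az acr login') ran earlier
  docker push "$registry/tfs-agent:$version"
  docker push "$registry/tfs-agent:latest"
}
```

Pushing a versioned tag alongside `latest` is what makes the first registry advantage possible: any previously built version stays pullable by its own tag.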
In the previous instalment we set up a couple of machines to run Docker and host Docker containers. In this post, we will explore the structure of a Dockerfile for an Azure Pipelines/TFS Agent. There is a notable difference between Azure DevOps Services and Server in terms of handling agent updates. The first part of this article can be used in air-gapped environments. If you need a primer on Docker, there are plenty of resources, from the excellent The Docker Book to the official documentation, Pluralsight courses, etc.
The first step is to set up an environment where we can run Docker, and that is the topic of this instalment. We need at least two kinds of hosts: a Windows machine and a Linux one. The reason is simple: you cannot run Windows containers on a Linux host, and running Linux containers on a Windows machine is inefficient (they actually run inside a virtual machine). Note also that Windows support for Docker is tied to specific kernel versions.
Welcome. This series of articles will go into the details of automating the Azure Pipelines infrastructure itself. The text is accompanied by a source code repository publicly available on GitHub. Imagine yourself in the scenario of an independent team responsible for maintaining its own build pipeline. Typical solutions are:

- grab a leftover desktop or server machine;
- ask the IT department for a virtual machine;
- buy a VM in the cloud;
- use the standard hosted agents provided by Azure Pipelines.

These solutions share some common problems.
To work on TFS Aggregator I use a number of virtual machines running different versions of Team Foundation Server. I also install the remote debugging tools (more info on remote debugging) inside the VM and I am good to go, except for a little tweak that I always forget, so I am writing about it here as a reminder. In my configuration the virtual machine in Hyper-V sees the network as Public, while TFS setup, by default, opens the firewall ports only on Private networks.
Many of you know my work on TFS Aggregator. Since the beginning we opted for Markdown as the format for the project documentation: at first it was just a few files in a doc folder, then I moved the content to the project’s GitHub Wiki, and today I use the same files to generate the GitHub pages at https://tfsaggregator.github.io/intro/. In this post I will describe in detail how this latter step publishes our open source project’s documentation.
It all started because I needed to use files coming from a Git repository together with additional files stored in classic Team Foundation Version Control (TFVC), all in the same build. You have three options: the REST API, the tf.exe vc command or … we will see. My first attempt relied on a PowerShell script to download the files from TFVC using the REST API.
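The REST approach can be sketched with curl as well. This is an assumption-laden sketch, not the script from the post: the collection URL and server path are placeholders, the `api-version` depends on your TFS/Azure DevOps version, and in real use the TFVC path must be URL-encoded; check the "Tfvc Items - Get" reference for your server.

```shell
#!/usr/bin/env sh
# Hypothetical sketch: download a single TFVC file via the REST API.

tfvc_item_url() {
  collection="$1"   # e.g. https://dev.azure.com/myorg or https://tfs:8080/tfs/DefaultCollection
  path="$2"         # TFVC server path, e.g. $/MyProject/scripts/build.ps1
  echo "$collection/_apis/tfvc/items?path=$path&download=true&api-version=5.0"
}

# usage (PAT sent via basic auth; the user name part can be left empty):
#   curl -u ":$PAT" -o build.ps1 \
#     "$(tfvc_item_url https://dev.azure.com/myorg '$/MyProject/scripts/build.ps1')"
```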
I did it again and changed my blog engine to Hugo; as a consequence you will find some small changes in style and navigation. The first and foremost reason is the complexity induced by Jekyll. It requires some GB of tooling (who says that Visual Studio is big?) and many voodoo hacks to work on Windows. I ended up having a Linux VM running Jekyll and some additional steps to go from the Markdown file to a published post.