Instances on Microsoft Azure must be created within a resource group. Create a new resource group with the following command:
az group create --name group-1 --location <location>
Now that you have a resource group, you can choose a channel of Flatcar Container Linux you would like to install.
Using the official image from the Marketplace
Official Flatcar Container Linux images for all channels are available in the Marketplace, published by the kinvolk publisher.
Flatcar Container Linux is designed to be updated automatically, with different schedules per channel. Updating can be disabled, although it is not recommended to do so. The release notes contain information about specific features and bug fixes.
The following commands create a single instance through the Azure CLI for the channel of your choice.
The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within
the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 4459.2.4.
$ az vm image list --all -p kinvolk -f flatcar -s stable-gen2 --query '[-1]' # Query the image name urn specifier
{
"architecture": "x64",
"offer": "flatcar-container-linux-free",
"publisher": "kinvolk",
"sku": "stable-gen2",
"urn": "kinvolk:flatcar-container-linux-free:stable-gen2:4459.2.4",
"version": "4459.2.4"
}
$ az vm create --name node-1 --resource-group group-1 --admin-username core --user-data config.ign --image kinvolk:flatcar-container-linux-free:stable-gen2:4459.2.4
The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 4593.1.0.
$ az vm image list --all -p kinvolk -f flatcar -s beta-gen2 --query '[-1]' # Query the image name urn specifier
{
"architecture": "x64",
"offer": "flatcar-container-linux-free",
"publisher": "kinvolk",
"sku": "beta-gen2",
"urn": "kinvolk:flatcar-container-linux-free:beta-gen2:4593.1.0",
"version": "4593.1.0"
}
$ az vm create --name node-1 --resource-group group-1 --admin-username core --user-data config.ign --image kinvolk:flatcar-container-linux-free:beta-gen2:4593.1.0
The Alpha channel closely tracks the master branch and is released frequently. The newest versions of system
libraries and utilities are available for testing in this channel. The current version is Flatcar Container Linux 4628.0.0.
$ az vm image list --all -p kinvolk -f flatcar -s alpha-gen2 --query '[-1]'
{
"architecture": "x64",
"offer": "flatcar-container-linux-free",
"publisher": "kinvolk",
"sku": "alpha-gen2",
"urn": "kinvolk:flatcar-container-linux-free:alpha-gen2:4628.0.0",
"version": "4628.0.0"
}
$ az vm create --name node-1 --resource-group group-1 --admin-username core --user-data config.ign --image kinvolk:flatcar-container-linux-free:alpha-gen2:4628.0.0
LTS release streams are maintained for an extended lifetime of 18 months. The yearly LTS streams have an overlap of 6 months. The current version is Flatcar Container Linux 4081.3.6.
$ az vm image list --all -p kinvolk -f flatcar -s lts2024-gen2 --query '[-1]'
{
"architecture": "x64",
"offer": "flatcar-container-linux-free",
"publisher": "kinvolk",
"sku": "lts2024-gen2",
"urn": "kinvolk:flatcar-container-linux-free:lts2024-gen2:4081.3.6",
"version": "4081.3.6"
}
$ az vm create --name node-1 --resource-group group-1 --admin-username core --user-data config.ign --image kinvolk:flatcar-container-linux-free:lts2024-gen2:4081.3.6
Use the offer named flatcar-container-linux-free; there is also a legacy offer called flatcar-container-linux with the same contents.
The SKU, the third element of the image URN, corresponds to one of the release channels and also encodes whether the image is for a Hyper-V Generation 1 or Generation 2 VM. There are multiple LTS SKUs for each generation, the latest being lts2024.
Generation 2 instance types use UEFI boot and should be preferred; their SKUs match the pattern <channel>-gen2: stable-gen2, beta-gen2, alpha-gen2, or lts2024-gen2. For Generation 1 instance types, drop the -gen2 suffix from the SKU: stable, beta, alpha, or lts2024.
Note: the -s flag of az vm image list matches substrings of the SKU, so -s stable returns both the stable and stable-gen2 SKUs.
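To make the URN scheme concrete, here is a small shell sketch that assembles a URN from its four parts (the channel and version values are just examples, not pinned releases):

```shell
# An image URN has the form publisher:offer:sku:version.
publisher="kinvolk"
offer="flatcar-container-linux-free"
channel="stable"          # or beta, alpha, lts2024
version="4459.2.4"        # example version, pick a current one

# Generation 2 SKUs append -gen2 to the channel name.
urn="${publisher}:${offer}:${channel}-gen2:${version}"
echo "${urn}"             # kinvolk:flatcar-container-linux-free:stable-gen2:4459.2.4
```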
Before you can use these offers, you may need to accept the legal terms once; here this is done for the flatcar-container-linux-free offer and the stable-gen2 plan:
az vm image terms show --publisher kinvolk --offer flatcar-container-linux-free --plan stable-gen2
az vm image terms accept --publisher kinvolk --offer flatcar-container-linux-free --plan stable-gen2
For quick tests the official Azure CLI also supports an alias for the latest Flatcar stable image:
az vm create --name node-1 --resource-group group-1 --admin-username core --user-data config.ign --image FlatcarLinuxFreeGen2
CoreVM
Flatcar images are also published under an offer called flatcar-container-linux-corevm-amd64. This offer does not require accepting image terms and does not require specifying plan information when creating instances or building derived images. The content of the images matches the other offers.
$ az vm image list --all -p kinvolk -f flatcar-container-linux-corevm-amd64 -s stable-gen2 --query '[-1]'
{
"architecture": "x64",
"offer": "flatcar-container-linux-corevm-amd64",
"publisher": "kinvolk",
"sku": "stable-gen2",
"urn": "kinvolk:flatcar-container-linux-corevm-amd64:stable-gen2:3815.2.0",
"version": "3815.2.0"
}
ARM64
Arm64 images are published under the offer called flatcar-container-linux-corevm. These are Generation 2 images, the only supported option on Azure for Arm64 instances, so the SKU contains only the release channel name without the -gen2 suffix: alpha, beta, stable. This offer has the same properties as the CoreVM offer described above.
$ az vm image list --all --architecture arm64 -p kinvolk -f flatcar -s stable --query '[-1]'
{
"architecture": "Arm64",
"offer": "flatcar-container-linux-corevm",
"publisher": "kinvolk",
"sku": "stable",
"urn": "kinvolk:flatcar-container-linux-corevm:stable:3815.2.0",
"version": "3815.2.0"
}
Flatcar Pro Images
Flatcar Pro images were paid marketplace images that came with commercial support and extra features. All the previous features of Flatcar Pro images, such as support for NVIDIA GPUs, are now available to all users in standard Flatcar marketplace images.
Plan information for building your image from the Marketplace Image
When building an image based on the Marketplace image, you sometimes need to specify the original plan. The plan name is the image SKU, e.g., stable, the plan product is the image offer, e.g., flatcar-container-linux-free, and the plan publisher is the image publisher, kinvolk.
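For example, when creating a VM from an image derived from the stable Marketplace image, the plan can be attached like this (a sketch; the image reference is a placeholder for your derived image):

```shell
az vm create --name node-1 --resource-group group-1 \
  --image <your derived image ID> \
  --plan-name stable --plan-product flatcar-container-linux-free --plan-publisher kinvolk
```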
Community Shared Image Gallery
While the Marketplace images are recommended, it might sometimes be easier, or even required, to use Shared Image Galleries, e.g., when using Packer for Kubernetes CAPI images.
A public Shared Image Gallery hosts recent Flatcar Stable images for amd64. Here is how to show the image definitions (for now you will only find flatcar-stable-amd64) and the image versions they provide:
az sig image-definition list-community --public-gallery-name flatcar-23485951-527a-48d6-9d11-6931ff0afc2e --location westeurope
az sig image-version list-community --public-gallery-name flatcar-23485951-527a-48d6-9d11-6931ff0afc2e --gallery-image-definition flatcar-stable-amd64 --location westeurope
A second gallery flatcar4capi-742ef0cb-dcaa-4ecb-9cb0-bfd2e43dccc0 exists for prebuilt Kubernetes CAPI images. It has image definitions for each CAPI version, e.g., flatcar-stable-amd64-capi-v1.26.3 which provides recent Flatcar Stable versions.
Uploading your own Image
To automatically download the Flatcar image for Azure from the release page and upload it to your Azure account, run the following command:
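The upload helper ships as a container image; based on the options listed further below, a sketch of the invocation looks like this (the resource group and storage account name are the required arguments):

```shell
docker run -it --rm quay.io/kinvolk/azure-flatcar-image-upload \
  --resource-group group-1 \
  --storage-account-name <storage account name>
```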
<storage account name> should be a valid Storage Account name.
During execution, the script will ask you to log into your Azure account and then create all necessary resources for
uploading an image. It will then download the requested Flatcar Container Linux image and upload it to Azure.
If uploading fails with one of the following errors, it usually indicates a problem on Azure’s side:
Put https://mystorage.blob.core.windows.net/vhds?restype=container: dial tcp: lookup iago-dev.blob.core.windows.net on 80.58.61.250:53: no such host
storage: service returned error: StatusCode=403, ErrorCode=AuthenticationFailed, ErrorMessage=Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. RequestId:a3ed1ebc-701e-010c-5258-0a2e84000000 Time:2019-05-14T13:26:00.1253383Z, RequestId=a3ed1ebc-701e-010c-5258-0a2e84000000, QueryParameterName=, QueryParameterValue=
The command is idempotent and it is therefore safe to re-run it in case of failure.
To see all available options, run:
docker run -it --rm quay.io/kinvolk/azure-flatcar-image-upload --help
Usage: /usr/local/bin/upload_images.sh [OPTION...]

Required arguments:
-g, --resource-group Azure resource group.
-s, --storage-account-name Azure storage account name. Must be between 3 and 24 characters and unique within Azure.
Optional arguments:
-c, --channel Flatcar Container Linux release channel. Defaults to 'stable'.
-v, --version Flatcar Container Linux version. Defaults to 'current'.
-i, --image-name Image name, which will be used later in Lokomotive configuration. Defaults to 'flatcar-<channel>'.
-l, --location Azure location to storage image. To list available locations run with '--locations'. Defaults to 'westeurope'.
-S, --storage-account-type Type of storage account. Defaults to 'Standard_LRS'.
The Dockerfile for the quay.io/kinvolk/azure-flatcar-image-upload image is managed here.
SSH User Setup
Azure can provision a user account and SSH key through the WAAgent daemon, which runs by default.
In the web UI you can enter a user name for a new user and provide an SSH public key to be set up.
On the CLI you can pass the user and the SSH key as follows:
az vm create ... --admin-username myuser --ssh-key-values ~/.ssh/id_rsa.pub
This also works for the core user.
If you plan to set up an SSH key for the core user through Ignition userdata, the key argument here is not needed; you can safely pass --admin-username core and no new user will be created.
Butane Config
Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via a Butane Config. Head over to the provisioning docs to learn how to use Butane Configs.
Note that Microsoft Azure doesn't allow an instance's userdata to be modified after the instance has been launched. This isn't a problem since Ignition, the tool that consumes the userdata, only runs on the first boot.
You can provide a raw Ignition JSON config (produced from a Butane Config) to Flatcar Container Linux via the Azure CLI using the --custom-data or --user-data flag
or in the web UI under Custom Data or User Data.
As an example, this Butane YAML config will start an NGINX Docker container:
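A minimal sketch of such a config, saved as cl.yaml (the unit name and the NGINX image tag are illustrative):

```yaml
variant: flatcar
version: 1.0.0
systemd:
  units:
    - name: nginx.service
      enabled: true
      contents: |
        [Unit]
        Description=NGINX example
        After=docker.service
        Requires=docker.service
        [Service]
        TimeoutStartSec=0
        ExecStartPre=-/usr/bin/docker rm --force nginx1
        ExecStart=/usr/bin/docker run --name nginx1 --pull always --net host docker.io/nginx:1
        ExecStop=/usr/bin/docker stop nginx1
        Restart=always
        RestartSec=5s
        [Install]
        WantedBy=multi-user.target
```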
cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
Spawn a VM by passing the Ignition JSON in, and because the Ignition config didn’t include the SSH keys, they are passed as VM metadata:
az vm create --name node-1 --resource-group group-1 --admin-username core --ssh-key-values ~/.ssh/id_rsa.pub --user-data ./ignition.json --image kinvolk:flatcar-container-linux:stable:3760.2.0
# Alternatively, instead of '--user-data ./ignition.json' you can use: --custom-data "$(cat ./ignition.json)"
Use the Azure Hyper-V Host for time synchronisation instead of NTP
By default, Flatcar Container Linux uses systemd-timesyncd for date and time synchronisation, using an external NTP server as the source of accurate time.
Azure provides an alternative for accurate time: a PTP clock source that surfaces Azure Host time in Azure guest VMs. Because Azure Host time is rigorously maintained with high precision, it's a good source against which to synchronise guest time.
Unfortunately, systemd-timesyncd doesn't support PTP clock sources, though there is an upstream feature request for adding this.
To work around this missing feature and to use Azure's PTP clock source, we can employ chrony in an alpine container to synchronise time. Since alpine is relentlessly optimised for size, the container will only take about 16 MB of disk space.
Here’s a configuration snippet to create a minimal chrony container during provisioning, and use it instead of systemd-timesyncd:
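A sketch of such a Butane config; the PTP device path, the container image, and the chrony options are assumptions to adapt for your environment:

```yaml
variant: flatcar
version: 1.0.0
systemd:
  units:
    # Disable the default NTP client so it does not fight with chrony.
    - name: systemd-timesyncd.service
      mask: true
    - name: chronyd.service
      enabled: true
      contents: |
        [Unit]
        Description=chrony in an alpine container, synchronising from the Azure PTP clock
        After=docker.service
        Requires=docker.service
        [Service]
        ExecStartPre=-/usr/bin/docker rm --force chronyd
        ExecStart=/usr/bin/docker run --name chronyd --cap-add SYS_TIME \
          --device /dev/ptp0 -v /etc/chrony.conf:/etc/chrony/chrony.conf:ro \
          docker.io/alpine sh -c 'apk add --no-cache chrony && exec chronyd -d'
        Restart=always
        [Install]
        WantedBy=multi-user.target
storage:
  files:
    - path: /etc/chrony.conf
      mode: 0644
      contents:
        inline: |
          # Use the PTP device exposed by the Azure Host as reference clock.
          refclock PHC /dev/ptp0 poll 3 dpoll -2 offset 0
          makestep 1.0 -1
```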
If the above works for your use case without modifications or additions (i.e. there's no need to configure anything else), feel free to supply this Ignition configuration as custom data for your deployments and call it a day.
Terraform
git clone https://github.com/flatcar/flatcar-terraform.git
# From here on you could directly run it, TLDR:
cd azure
export ARM_SUBSCRIPTION_ID="<azure_subscription_id>"
export ARM_TENANT_ID="<azure_subscription_tenant_id>"
export ARM_CLIENT_ID="<service_principal_appid>"
terraform init
# Edit the server configs or just go ahead with the default example
terraform plan
terraform apply
Start with an azure-vms.tf file that contains the main declarations:
variable "resource_group_location" {
  default     = "eastus"
  description = "Location of the resource group."
}

variable "machines" {
  type        = list(string)
  description = "Machine names, corresponding to machine-NAME.yaml.tmpl files"
}

variable "cluster_name" {
  type        = string
  description = "Cluster name used as prefix for the machine names"
}

variable "ssh_keys" {
  type        = list(string)
  description = "SSH public keys for user 'core' (and to register directly with waagent for the first)"
}

variable "server_type" {
  type        = string
  default     = "Standard_D2s_v4"
  description = "The server type to rent"
}

variable "flatcar_stable_version" {
  type        = string
  description = "The Flatcar Stable release you want to use for the initial installation, e.g., 2605.12.0"
}
An outputs.tf file shows the resulting IP addresses:
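A sketch of what outputs.tf could look like, assuming the VMs are declared as azurerm_linux_virtual_machine.machine with one instance per machine name (the resource naming is an assumption; adapt it to your azure-vms.tf):

```hcl
output "ip-addresses" {
  value = {
    for name, machine in azurerm_linux_virtual_machine.machine :
    "${var.cluster_name}-${name}" => machine.public_ip_address
  }
}
```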
Now you can use the module by declaring the variables and a Container Linux Configuration for a machine.
First create a terraform.tfvars file with your settings:
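A sketch of a terraform.tfvars covering the variables declared above (substitute your own SSH key and a current Stable version you accepted terms for):

```hcl
resource_group_location = "westeurope"
cluster_name            = "mycluster"
machines                = ["mynode"]
ssh_keys                = ["ssh-rsa AAAA... me@mymachine"]
flatcar_stable_version  = "4459.2.4"
```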
The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config template from the cl/ subfolder.
For each machine in the list, you should have a machine-NAME.yaml.tmpl file with a corresponding name.
Create the configuration for mynode in the file cl/machine-mynode.yaml.tmpl:
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ${ssh_keys}
storage:
  files:
    - path: /home/core/works
      filesystem: root
      mode: 0755
      contents:
        inline: |
          #!/bin/bash
          set -euo pipefail
          # This script demonstrates how templating and variable substitution works when using Terraform templates for Container Linux Configs.
          hostname="$(hostname)"
          echo My name is ${name} and the hostname is $${hostname}
First find your subscription ID, then create a service principal for Terraform and note the tenant ID, client (app) ID, and client secret (password):
az login
az account set --subscription <azure_subscription_id>
az ad sp create-for-rbac --name <service_principal_name> --role Contributor
{
"appId": "...",
"displayName": "<service_principal_name>",
"password": "...",
"tenant": "..."
}
Make sure you have Azure CLI version 2.32.0 or newer if you get the error Values of identifierUris property must use a verified domain of the organization or its subdomain.
AZ CLI installation docs are here.
Before you run Terraform, accept the image terms:
az vm image terms accept --urn kinvolk:flatcar-container-linux:stable:<flatcar_stable_version>
Finally, run Terraform v0.13 as follows to create the machine:
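Assuming the ARM_* credentials from the service principal are exported in your environment, the standard workflow applies:

```shell
terraform init
terraform plan
terraform apply
```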
With version 3.x of the AzureRM Terraform provider it is currently not possible to partition and format data disks for an azurerm_linux_virtual_machine with Ignition's storage configuration. It might be possible to use the deprecated azurerm_virtual_machine module.
Another workaround is to use a systemd unit to run a script that does the partitioning and formatting for you. Here is an example of how to create a single partition with an ext4 filesystem:
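A sketch of such a unit as a Butane config snippet; the device path uses Azure's stable symlink for the data disk at LUN 0, and the availability of sgdisk as the partitioning tool as well as the exact unit wiring are assumptions to verify on your image:

```yaml
variant: flatcar
version: 1.0.0
systemd:
  units:
    - name: setup-data-disk.service
      enabled: true
      contents: |
        [Unit]
        Description=Partition and format the first data disk (example)
        # Only run while the partition does not exist yet.
        ConditionPathExists=!/dev/disk/azure/scsi1/lun0-part1
        [Service]
        Type=oneshot
        # Create one partition spanning the whole disk, then format it as ext4.
        ExecStart=/usr/sbin/sgdisk --new=1:0:0 /dev/disk/azure/scsi1/lun0
        ExecStart=/usr/sbin/mkfs.ext4 /dev/disk/azure/scsi1/lun0-part1
        [Install]
        WantedBy=multi-user.target
```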