An Intro to the Harvester Terraform Provider
Part 1: The Basics of the Harvester Terraform Provider
I recently took the Harvester v1.2.0-rc1 pre-release for a spin. Instead of upgrading using a local web server (more on that in a future article), I decided it was better to just wipe my server and do a clean install. I have a guide on that here.
But that also meant I would have to redo most of the provisioning work that I had done previously. I would have to download the proper images, create the proper cloud configs, create the proper networks, and finally, spin up all the VMs. That just seems like a lot of work.
That’s where Terraform comes in.
Terraform is an open-source Infrastructure as Code (IaC) software tool developed by HashiCorp, designed to assist in the provisioning, management, and maintenance of infrastructure resources in an efficient and scalable way. Terraform allows users to define and describe their infrastructure using declarative configuration language (HCL), which aids in automating the process of creating, updating, and deleting resources through cloud providers or other platforms.
With Terraform, developers and operators can manage and version their infrastructure, ensuring safe and predictable deployments. Furthermore, its modular architecture, support for a wide range of providers, and seamless integration with other DevOps tools make Terraform a go-to choice for many organizations striving to create agile and maintainable IT ecosystems.
And luckily, Harvester has a terraform provider.
Providers are a logical abstraction of an upstream API. They are responsible for understanding API interactions and exposing resources.
~ terraform.io
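Before defining any resources, the provider itself has to be declared and pointed at a Harvester cluster. A minimal sketch of that setup, assuming you have downloaded the cluster kubeconfig from the Harvester UI (the file path and the version constraint below are illustrative, not gospel):

terraform {
  required_providers {
    harvester = {
      source  = "harvester/harvester"
      # Version constraint is illustrative; pin to the release you actually use.
      version = "~> 0.6"
    }
  }
}

provider "harvester" {
  # Path to the kubeconfig downloaded from the Harvester UI.
  # Adjust to wherever you saved yours.
  kubeconfig = "~/.kube/harvester.yaml"
}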
With the Harvester terraform provider, we can provision the following:
cluster networks
OS images
VM networks
SSH keys
storage classes
virtual machines
VLANs
volumes
Let’s take a look at an example of each.
First we have the cluster network. Usually, you will use the built-in management network. Here is what a cluster network would look like in terraform:
resource "harvester_clusternetwork" "cluster-vlan" {
name = "cluster-vlan"
}
Pretty simple. You can also add the following optional fields:
description (String) Any text you want that better describes this resource
id (String) The ID of this resource.
tags (Map of String)
Next we can take a look at our OS images:
resource "harvester_image" "ubuntu20" {
name = "ubuntu20"
namespace = "harvester-public"
display_name = "ubuntu-20.04-server-cloudimg-amd64.img"
source_type = "download"
url = "http://cloud-images.ubuntu.com/releases/focal/release/ubuntu-20.04-server-cloudimg-amd64.img"
}
Notice that this will download the Ubuntu 20.04 cloud image. I always recommend using the cloud image over the ISO. You can configure the default user via cloud-config, as we will see in the virtual machine resource below.
Up next, we look at the vm network:
resource "harvester_network" "mgmt-vlan1" {
name = "mgmt-vlan1"
namespace = "harvester-public"
vlan_id = 1
route_mode = "auto"
route_dhcp_server_ip = "10.0.0.1"
cluster_network_name = data.harvester_clusternetwork.mgmt.name
}
This creates the “mgmt-vlan1” network. The route mode is set to “auto” (DHCP) and the DHCP server IP is defined. We also set the cluster network name. Notice it is NOT using the previously created “cluster-vlan” network but the built-in mgmt network.
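For completeness, the data block that cluster_network_name references above would look something like this; a sketch using the provider’s harvester_clusternetwork data source to look up the built-in mgmt network:

data "harvester_clusternetwork" "mgmt" {
  # "mgmt" is the name of the built-in management cluster network.
  name = "mgmt"
}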
SSH keys are relatively simple:
resource "harvester_ssh_key" "mysshkey" {
name = "mysshkey"
namespace = "default"
public_key = "your ssh public key"
}
Notice that you can allocate it to a specific namespace.
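To actually use the key, my understanding is that it gets referenced from a virtual machine via the resource’s ssh_keys list, which injects the public key at boot. Treat the snippet below as a fragment that belongs inside a harvester_virtualmachine block, and double-check the attribute name against the provider docs for your version:

  # Inside a harvester_virtualmachine block:
  # attach the key created above so it is injected via cloud-init.
  ssh_keys = [
    harvester_ssh_key.mysshkey.id
  ]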
Storage classes are sometimes necessary to define. Harvester has a concept called “diskSelectors”, which are tags that you can assign to specific disks. You can read more about that here.
Let’s say you have a server with some HDD, SSD, and NVMe drives. You can create diskSelectors for each drive type and then create storage classes based on those diskSelectors:
resource "harvester_storageclass" "ssd-replicas-3" {
name = "ssd-replicas-3"
parameters = {
"migratable" = "true"
"numberOfReplicas" = "3"
"staleReplicaTimeout" = "30"
"diskSelector" = "ssd"
}
}
Notice that this storage class uses all disks with the “ssd” disk selector.
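If you have tagged your HDD and NVMe disks as well, the same pattern gives you one storage class per tier. A sketch, assuming disk tags named “hdd” and “nvme” exist on the cluster (the names and replica counts are just examples):

resource "harvester_storageclass" "hdd-replicas-2" {
  name = "hdd-replicas-2"
  parameters = {
    "migratable"          = "true"
    "numberOfReplicas"    = "2"
    "staleReplicaTimeout" = "30"
    "diskSelector"        = "hdd"
  }
}

resource "harvester_storageclass" "nvme-replicas-3" {
  name = "nvme-replicas-3"
  parameters = {
    "migratable"          = "true"
    "numberOfReplicas"    = "3"
    "staleReplicaTimeout" = "30"
    "diskSelector"        = "nvme"
  }
}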
Now we get to the most important resource IMHO. Virtual machines can be provisioned like so:
esource "harvester_virtualmachine" "ubuntu20" {
name = "ubuntu20"
namespace = "default"
restart_after_update = true
description = "test ubuntu20 raw image"
tags = {
ssh-user = "ubuntu"
}
cpu = 2
memory = "2Gi"
efi = true
secure_boot = true
run_strategy = "RerunOnFailure"
hostname = "ubuntu20"
reserved_memory = "100Mi"
machine_type = "q35"
network_interface {
name = "nic-1"
wait_for_lease = true
}
disk {
name = "rootdisk"
type = "disk"
size = "10Gi"
bus = "virtio"
boot_order = 1
image = harvester_image.ubuntu20.id
auto_delete = true
}
disk {
name = "emptydisk"
type = "disk"
size = "20Gi"
bus = "virtio"
auto_delete = true
}
cloudinit {
user_data = <<-EOF
#cloud-config
password: 123456
chpasswd:
expire: false
ssh_pwauth: true
package_update: true
packages:
- qemu-guest-agent
runcmd:
- - systemctl
- enable
- '--now'
- qemu-guest-agent
EOF
network_data = ""
}
}
A few things to notice here:
we are creating an Ubuntu 20.04 VM
the VM will have two disks: a boot disk and an empty disk
we are passing in the user data using cloud config
the default ssh user is “ubuntu”
This will more or less create a “stock” Ubuntu instance that you would see on any cloud provider.
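Because wait_for_lease is set, the provider waits until the NIC has been handed an address, so you can expose it as an output and feed it straight into SSH or an inventory file. A sketch, assuming the network_interface block exports an ip_address attribute (verify against the provider docs for your version):

output "ubuntu20_ip" {
  # First NIC's leased IP address, available once the VM is up.
  value = harvester_virtualmachine.ubuntu20.network_interface[0].ip_address
}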
We can also configure VLANs and volumes.
resource "harvester_vlanconfig" "cluster-vlan-node1" {
name = "cluster-vlan-node1"
cluster_network_name = harvester_clusternetwork.cluster-vlan.name
uplink {
nics = [
"eth5",
"eth6"
]
bond_mode = "active-backup"
mtu = 1500
}
node_selector = {
"kubernetes.io/hostname" : "node1"
}
}
resource "harvester_volume" "mount-ssd-3-disk" {
name = "mount-ssd-3-disk"
namespace = "default"
storage_class_name = harvester_storageclass.ssd-replicas-3.name
size = "10Gi"
}
Not really exciting, but it’s possible.
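One place where a standalone volume does become interesting: it can be attached to a VM as an extra data disk. My understanding is that the disk block supports an existing_volume_name argument for this, but treat the attribute name as an assumption and check the provider docs before relying on it:

  # Inside a harvester_virtualmachine block:
  # attach the volume created above instead of provisioning a new one.
  disk {
    name                 = "datadisk"
    type                 = "disk"
    bus                  = "virtio"
    existing_volume_name = harvester_volume.mount-ssd-3-disk.name
    auto_delete          = false
  }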
Now that we have gone through each resource type for the Harvester terraform provider, we can create a script that deploys the infra and networking we need. Isn’t HCI the best!
I will create a demo script and dive into it in another article. Give me some time.
Cheers,
Joe