Another day, another release! So what’s new in Harvester v1.3.0?
New Features:
One of the standout additions is support for vGPU (virtual GPU) devices. This feature allows multiple virtual machines to share a single physical GPU, which opens the door to resource-intensive workloads and accelerated computing scenarios. It will be key for running AI/ML models, rendering graphics, or leveraging GPU-accelerated applications.
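If you want a feel for what this looks like under the hood: Harvester VMs are KubeVirt VirtualMachines, and a vGPU is attached as a GPU device in the VM spec. In practice you'd wire this up from the Harvester UI, but here's a minimal sketch of the underlying object. The deviceName is a placeholder, since the actual resource name depends on which vGPU profile your cluster advertises:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vgpu-demo
spec:
  running: true
  template:
    spec:
      domain:
        memory:
          guest: 4Gi
        devices:
          gpus:
            # Placeholder profile; use the vGPU resource name
            # your cluster actually advertises.
            - deviceName: nvidia.com/NVIDIA_A2-4Q
              name: vgpu1
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/ubuntu:22.04
```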
High availability has always been a top priority, and Harvester v1.3.0 takes it to the next level with the introduction of witness nodes. Harvester now supports clusters with two management nodes and one witness node. In production environments, Harvester clusters require a control plane for node and pod management. A typical three-node cluster has three management nodes that each contain the complete set of control plane components, including etcd, which Kubernetes uses to store its data (configuration, state, and metadata).
Some situations may require you to avoid deploying workloads and user data to management nodes. In these scenarios, one cluster node can be assigned the witness role, which limits it to functioning as an etcd cluster member. The witness node is responsible for establishing a member quorum (a majority of nodes), which must agree on updates to the cluster state. While witness nodes do not store any data, the hardware recommendations for etcd nodes must still be considered to ensure optimal cluster performance.
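To make that concrete: Harvester nodes are configured at install time via a YAML config, and v1.3.0 adds a role field for exactly this purpose. Here's a hypothetical join config for a witness node; the URL, token, and device are placeholders, and the exact schema is worth verifying against the Harvester configuration docs for your version:

```yaml
# Sketch of a Harvester install config (config.yaml) for joining
# a node in the witness role; all values below are placeholders.
scheme_version: 1
server_url: https://192.168.1.10:443
token: my-cluster-token
install:
  mode: join
  role: witness
  device: /dev/sda
```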
Moreover, Harvester v1.3.0 has been meticulously optimized to handle environments where devices frequently power off and on, a common scenario in locations prone to intermittent power outages or where devices are frequently relocated. This optimization ensures that your virtual machines remain responsive and readily available, minimizing downtime and reducing the burden on cluster operators.
Experimental support for Managed DHCP further enhances the flexibility and ease of deployment within Harvester clusters. This feature allows you to configure IP pools and serve IP addresses directly to virtual machines, streamlining network management and simplifying the overall deployment process.
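Since this is experimental, treat the following as a rough sketch rather than gospel: the managed DHCP addon introduces an IPPool custom resource that ties an address range to a VM network. Every address and name below is made up, and the field names should be checked against the addon's documentation:

```yaml
# Hypothetical IPPool for the experimental managed DHCP addon;
# verify the CRD schema against the addon docs before using.
apiVersion: network.harvesterhci.io/v1alpha1
kind: IPPool
metadata:
  name: ippool-demo
  namespace: default
spec:
  networkName: default/net-1   # placeholder VM network
  ipv4Config:
    serverIP: 192.168.0.2      # DHCP server address on the network
    cidr: 192.168.0.0/24
    pool:
      start: 192.168.0.101
      end: 192.168.0.200
      exclude:
        - 192.168.0.150        # example reserved address
```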
For those exploring new frontiers, Harvester v1.3.0 introduces technical previews of ARM architecture support and Fleet management. ARM support paves the way for leveraging the power-efficient and cost-effective ARM ecosystem, while Fleet lets you deploy and manage objects across multiple Harvester clusters from a single place, bringing centralized control at scale.
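Fleet drives deployments from Git, so managing several Harvester clusters boils down to pointing a GitRepo resource at a repository of manifests. Here's a minimal sketch; the repository URL and cluster label are placeholders you'd swap for your own:

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: harvester-vm-configs
  namespace: fleet-default
spec:
  repo: https://github.com/example/harvester-fleet-demo  # placeholder repo
  branch: main
  paths:
    - manifests  # directory of Kubernetes manifests to deploy
  targets:
    - clusterSelector:
        matchLabels:
          # Assumed label; match whatever labels your
          # Harvester clusters carry in Rancher.
          provider.cattle.io: harvester
```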
Small Fixes:
Harvester v1.3.0 also brings a host of enhancements to backup and restore operations, disk management, node joining, installation methods (including ISO and PXE), monitoring, logging, and hardware requirement checks during installation. These improvements not only enhance the overall user experience but also ensure that your virtualization infrastructure remains robust, reliable, and easy to maintain.
Stay tuned for the upcoming Harvester v1.3.1 release, which will introduce changes to the Rancher UI, further streamlining the vGPU provisioning workflow and unlocking even more possibilities for GPU-accelerated workloads.
And that's really just a quick rundown of what's new in Harvester v1.3.0.
I have it installed on my homelab servers, and I'm looking forward to writing an install guide and a full review.
Cheers,
Joe