There are many misconceptions when it comes to what Rancher does. I think the better question is, what does Rancher not do?!
Over the years, Rancher has seemingly become a standard tool when it comes to Kubernetes management.
Rancher isn't just another dashboard for your Kubernetes clusters, though. Want to provision clusters? Rancher does that. Need to manage virtual machines through Harvester? Rancher handles that too. Looking for GitOps automation with Fleet? Yes, Rancher's got you covered. From role-based access control to multi-cloud management, monitoring to logging, Rancher has evolved into a true one-stop shop for infrastructure management. Some might even say it's become the Swiss Army knife of the Kubernetes world - though that might be my SUSE bias showing.
As 'DevOps' (whatever that means anymore) evolves and grows more complex, engineers have to manage Kubernetes clusters across multiple regions, datacenters, hardware configurations, and development environments - and everything needs to be managed from somewhere. The industry has moved on from the good old days of kubectl apply -f. GitOps has become a necessity instead of a nice-to-have.
I find it interesting that Rancher has not only adapted to these changes but has come out on top. But just how powerful is Rancher?
Rancher is a cluster manager, not a pane of glass
In the example above, we can see a setup where we have a Rancher server (think upstream management cluster) and three RKE2 clusters (think downstream clusters). Instead of managing the clusters through each cluster's local Kubernetes API with something like Argo CD installed in each one, we can easily take advantage of the built-in CD tool that ships with Rancher: Fleet. I have another article about Fleet that you can read. I think of Fleet as a part of Rancher, not as a separate product.
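As a rough sketch of how that looks (the repo URL, paths, and cluster labels here are invented for illustration), a single Fleet GitRepo resource applied to the Rancher management cluster is enough to fan a Git repo out to every matching downstream cluster:

```yaml
# A Fleet GitRepo, applied to the Rancher management (upstream) cluster.
# Fleet watches the repo and deploys its contents to matching downstream clusters.
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: app-configs        # hypothetical name
  namespace: fleet-default # Fleet's default workspace for downstream clusters
spec:
  repo: https://github.com/example/fleet-configs  # placeholder repo
  branch: main
  paths:
    - workloads
  targets:
    - name: rke2-clusters
      clusterSelector:
        matchLabels:
          env: production  # assumes you've labeled your downstream clusters
```

No Argo CD install per cluster, no juggling kubeconfigs - the upstream cluster owns the rollout.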
It gets even sweeter when we are using a cloud provider with Rancher.
Using the built-in “Cluster Drivers” and “Node Drivers”, we never have to leave Rancher. We can one-click a cluster into existence and manage it directly from Rancher.
A practical example of the difference: if you're setting up a cluster on AWS, you might use:
- The Amazon EC2 node driver to provision the actual EC2 instances, with RKE2 under the hood as the engine.
- The Amazon EKS cluster driver to use EKS itself as the engine.
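For the curious: in Rancher v2.6+, that one-click node-driver flow boils down to a provisioning.cattle.io Cluster object on the management cluster. A trimmed-down sketch (names, version, and the machine config reference are placeholders, and the real object carries more fields):

```yaml
# What Rancher roughly creates behind the one-click UI: an RKE2 cluster
# definition whose nodes are provisioned by the Amazon EC2 node driver.
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: demo-aws-rke2      # hypothetical name
  namespace: fleet-default
spec:
  kubernetesVersion: v1.28.8+rke2r1  # placeholder RKE2 version
  rkeConfig:
    machinePools:
      - name: pool1
        quantity: 3
        etcdRole: true
        controlPlaneRole: true
        workerRole: true
        machineConfigRef:           # points at an EC2 node-driver config
          kind: Amazonec2Config
          name: demo-aws-nodes      # placeholder
```

The point isn't to hand-write these - the UI does it for you - but it's all declarative under the hood, which is exactly why Fleet and friends can manage it.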
Why not use Rancher and Fleet to manage a cluster on each major cloud provider?
Rancher + Harvester = ❤️
The Rancher + Harvester combo is truly a match made in heaven. Imagine managing entire Harvester environments with Rancher.
Imagine if you will:
- a large deployment of several Harvester clusters
- each Harvester cluster bootstrapped by a single Rancher server, which centrally manages:
  - OS images
  - cloud configs
  - VM networks
  - backup settings
  - VM templates
  - over-provisioning settings
  - Harvester add-ons
  - the list goes on…
Instead of individually bootstrapping each Harvester cluster and pushing out config changes one by one when needed, we now let Rancher and Fleet do the heavy lifting.
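To make one of those items concrete - OS images - a version-controlled Harvester CR in the Fleet repo replaces uploading the same image through each cluster's UI. A sketch (name, namespace, and URL are placeholders):

```yaml
# A Harvester VirtualMachineImage CR, kept in Git and pushed out by Fleet
# to every Harvester cluster instead of being uploaded by hand.
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineImage
metadata:
  name: ubuntu-22-04       # hypothetical name
  namespace: default
spec:
  displayName: Ubuntu 22.04 Server
  sourceType: download
  url: https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img  # placeholder
```

Change the image once in Git, and every cluster picks it up.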
I have an article on how to do this which you can read here.
RBAC and IAM just work? Wow!
We've all been there - that Slack message asking for the kubeconfig, or worse, that cursed spreadsheet tracking cluster access that no one updates. You know the one. If you're nodding your head right now, you've felt that pain.
Rancher flips this whole mess on its head. Think of it as that super-organized friend who always knows who should be where. Whether your clusters are spread across AWS, GCP, or sitting in some datacenter downtown, Rancher's got you covered. Plug in your existing auth - AD, SSO, whatever you've got - and let Rancher do its thing.
"Hold up - what about keeping the dev team out of prod?" Yeah, that's basically day one stuff for Rancher. "And giving those pesky auditors just enough access to do their job?" Click, click, done.
Here's where it gets good though. Your infrastructure grows, you add more clusters, bring on new team members - and Rancher just... handles it. Your access policies follow along automatically, like they should have been doing all along. No more copying YAML between clusters or maintaining separate role bindings everywhere.
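To put a shape on that (the cluster ID, group DN, and role name below are invented), granting a whole auth-provider group access to one downstream cluster is a single Rancher object - no per-cluster role bindings:

```yaml
# A Rancher ClusterRoleTemplateBinding: binds an AD/SSO group to a role
# template on one downstream cluster. Rancher propagates the actual RBAC.
apiVersion: management.cattle.io/v3
kind: ClusterRoleTemplateBinding
metadata:
  name: auditors-member           # hypothetical name
  namespace: c-m-abc123           # the downstream cluster's management namespace (placeholder)
clusterName: c-m-abc123           # downstream cluster ID (placeholder)
roleTemplateName: cluster-member  # or a custom read-only RoleTemplate
groupPrincipalName: activedirectory_group://CN=Auditors,DC=example,DC=com  # placeholder
```

Swap the group or the role template and Rancher reconciles the change across the cluster for you.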
CIS Benchmarks and Gov Reqs (TL;DR: this thing’s secure)
If you've ever had to prep for a security audit (SOC 2 noises intensify) or work with government workloads, you know the drill. CIS benchmarks, NIST frameworks, FedRAMP requirements... the list of acronyms goes on and on.
RKE2 is the only Kubernetes distribution with a DISA-approved STIG. Yeah, you read that right. Not "compatible with," not "working on it," but actually DISA-validated and published.
"What's the big deal?" I hear some of you ask. Well, if you know, you know. But for everyone else - this is like getting a blank check to deploy Kubernetes in the most security-conscious environments on the planet. We're talking Department of Defense, Intelligence Community, the works.
The cool part? This isn't just some bare-bones distribution that sacrifices usability for security. RKE2 comes with all the good stuff:
- FIPS 140-2 compliance out of the box
- SELinux support (because obviously)
- Secure-by-default configurations
- Built for everything from datacenters to tactical edge
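A taste of what "secure by default" looks like in practice: hardening an RKE2 node is mostly a couple of lines in its config file. Treat this as a sketch - the exact CIS profile name varies by RKE2 release:

```yaml
# /etc/rancher/rke2/config.yaml
# Enables the CIS hardening profile; RKE2 applies and enforces the
# benchmark's settings (admission policies, kernel params, etc.) on startup.
profile: cis                  # older releases use versioned names like cis-1.23
selinux: true                 # enable SELinux enforcement for containers
write-kubeconfig-mode: "0600" # don't leave the admin kubeconfig world-readable
```

Restart the rke2-server service and the node comes up hardened - no forty-page checklist required.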
And for our government friends dealing with air-gapped environments - Rancher's got you covered there too. No internet? No problem. Private registries, offline updates, and complete isolation when needed.
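The private-registry piece, for example, is just a small file on each node (the registry hostname and credentials are placeholders):

```yaml
# /etc/rancher/rke2/registries.yaml
# Redirects image pulls to an internal mirror - no internet access required.
mirrors:
  docker.io:
    endpoint:
      - "https://registry.internal.example.com:5000"  # placeholder mirror
configs:
  "registry.internal.example.com:5000":
    auth:
      username: pull-user   # placeholder credentials
      password: pull-pass
```

Point every node at the mirror, ship your images into it offline, and the cluster never knows the difference.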
Shout out to the RGS (Rancher Government Solutions) team! I need to dip my toes into Hauler soon!
Why do I like Rancher? Simple - it was built for infrastructure first, everything else second. While others were busy making dashboards pretty, Rancher was solving the actual problems that keep ops teams up at night. And that's why it works. At the end of the day, infrastructure isn't about looking good - it's about working. Every time. Anywhere. At scale.
Let Rancher manage all the things.
Cheers,
Joe
P.S. I used the word Rancher 34 times in this article. Sorry.