The shortest answer to that question is: Traefik is a reverse proxy and load balancer solution.
If that’s all it does, why do all homelabbers (and I mean ALL OF THEM) exclusively use and endorse Traefik? Heck, even I use Traefik and I am not like other homelabbers (sarcasm). This simple service can make your life sooooooooo much easier when it comes to hosting websites, services, or internal dashboards and monitoring.
Let’s take a more in-depth look and see if we can pinpoint why everyone and their mother uses this lil’ reverse proxy.
As you already know, Traefik is an open-source HTTP reverse proxy and load balancer that helps users deploy microservices easily. It automatically discovers the available services by inspecting the containers running on the host and routes requests to these services.
Traefik integrates with existing infrastructure components (Docker, Kubernetes, Marathon, Consul, Etcd, Rancher, Amazon ECS, and more) and configures itself automatically and dynamically. It handles advanced use cases such as protocol forwarding, load balancing, and HTTP/2.
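To give a feel for that dynamic configuration, here is a minimal sketch of what the Docker integration can look like: Traefik watches the Docker socket and builds its routing rules straight from container labels. The whoami service and the demo.localhost hostname below are just placeholders for the example.

```yaml
# docker-compose.yml - minimal sketch; service names and hostnames are placeholders
services:
  traefik:
    image: traefik:v3.0
    command:
      - "--providers.docker=true"                     # discover services from container labels
      - "--providers.docker.exposedbydefault=false"   # only route containers that opt in
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro  # Traefik inspects running containers here

  whoami:
    image: traefik/whoami                             # tiny demo web server
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`demo.localhost`)"  # send this hostname here
      - "traefik.http.routers.whoami.entrypoints=web"
```

Spin up a second container with its own Host(...) label and Traefik picks it up on its own, no restart or config file edit needed.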
Traefik is designed to be simple, modern, and easy to operate, with native support for HTTP/2 and automatic HTTPS via Let's Encrypt. It is a popular choice for microservices architectures and modern cloud-native applications.
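The automatic HTTPS part is mostly a one-time piece of static configuration. As a rough sketch (the resolver name, email, and storage path are placeholders), a traefik.yml could look something like this:

```yaml
# traefik.yml - static configuration sketch; email and paths are placeholders
entryPoints:
  web:
    address: ":80"          # plain HTTP, also used for the ACME HTTP challenge
  websecure:
    address: ":443"         # HTTPS

certificatesResolvers:
  letsencrypt:
    acme:
      email: you@example.com      # placeholder - your real email goes here
      storage: /data/acme.json    # where issued certificates get persisted
      httpChallenge:
        entryPoint: web
```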
Personally, I use Traefik inside of my Kubernetes cluster. If you are interested in learning more about my personal Kubernetes cluster, you can read my post about it here.
In a Kubernetes cluster, Traefik can be used as an ingress controller to route traffic from the internet to the appropriate service within the cluster. This can be useful if you have multiple services running in your cluster and you want to expose them to the outside world (DON’T EXPOSE NODE PORTS TO THE PUBLIC BRO).
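To give a rough idea of what that looks like from the cluster side, a plain Kubernetes Ingress can hand a hostname over to Traefik. The demo-service name here is a placeholder for whatever Service sits in front of your pods:

```yaml
# Standard Kubernetes Ingress handled by Traefik - sketch; names are placeholders
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
spec:
  ingressClassName: traefik         # hand this Ingress to Traefik
  rules:
    - host: demo.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service  # the ClusterIP Service in front of the web pods
                port:
                  number: 80
```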
Traefik is best explained with some pictures.
In this very simplified diagram, we can see two users making two different requests. One user is hitting demo.com while the other is hitting test.com. As you can see in the cluster, these websites are hosted on different workloads in Kubernetes (this diagram shows different nodes, but you need to think more along the lines of different pods). Traefik sees the request to demo.com and directs the traffic to the correct Kubernetes service, which is connected to the web server pod or workload for demo.com. In the same way, Traefik sees the request to test.com and directs it to the service connected to the web server pod or workload for test.com.
This is a closer look at what happens when a specific URL is requested. Traefik gets a request for demo.com, the request gets sent to the correct service (by something called an IngressRoute), and the service forwards the request to the pod it is connected to.
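That IngressRoute is literally a Traefik custom resource. A minimal sketch for the demo.com case could look like the following (the service name and certificate resolver are placeholders, and test.com would simply get its own object with its own Host rule):

```yaml
# Traefik IngressRoute for demo.com - sketch; names are placeholders
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: demo
spec:
  entryPoints:
    - websecure                  # listen on the HTTPS entry point
  routes:
    - match: Host(`demo.com`)    # requests for demo.com...
      kind: Rule
      services:
        - name: demo-service     # ...get sent to this Kubernetes Service
          port: 80
  tls:
    certResolver: letsencrypt    # placeholder resolver name from the static config
```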
Zooming out even more, we can now see the full picture:
User hits demo.com or test.com
A public DNS A or CNAME record (defined in Cloudflare in my case) points that URL at the public IP (in this case 102.420.69).
Traefik has an internal IP (in this case 10.0.0.100) which has been port forwarded on the router. The request gets forwarded from the public IP to the private internal IP that Traefik is listening on (see the sketch after these steps).
Traefik sends the request to the correct service.
The service sends the request to the correct pod.
The pod answers and the cycle reverses.
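For the 10.0.0.100 piece: in a setup like mine, Traefik is typically exposed on the LAN through a LoadBalancer Service, and that address is what gets port forwarded on the router. Here is a rough sketch; the IP, namespace, labels, and the assumption that something like MetalLB hands out the address are all specific to this example:

```yaml
# Exposing Traefik on an internal LAN IP - sketch; IP, namespace, and labels are assumptions
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: traefik
spec:
  type: LoadBalancer
  loadBalancerIP: 10.0.0.100       # assumes a bare-metal LB like MetalLB assigns this address
  selector:
    app.kubernetes.io/name: traefik
  ports:
    - name: web
      port: 80
      targetPort: web              # Traefik's HTTP entry point
    - name: websecure
      port: 443
      targetPort: websecure        # Traefik's HTTPS entry point
```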
It actually is very simple when you boil it down and look at some diagrams. I think the simplicity of the product itself is the main selling point. I know, for myself, I would rather have everything “just work” instead of battling with a super advanced tool that promises all the features and security in the world.