My friend and I have each bought an OptiPlex server. Our goal is to self-host a web app (static HTML) with redundancy.
If my server goes down, his takes over and vice versa.
I've looked into Docker Swarm, but each server has to stay totally independent (each runs its own apps, with a few shared ones).
I can't find a solution that lets each server take over for the other and handles load balancing between the two.
Ideally with traefik, because that's what we're currently using.
To me the real issue is the DNS A record, which points to only one IP :(
Your challenge is that you need a loadbalancer. By hosting the loadbalancer yourself (e.g. on a VPS), you could also host your websites directly there...
My approach would be DNS-based. You can have multiple DNS A records, and the client picks one of them. With a little script you could remove the A record of whichever server goes down. This way, you wouldn't need any central hardware.
Where would you host the script? If it's expected that the server that fires it off is always online and performing health checks, why not have it host a load-balancer? Or another local instance of the website? It's something fun to play around with, but if this is for anything beyond a fun exercise there are much better ways to accomplish this.
I'd host it on both webservers. The script sets the A records to all the servers that are online. Obviously, the script also has to check its own service.
It seems a little hacky, though; for a business use case I'd use another approach.
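For what it's worth, the script could be tiny. Here's a Python sketch of the idea; the IPs, the `/healthz` paths, and the Cloudflare API endpoint mentioned in the comment are all placeholders I made up, not anything specific to your setup:

```python
# Sketch of the "each server updates DNS" idea, assuming a DNS provider
# with an HTTP API (Cloudflare's v4 API is referenced as one example).
import urllib.request

# Placeholder IPs and health-check URLs for the two OptiPlexes.
SERVERS = {
    "1.2.3.4": "https://1.2.3.4/healthz",
    "5.6.7.8": "https://5.6.7.8/healthz",
}

def is_up(url: str, timeout: float = 5.0) -> bool:
    """A server is 'up' if its health endpoint answers HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers URLError, HTTPError, timeouts
        return False

def desired_records(health: dict[str, bool]) -> list[str]:
    """A records to publish: every healthy IP, or all IPs if none
    are healthy (pointing at a dead server beats NXDOMAIN)."""
    up = [ip for ip, ok in health.items() if ok]
    return up or list(SERVERS)

# The actual DNS update would be one HTTP call per record, e.g. with
# Cloudflare: PUT .../client/v4/zones/{zone_id}/dns_records/{record_id}
# with body {"type": "A", "name": "example.com", "content": ip, "ttl": 60}.
```

You'd cron this on both boxes (so either one can fix the records), and keep the TTL low (60s or so), since DNS caching is what limits how fast clients actually fail over.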
Essentially you need a load balancer hosted somewhere that the traffic hits before getting routed to one of the 2 servers. That could be a VPS running Traefik if you prefer that.
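If you go that route, the Traefik side is just one service with two backends and an active health check. A minimal sketch of the dynamic (file provider) config on the VPS; the domain, backend hostnames, and entrypoint name are placeholders:

```yaml
# dynamic.yml on the VPS running Traefik
http:
  routers:
    static-site:
      rule: "Host(`example.com`)"
      entryPoints:
        - websecure
      service: static-site
  services:
    static-site:
      loadBalancer:
        healthCheck:       # drop a backend that stops answering
          path: /
          interval: "10s"
          timeout: "3s"
        servers:
          - url: "http://server-a.example.net"
          - url: "http://server-b.example.net"
```

Traefik round-robins between the two servers and stops sending traffic to one while its health check fails, which is exactly the "his takes over and vice versa" behavior. The VPS itself is still a single point of failure, of course.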
Alternatively you could both run something like IPFS and serve the static site over that, but anyone accessing the site would either need IPFS installed, or use a gateway hosted somewhere (Cloudflare has a public one, for example).
If you don't want to mess with another VPS you can use a global server load balancer (GSLB) provider like Akamai, Cloudflare, Azure, etc.
This being a self-host community, though, it's unlikely you'd want to pursue something like that, but without knowing more about your specific use case it's tough to make a recommendation.
If global high-availability is your primary goal then a hosted solution is probably best.
If this is just an exercise you and your friend are working on for giggles and it's not a mission-critical production instance, then self-hosting a load balancer on each of your servers, with both nodes in its target group, would presumably achieve this. It's somewhat counterintuitive, though: if the website goes down at either location, there's a pretty high likelihood the LB on that box would be down as well.
I think what you're looking for is what is sometimes called a "DNS load balancer". Offerings like Azure Traffic Manager or AWS Route 53 do this. You set up health checks that the service uses to determine if one of your locations is down, and it then automatically updates the DNS record to point to the other one. You can also get clever and have the DNS resolve to whichever of your servers is physically closer, so you get the best performance. I'm not sure what options there are for self-hosting a DNS service like this; however, these services are extremely affordable (pennies) and run on very reliable infrastructure, which is what you want.
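To make the Route 53 version concrete: failover is a pair of A record sets, each tied to a health check. A hedged Python sketch of what that looks like; the zone ID, health-check IDs, and IPs are placeholders, and the actual boto3 calls are left commented out since they need AWS credentials:

```python
# Route 53 failover sketch: build the record sets that
# change_resource_record_sets expects. All IDs/IPs are placeholders.

def failover_record(name: str, ip: str, role: str,
                    set_id: str, health_check_id: str) -> dict:
    """One failover A record; role is 'PRIMARY' or 'SECONDARY'."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "A",
            "SetIdentifier": set_id,
            "Failover": role,
            "TTL": 60,  # low TTL so failover takes effect quickly
            "ResourceRecords": [{"Value": ip}],
            "HealthCheckId": health_check_id,
        },
    }

# import boto3
# r53 = boto3.client("route53")
# r53.change_resource_record_sets(
#     HostedZoneId="ZONEID",
#     ChangeBatch={"Changes": [
#         failover_record("example.com.", "1.2.3.4", "PRIMARY", "a", "hc-a"),
#         failover_record("example.com.", "5.6.7.8", "SECONDARY", "b", "hc-b"),
#     ]},
# )
```

Route 53 serves the PRIMARY record while its health check passes and flips to the SECONDARY when it fails, which is exactly the "his takes over" behavior, just with DNS-cache-sized delay.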