Nico @r.dcotta.eu
Posts 5
Comments 27
Kubernetes? docker-compose? How should I organize my container services in 2024?
  • Good luck on your Nix journey! Happy to help if you have questions.

    Of all the tech I use, I think Nix is the most 'avant-garde', in that it is super different from the usual methods (scripting, stateful tools), but it works very well once you get past the paradigm shift and the learning curve that entails.

  • Kubernetes? docker-compose? How should I organize my container services in 2024?
  • Nomad has host volumes - so you can tell it to mount a folder from the host machine into the container, and it will only schedule that container on machines that have that folder. So yes, you effectively pin the workload, introducing a SPOF - I do not love it, but Grafana only supports sqlite and postgres, and making those HA would require failover setups, which is a bit much for a homelab :')
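    In Nomad job terms this looks roughly like the sketch below (the volume name, host path, and mount point are placeholders; the matching host_volume stanza must also exist in the client config of the nodes that actually have the folder):

```hcl
# Sketch of a Grafana group claiming a Nomad host volume.
# Assumes the client config on some node declares:
#   client { host_volume "grafana" { path = "/srv/grafana" } }
group "grafana" {
  # Claim the host volume; Nomad will only place this group
  # on nodes whose client config declares it.
  volume "data" {
    type      = "host"
    source    = "grafana"
    read_only = false
  }

  task "grafana" {
    driver = "docker"
    config {
      image = "grafana/grafana"
    }
    # Mount the claimed volume into the container.
    volume_mount {
      volume      = "data"
      destination = "/var/lib/grafana"
    }
  }
}
```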

    For backing up, you can run the sqlite backup command periodically (as a cron job or a Nomad periodic job) and then upload the backup to some external, safe storage (could be seaweedfs or S3!). For postgres you can use something like this.
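    As a sketch, the periodic job could run something like this (the paths and the S3 endpoint are assumptions - adjust them to your setup):

```shell
# Take a consistent snapshot of a live sqlite DB (Grafana's default path
# assumed), then ship it to S3-compatible storage (seaweedfs speaks S3 too).
DB=/var/lib/grafana/grafana.db
OUT="/tmp/grafana-$(date +%F).db"

sqlite3 "$DB" ".backup '$OUT'"   # online snapshot, safe while Grafana runs
aws s3 cp "$OUT" s3://backups/grafana/ --endpoint-url "http://seaweed-filer:8333"
```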

  • Kubernetes? docker-compose? How should I organize my container services in 2024?
  • I have never used NFS, but I think it would fare much better than seaweedfs, because seaweedfs uses FUSE to implement CSI. For NFS I am sure the protocol accounts for interrupted writes

    would be the same for any CSI plugin

    No, it would depend on the CSI plugin and how it is implemented. Ceph, for example, has several, and cloud providers offer CSI volumes for their block storage (AWS EBS, GCP PD); they will all perform differently. See this comment from a seaweedfs issue:

    [...] It is always better to run databases on host volumes if you can (or on volumes provided by AWS EBS or similar). But with Seaweedfs especially if you are running postgres with seaweedfs-csi volume be prepared for data corruption. Seaweefs-csi uses FUSE, if anything happens to seaweedfs-csi (Nomad client restart, docker restart, OOM) mount will be lost and data corruption will happen.

    Running on CEPH (since CEPH CSI using Kernel driver not FUSE) is acceptable if you fine with low TPS.

    I found it easier to make recoverable, backed-up host volumes than to make DBs run on high-availability filesystems like seaweedfs (I admit I have not tried Ceph - the deployment looked a bit complicated/overkill for a homelab).

    Postgres and sqlite are just not made for that environment. To run a high-availability DB, it is better to use a distributed DB designed for that (think etcd, cassandra) than to run a non-distributed DB on top of a distributed filesystem.

    Good luck! :)

  • Kubernetes? docker-compose? How should I organize my container services in 2024?
  • The problem with using seaweedfs to back your DBs lies more with the filesystem itself than with its implementation of POSIX features. When you are writing to a file and the connection to seaweedfs breaks (container restart, wifi, you name it), you might end up with a half-written file. If you upload pictures, this is unlikely, but DBs usually do several writes per second, so it is much more likely one of those gets interrupted. In my case, my grafana sqlite DB would get corrupted every other week.

    What I recommend is running DBs natively on your node's filesystem, and backing them up to seaweedfs periodically instead. That way your DBs just work, you can get them running again after a failure, and the backup is replicated in the distributed filesystem.

  • Kubernetes? docker-compose? How should I organize my container services in 2024?
  • Good question! It depends, but TLDR: imo it is worth it (or at least fine), and it is easy to try yourself and see

    most services in their docs will show how to deploy with kubernetes or docker, but rarely Nomad

    You are absolutely correct, but I find that for the large majority of things, either you can find a Nomad config online, or it is easy enough to translate from the Docker compose one. Only some larger, more complicated deployments (think Immich) are harder to translate, and even then it just takes some trial and error. I really do think the extra trouble of translating is worth the pain you save yourself by not deploying k8s. You might spend a bit longer typing out the Nomad job file yourself, but in exchange you are thankfully not maintaining a k8s cluster.

    As far as Nomad-specific documentation goes, I think the official one is more than good enough.

    You mentioned compatibility. So far I have not found anything I really wanted that was not possible to set up in Nomad. Nomad supports CNI and CSI, the same APIs k8s uses, so things that work there will work with Nomad. Other things you would use with docker compose or k8s don't work with Nomad, but you don't need them (for example portainer or metrics exporters) because Nomad has them natively already (this blog discusses that).

    As you can see I am pretty opinionated towards Nomad - I used it in prod at my previous job, I have run it in my homelab for a year now, and I am very happy with it. If you would like to read more I recommend this blog post. For Nomad on NixOS I wrote this one.

    For now my advice is: just try Nomad yourself (as simple as running nomad agent -dev on your laptop), run through the tutorial, and see if it was easy enough that you can see yourself using it for the rest of your containers. If you need more help you are welcome to DM me :)
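    Concretely, the try-it-out loop looks something like this (nomad job init scaffolds a commented example job; the generated file name may differ between Nomad versions):

```shell
# Start a throwaway single-node Nomad (server + client in one process):
nomad agent -dev

# In another terminal: scaffold an example job, run it, and inspect it.
nomad job init                   # writes an example job file
nomad job run example.nomad.hcl
nomad job status example
```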

  • Kubernetes? docker-compose? How should I organize my container services in 2024?
  • I recommend starting with ZeroToNix's docs and then moving on to nixos.wiki, but here is a minimal, working example that I could deploy to a hetzner VPS that only has nix and ssh installed:

    { config, pkgs, ... }: {
      # generated, this will set up partitions and bootloader in a separate file
      imports = [ ./hardware-configuration.nix ];
      zramSwap.enable = true;
      networking.hostName = "miki";
      # configures SSH daemon with a public key so we can ssh in again
      services.openssh.enable = true;
      users.users.root.openssh.authorizedKeys.keys = [ ''ssh-ed25519 AAAAC3NzaC1lNDI1NTE5AAAAIPJ7FM3wEuWoVuxRkWnh9PNEtG+HOcwcZIt6Qg/Y1jka'' ];
      # creates a timmy user with sudo access and wget installed
      users.users.timmy = {
        isNormalUser = true;
        extraGroups = [ "networkmanager" "wheel" "sudo" ];
        packages = with pkgs; [ wget ];
      };
      # open up SSH port
      networking.firewall.allowedTCPPorts = [ 22 ];
      # start nginx, assumes HTML is present at `/var/www`
      services.nginx = {
        enable = true;
        virtualHosts."default" = {
          forceSSL = true;        # Redirect HTTP clients to an HTTPS connection
          default = true;         # Always use this host, no matter the host name
          root = "/var/www";      # Set the web root to serve
        };
      };
      system.stateVersion = "22.11";
    }
    

    This sets up a machine, configures the usual stuff like the ssh daemon, creates a user, and sets up an nginx server. To deploy it you would run nixos-rebuild --target-host [email protected] switch. Other tools exist (I use colmena, but the idea is the same). Note how easy it was to set up nginx! If I were setting up Nomad, I would just add services.nomad.enable = true.

    As you can see, there are some things you will have to learn (the nix language, what the config options are...), but I think it is worth it.

  • Kubernetes? docker-compose? How should I organize my container services in 2024?
  • I struggled a bit to get it up and running well, but now I am happy with it. It's not too hard to deploy (at least easier than the alternatives), it has CSI which for me was big, and it has erasure coding. The dev that maintains it (yes, the one dev) is very responsive.

    It has trade-offs, so I recommend it depending on your needs. Backing store for stateful workloads like postgres DBs? Absolutely not. Large S3 store (with an option for filesystem mount) for storing lots of files? Yes! In that regard it's good for stuff like Lemmy's pictrs or immich. I use it as my own Google Drive. You can easily replicate it in your own cluster, or back it up to an external cloud provider. You can mount it via FUSE on your personal machine too.

    Feel free to browse through my setup - if you have specific questions I am happy to answer them.

  • Kubernetes? docker-compose? How should I organize my container services in 2024?
  • I see no one else has suggested my stack, so here it is:

    Nomad for managing containers if you want high availability. Essentially the same as k8s, but much, much simpler to deploy, learn, and maintain. Perfect for homelabs imo. Most of the concepts of Nomad translate well to k8s if you do want to learn it later. It also integrates really well with Terraform if you are hoping to learn that too, but it's not a requirement.
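    To give a feel for how lean it is, a complete Nomad job is a single HCL file (a sketch; the job name, datacenter, and image are placeholders):

```hcl
# A minimal, complete Nomad job: one Docker container that Nomad
# reschedules onto a healthy node if its current node fails.
job "whoami" {
  datacenters = ["dc1"]

  group "web" {
    network {
      port "http" {
        to = 80   # container port, exposed on a dynamic host port
      }
    }

    task "server" {
      driver = "docker"
      config {
        image = "traefik/whoami"
        ports = ["http"]
      }
    }
  }
}
```

    Running it is one command (nomad job run whoami.nomad.hcl): Nomad picks a node, pulls the image, and keeps the task alive.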

    NixOS for managing the bare metal. It's a lot more work to learn than, say, Debian, but it is just as stable, and all configuration is defined as code, down to the bootloader config (no bash scripts!). This makes it super robust. You can also deploy it remotely. Once you grow beyond a handful of nodes it's important to use a config management tool, and Nix has been by far my favourite.

    If you really want everything to be infra-as-code, you can manage cloud providers via Terraform too.

    For networking I use wireguard, and configure it with NixOS. Specifically, I have a mesh network where every node can reach every other node without extra hops. This matters if you don't want a single point of failure (as in a hub-and-spoke topology) to be able to disconnect your entire cluster.
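    On NixOS, one node of such a mesh can be sketched like this (keys, IPs, and hostnames are placeholders; each node lists every other node as a peer):

```nix
# Sketch: one wireguard mesh member on NixOS. Every node carries a
# peer entry for every other node, so traffic never needs an extra hop.
networking.wireguard.interfaces.wg0 = {
  ips = [ "10.10.0.1/24" ];           # this node's mesh address
  listenPort = 51820;
  privateKeyFile = "/run/keys/wg-private";

  peers = [
    {
      publicKey = "<node2-public-key>";
      allowedIPs = [ "10.10.0.2/32" ];        # only node2's mesh IP
      endpoint = "node2.example.com:51820";
      persistentKeepalive = 25;               # keep NAT mappings alive
    }
    # ...one block like the above per remaining node
  ];
};
```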

    Everything in my setup is defined 'as-code', immutable, and multi-node (I have 7 machines) which seems to be what you want, from what you say in your post. I'll leave my repo here, and I'm happy to answer questions!

    --

    My opinions on the alternatives:

    Docker compose is great, but it doesn't scale if you want high availability (ie, having a container rescheduled on node failure). If you don't need that, anything more than docker might be overkill.

    Ansible and Puppet are alright, but they are super stateful and require scripting. If you want immutability you will love Nix/NixOS.

    k8s works (I use it at work) but is extremely hard to get right, even for well-resourced infra teams. Nomad achieves the same, but with the learnings of having come afterwards, and without the historical baggage.

  • leng - a fast DNS server with adblocking, built for self-hosting, deployable with a NixOS module

    github.com: GitHub - Cottand/leng - ⚡ fast dns server, built to block advertisements and malware servers

    A few months ago I went on a quest for a DNS server and was dissatisfied with the currently maintained projects. They were either good at adblocking (Blocky, grimd…) or good at specifying custom DNS (CoreDNS…).

    So I forked grimd and embarked on rewriting a good chunk of it to address my needs - the result is leng.

    • it is fast
    • it is small
    • it is easy
    • you can specify blocklists and it will fetch them for you
    • you can specify custom DNS records with proper zone file syntax (SRV records, etc)
    • it supports DNS-over-HTTPS so you can stay private

    I just released a new version which includes full NixOS support via a module! ❄

  • leng - a fast DNS server with adblocking, built for self-hosting
  • I think there are two approaches to infrastructure as code (and even code in general):

    • as steps (ansible, web UI like pihole...)
    • declarative (nix, k8s, nomad, terraform...)

    Both should scale (in my company we use templating a lot), but I find the latter easier to debug, because you can 'see' the expected end result. It boils down to personal preference really.

    As for your case, ideally you don't write custom code to generate your template (I agree with you that it's tedious!); instead you use the templating tool of your framework of choice. You can see this example - it's for grimd (what I forked leng from) and Nomad, but it might be useful to you.

    P.S. I also added this to the docs on signal reloading here

  • leng - a fast DNS server with adblocking, built for self-hosting
  • I have a similar use case where I also need my records to change dynamically.

    Leng doesn't support nsupdate (feel free to make an issue!), but it supports changing the config file at runtime and having leng reread it when you send it a SIGUSR1 signal. I have not documented this yet (I'll get to it today), but you can see the code here
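    For example, under systemd (assuming a unit hypothetically named leng.service) the reload is a single command:

```shell
# Ask leng to reread its config file without dropping DNS service:
systemctl kill --signal=SIGUSR1 leng.service

# Or, if leng runs as a plain process:
pkill -USR1 leng
```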

    Alternatively, you can just reload the service like you do with pihole - I don't know how quickly pihole starts, but leng should be quick enough that you won't notice the interim period while it restarts. This is what I used to do before I implemented signal reloading.

    Edit: my personal recommendation is to use templating to render the config file with your new records, then reload via SIGUSR1 or restart the service. nsupdate would make leng stateful, which is not something I want (I consider it an advantage that the config file specifies the server's behaviour exactly)

  • leng - a fast DNS server with adblocking, built for self-hosting
  • Correct, and much like grimd you can specify several. But unlike grimd, leng will perform recursion when the upstream server is not capable of resolving queries completely (namely, because a CNAME resolved by upstream somewhere points to a domain that is part of your custom DNS records, or vice versa)

  • leng - a fast DNS server with adblocking, built for self-hosting
  • Leng will cache each step of recursion, and it relies on upstream resolvers to do recursion for it as well (like grimd), so you should not be seeing 200ms resolution in any scenario.

    I am keen for you to give it a shot - if you do please make an issue if it's not behaving like you were hoping for

  • leng - a fast DNS server with adblocking, built for self-hosting
  • I am working on adding a feature comparison to the docs. But in the meantime: leng has fewer features (no web UI, no DHCP server), which means it is lighter (50MB RAM vs 150MB for adguard, 512MB for pihole) and easier to reproducibly configure, because it is stateless (no web UI settings).

    I believe blocky and coredns are better comparisons for leng than "tries to achieve it all" solutions like adguard, pihole...

  • leng - a fast DNS server with adblocking, built for self-hosting
  • If you mean CNAME flattening I have an issue for it. If you mean recursively resolving CNAME until the end record is found, it does support it.

    For example, if you set a custom record mygoogle.lol IN CNAME google.com, leng will return a response with an A record holding a google.com IP address when you visit mygoogle.lol
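    You can check this behaviour yourself with dig (10.10.0.1 here is a placeholder for wherever leng is listening):

```shell
# Ask leng for the A record of the custom name; the answer section
# should contain the CNAME plus the A record it resolves to.
dig @10.10.0.1 mygoogle.lol A +noall +answer
```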

  • leng - a fast DNS server with adblocking, built for self-hosting
  • If it's helpful to you it's helpful in reality!

    If you are having trouble installing or the documentation is not clear, feel free to point it out here or in the issues on github. Personally I think it is simplest to use docker :)

  • leng - a fast DNS server with adblocking, built for self-hosting
  • What you described is correct! How to replicate this will depend heavily on your setup.

    In my specific scenario, I make the containers of all my apps use leng as their DNS server. If you use plain docker see here; if you use docker compose you can do:

    version: "2"
    services:
      application:
        dns: [10.10.0.0] # address of your leng server here!
    

    Personally, I use Nomad, so I specify that in the job file of each service.

    Then I use wireguard as my VPN, and on my personal devices I set the DNS field to the address of the leng server. If you would like more details I can document this approach better in leng's docs :). But like I said, the best way to do this won't be the same if you don't use docker or wireguard.

    If you are interested in Nomad and calling services by name instead of IP, you can see this tangentially related blog post of mine as well

  • leng - a fast DNS server with adblocking, built for self-hosting
  • Including SRV records? I found that some servers (blocky as well) only support very basic CNAME or A records, without being able to specify parameters like TTL, etc.

    I also appreciate being able to define this in a file rather than a web UI

  • leng - a fast DNS server with adblocking, built for self-hosting

    github.com: GitHub - Cottand/leng - ⚡ fast dns server, built to block advertisements and malware servers

    A few months ago I went on a quest for a DNS server and was dissatisfied with the currently maintained projects. They were either good at adblocking (Blocky, grimd...) or good at specifying custom DNS (CoreDNS...).

    So I forked grimd and embarked on rewriting a good chunk of it to address my needs - the result is leng.

    • it is fast
    • it is small
    • it is easy
    • you can specify blocklists and it will fetch them for you
    • you can specify custom DNS records with proper zone file syntax (SRV records, etc)
    • it supports DNS-over-HTTPS so you can stay private
    • it is well-documented
    • it can be deployed on systemd, docker, or Nix

    I have been running it as my nameserver in a Nomad cluster since! I plan to keep maintaining and improving it, so feel free to give it a try if it also fulfils your needs


    A Nomad job example setup for Lemmy

    github.com GitHub - Cottand/lemmy-on-nomad-example: Nomad job files for running Lemmy

    Nomad job files for running Lemmy. Contribute to Cottand/lemmy-on-nomad-example development by creating an account on GitHub.

    I am selfhosting Lemmy on a home Nomad cluster - I wrote the job files from scratch because I did not find anybody else who attempted the same.

    I thought I'd share them and maybe they will serve as a starting point for someone using a similar selfhosted infra!

    Nomad brings a few benefits for Lemmy specifically over Ansible/Docker, most notably horizontal scaling across more than one machine.

    Feedback welcome!
