I've been doing small-scale hosting off and on for a while, mainly for accessing files at home and the occasional Minecraft server. Not smart, as I've never used a specialized router. I used to use DD-WRT, but now it's impossible to flash most consumer-grade routers.
I'd like to learn more about cyber security, host other stuff, maybe host a website, but I'm just a guy who lives in an apartment. I'm stuck with one Internet service that claims it will terminate my service if they find me to be hosting anything. They must be semi-lax with that rule, because I haven't been terminated for using SSH and Cockpit.
Do you guys own a house, or are just fortunate enough to have access to an ISP that will let you host your own stuff?
I have a yearly VPS subscription with 16 GB RAM, a 160 GB SSD and 8 cores, including a 5 TB network transfer limit. It's from a Lithuanian company (Time4VPS). I don't have a static IP at home, and if I want one I have to pay pretty much the same amount, so why bother?
It runs Debian 11, with ufw as the only security measure, plus Caddy reverse proxying everything so only a handful of ports are open (80, 8080, 443, and one each for Syncthing and DoT).
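A minimal sketch of that layout: default-deny ufw with only the listed ports open, and a one-site Caddyfile fragment. 22000 (Syncthing) and 853 (DoT) are the usual defaults for those services, and the domain and upstream port are placeholders, so adjust everything to your own setup.

```shell
# Default-deny, then open only the handful of ports mentioned above.
ufw default deny incoming
ufw default allow outgoing
ufw allow ssh        # so you don't lock yourself out of the VPS
ufw allow 80/tcp     # HTTP (Caddy redirects to HTTPS)
ufw allow 8080/tcp
ufw allow 443/tcp    # HTTPS, everything reverse-proxied by Caddy
ufw allow 22000/tcp  # Syncthing sync protocol (default port)
ufw allow 853/tcp    # DNS-over-TLS
ufw enable

# Caddyfile fragment: each service gets a hostname, Caddy handles the certs.
# "cloud.example.com" and the upstream port are placeholders.
cat > /etc/caddy/Caddyfile <<'EOF'
cloud.example.com {
    reverse_proxy localhost:11000
}
EOF
```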
I have the following services running:
Nextcloud (for office tools, calendar, to do, boards)
firefly iii for self accounting
technitium dns server for doh and dot with blocking
grafana, prometheus and node exporter for monitoring
libreddit for, well, you know
searxng
trilium for private knowledge base
tailscale for tunneling and VPN
syncthing for file syncing and password sync together with keepassxc
my personal page, auto updating with github actions over sftp.
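The auto-deploy step in a workflow like that boils down to a batch SFTP upload after the site builds. This is a sketch: the host, user, and paths are placeholders, and the SSH key would come from the repo's secrets.

```shell
# Push the built site to the VPS over SFTP (placeholder host/user/paths).
sftp -o StrictHostKeyChecking=accept-new -b - deploy@vps.example.com <<'EOF'
cd /var/www/personal-page
put -r public/*
EOF
```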
Hey! I never used AdGuard; I mainly installed Technitium to have a private and secure DNS-over-HTTPS and DNS-over-TLS server. The blocking is also quite good: it ships predefined lists (labeled "gambling", "scamming", "porn", "ads", etc.) that you can select, and it also supports custom lists. The overall UI is nice and has good UX.
For me it’s simple: my ISP has crippled the upload to 30 Mbps, making it impossible to host anything from my home publicly (download is 300 Mbps or more), but I do self-host on Unraid. It’s just for stuff in my house, or for me privately over VPN when I’m out. I run a TON of apps this way; I just don’t need them to be public, they’re mostly for me to use at home.
That for me is also selfhosting.
Now that said: I still ask my ISP the same question whenever they want to upsell me something: and what about the upload? The salespeople mostly don’t know what I mean or why it matters 🤦♀️ .. anyway, I’ve been doing this for 20+ years now…… kinda lost hope? But nah, not yet 😏 ..
“Hoop doet leven,” we tell ourselves over here (translates to: hope gives life).
I also have limited upstream bandwidth (40 Mbps nominally, sometimes up to 43 in practice), but I still host a Nextcloud instance (files, addresses, calendars) for family and friends.
Fibre will become available soon with up to 300 Mbps upstream; then I may consider installing Lemmy or even a small PeerTube instance.
Yes, we own a house, so I have a dedicated server room in the basement with a small rack with a few old, but still moderately powerful servers, and our ISP (Deutsche Telekom) offers unlimited traffic volume and has no restrictions on hosting, as far as I can tell. Maybe if I did it commercially that would be different, but I don't think so.
Same here. I have multiple servers split between Unraid and Proxmox. Everything I have set up is for local use. I used to have a few things accessible externally, but now I use WireGuard when I need to reach things remotely. The only exception is Nextcloud, which I keep publicly accessible.
I've been slowly adding things to the mini PC someone in my family gifted me. I keep thinking there's no way this little machine can take one more thing. But as I sit here it's idling between 10-20% and basically never peaks. I've had no issues with this little machine keeping up. I also have slow internet, the lowest package offered. But my little champion just keeps proving itself time and time again. It's made me realize that most people on here make this sound much more burdensome than it actually is for a personal-use server. If it's just you and family, you probably don't need to worry about it.
The more I self-host, the more I realize everyone needs to be self-hosting at least one thing. And really, why not more? Things are getting easier and easier, mostly because those amongst us with really in-depth knowledge are making easy launch scripts and self-deploying programs. It's taught me a lot and has been a lot of fun. It's also really practical.

I especially like having the FTP server, which might be the first and easiest thing I set up. It's really cool to have my own cloud. It's not super safe in that there's no data redundancy, but it's more than adequate for things I already have copies of elsewhere: my whole phone syncs up every night, and I also upload all my photos to a cloud service. My spouse and I also have a common space to share files really, really easily and keep everything in one place. This has been helpful when looking for medical records during an emergency, but also just for sharing larger files in general. We even keep a collection of ebooks on there that we read from my server every night. My kid watches most TV from my server too, which avoids exposing them to the ads I find inappropriate.
Nothing fancy: just Pi-hole on my Raspberry Pi, and Terraria / Minecraft on my laptop for my friends and me. (I would host on my main computer, but I have a really weird problem where something internet-related randomly stops working when people are connected to these, and the internet doesn't come back until I restart the machine.)
I have a salvaged desktop in a closet which I use for:
pihole (adblock and local dns)
unbound as recursive upstream dns (no more 8.8.8.8 for me)
VPN to access my home network and for some security on public wifi
NAS (only via sshfs, want to try nextcloud) where data is stored on a software raid array
a couple SQL databases for a hobby project
Since I have ports exposed (I know), I've configured no root login, moved some services off their default ports, and installed fail2ban.
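That hardening can be sketched like this; 2222 is just an example non-default port, and the paths are the Debian/Ubuntu defaults.

```shell
# sshd hardening (edit /etc/ssh/sshd_config, then restart sshd):
#   Port 2222
#   PermitRootLogin no
#   PasswordAuthentication no

# Matching fail2ban jail, so bans watch the moved port:
cat > /etc/fail2ban/jail.local <<'EOF'
[sshd]
enabled  = true
port     = 2222
maxretry = 5
findtime = 10m
bantime  = 1h
EOF
systemctl restart fail2ban
```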
I'm pretty proud of my setup and it's made my life and work flow pretty awesome and simplified, especially with the WFH/hybrid stuff.
I want to try Nextcloud so I can consolidate my calendar(s?) and get rid of Trello as a service, in addition to serving my NAS files. But I want to test-drive it first, and I don't have a system to do that properly at the moment.
I host Nextcloud and it is a huge life-saving tool. I use it for backing up photos and hosting calendars, tasks, contacts and RSS. I use Nextcloud Deck as a Trello replacement. Nextcloud can also replace Google Docs.
I originally thought it was overkill for me, since I just needed to access files, until I read about Deck, calendar, and chat. Now I'm ultra sold. I'm tired of Slack, Trello, email, and calendar all being in different places.
Also, you don't need a crazy router to get started. Mine is a crappy $100 router. Most will have port forwarding if you need to expose ports, or DDNS if you want a domain name. There are some things you'd want a slightly more powerful router for (like maybe a media server serving most of your house), but you can always upgrade later.
My self-hosted setups have evolved over the years. I started out with a Raspberry Pi hosting a Drupal site flying under my ISP's radar, with a dynamic IP address I had to point my DNS settings at pretty frequently. In time I had 3 Pis hosting websites. Then I learned about Apache virtual hosts and put all the sites on one Pi. These days I use an ODROID H3+ to host a Nextcloud instance. It sits on the back of my desk collecting dust. Glamour pic for reference. I have it propped up on some junk for better cooling. I love it for its low power consumption and relatively good performance for a single-board computer.
IPSec site-to-site to Oracle cloud (only open for Oracle VPN GW IP)
NSD for authoritative DNS
Unbound for DNS filtering (unbound adblock script)
script that updates my public IP to DNS provider should it change
Containers / Debian running on Asus PN62
Portainer for controlling local Docker as well as one in the Oracle Cloud
certbot with DNS auth to get certificate for local services, this way I don't need to open anything to the Internet
Traefik as reverse proxy configured via labels
Cloud
  Cloudflare: in front of public services; public DNS
  Oracle Cloud: free tier server (4x vCPU, 24 GB RAM) with Docker, Traefik, Portainer agent; IPSec from home so I can control Docker on my cloud server
  Azure: blob storage for backups (Restic)
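The "update my public IP at the DNS provider" script from the list above can be sketched like this. The provider call is a placeholder since every DNS API differs, and ifconfig.me is just one of several what's-my-IP services.

```shell
#!/bin/sh
# Keep a DNS record pointed at a dynamic IP (provider API is a placeholder).
CACHE="${CACHE:-/tmp/last_ip}"

ip_changed() {
    # true when $1 differs from the last address we pushed
    [ "$1" != "$(cat "$CACHE" 2>/dev/null)" ]
}

update_dns() {
    # Placeholder: swap in your DNS provider's update endpoint / API token here
    echo "would update DNS A record to $1"
}

main() {
    ip=$(curl -fsS https://ifconfig.me) || exit 1   # discover current public IP
    if ip_changed "$ip"; then
        update_dns "$ip" && printf '%s\n' "$ip" > "$CACHE"
    fi
}
# main "$@"   # uncomment to run, e.g. from cron every few minutes
```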
Everything is separated as much as I can manage. All stacks are on separate networks, with strict firewall rules (iptables) on the host controlling which container can talk to which. For example, Traefik can talk to Gitea but not vice versa. Everything on the physical network is separated by VLANs.
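A sketch of the Traefik/Gitea example, assuming Docker's DOCKER-USER chain (the hook Docker reserves for user rules) and placeholder container IPs and port. Each `-I` inserts at the top of the chain, so the commands are listed in reverse priority order.

```shell
# Final chain order (top to bottom): allow replies, allow Traefik->Gitea, drop Gitea->Traefik.
iptables -I DOCKER-USER -s 172.21.0.10 -d 172.20.0.10 -j DROP                        # Gitea may not initiate toward Traefik
iptables -I DOCKER-USER -s 172.20.0.10 -d 172.21.0.10 -p tcp --dport 3000 -j ACCEPT  # Traefik may reach Gitea's web port
iptables -I DOCKER-USER -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT         # replies to established flows pass
```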
You can use things like Tailscale or Cloudflare Tunnel for hosting things inside your home network.
I'd use Tailscale if only you or a couple of people need access to your internal network and services, or Cloudflare Tunnel if you want to expose your self-hosted services to the outside world.
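For the Cloudflare Tunnel route, the rough flow looks like this; the tunnel name, hostname, and local port are placeholders.

```shell
cloudflared tunnel login                            # authorize cloudflared against your Cloudflare zone
cloudflared tunnel create myapp                     # creates the tunnel and a credentials file
cloudflared tunnel route dns myapp app.example.com  # point the hostname at the tunnel
# Outbound-only connection: nothing is port-forwarded on the home router.
cloudflared tunnel run --url http://localhost:8080 myapp
```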
I personally have the luxury to have 2 internet connections available to me.
I live in an apartment where ISP connection A is shared among the residents (they all have their own router connected, so double NAT, which is not great but it works), and I managed to negotiate with the landlord to use a dedicated fiber connection, since it doesn't disrupt the rest of the residents, and my work pays that bill. It's a small virtual ISP, so I was also able to request a static public IP.
For my network at home, I'm using a Unifi stack: UDM-Pro and USW-Pro.
For running services on my network, I have a server running Unraid, where I mostly host the containerized services that I expect to store a lot of data.
The rest of my services run on six thin-client-grade machines (4 Lenovo ThinkCentre M73 Tiny, 1 HP ProDesk 600 G3 and 1 Shuttle XH61V), using Nomad for container clustering, Docker as the runtime engine, and Consul for service discovery.
My router port-forwards a select number of ports (80 and 443 among others) to my reverse proxy (Traefik), which then routes connections to the correct services based on the URL and other rules.
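The rule part works roughly like this, assuming Traefik is watching the Docker socket: a container only needs labels, and Traefik picks up the route from them. The hostname is a placeholder; traefik/whoami is a tiny demo web server.

```shell
# Traefik reads the routing rule from the container's labels via the Docker provider.
docker run -d --name whoami \
  --label 'traefik.http.routers.whoami.rule=Host(`whoami.example.com`)' \
  traefik/whoami
```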
But, if your ISP is being difficult... renting a VPS these days is a viable option.
Honestly, I don't recommend hosting your own public-facing stuff at all these days. Keep it all internal, and if you have websites or other things that need to be public, put them on a cheap hosting service suited to what you're doing.
Security of public-facing services is very difficult for most people to get right, especially on an ongoing basis. Better that your backed-up VPS gets hacked than something on your personal home network.
My ISP used to block ports and have pretty strict anti-hosting rules, but I moved to a place with more lax rules on hosting and set up a few things. When I moved back, I kept everything exactly as I had it. They must have eased their rules, because everything has worked; I've been back for 3 years now and they haven't dinged me.
I have a small StarTech rack in my living room (I live in an apartment) with some rack-mount gear in it (R720, R210 II, SuperMicro 846), as well as an "off-site" seedbox rented from one of the OVH sub-brands out of Canada. I use that server for any services that need to be public-facing, so my home is not exposed (not that it could be, with CGNAT).
For example, I host my own lemmy instance, which was a PITA to set up and get federating correctly.
I have an ASUSTOR NAS with 16 TB of storage that runs my Plex server. It's hardwired to my old gaming rig (2070 GPU / i7-9700K CPU), which handles the transcoding. I never took the time to set up Sonarr/Radarr, so I just manage the torrents manually when I add stuff. I own my house, if that matters, and I have AT&T Fiber, 1000 down / 1000 up.
Not sure if I understand: are you behind CGNAT? Do you want the service to be publicly accessible? If you can't do port forwarding, Tailscale can help with remote access.
Currently I use a normal desktop PC with Proxmox and a few drives to spin up some VMs and LXC containers. For the services I use Podman. Works great.
I host my own website at home. Internet speed is 20 Mbps down, 4 Mbps up. I usually access my stuff from inside my home, so the speed jumps to 100 Mbps (my router's Ethernet limit). My ISP still hasn't contacted me about anything.
I host my stuff at home on a combination of old desktops, laptops and other leftovers. In addition I have a cheap VPS that I use as a WireGuard server, routing traffic to the different servers at my home over WireGuard. I also use this VPS to let my parents use the server from their home (also over WireGuard).
An added benefit is that my ISP can't see that I'm hosting anything, although I never had issues before this setup.
My ISP (Fios) doesn't seem to care what I do. I can even open ports 80 and 443, which I have done. I host my website on an Orange Pi 5 and have a Cloudflare Tunnel securing it. The rest of my exposed services are proxied by Nginx Proxy Manager.
I use Proxmox and set up various virtual machines and containers, then Docker containers within a VM for my services. Some of the stuff I don't want Fios seeing is masked behind a VPN container (Gluetun).
Fortunately I have 700/700. It's a dynamic IP, so when it changes my services can blip. I use a SWAG container that sorts out everything I need to map services to certified HTTPS addresses on my own domain.
I recently changed my main hypervisor from Unraid to Proxmox, though I'm running Unraid as a VM under Proxmox to manage my storage, and I'm still porting services from Unraid to Proxmox.
My next step will be to set up a software router (either OPNsense or pfSense) and route some of my traffic through a VPS to remove the issues of having a dynamic IP.
Personally, I'm very lucky with my ISP. I have gigabit fiber to my door, and they charge me $10 extra per month for a public IP and to be taken off of their firewall (I also have to promise that I'm running my own firewall and traffic monitoring, but that's not a hard promise to keep, considering it's basic security).
I'm just getting back into self-hosting after years of IT work beating that hobby out of me, but my plan is to add a few hosts to my domain so I can share my photography, and maybe a Jellyfin instance for when I'm traveling for work.
After my workplace switched to HP, I have a bunch of Lenovo M900s in storage, so I figured they'd be a good base to start building my homelab back up.
I have a desktop computer (more powerful than my actual desktop computer, tbh). I run libremdb, teddit, proxitok, Jellyfin, Radarr, Sonarr, Prowlarr, Kavita, and maybe some other stuff I'm forgetting.
My own house, an internet line with an ISP that is cool with self-hosting, Proxmox/TrueNAS, OPNsense/OpenWRT for network equipment; server hardware is an ASRock server board in one server and an HP MicroServer Gen8 for storage. https://zaggy.nl
If I didn't have the first 2 I would probably use a VPS.
I went all out for a bit with a server rack holding a 12-bay hot-swap Dell server and a separate application server with a bunch of VMs, all hooked up to a fiber connection with no blocked ports, but I just downsized to a 6-bay Synology and an Intel NUC10 i7 since I'll be moving and losing the fiber connection and basement space.
Now I run most of my Docker containers on the Synology using the official Docker package, which has native support for docker compose, with the exception of Plex/Jellyfin, which run on the NUC because it has Quick Sync for low-power transcoding. For the internet connection I set up a DigitalOcean droplet with WireGuard and HAProxy, which reverse proxies ports 80, 443 and 22 to the NUC or Synology over the WireGuard tunnel, depending on the subdomain. The WireGuard connection goes out from the NUC & Synology to the VPS, so I don't need any forwarded ports; hopefully when I move it will just reconnect automatically in the new place without any setup. Kinda like a road-warrior homelab that just needs an internet connection.
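The subdomain-based routing on the droplet can be sketched as an HAProxy TCP frontend that peeks at the TLS SNI, so certificates still terminate at home. The hostnames and WireGuard peer IPs below are placeholders.

```shell
# Append an SNI-routing frontend to the droplet's HAProxy config (sketch).
cat >> /etc/haproxy/haproxy.cfg <<'EOF'
frontend https_in
    bind *:443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend nuc if { req_ssl_sni -i media.example.com }
    default_backend synology

backend nuc
    mode tcp
    server nuc 10.0.0.2:443        # NUC over the WireGuard tunnel

backend synology
    mode tcp
    server synology 10.0.0.3:443   # Synology over the WireGuard tunnel
EOF
```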
Are you me? I'm currently running 2x R710s with an SA120 stuffed with drives. One R710 handles the storage and provides NFS storage for my other R710, which runs Proxmox with all my VMs/containers. I've been seriously considering downgrading to some lower-power used former SFF workstations.
I have to use Cloudflare Tunnels if I'm hosting from home (and want to expose the service outside). There's a lot of trouble involved in maintaining hardware, so nowadays I just use a VPS.
I self-host on a Synology NAS in an apartment, but none* of my services are accessible outside of our local network. I spend enough time at home that I’m not really hurting if I can’t access radarr or sonarr from elsewhere.
*I do have web-UI VS Code access via SSH tunneling, set up using Netgear’s dynamic DNS hostname service, but that’s a one-off among the other services I run.
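That kind of tunnel is a one-liner. This sketch assumes a code-server-style web UI listening on port 8080 on the NAS and a placeholder DDNS hostname.

```shell
# Forward local port 8443 to the web UI on the NAS; -N means run no remote command.
ssh -N -L 8443:localhost:8080 user@myhome.ddns.example.com
# then open http://localhost:8443 in a browser
```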
I started with a Raspberry Pi 4 4GB running Home Assistant with a bunch of add-ons. Moved on to a mini PC running Proxmox with some VMs (one for Home Assistant) and LXCs (NGINX Proxy Manager, Docker, AdGuard Home, Jellyfin and more). With a 4-core 8-thread Intel CPU and 16GB of RAM, it's got enough power for my usage so far.
My router is a regular consumer-grade router, but it's been pretty good at reassigning the same IP address to each of my machines. My ISP doesn't restrict my uploads and hasn't complained about my self-hosted services, but there's not much traffic as I'm the only one using them.
I'm also adding a NAS to the mix soon for more storage!
I think most ISPs will say "don't host anything." But I do anyway. I use Cloudflare Zero Trust to reach back in, with Azure AD as the authentication mechanism, since I already pay for it through Teams.
I have a house with a basement and a fiber connection where I run my stuff. I also have a pair of VPSes for odds and ends, from RackNerd Black Friday deals ($160 a year for 8 cores / 12 GB RAM).