Hello! I have been struggling through a few tutorials on getting a lemmy instance to work correctly when set up with Docker.
I have it mostly done, but there are various issues each time that I do not have the knowledge to properly correct.
I am familiar with Docker, and already have an Oracle VPS set up on ARM64 Ubuntu.
I already have portainer and an NGINX proxy set up and working okay. I have an existing lemmy instance "running" but not quite working.
My best guess here would be to have someone assist with setting up the docker-compose to work with current updates/settings, as well as the config.hjson.
TIA, and I can't wait to have my own entry into the fediverse working right!
{
  # for more info about the config, check out the documentation
  # https://join-lemmy.org/docs/en/administration/configuration.html
  # only few config options are covered in this example config
  setup: {
    # username for the admin user
    admin_username: "noUsrHere"
    # password for the admin user
    admin_password: "noPassHere"
    # name of the site (can be changed later)
    site_name: "Bulwark of Boredom"
  }
  # the domain name of your instance (eg "lemmy.ml")
  hostname: "lemmy.bulwarkob.com"
  # address where lemmy should listen for incoming requests
  bind: "0.0.0.0"
  # port where lemmy should listen for incoming requests
  port: 8536
  # Whether the site is available over TLS. Needs to be true for federation to work.
  tls_enabled: true
  # pictrs host
  pictrs: {
    url: "http://pictrs:8080/"
    api_key: "API_KEY"
  }
  # settings related to the postgresql database
  database: {
    # name of the postgres database for lemmy
    database: "noDbHere"
    # username to connect to postgres
    user: "noUsrHere"
    # password to connect to postgres
    password: "noPassHere"
    # host where postgres is running
    host: "postgres"
    # port where postgres can be accessed
    port: 5432
    # maximum number of active sql connections
    pool_size: 5
  }
}
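For reference, the bit of the docker-compose that ties this config to the lemmy container looks roughly like this in the official example (the image tag is just a placeholder, and I may be paraphrasing the exact lines):

lemmy:
  image: dessalines/lemmy:0.17.4   # placeholder tag, use whatever the guide currently lists
  restart: always
  volumes:
    # the config.hjson above gets mounted to the path lemmy reads by default
    - ./lemmy.hjson:/config/config.hjson
  depends_on:
    - postgres
    - pictrs

The "postgres" and "pictrs" hostnames in the hjson only resolve because those are the service names on the same compose network.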
I am not certain whether I am somehow looking at the wrong location for the config in the container. There is no volume or link for a conf file from host to container, so I am just grabbing it from the default location, /etc/nginx/nginx.conf:
# run nginx in foreground
daemon off;
pid /run/nginx/nginx.pid;
user npm;
# Set number of worker processes automatically based on number of CPU cores.
worker_processes auto;
# Enables the use of JIT for regular expressions to speed-up their processing.
pcre_jit on;
error_log /data/logs/fallback_error.log warn;
# Includes files with directives to load dynamic modules.
include /etc/nginx/modules/*.conf;
events {
  include /data/nginx/custom/events[.]conf;
}
http {
  include /etc/nginx/mime.types;
  default_type application/octet-stream;
  sendfile on;
  server_tokens off;
  tcp_nopush on;
  tcp_nodelay on;
  client_body_temp_path /tmp/nginx/body 1 2;
  keepalive_timeout 90s;
  proxy_connect_timeout 90s;
  proxy_send_timeout 90s;
  proxy_read_timeout 90s;
  ssl_prefer_server_ciphers on;
  gzip on;
  proxy_ignore_client_abort off;
  client_max_body_size 2000m;
  server_names_hash_bucket_size 1024;
  proxy_http_version 1.1;
  proxy_set_header X-Forwarded-Scheme $scheme;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header Accept-Encoding "";
  proxy_cache off;
  proxy_cache_path /var/lib/nginx/cache/public levels=1:2 keys_zone=public-cache:30m max_size=192m;
  proxy_cache_path /var/lib/nginx/cache/private levels=1:2 keys_zone=private-cache:5m max_size=1024m;
  log_format proxy '[$time_local] $upstream_cache_status $upstream_status $status - $request_method $scheme $host "$request_uri" [Client $remote_addr] [Length $body_bytes_sent] [Gzip $gzip_ratio] [Sent-to $server] "$http_user_agent" "$http_referer"';
  log_format standard '[$time_local] $status - $request_method $scheme $host "$request_uri" [Client $remote_addr] [Length $body_bytes_sent] [Gzip $gzip_ratio] "$http_user_agent" "$http_referer"';
  access_log /data/logs/fallback_access.log proxy;
  # Dynamically generated resolvers file
  include /etc/nginx/conf.d/include/resolvers.conf;
  # Default upstream scheme
  map $host $forward_scheme {
    default http;
  }
  # Real IP Determination
  # Local subnets:
  set_real_ip_from 10.0.0.0/8;
  set_real_ip_from 172.16.0.0/12; # Includes Docker subnet
  set_real_ip_from 192.168.0.0/16;
  # NPM generated CDN ip ranges:
  include conf.d/include/ip_ranges.conf;
  # always put the following 2 lines after ip subnets:
  real_ip_header X-Real-IP;
  real_ip_recursive on;
  # Custom
  include /data/nginx/custom/http_top[.]conf;
  # Files generated by NPM
  include /etc/nginx/conf.d/*.conf;
  include /data/nginx/default_host/*.conf;
  include /data/nginx/proxy_host/*.conf;
  include /data/nginx/redirection_host/*.conf;
  include /data/nginx/dead_host/*.conf;
  include /data/nginx/temp/*.conf;
  # Custom
  include /data/nginx/custom/http[.]conf;
}
stream {
  # Files generated by NPM
  include /data/nginx/stream/*.conf;
  # Custom
  include /data/nginx/custom/stream[.]conf;
}
# Custom
include /data/nginx/custom/root[.]conf;
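If it helps to verify I am even looking at the right file, I figure I can dump whatever config the container actually loaded and list the GUI-generated hosts with something like this (the container name is a placeholder for whatever docker ps shows):

# dump the full merged config nginx is actually running with
docker exec -it npm-app nginx -T | less
# list the proxy hosts NPM generated from the GUI
docker exec -it npm-app ls /data/nginx/proxy_host/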
I may be mistaken in how I am proceeding, but as many are reporting, the docker-compose and general Docker instructions in the install guide don't quite seem to work as expected. I have been trying to piece this together, and the included lemmy nginx service container was removed entirely (edited out/deleted) once I had the standalone nginx-proxy-manager set up and working for regular 80/443 -> 1234 proxy requests to the lemmy-ui container.
Does the lemmy nginx have a specific role or tie-in? I am still fairly new to reverse proxying in general.
All containers are running. I handle them with Portainer, though I build the stack from the CLI in the lemmy dir, so Portainer can't fully manage them. Reboots, logs, networking, and such work fine though.
As for the nginx config, Nginx Proxy Manager has all proxy host and SSL settings configured from the web GUI. I made no manual edits to any configuration or settings in the container during or after compose, only GUI actions.
When looking at the nginx.conf I replied with here (my current conf), I do not see anything related to the proxy host I created from the GUI. I am not sure if that is normal, or if I maybe grabbed the wrong .conf here.
With that in mind, would you suggest I simply overwrite and/or add your snippet to my existing conf file?
include /etc/nginx/conf.d/*.conf;
include /data/nginx/default_host/*.conf;
include /data/nginx/proxy_host/*.conf;
include /data/nginx/redirection_host/*.conf;
include /data/nginx/dead_host/*.conf;
include /data/nginx/temp/*.conf;
conf.d/* has two configurations that appear to be some form of default: default.conf and production.conf. production.conf is only for the admin GUI.
default.conf:
The container has a volume set, /lemmy/docker/nginx-proxy-manager/data:/data.
I have those folders and more, and they DO seem to have the correct custom items.
Specifically, in the proxy_host folder I have a configuration for the proxy host I set up (1.conf) in the GUI:
# ------------------------------------------------------------
# lemmy.bulwarkob.com
# ------------------------------------------------------------
server {
  set $forward_scheme http;
  set $server "172.24.0.5";
  set $port 1234;
  listen 80;
  listen [::]:80;
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  server_name lemmy.bulwarkob.com;
  # Let's Encrypt SSL
  include conf.d/include/letsencrypt-acme-challenge.conf;
  include conf.d/include/ssl-ciphers.conf;
  ssl_certificate /etc/letsencrypt/live/npm-1/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/npm-1/privkey.pem;
  # Block Exploits
  include conf.d/include/block-exploits.conf;
  # Force SSL
  include conf.d/include/force-ssl.conf;
  access_log /data/logs/proxy-host-1_access.log proxy;
  error_log /data/logs/proxy-host-1_error.log warn;
  location / {
    # Proxy!
    include conf.d/include/proxy.conf;
  }
  # Custom
  include /data/nginx/custom/server_proxy[.]conf;
}
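One thing I want to double-check is that the hardcoded 172.24.0.5 in that file still belongs to the lemmy-ui container, since container IPs can change when containers are recreated. Something like this should confirm it (the container name is a guess on my part):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' lemmy-ui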
I actually started with this tutorial a few days ago after failing with the official guide. I followed it but was unable to get it running due to unexpected errors. I'm guessing this tutorial is somewhat out of date.
I've made progress since using that guide, though, so I will see if I can pull any useful bits out of it later today and continue.
Worst case, I could also just ditch NPM if I can get another NGINX set up in a way that you might know how to do correctly.
I've been through a boatload of changes since earlier today.
I've rebuilt using mostly the provided yml from the official guide, and after some tweaking, almost everything is working.
The internal proxy is now working, and the containers are working amongst themselves fully as far as I can tell.
I do not know how to set up a web-facing reverse proxy in a way that works around the internal proxy already running (other than the NPM already in place).
I turned NPM back on and was able to get it reaching the site, however I cannot reach any other communities from within my instance. I believe the NPM reverse proxy is just not set up right.
Error message in lemmy:
ERROR HTTP request{http.method=GET http.scheme="https" http.host=lemmy.bulwarkob.com http.target=/api/v3/ws otel.kind="server" request_id=69004ca6-7967-48c3-a4d2-583e961e34d3 http.status_code=101 otel.status_code="OK"}: lemmy_server::api_routes_websocket: couldnt_find_object: Request error: error sending request for url (https://midwest.social/.well-known/webfinger?resource=acct:[email protected]): operation timed out
0: lemmy_apub::fetcher::search::search_query_to_object_id
at crates/apub/src/fetcher/search.rs:17
1: lemmy_apub::api::resolve_object::perform
with self=ResolveObject { q: "[email protected]", auth: Some(Sensitive) }
at crates/apub/src/api/resolve_object.rs:21
2: lemmy_server::root_span_builder::HTTP request
with http.method=GET http.scheme="https" http.host=lemmy.bulwarkob.com http.target=/api/v3/ws otel.kind="server" request_id=69004ca6-7967-48c3-a4d2-583e961e34d3 http.status_code=101 otel.status_code="OK"
at src/root_span_builder.rs:16
I would be happy to remove NPM from this stack if it's not too difficult to get a correctly working reverse proxy set up. The documentation doesn't give much to work with there.
From the log it seems that lemmy cannot reach https://midwest.social/. If you have more of these "operation timed out" errors, there is probably some networking issue with outgoing requests - maybe you have some kind of firewall? I can reach your instance from the other direction: https://group.lt/c/[email protected]
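You can test outgoing requests from inside the lemmy container with something along these lines (the container name is a guess - check docker ps - and if curl is not in the image, run the same request from the host instead):

docker exec -it lemmy_lemmy_1 curl -v "https://midwest.social/.well-known/webfinger?resource=acct:[email protected]"

If that also times out, the problem is probably outbound networking/firewall on the VPS rather than your reverse proxy.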
Probably the easiest way to set up lemmy behind another front-facing reverse proxy is to run the nginx that comes with lemmy on another port and set up simple reverse proxying from NPM to it. I myself use Caddy for reverse proxying, with this config:
https://join-lemmy.org/docs/en/administration/caddy.html
worker_processes 1;
events {
  worker_connections 1024;
}
http {
  upstream lemmy {
    # this needs to map to the lemmy (server) docker service hostname
    server "lemmy:8536";
  }
  upstream lemmy-ui {
    # this needs to map to the lemmy-ui docker service hostname
    server "lemmy-ui:1234";
  }
  server {
    # this is the port inside docker, not the public one yet
    listen 80;
    # change if needed, this is facing the public web
    server_name localhost;
    server_tokens off;
    gzip on;
    gzip_types text/css application/javascript image/svg+xml;
    gzip_vary on;
    # Upload limit, relevant for pictrs
    client_max_body_size 20M;
    add_header X-Frame-Options SAMEORIGIN;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    # frontend general requests
    location / {
      # distinguish between ui requests and backend
      # don't change lemmy-ui or lemmy here, they refer to the upstream definitions on top
      set $proxpass "http://lemmy-ui";
      if ($http_accept = "application/activity+json") {
        set $proxpass "http://lemmy";
      }
      if ($http_accept = "application/ld+json; profile=\"https://www.w3.org/ns/activitystreams\"") {
        set $proxpass "http://lemmy";
      }
      if ($request_method = POST) {
        set $proxpass "http://lemmy";
      }
      proxy_pass $proxpass;
      rewrite ^(.+)/+$ $1 permanent;
      # Send actual client IP upstream
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    # backend
    location ~ ^/(api|pictrs|feeds|nodeinfo|.well-known) {
      proxy_pass "http://lemmy";
      # proxy common stuff
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
      # Send actual client IP upstream
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
  }
}
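And roughly how that internal nginx gets published in the compose so NPM can forward to it - the service name and host port here are only an example, pick whatever is free on your host:

proxy:
  image: nginx:1-alpine
  volumes:
    # the nginx config above
    - ./nginx.conf:/etc/nginx/nginx.conf:ro
  ports:
    # publish on a non-standard host port; point the NPM proxy host
    # at this port instead of at lemmy-ui directly
    - "1236:80"
  restart: always
  depends_on:
    - lemmy
    - lemmy-ui

Then the NPM proxy host for lemmy.bulwarkob.com forwards to that port (with websocket support enabled) instead of straight to lemmy-ui:1234.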
Do you know if I should expect to have TWO separate NGINX proxy instances (assuming I use NGINX) - one in-stack and one separate, web-facing reverse proxy? Or do I need to combine the two configs into one instance?
I am going to see if I can get a Caddy reverse proxy set up in the meantime and see how it performs given your configuration there.
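If the Caddy route works out, I am assuming the Caddyfile is basically just something like this, pointed at wherever that internal proxy ends up published (the host/port is my guess):

lemmy.bulwarkob.com {
    # forward everything to the in-stack proxy; Caddy handles the TLS cert itself
    reverse_proxy localhost:1236
}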