As you may have read in other threads, this version of Lemmy (0.19.1) seems to have bugs in outgoing federation on some instances.
As a temporary fix, we have added a scheduled restart of Lemmy every hour. The restart only takes a few seconds, and the big advantage is that your comments and posts are delayed by at most an hour before they federate to other instances. You probably won't even notice the restart.
This will be in effect until a bug fix arrives from the Lemmy developers, probably sometime after New Year's.
Thanks for reading and merry x-mas to everyone. :)
Yes, it seems the restart is not as reliable as I had hoped.
It's a docker compose restart of the entire stack, but that has proven to work only sometimes, which is not good enough. This afternoon it failed once (as you noticed), then worked four times, then failed once more.
We will figure out a more reliable way to restart, but for now, the restart has been removed again.
Really sorry about all this. We're just trying to find a way to make federation work again, since it's very frustrating when it doesn't.
Anyway, merry Christmas and thanks for being understanding during all this.
For what it's worth, I set up a cronjob on my instance to restart just the lemmy container every 6 hours. The crontab entry looks like: 0 */6 * * * docker container restart lemmy-lemmy-1. Federation has been pretty good so far, but it has only been running for less than a day.
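In case it helps anyone replicate this, here is one way that entry could be installed non-interactively. This is just a sketch: it assumes the invoking user is allowed to run docker (e.g. root or a member of the docker group), and it uses the container name from the comment above, which will differ on other setups.

```shell
# Append the 6-hourly restart job to the current user's crontab,
# preserving any existing entries. Adjust the container name to match
# your own compose project (check with: docker ps).
( crontab -l 2>/dev/null; echo '0 */6 * * * docker container restart lemmy-lemmy-1' ) | crontab -
```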
Yeah I could try to restart only the Lemmy container instead of the entire stack. Worth trying, thank you. :)
I tried it, and it does cause about 20 seconds of downtime, but it could be worth doing a couple of times per day anyway. It's so frustrating with no outgoing messages...
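If the single-container restart gets scripted, it could also verify that the container actually came back up, which might help with the unreliability mentioned above. A rough sketch, assuming the container name "lemmy-lemmy-1" from the cron example earlier in the thread (adjust as needed):

```shell
#!/bin/sh
# Restart a single container and verify it is running again,
# retrying up to 3 times before giving up.

restart_lemmy() {
  container="${1:-lemmy-lemmy-1}"
  for attempt in 1 2 3; do
    # Ask Docker to restart the container; on failure, try again.
    docker container restart "$container" || continue
    # Give the container a moment to settle before checking its state.
    sleep 5
    running=$(docker container inspect -f '{{.State.Running}}' "$container")
    if [ "$running" = "true" ]; then
      echo "restart ok on attempt $attempt"
      return 0
    fi
  done
  echo "restart failed after 3 attempts" >&2
  return 1
}
```

A cron entry could then call this script instead of the bare docker command, so a failed restart is retried instead of silently leaving the instance down.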
As far as I can tell, not one of my comments has made it out in the last 4 days. No replies, no votes, not visible on any instance outside of lemmy.today.
Such is life on the bleeding edge...
Edit: seems like at least some got pushed out, as I'm finally getting responses again.
We had to remove the scheduled restarts since they didn't work reliably and sometimes brought the server down, so they are not in effect right now. For now we only do manual restarts a few times per day.
Well, at least it appears that they are aware of the problem, and one person said he may have found a fix. There is an open pull request that might resolve it, so hopefully this'll be over soon.