UPDATE: The block is temporary. lemmy.world was sending us a cascade of duplicates, and the only way to keep the site up reliably was to temporarily drop them. We're in the process of adding more hardware to increase RAM, CPU cores, and disk space. Once that new hardware is in place we can try turning the firehose back on. Until then, please be patient.
ORIGINAL POST:
Sometime yesterday afternoon, it looks like our instance started blocking lemmy.world: https://lemmy.sdf.org/instances
This is kind of a big deal, because 1/3rd of all active users originate there!
Was this decision intentional? If so, could we get some clarification about it? @[email protected]
- increased cores and memory, so hopefully we never touch swap again
- a dedicated server for pict-rs with its own RAID
- a dedicated server for lemmy and postgresql with its own RAID
- lemmy-ui and nginx run on both to handle UI requests (roughly as sketched below)
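For anyone curious what "lemmy-ui and nginx on both" looks like in practice, here's a minimal sketch of the routing, loosely based on the stock nginx example from the Lemmy docs. This is not our actual config; the hostnames, ports, and matched paths are placeholders from the upstream defaults.

```nginx
# Hypothetical sketch, not the real lemmy.sdf.org config.
# Assumes the lemmy backend on :8536 and lemmy-ui on :1234 (upstream defaults).
upstream lemmy {
    server 127.0.0.1:8536;
}
upstream lemmy-ui {
    server 127.0.0.1:1234;
}

server {
    listen 443 ssl http2;
    server_name lemmy.example.org;

    location / {
        # Default to the UI; send ActivityPub/JSON requests to the Rust backend.
        set $proxpass "http://lemmy-ui";
        if ($http_accept ~ "^application/.*$") {
            set $proxpass "http://lemmy";
        }
        proxy_pass $proxpass;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location ~ ^/(api|pictrs|feeds|nodeinfo|.well-known) {
        # API, image, and federation endpoints always hit the backend.
        proxy_pass "http://lemmy";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Running that same front end on both boxes just means each can answer UI requests while proxying API and pict-rs traffic to wherever those services actually live.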
Thank you to everyone who stuck around and helped out; it's appreciated. We're working on additional tweaks suggested by the Lemmy community and hope to let lemmy.world try to DoS us again soon. Hopefully we'll do much better this time.
Killer stuff! Sorry for contributing undue pressure on top of what was probably already a taxing procedure happening in the server room.
Out of curiosity: how do you feel about Lemmy performance so far? I'm actually a little surprised that we've already managed to outstrip the prior configuration. I suppose inter-instance ActivityPub traffic just punches really hard regardless of intra-instance activity?