I'm really enjoying Lemmy. I think we've got some growing pains in UI/UX and we're missing some key features (like community migration and actual redundancy). But how are we going to collectively pay for this? I saw an (unverified) post that Reddit received $400M from ads last year. Lemmy isn't going to be free. Can someone with actual server experience chime in with some back-of-the-napkin math on how expensive it would be if everyone migrated from Reddit?
This is what I think, but if anyone understands it differently please correct me.
Vertical scalability refers to scaling within a single instance. More users join and post more content, increasing the disk space needed to store that content, the network bandwidth needed to handle many users downloading comments and images at once, and the processing power.
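To put some very rough numbers on the vertical side (every figure below is a guess I'm making up for illustration, in the spirit of the back-of-the-napkin math the OP asked for, not a measurement of any real instance):

```python
# Back-of-the-napkin vertical scaling estimate for one instance.
# Every number below is a made-up assumption, not a measurement.

users = 50_000                  # active users on this instance (assumption)
posts_per_user_day = 0.5        # avg posts per user per day (assumption)
comments_per_user_day = 5       # avg comments per user per day (assumption)
avg_text_bytes = 1_000          # one post/comment incl. metadata (assumption)
image_fraction = 0.2            # fraction of posts with an image (assumption)
avg_image_bytes = 500_000       # average stored image size (assumption)

items_per_day = users * (posts_per_user_day + comments_per_user_day)
text_per_day = items_per_day * avg_text_bytes
images_per_day = users * posts_per_user_day * image_fraction * avg_image_bytes

print(f"items/day:        {items_per_day:,.0f}")
print(f"text growth/day:  {text_per_day / 1e9:.2f} GB")
print(f"image growth/day: {images_per_day / 1e9:.2f} GB")
print(f"storage/year:     {(text_per_day + images_per_day) * 365 / 1e12:.2f} TB")
```

On those made-up numbers, storage by itself looks manageable; bandwidth for serving images to readers would probably dominate, since each image gets downloaded many more times than it gets uploaded.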
Horizontal scaling refers to the lemmyverse growing through the addition of new instances. The problem with this form of scaling is the resources an instance has to spend on its interactions with other instances. You might create a small instance without a lot of users, but that instance can still need a lot of resources if it pulls in a lot of information (posts, comments, user information, etc.) from the larger instances. For example, at some point a community on lemmy.ml might be so popular that subscribing to it from a small instance becomes too much of a burden on the smaller instance, because of the storage required to keep up with the constant stream of new posts. Horizontal scaling becomes a problem when the lemmyverse grows so large that a machine with only modest resources can no longer take part, because its disk fills up in a few hours or days.
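As a concrete (and entirely made-up) example of what a tiny instance signs up for when its users follow a few busy remote communities, assuming it simply receives and stores every federated post and comment:

```python
# Rough inbound federation load on a tiny instance, assuming it receives and
# stores every post/comment activity for each remote community its users
# subscribe to. All figures are illustrative guesses.

subscribed_communities = 20          # remote communities anyone here follows (guess)
activities_per_community_day = 2_000 # posts + comments per busy community/day (guess)
avg_activity_bytes = 2_000           # JSON activity + database row overhead (guess)

inbound_per_day = subscribed_communities * activities_per_community_day
bytes_per_day = inbound_per_day * avg_activity_bytes

print(f"inbound activities/day: {inbound_per_day:,}")
print(f"avg inbound rate:       {inbound_per_day / 86_400:.2f}/s")
print(f"disk growth/day:        {bytes_per_day / 1e6:.0f} MB")
print(f"disk growth/year:       {bytes_per_day * 365 / 1e9:.1f} GB")
```

On those guesses the raw text volume stays small; whether the constant trickle of requests and database writes is a problem for a tiny box is the more interesting question.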
Let's say your tiny 3-person instance is connected to a big one. I believe it only pulls in content from the communities somebody from the small instance is subscribed to. Correct me if I'm wrong.
Essentially - if someone from the small instance subscribes to a community that has a ton of data (huge post volume, images, whatever), the small instance needs to pull that data over from the larger instance. At some point there may be communities so large that small instances can't pull them in without tanking.
If I'm reading the protocol right, it's probably larger instances that will avoid more duplication, since:
There's a higher chance they'll have more communities shared among users (really tiny instances probably also get a lot of overlap, since those people likely have interconnected interests, but I'd expect that to fall off quickly as an instance grows and then converge again at scale).
A larger number of users means more of the content being pulled down actually gets 'used' (I can't read all of a highly active community in a day, but 1000 people checking in throughout the day might 'use' all of it).
I'm not sure where you see caching fitting in, though.
I'm surprised I don't see some kind of lower-resolution digest concept in the protocol (which might be what you're looking for).
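Something like this is what I have in mind. It's purely hypothetical, nothing like it exists in ActivityPub or Lemmy as far as I can tell: the small instance fetches a periodic summary and only pulls the full threads its local users actually open.

```python
# Purely hypothetical "low-resolution digest" a small instance could fetch
# instead of the full activity stream. This is NOT part of ActivityPub or
# Lemmy; it's only a sketch of the idea.

digest = {
    "community": "https://lemmy.ml/c/asklemmy",   # example community URL
    "period": "2023-06-12T00:00:00Z/P1D",         # one day of activity
    "new_posts": 350,
    "new_comments": 4200,
    "top_threads": [
        # only the most active threads; full comment trees would be fetched
        # on demand, when a local user actually opens one
        {"id": "https://lemmy.ml/post/12345", "title": "Example thread", "comments": 800},
    ],
}

def threads_to_fetch(digest, opened_ids):
    """Return only the threads local users have actually opened."""
    return [t for t in digest["top_threads"] if t["id"] in opened_ids]
```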
Maybe I phrased that poorly and you didn't understand what I was trying to say. The size of the bigger instance shouldn't matter at all, because only data from communities that a member of the smaller instance is subscribed to gets pulled. So whether the bigger instance has 1,000 members or 2 million members shouldn't make a difference. The only thing that should matter is how active the communities are that members are subscribed to.
Sure, the size of the communities is what matters (multiplied by the number of communities users on the server care about).
I think most of us are assuming larger instances are more likely to host the larger communities.
Actually, if I'm reading the protocol right, it'd be hard for a small server to host a highly active community anyway (for some value of 'highly active'). So yes, some 2-person instance that was created to offload stuff could be the primary host for a massive community, but in practice it won't be.
We are arguing about very specific things here anyway. And I generally do share your concerns about how well this is going to scale. I want this to do well.
That's what I've gathered, but I don't believe there's a way for instance owners to limit what's fetched - a user crafts the query and the server does the needful.
I imagine this could amount to a denial of service attack of sorts, if some high-churn communities are imported into tiny instances. How bad that could be, I have no idea - I'm speaking pretty theoretically, here. Text is tiny, after all, so it's probably not much of a concern, since most of the media is actually handled elsewhere...
I'm not a web developer. I'm sort of a sysadmin, so I have some experience maintaining machines running web apps for other people. And you're right: text won't create massive amounts of data. But a lot of tiny transactions can bring down machines surprisingly fast, even if the total amount of data is relatively small.
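A rough illustration of what I mean, with completely invented numbers: every incoming activity is an HTTP request plus signature verification plus a handful of database writes, and it's that per-item overhead, not the bytes, that adds up.

```python
# Illustrative only: small data, lots of tiny transactions.
# The per-activity work model below is a guess, not measured Lemmy behaviour.

activities_per_day = 200_000     # incoming federated posts/comments/votes (guess)
db_writes_per_activity = 5       # inserts/updates per activity (guess)
http_overhead_ms = 5             # signature check + parsing per request (guess)

avg_rate = activities_per_day / 86_400
peak_rate = avg_rate * 5         # traffic isn't uniform across the day (guess)

print(f"average: {avg_rate:.1f} activities/s, "
      f"{avg_rate * db_writes_per_activity:.0f} DB writes/s")
print(f"peak:    {peak_rate:.1f} activities/s, "
      f"{peak_rate * db_writes_per_activity:.0f} DB writes/s")
print(f"CPU time just on request overhead: "
      f"{activities_per_day * http_overhead_ms / 1000 / 3600:.2f} core-hours/day")
```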
I guess we are here to experience it first hand. I don't think anybody, not even the developers, has a clear idea of how well this will scale. There is only one way to find out lol
Interesting, so would the smaller instance in this case have to perpetually store all content from the remote community, or does it just store the most recent X posts with the rest archived on the instance hosting the community? Or is it more an issue of the resources required to handle the transactions rather than the amount of data per se?