
Missing comments from subscribed community on another instance?

I do the majority of my Lemmy browsing on my own personal instance, and I've noticed that some threads are missing comments; in some large threads, quite a lot of them. I'm not talking about comments that are absent when you first subscribe to or discover a community on your instance. In this case, I noticed it with a lemmy.world thread that popped up less than a day ago, well after I had subscribed.

At the time of writing, that thread has 361 comments. When I view the same thread on my instance, I can only see 118; that's a large swathe of missing content for just one thread. I can use the search feature to forcibly resolve a particular comment to my instance and reply to it, but that defeats a lot of the purpose behind having my own instance.
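
For anyone curious, the search workaround boils down to asking my instance to resolve the comment's ActivityPub URL, which is roughly what the search box does under the hood. Something like this sketch against the v3 HTTP API should do the same thing; the endpoint, auth handling, and URLs are my best guess and can differ between Lemmy versions:

    # Ask the home instance to fetch one remote comment by its ActivityPub URL.
    # Hypothetical values throughout; "resolve_object" and the "auth" parameter
    # are how I understand the v3 API, and they vary between Lemmy versions.
    import requests

    HOME = "https://my-instance.example"                # placeholder home instance
    COMMENT_URL = "https://lemmy.world/comment/123456"  # placeholder remote comment
    JWT = "..."                                         # login token on the home instance

    resp = requests.get(
        f"{HOME}/api/v3/resolve_object",
        params={"q": COMMENT_URL, "auth": JWT},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())  # should include the comment once it has been pulled in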

So has anyone else noticed something similar happening? I know my instance hasn't gone down since I created it, so it couldn't be that.

32 comments
  • Does your server have enough power and workers to handle all the federated messages? Or is it constantly at 100% CPU?

    • The machine is a dedicated server with 6 cores, 12 threads, all of which are usually under 10% utilization. Load averages are currently 0.35, 0.5, 0.6. Maybe I need to add more workers? There should be plenty of raw power to handle it.

      • Yeah, that sounds like enough power to handle the load. How many workers do you use? And do you see any errors in your logs about handling messages? You could also try searching for that particular thread to see whether all the replies were handled correctly.

  • I haven’t noticed it happening. But haven’t checked much.

    What I have noticed is that some of the larger, overloaded instances can be slow: slow to post comments to, slow to subscribe to, slow to post threads on, etc., especially from a separate federated instance.

    Lemmy.world is easily one I have noticed, along with lemmy.ml and occasionally beehaw (but much less so).

    My guess is that in general those instances may be slow to sync/update data or respond.

  • This arises from the good ol' issue of everybody just migrating to the same three or four big servers, which end up overloaded with their own users and can't send updates to other instances.

    I remember the same happening to Mastodon during the first few exoduses, until a combination of people not staying, stronger servers, and software improvements settled the issue.

    I can barely get updates from lemmy.ml, and lemmy.world isn't much better.

    Beehaw seems to perform okay.

    • About half of the communities on lemmy.ml I subscribed to are on "Subscribe Pending" and have been since I started this server.

  • I've noticed something similar on my instance in some cases as well. Nothing obvious logged as errors either. It just seems like the comment was never sent. In my case, CPU usage is minimal, so it doesn't seem like a resource issue on the receiving side.

    I suspect it may be a resource issue on the sending side, potentially not being able to keep up with the number of subscribers. I know there was some discussion from the devs about the number of federation workers needing to be increased to keep up, so that's another possibility.

    It's definitely problematic, though. I was contemplating implementing some kind of resync of an entire post and all of its comments via the Lemmy API to get things back in sync, roughly along the lines of the sketch below. But if it is a sending-side resource issue, I'm also hesitant to add a bunch more API calls to the mix. I think some kind of resync functionality will be necessary in the end.
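
    Roughly what I have in mind, sketched against the v3 HTTP API; the endpoint names, response fields, and auth handling here are my assumptions and may need adjusting per Lemmy version:

        # Hypothetical resync sketch: read every comment of a post from the origin
        # instance, then ask the home instance to resolve each comment's
        # ActivityPub id. One resolve call per missing object is the painful part.
        import requests

        ORIGIN = "https://lemmy.world"            # instance the post lives on
        HOME = "https://my-instance.example"      # placeholder home instance
        ORIGIN_POST_ID = 123456                   # the post's id on ORIGIN (placeholder)
        JWT = "..."                               # login token on the home instance

        page = 1
        while True:
            r = requests.get(
                f"{ORIGIN}/api/v3/comment/list",
                params={"post_id": ORIGIN_POST_ID, "limit": 50, "page": page},
                timeout=30,
            )
            r.raise_for_status()
            comments = r.json().get("comments", [])
            if not comments:
                break
            for c in comments:
                ap_id = c["comment"]["ap_id"]     # canonical ActivityPub URL
                # ask the home instance to pull this object in
                requests.get(
                    f"{HOME}/api/v3/resolve_object",
                    params={"q": ap_id, "auth": JWT},
                    timeout=30,
                )
            page += 1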

  • I seriously thought I was alone with this issue, but it seems it's fairly common for people hosting their own instances. Same as you guys: it won't sync everything, and some communities are even "stuck" on posts from a day back, even though many new ones have been posted since.

    Kind of an off-topic question, but I guess it's related: is there anyone else who can't pull a certain community from an instance? I can't seem to pull [email protected] or anything from that community, including its posts and comments. No matter how many times I try, it won't populate on my instance.

    EDIT: Caught this in my logs:

    lemmy | 2023-06-20T08:48:21.353798Z ERROR HTTP request{http.method=GET http.scheme="https" http.host=versalife.duckdns.org http.target=/api/v3/ws otel.kind="server" request_id=cf48b226-cba2-434a-8011-12388c351a7c http.status_code=101 otel.status_code="OK"}: lemmy_server::api_routes_websocket: couldnt_find_object: Failed to resolve actor for [email protected]

    EDIT2: Apparently it's a known issue with [email protected], and a bug to be fixed in a future release.

  • I have the exact same issue with my own instance. On the post you mentioned, I'm seeing 383 comments on lemmy.world, but my own instance only shows 128 comments.

  • I've noticed the same situation in some threads on my own instance too. But I'm under the impression that it might just be backlogged on the responsible instance that's supposed to send out the federated content. I've noticed this when just having my home feed set to New and then suddenly seeing like thirty posts from lemmy.world come across all at once with widely varied timestamps.

    I suppose the best way to test whether this is the case would be to note down any threads that are missing substantial amounts of comments on your local server and then check back on them periodically to see if and when they start to fill in; a small polling sketch along those lines follows below.
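
    Something simple like the following would do for the check; treat it as a sketch, since the endpoint and response fields are my reading of the v3 API and the id/hostname are placeholders:

        # Poll the local comment count of a suspect post and log it over time,
        # to see whether the missing comments eventually trickle in via federation.
        import time
        import requests

        HOME = "https://my-instance.example"  # placeholder home instance
        LOCAL_POST_ID = 42                    # the post's id on YOUR instance (placeholder)

        while True:
            r = requests.get(f"{HOME}/api/v3/post", params={"id": LOCAL_POST_ID}, timeout=30)
            r.raise_for_status()
            count = r.json()["post_view"]["counts"]["comments"]
            print(f"{time.strftime('%Y-%m-%dT%H:%M:%S')} local comment count: {count}")
            time.sleep(15 * 60)               # check back every 15 minutes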

  • Even this post is doing it to me. On your instance this post has 12 comments; on my instance it has 4.

  • I have the same issue, and I also get the warning about the expired headers. I have tried increasing federation.worker_count (to 99999) and the nginx worker count (to 10000), but the issue still occurs for me.

    There are also a lot of comments missing for me on my own instance; I have to view this post on lemmy.world to see all of them. The config block I changed is sketched below for reference.
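
    This is the section I touched in lemmy.hjson; the key names and defaults vary between Lemmy versions, so treat it as a sketch rather than gospel:

        # lemmy.hjson, federation section only (values are what I set, not defaults)
        {
          federation: {
            enabled: true
            # number of federation workers; raising this was the suggested fix
            worker_count: 99999
          }
        }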

  • I've noticed some similar issues on my instance, but I'm wondering if it's related to how much strain is currently on the bigger instances like lemmy.world or beehaw.

    • From what I read in the troubleshooting guide, if their worker count isn't high enough, the issue can start on their end, too.

      Maybe one day the servers will implement a call to backfill missing content, because I could see federation failures like this being a big sticking point for wide adoption.

      It's possible to do via the API as-is: connect to the originating instance, then call resolve_object on your home instance enough times to cover the discrepancy. But that would require an individual API call for every missing object, which would be painful for big instances.

  • Watching. I’m noticing the same thing on my instance. In fact I don’t see any comments or upvotes coming through. Just the initial post.

  • Viewing this post on my own instance as well as a few other non-lemmy.world instances shows only 18 or 19 comments. When I look at it on lemmy.world I see that it has 31. It goes up to 32 when you include this comment, so it looks like it probably isn't a problem on your end.
