Issue Tracker @lemm.ee issue_tracking_bot @lemm.ee
BOT

: User deleting their account causes database locking #3649

github.com [Bug]: User deleting their account causes database locking · Issue #3649 · LemmyNet/lemmy


Requirements

  • [X] Is this a bug report? For questions or discussions use https://lemmy.ml/c/lemmy_support
  • [X] Did you check to see if this issue already exists?
  • [X] Is this only a single bug? Do not put multiple bugs in one issue.
  • [X] Is this a backend issue? Use the lemmy-ui repo for UI / frontend issues.

Summary

We have a user with a few hundred comments who has crashed the site twice today while trying to delete his account.

This query ends up running for a long time and locks subsequent updates to comment:

UPDATE "comment" SET "content" = $1, "deleted" = $2, "updated" = $3 WHERE ("comment"."creator_id" = $4) RETURNING "comment"."id", "comment"."creator_id", "comment"."post_id", "comment"."content", "comment"."removed", "comment"."published", "comment"."updated", "comment"."deleted", "comment"."ap_id", "comment"."local", "comment"."path", "comment"."distinguished", "comment"."language_id"

This was running for 8 minutes before I killed it. The user in question has 352 comments and 3073 entries in comment_like. This doesn't seem like such a large amount that there should be significant impact from a user deletion.

Steps to Reproduce

I haven't been able to reproduce this with a test user, so far only this one external user keeps causing it on our site.

I've had to disable the /api/v3/user/delete_account URL for now.

Technical Details

Logs are too noisy to include here, but this is triggered by a POST to /api/v3/user/delete_account from Jerboa.

Version

0.18.2

Lemmy Instance URL

lemmy.ca

3
Issue Tracker @lemm.ee issue_tracking_bot @lemm.ee
BOT

: Remote instance RSS returns "record not found" #3650

github.com [Bug]: Remote instance RSS returns "record not found" · Issue #3650 · LemmyNet/lemmy


Requirements

  • [X] Is this a bug report? For questions or discussions use https://lemmy.ml/c/lemmy_support
  • [X] Did you check to see if this issue already exists?
  • [X] Is this only a single bug? Do not put multiple bugs in one issue.
  • [X] Is this a backend issue? Use the lemmy-ui repo for UI / frontend issues.

Summary

When viewing the page of a user from a remote instance on a local instance, the linked RSS feed returns "Record not found".

Steps to Reproduce

  1. Go to the page of a remote user, on a local instance (https://lemmy.world/u/[email protected])
  2. Click the RSS button
  3. The linked page returns "Record not found" (https://lemmy.world/feeds/u/[email protected])

Technical Details

For example, https://lemmy.world/u/[email protected] links to https://lemmy.world/feeds/u/[email protected] and returns "Record not found"

The page of a local user correctly returns an RSS feed (https://lemmy.ml/u/dessalines links to https://lemmy.ml/feeds/u/dessalines.xml which returns an RSS record)

The RSS feed for a remote community on a local instance returns the same "Record not found". (https://lemmy.world/feeds/c/[email protected])

See lemmy-ui issue (https://github.com/LemmyNet/lemmy-ui/issues/1954) for a related user interface bug.
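
The pattern above suggests (this is an inference, not something stated in the report) that the /feeds/u/{name} route treats the whole path segment as a local username, so a federated handle never matches and the lookup ends in "Record not found". A minimal Rust sketch of the local-vs-remote split such a handler would need; the function name and the example handle are hypothetical, and the concrete handles in the report are redacted above:

```rust
/// Hypothetical helper, not Lemmy's actual code: split a feed path segment into
/// a username and an optional remote domain.
fn parse_feed_user(segment: &str) -> (&str, Option<&str>) {
    match segment.split_once('@') {
        // remote handle: would need a person lookup by name + instance,
        // not the local_user lookup that works for local feeds
        Some((name, domain)) => (name, Some(domain)),
        // local handle: the current behaviour already works (e.g. /feeds/u/dessalines.xml)
        None => (segment, None),
    }
}

fn main() {
    assert_eq!(parse_feed_user("dessalines"), ("dessalines", None));
    // hypothetical remote handle for illustration only
    assert_eq!(
        parse_feed_user("someone@remote.example"),
        ("someone", Some("remote.example"))
    );
}
```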

Version

BE 0.18.2

Lemmy Instance URL

lemmy.ml, lemmy.world, lemmy.ca, etc

0
Issue Tracker @lemm.ee issue_tracking_bot @lemm.ee
BOT

: Removed comments are publicly visible through the API. #3652

github.com [Bug]: Removed comments are publicly visible through the API. · Issue #3652 · LemmyNet/lemmy


Requirements

  • [X] Is this a bug report? For questions or discussions use https://lemmy.ml/c/lemmy_support
  • [X] Did you check to see if this issue already exists?
  • [X] Is this only a single bug? Do not put multiple bugs in one issue.
  • [X] Is this a backend issue? Use the lemmy-ui repo for UI / frontend issues.

Summary

When calling api/v3/comment/list you get a list of all comments on that post, including any removed ones. The only difference for removed comments is that the "removed" field is set to true. This is a massive problem because it delegates the obscuring of removed content to the front end. You can view these comments despite not being logged in. When I used a mobile app that didn't take this removed flag into account, I was met with some disgusting NSFL imagery I'd rather not have seen.

I strongly recommend obscuring the content of removed comments in the API if the user is not logged in, or is neither a mod of the community nor an instance owner. I understand the need to keep this information for the modlog and for reversing moderation decisions, but there is zero reason for non-mods and non-admins to have access to it in the normal endpoints. Furthermore, the baton should not be passed to front-end developers either. The source of truth should be the backend, and the backend should enforce it.

Finally, the same should apply everywhere else comments are listed or posts are viewed. Comments deleted by the user should not be visible to anyone in the API besides that user and maybe mods/admins. I'm not sure which other endpoints this applies to, but in my opinion this is paramount.
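
A minimal sketch of the server-side blanking described above, assuming a simplified comment view with a content field and removed/deleted flags; the struct and function names here are illustrative, not Lemmy's actual types:

```rust
#[derive(Clone, Debug)]
struct CommentView {
    content: String,
    removed: bool,
    deleted: bool,
}

/// Blank out removed/deleted comments unless the requester is allowed to see them
/// (e.g. a mod of the community, an admin, or the comment's creator).
fn blank_for_viewer(mut comment: CommentView, viewer_can_see_removed: bool) -> CommentView {
    if (comment.removed || comment.deleted) && !viewer_can_see_removed {
        comment.content = String::new(); // or a fixed placeholder such as "*Removed by moderator*"
    }
    comment
}

fn main() {
    let c = CommentView { content: "offending content".into(), removed: true, deleted: false };
    assert_eq!(blank_for_viewer(c, false).content, "");
}
```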

Steps to Reproduce

  1. Create a post in a community you moderate
  2. Create a comment on that post
  3. Remove that comment
  4. Open up dev tools
  5. Go to that post again
  6. Look at the http response.

The JSON response has all identifying info removed, and the "removed" flag circled.

(screenshot of the API response attached to the original issue)

Technical Details

n/a

Version

0.18.2

Lemmy Instance URL

No response

1
Issue Tracker @lemm.ee issue_tracking_bot @lemm.ee
BOT

: Pagination with "Show Read Posts" disabled leads to skipped pages of content #3659

github.com [Bug]: Pagination with "Show Read Posts" disabled leads to skipped pages of content · Issue #3659 · LemmyNet/lemmy


Requirements

  • [X] Is this a bug report? For questions or discussions use https://lemmy.ml/c/lemmy_support
  • [X] Did you check to see if this issue already exists?
  • [X] Is this only a single bug? Do not put multiple bugs in one issue.
  • [X] Is this a backend issue? Use the lemmy-ui repo for UI / frontend issues.

Summary

If you browse with "Show Read Posts" disabled and read every post on page 1, then when you navigate to page 2 you are shown page 2 of the remaining unread content, which skips over a page of posts; you have to go back to page 1 to see them.

Steps to Reproduce

  1. Disable "Show Read Posts"
  2. Browse a community
    • I suggest a community with consistent content but a relatively static sort, e.g. [email protected] with the Top Month sort
  3. "Read" each post on the 1st page (e.g. upvote all of them)
    • Pay attention to the top couple of posts on the page
    • Note the vote counts at the bottom of the page
  4. Browse to page 2
    • Notice the large drop in post vote counts between the end of page 1 and the start of page 2
    • Pay attention to the top couple of posts on the page
  5. Browse back to page 1
    • Notice the vote counts at the top and bottom of the page now align with the end of the original page 1 and the viewed page "2"
    • Notice the page 1 content is different from the original page 1 content

Technical Details

I believe this is an issue with how the offsets used for pagination are constructed; there may need to be a mechanism to account for, or track, the read state of posts when computing them.
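
A self-contained illustration of the suspected mechanism (an assumption based on the behaviour described above, not taken from Lemmy's code): if the page offset is applied to the already-filtered unread list, marking all of page 1 as read shifts everything up, so requesting page 2 skips a full page of posts.

```rust
// Posts are numbered 1..9 and pages hold 3 posts. After reading page 1,
// "page 2" of the unread list is posts 7..9 -- posts 4..6 are never shown.
const PAGE_SIZE: usize = 3;

fn page(unread: &[u32], n: usize) -> Vec<u32> {
    unread.iter().skip((n - 1) * PAGE_SIZE).take(PAGE_SIZE).copied().collect()
}

fn main() {
    let posts: Vec<u32> = (1..=9).collect(); // nothing read yet
    let mut read: Vec<u32> = Vec::new();

    let unread: Vec<u32> = posts.iter().copied().filter(|p| !read.contains(p)).collect();
    println!("page 1: {:?}", page(&unread, 1)); // [1, 2, 3]
    read.extend(page(&unread, 1));              // user reads all of page 1

    let unread: Vec<u32> = posts.iter().copied().filter(|p| !read.contains(p)).collect();
    println!("page 2: {:?}", page(&unread, 2)); // [7, 8, 9] -- posts 4..6 were skipped
}
```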

Version

BE: 18.

Lemmy Instance URL

lemmy.fmhy.ml

1
Issue Tracker @lemm.ee issue_tracking_bot @lemm.ee
BOT

: Users have no way to remove abusive messages from inbox without admin intervention #3629

github.com [Bug]: Users have no way to remove abusive messages from inbox without admin intervention · Issue #3629 · LemmyNet/lemmy


Requirements

  • [X] Is this a bug report? For questions or discussions use https://lemmy.ml/c/lemmy_support
  • [X] Did you check to see if this issue already exists?
  • [X] Is this only a single bug? Do not put multiple bugs in one issue.
  • [X] Is this a backend issue? Use the lemmy-ui repo for UI / frontend issues.

Summary

If you receive an abusive DM there is no way to remove it from your inbox without admin help.

Steps to Reproduce

Ideally a person would be able to delete all private messages in their inbox, regardless of whether they created them or not.

But if this is not possible, blocking the abusive account should hide all DMs from them.

Reproduction:

  1. Person B sends messages to person A
  2. Person A blocks person B
  3. Person B's messages still show up in person A's inbox
  4. Person A also has no way to delete person B's messages; they're stuck in person A's inbox forever unless an admin intervenes.

Technical Details

N/A

Version

0.18.2

Lemmy Instance URL

No response

1
Issue Tracker @lemm.ee issue_tracking_bot @lemm.ee
BOT

: ActivityPub federation incoming "Page" is not properly parsed from RawAnnouncableActivities into an AnnouncableActivities::Page, code logic to detect "Page" is not being reached #3639

github.com [Bug]: ActivityPub federation incoming "Page" is not properly parsed from RawAnnouncableActivities into an AnnouncableActivities::Page, code logic to detect "Page" is not being reached · Issue #3639 · LemmyNet/lemmy


Requirements

  • [X] Is this a bug report? For questions or discussions use https://lemmy.ml/c/lemmy_support
  • [X] Did you check to see if this issue already exists?
  • [X] Is this only a single bug? Do not put multiple bugs in one issue.
  • [X] Is this a backend issue? Use the lemmy-ui repo for UI / frontend issues.

Summary

lemmy_server fails to convert RawAnnouncableActivities into an AnnouncableActivities::Page before reaching the logic that rejects Page activities.

https://github.com/LemmyNet/lemmy/blob/e9e76549a88cfbdab36f00d302cceabcaaa24f4c/crates/apub/src/activities/community/announce.rs#L47

let activity: AnnouncableActivities = self.clone().try_into()?;
// This is only for sending, not receiving so we reject it.
if let AnnouncableActivities::Page(_) = activity {
  return Err(LemmyErrorType::CannotReceivePage)?;
}

The code errors on the first line with "data did not match any variant of untagged enum AnnouncableActivities", never reaching the second check that would return Err(LemmyErrorType::CannotReceivePage).
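
One possible reordering, offered as a sketch rather than a patch: the captured data under Steps to Reproduce below shows the unparsed fields sitting in the `other` map of RawAnnouncableActivities, so the inner object's type could be inspected there before attempting the untagged-enum conversion, letting Page objects fail with CannotReceivePage instead of the serde error.

```rust
// Sketch only (not the actual announce.rs code): reject "Page" before the
// untagged-enum conversion that currently fails first. Assumes `self.other`
// is the serde_json map shown in the captured data below.
let is_page = self
    .other
    .get("object")
    .and_then(|object| object.get("type"))
    .and_then(|kind| kind.as_str())
    == Some("Page");
if is_page {
    return Err(LemmyErrorType::CannotReceivePage)?;
}
let activity: AnnouncableActivities = self.clone().try_into()?;
```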

Steps to Reproduce

  1. Subscribe a Lemmy instance to the bots mentioned below
  2. Look in the server error logs for "data did not match any variant of untagged enum AnnouncableActivities"
  3. Capture the raw incoming data to analyze

Example of incoming raw data causing this match problem:

RawAnnouncableActivities { id: Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Domain("lemmit.online")), port: None, path: "/activities/announce/f513b770-c440-48f3-b0f0-21317b9e85b7", query: None, fragment: None }, actor: Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Domain("lemmit.online")), port: None, path: "/c/comics", query: None, fragment: None }, other: {"to": Array [String("https://www.w3.org/ns/activitystreams#Public")], "object": Object {"id": String("https://lemmit.online/post/201322"), "actor": String("https://lemmit.online/u/bot"), "type": String("Page"), "attributedTo": String("https://lemmit.online/u/bot"), "to": Array [String("https://lemmit.online/c/comics"), String("https://www.w3.org/ns/activitystreams#Public")], "name": String("I guess my son qualifies as my pet as well."), "cc": Array [], "content": String("<h5>This is an automated archive made by the <a href=\"https://lemmit.online/post/14692\">Lemmit Bot</a>.</h5>\n<p>The original was posted on <a href=\"https://old.reddit.com/r/comics/comments/151el8i/i_guess_my_son_qualifies_as_my_pet_as_well/\">/r/comics</a> by <a href=\"https://old.reddit.com/u/AlloyComics\">/u/AlloyComics</a> on 2023-07-16 21:08:16.</p>\n"), "mediaType": String("text/html"), "source": Object {"content": String("##### This is an automated archive made by the [Lemmit Bot](https://lemmit.online/post/14692).\nThe original was posted on [/r/comics](https://old.reddit.com/r/comics/comments/151el8i/i_guess_my_son_qualifies_as_my_pet_as_well/) by [/u/AlloyComics](https://old.reddit.com/u/AlloyComics) on 2023-07-16 21:08:16.\n"), "mediaType": String("text/markdown")}, "attachment": Array [Object {"href": String("https://i.redd.it/q53smvggldcb1.png"), "type": String("Link")}], "commentsEnabled": Bool(true), "sensitive": Bool(false), "published": String("2023-07-17T01:58:34.158916+00:00"), "language": Object {"identifier": String("en"), "name": String("English")}, "audience": String("https://lemmit.online/c/comics")}, "cc": Array [String("https://lemmit.online/c/comics/followers")], "type": String("Announce")} }

Technical Details

It seems these bots are generating type: Page objects:

https://lemmy.world/u/MatchThreadBot https://lemmit.online/u/bot

Example of a post that comes as type: Page and causes this problem: https://lemmit.online/post/201322

Version

BE: 0.18.2

Lemmy Instance URL

No response

0
Issue Tracker @lemm.ee issue_tracking_bot @lemm.ee
BOT

: 0.18.0; 0.18.1-rc.9 - Timeouts when submitting images to pictrs container #3447

github.com [Bug]: 0.18.0; 0.18.1-rc.9 - Timeouts when submitting images to pictrs container · Issue #3447 · LemmyNet/lemmy


Requirements

  • [X] Is this a bug report? For questions or discussions use https://lemmy.ml/c/lemmy_support
  • [X] Did you check to see if this issue already exists?
  • [X] Is this only a single bug? Do not put multiple bugs in one issue.
  • [X] Is this a backend issue? Use the lemmy-ui repo for UI / frontend issues.

Summary

When attempting to submit images on a self-hosted Lemmy instance to the attached pictrs container, it consistently times out. I have once managed to get a small image successfully uploaded to a post.

Of note: this is related to my attempt to build up a traefik-based reverse proxy configuration that can be contributed; see:

  • https://gitlab.com/Matt.Jolly/lemmy-traefik-docker - Lemmy for Traefik
  • https://gitlab.com/Matt.Jolly/traefik-grafana-prometheus-docker/ - traefik turnkey service

Steps to Reproduce

Attempt to submit any image to any community on a Lemmy instance provisioned as below.

Docker Stack:

```yaml
version: "3.9"

x-logging: &default-logging
  driver: journald
  options:
    tag: "{{.Name}}[{{.ID}}]"

services:
  web:
    image: dessalines/lemmy:0.18.0
    restart: always
    logging: *default-logging
    environment:
      - RUST_LOG="warn,lemmy_server=info,lemmy_api=info,lemmy_api_common=info,lemmy_api_crud=info,lemmy_apub=info,lemmy_db_schema=info,lemmy_db_views=info,lemmy_db_views_actor=info,lemmy_db_views_moderator=info,lemmy_routes=info,lemmy_utils=info,lemmy_websocket=info"
    volumes:
      - ./lemmy.hjson:/config/config.hjson
    depends_on:
      - db
    networks:
      - internal
      - traefik-network
    deploy:
      labels:
        - traefik.enable=true
        - traefik.http.routers.lemmy-http.rule=Host(lemmy.srcfiles.zip) && (PathPrefix(/api) || PathPrefix(/pictrs) || PathPrefix(/feeds) || PathPrefix(/nodeinfo) || PathPrefix(/.well-known) || Method(POST) || HeaderRegexp(Accept, ^[Aa]pplication/.*))
        - traefik.http.routers.lemmy-https.rule=Host(lemmy.srcfiles.zip) && (PathPrefix(/api) || PathPrefix(/pictrs) || PathPrefix(/feeds) || PathPrefix(/nodeinfo) || PathPrefix(/.well-known) || Method(POST) || HeaderRegexp(Accept, ^[Aa]pplication/.*))
        - traefik.http.routers.lemmy-http.entrypoints=web
        - traefik.http.routers.lemmy-https.entrypoints=websecure
        - traefik.http.routers.lemmy-http.middlewares=redirect-https@file
        - traefik.http.routers.lemmy-https.tls.certresolver=letsencrypt
        - traefik.http.routers.lemmy-https.service=lemmy
        # Define the Traefik service for this docker container
        - traefik.http.services.lemmy.loadbalancer.server.port=8536

  web-frontend:
    image: dessalines/lemmy-ui:0.18.0
    environment:
      - LEMMY_UI_LEMMY_INTERNAL_HOST=web:8536
      - LEMMY_UI_LEMMY_EXTERNAL_HOST=lemmy.srcfiles.zip
      - LEMMY_HTTPS=true
    depends_on:
      - web
    restart: always
    logging: *default-logging
    networks:
      - internal
      - traefik-network
    deploy:
      labels:
        - traefik.enable=true
        - traefik.http.routers.lemmy-static-http.rule=Host(lemmy.srcfiles.zip)
        - traefik.http.routers.lemmy-static-https.rule=Host(lemmy.srcfiles.zip)
        - traefik.http.routers.lemmy-static-http.entrypoints=web
        - traefik.http.routers.lemmy-static-https.entrypoints=websecure
        - traefik.http.routers.lemmy-static-http.middlewares=redirect-https@file
        - traefik.http.routers.lemmy-static-https.tls.certresolver=letsencrypt
        - traefik.http.routers.lemmy-static-https.service=lemmy-static
        # Define the Traefik service for this docker container
        - traefik.http.services.lemmy-static.loadbalancer.server.port=1234

  db:
    image: postgres:15-alpine
    hostname: db
    environment:
      - POSTGRES_USER=lemmy
      - POSTGRES_PASSWORD=DB_PASS
      - POSTGRES_DB=lemmy
    volumes:
      - db:/var/lib/postgresql/data
    restart: always
    logging: *default-logging
    networks:
      - internal

  pictrs:
    image: asonix/pictrs:0.4.0-rc.9
    # this needs to match the pictrs url in lemmy.hjson
    hostname: pictrs
    # we can set options to pictrs like this, here we set max. image size and forced format for conversion
    # entrypoint: /sbin/tini -- /usr/local/bin/pict-rs -p /mnt -m 4 --image-format webp
    environment:
      #- PICTRS_OPENTELEMETRY_URL=http://otel:4137
      - PICTRS__API_KEY=API_KEY
      - RUST_LOG=debug
      - RUST_BACKTRACE=full
      - PICTRS__MEDIA__VIDEO_CODEC=vp9
      - PICTRS__MEDIA__GIF__MAX_WIDTH=256
      - PICTRS__MEDIA__GIF__MAX_HEIGHT=256
      - PICTRS__MEDIA__GIF__MAX_AREA=65536
      - PICTRS__MEDIA__GIF__MAX_FRAME_COUNT=400
    user: 991:991
    volumes:
      - ./volumes/pictrs:/mnt:Z
    restart: always
    logging: *default-logging
    networks:
      - internal

networks:
  traefik-network:
    external: true
  internal:

volumes:
  db:
```

Lemmy Config:

```hjson
{
  # settings related to the postgresql database
  database: {
    # Configure the database by specifying parts of a URI
    # Username to connect to postgres
    user: "lemmy"
    # Password to connect to postgres
    password: "DB_PASSWORD"
    # Host where postgres is running
    host: "db"
    # Port where postgres can be accessed
    port: 5432
    # Name of the postgres database for lemmy
    database: "lemmy"
    # Maximum number of active sql connections
    pool_size: 10
  }
  # Settings related to activitypub federation
  # Pictrs image server configuration.
  pictrs: {
    # Address where pictrs is available (for image hosting)
    url: "http://pictrs:8080/"
    # Set a custom pictrs API key. ( Required for deleting images )
    api_key: "API_KEY"
  }
  # Email sending configuration. All options except login/password are mandatory
  email: {
    # Hostname and port of the smtp server
    smtp_server: "localhost:25"
    # Login name for smtp server
    smtp_login: "string"
    # Password to login to the smtp server
    smtp_password: "string"
    # Address to send emails from, eg "[email protected]"
    smtp_from_address: "[email protected]"
    # Whether or not smtp connections should use tls. Can be none, tls, or starttls
    tls_type: "none"
  }
  # Parameters for automatic configuration of new instance (only used at first start)
  setup: {
    # Username for the admin user
    admin_username: "Admin_Username"
    # Password for the admin user. It must be at least 10 characters.
    admin_password: "Admin_Password"
    # Name of the site (can be changed later)
    site_name: "Srcfiles.zip"
    # Email for the admin user (optional, can be omitted and set later through the website)
    admin_email: "[email protected]"
  }
  # the domain name of your instance (mandatory)
  hostname: "lemmy.srcfiles.zip"
  # Address where lemmy should listen for incoming requests
  bind: "0.0.0.0"
  # Port where lemmy should listen for incoming requests
  port: 8536
  # Whether the site is available over TLS. Needs to be true for federation to work.
  tls_enabled: true
  # The number of activitypub federation workers that can be in-flight concurrently
  worker_count: 0
  # The number of activitypub federation retry workers that can be in-flight concurrently
  retry_count: 0
}
```

Technical Details

Logs for all containers redirected to journald:

```
Jul 02 00:24:25 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal dockerd[1529]: time="2023-07-02T00:24:25.413244565Z" level=info msg="initialized VXLAN UDP port to 4789 "
Jul 02 00:24:27 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.5qgz5t3iv4u7knbysp8mutc5x[78b59192fec9][1529]: thread 'main' panicked at 'Error connecting to postgres://lemmy:8KLH%266zh%25Pk%25RfnNs%23o6X@db:5432/lemmy: could not translate host name "db" to address: Name does not resolve
Jul 02 00:24:27 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.5qgz5t3iv4u7knbysp8mutc5x[78b59192fec9][1529]: ', crates/db_schema/src/utils.rs:161:56
Jul 02 00:24:27 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.5qgz5t3iv4u7knbysp8mutc5x[78b59192fec9][1529]: note: run with RUST_BACKTRACE=1 environment variable to display a backtrace
Jul 02 00:24:27 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal dockerd[1529]: time="2023-07-02T00:24:27.223705239Z" level=info msg="ignoring event" container=78b59192fec975b15eb5bd56495428191d43c5e681aeaf1650f8f6649d923c6e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 02 00:24:28 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal dockerd[1529]: time="2023-07-02T00:24:28.067529555Z" level=error msg="fatal task error" error="task: non-zero exit (101)" module=node/agent/taskmanager node.id=eqmkexmxt9n5sykvr336cgcxn service.id=ly7nzeslpzoldz6x4ka1w7q8n task.id=5qgz5t3iv4u7knbysp8mutc5x
Jul 02 00:24:29 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web-frontend.1.voxru8b5oq76e8wxvtqxooayb[74ab5638fae6][1529]: Inferno is in development mode.
Jul 02 00:24:29 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web-frontend.1.voxru8b5oq76e8wxvtqxooayb[74ab5638fae6][1529]: http://0.0.0.0:1234
Jul 02 00:24:31 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_db.1.gcomi8rfl05c01d2wsckvdf7l[7e492452ab59][1529]:
Jul 02 00:24:31 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_db.1.gcomi8rfl05c01d2wsckvdf7l[7e492452ab59][1529]: PostgreSQL Database directory appears to contain a database; Skipping initialization
Jul 02 00:24:31 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_db.1.gcomi8rfl05c01d2wsckvdf7l[7e492452ab59][1529]:
Jul 02 00:24:31 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_db.1.gcomi8rfl05c01d2wsckvdf7l[7e492452ab59][1529]: 2023-07-02 00:24:31.517 UTC [1] LOG: starting PostgreSQL 15.3 on x86_64-pc-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r10) 12.2.1 20220924, 64-bit
Jul 02 00:24:31 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_db.1.gcomi8rfl05c01d2wsckvdf7l[7e492452ab59][1529]: 2023-07-02 00:24:31.527 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
Jul 02 00:24:31 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_db.1.gcomi8rfl05c01d2wsckvdf7l[7e492452ab59][1529]: 2023-07-02 00:24:31.528 UTC [1] LOG: listening on IPv6 address "::", port 5432
Jul 02 00:24:31 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_db.1.gcomi8rfl05c01d2wsckvdf7l[7e492452ab59][1529]: 2023-07-02 00:24:31.531 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
Jul 02 00:24:31 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_db.1.gcomi8rfl05c01d2wsckvdf7l[7e492452ab59][1529]: 2023-07-02 00:24:31.538 UTC [25] LOG: database system was shut down at 2023-07-02 00:23:47 UTC
Jul 02 00:24:31 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_db.1.gcomi8rfl05c01d2wsckvdf7l[7e492452ab59][1529]: 2023-07-02 00:24:31.549 UTC [1] LOG: database system is ready to accept connections
Jul 02 00:24:33 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_pictrs.1.rqbyzp7wpl85mrvssos32xv08[2483c9bdda3a][1529]: 2023-07-02T00:24:33.996938Z INFO actix_server::builder: starting 1 workers
Jul 02 00:24:33 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_pictrs.1.rqbyzp7wpl85mrvssos32xv08[2483c9bdda3a][1529]: 2023-07-02T00:24:33.997291Z INFO actix_server::server: Actix runtime found; starting in Actix runtime
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:24:34.421929Z INFO lemmy_db_schema::utils: Running Database migrations (This may take a long time)...
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:24:34.430160Z INFO lemmy_db_schema::utils: Database migrations complete.
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:24:34.460890Z INFO lemmy_server::code_migrations: Running user_updates_2020_04_02
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:24:34.471034Z INFO lemmy_server::code_migrations: 0 person rows updated.
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:24:34.472553Z INFO lemmy_server::code_migrations: Running community_updates_2020_04_02
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:24:34.478290Z INFO lemmy_server::code_migrations: 0 community rows updated.
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:24:34.479144Z INFO lemmy_server::code_migrations: Running post_updates_2020_04_03
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:24:34.482847Z INFO lemmy_server::code_migrations: 0 post rows updated.
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:24:34.483502Z INFO lemmy_server::code_migrations: Running comment_updates_2020_04_03
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:24:34.493459Z INFO lemmy_server::code_migrations: 0 comment rows updated.
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:24:34.494267Z INFO lemmy_server::code_migrations: Running private_message_updates_2020_05_05
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:24:34.496764Z INFO lemmy_server::code_migrations: 0 private message rows updated.
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:24:34.497262Z INFO lemmy_server::code_migrations: Running post_thumbnail_url_updates_2020_07_27
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:24:34.498811Z INFO lemmy_server::code_migrations: 0 Post thumbnail_url rows updated.
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:24:34.499369Z INFO lemmy_server::code_migrations: Running apub_columns_2021_02_02
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:24:34.502009Z INFO lemmy_server::code_migrations: Running instance_actor_2021_09_29
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:24:34.519404Z INFO lemmy_server::code_migrations: Running regenerate_public_keys_2022_07_05
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:24:34.532065Z INFO lemmy_server::code_migrations: Running initialize_local_site_2022_10_10
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: federation enabled, host is lemmy.srcfiles.zip
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: Starting http server at 0.0.0.0:8536
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:24:34.582458Z INFO lemmy_server::scheduled_tasks: Updating active site and community aggregates ...
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:24:34.629545Z INFO lemmy_server::scheduled_tasks: Done.
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:24:34.629578Z INFO lemmy_server::scheduled_tasks: Updating hot ranks for all history...
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:24:34.641287Z INFO lemmy_server::scheduled_tasks: Done.
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:24:34.641314Z INFO lemmy_server::scheduled_tasks: Updating banned column if it expires ...
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:24:34.643321Z INFO lemmy_server::scheduled_tasks: Clearing old activities...
Jul 02 00:24:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:24:34.650950Z INFO lemmy_server::scheduled_tasks: Done.
Jul 02 00:25:00 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:25:00.666966Z INFO lemmy_server::scheduled_tasks: Updating hot ranks for last week...
Jul 02 00:25:00 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:25:00.678157Z INFO lemmy_server::scheduled_tasks: Done.
Jul 02 00:25:08 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_pictrs.1.rqbyzp7wpl85mrvssos32xv08[2483c9bdda3a][1529]: 2023-07-02T00:25:08.573500Z INFO HTTP request{http.method=POST http.route=/image http.flavor=1.1 http.scheme=https http.host=lemmy.srcfiles.zip http.client_ip=119.xxx.xxx.251 http.user_agent=Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/114.0 http.target=/image otel.name=HTTP POST /image otel.kind="server" request_id=62733fb4-d8a0-467a-84ba-17b8d6817792}: tracing_actix_web::root_span_builder: new
Jul 02 00:25:18 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:25:18.573847Z WARN Error encountered while processing the incoming HTTP request: lemmy_server::root_span_builder: Request error: error sending request for url (http://pictrs:8080/image): operation timed out
Jul 02 00:25:18 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: Reqwest(reqwest::Error { kind: Request, url: Url { scheme: "http", cannot_be_a_base: false, username: "", password: None, host: Some(Domain("pictrs")), port: Some(8080), path: "/image", query: None, fragment: None }, source: TimedOut })
Jul 02 00:25:31 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_pictrs.1.rqbyzp7wpl85mrvssos32xv08[2483c9bdda3a][1529]: 2023-07-02T00:25:31.753707Z INFO HTTP request{http.method=POST http.route=/image http.flavor=1.1 http.scheme=https http.host=lemmy.srcfiles.zip http.client_ip=119.xxx.xxx.251 http.user_agent=Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/114.0 http.target=/image otel.name=HTTP POST /image otel.kind="server" request_id=c0ad7192-c565-41df-9117-a7c11158e02d}: tracing_actix_web::root_span_builder: new
Jul 02 00:25:41 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:25:41.751831Z WARN Error encountered while processing the incoming HTTP request: lemmy_server::root_span_builder: Request error: error sending request for url (http://pictrs:8080/image): operation timed out
Jul 02 00:25:41 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: Reqwest(reqwest::Error { kind: Request, url: Url { scheme: "http", cannot_be_a_base: false, username: "", password: None, host: Some(Domain("pictrs")), port: Some(8080), path: "/image", query: None, fragment: None }, source: TimedOut })
Jul 02 00:26:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal dockerd[1529]: time="2023-07-02T00:26:34.072053549Z" level=info msg="NetworkDB stats ip-172-xxx-xxx-210.ap-southeast-2.compute.internal(b4b857b480fa) - netID:q52fvnhhdx2bsx58a1up9eejd leaving:false netPeers:1 entries:2 Queue qLen:0 netMsg/s:0"
Jul 02 00:26:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal dockerd[1529]: time="2023-07-02T00:26:34.072124607Z" level=info msg="NetworkDB stats ip-172-xxx-xxx-210.ap-southeast-2.compute.internal(b4b857b480fa) - netID:z41ch1tirqlsl7ewldco9yr48 leaving:true netPeers:0 entries:9 Queue qLen:0 netMsg/s:0"
Jul 02 00:26:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal dockerd[1529]: time="2023-07-02T00:26:34.072140102Z" level=info msg="NetworkDB stats ip-172-xxx-xxx-210.ap-southeast-2.compute.internal(b4b857b480fa) - netID:z0mkh59efda2n8owy2k0abtem leaving:false netPeers:1 entries:11 Queue qLen:0 netMsg/s:0"
Jul 02 00:26:34 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal dockerd[1529]: time="2023-07-02T00:26:34.072152674Z" level=info msg="NetworkDB stats ip-172-xxx-xxx-210.ap-southeast-2.compute.internal(b4b857b480fa) - netID:y3r7f2sf7ztu3kol77irag4c9 leaving:false netPeers:1 entries:17 Queue qLen:0 netMsg/s:0"
Jul 02 00:27:55 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_pictrs.1.rqbyzp7wpl85mrvssos32xv08[2483c9bdda3a][1529]: 2023-07-02T00:27:55.387654Z INFO HTTP request{http.method=POST http.route=/image http.flavor=1.1 http.scheme=https http.host=lemmy.srcfiles.zip http.client_ip=119.xxx.xxx.251 http.user_agent=Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/114.0 http.target=/image otel.name=HTTP POST /image otel.kind="server" request_id=b722387b-a05f-4577-a5fd-b9149a1ad2e0}: tracing_actix_web::root_span_builder: new
Jul 02 00:28:05 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:28:05.388191Z WARN Error encountered while processing the incoming HTTP request: lemmy_server::root_span_builder: Request error: error sending request for url (http://pictrs:8080/image): operation timed out
Jul 02 00:28:05 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: Reqwest(reqwest::Error { kind: Request, url: Url { scheme: "http", cannot_be_a_base: false, username: "", password: None, host: Some(Domain("pictrs")), port: Some(8080), path: "/image", query: None, fragment: None }, source: TimedOut })
Jul 02 00:28:12 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_pictrs.1.rqbyzp7wpl85mrvssos32xv08[2483c9bdda3a][1529]: 2023-07-02T00:28:12.300053Z INFO HTTP request{http.method=POST http.route=/image http.flavor=1.1 http.scheme=https http.host=lemmy.srcfiles.zip http.client_ip=119.xxx.xxx.251 http.user_agent=Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/114.0 http.target=/image otel.name=HTTP POST /image otel.kind="server" request_id=31a00146-27cc-4818-aefe-e106fe20e4b1}: tracing_actix_web::root_span_builder: new
Jul 02 00:28:22 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:28:22.301154Z WARN Error encountered while processing the incoming HTTP request: lemmy_server::root_span_builder: Request error: error sending request for url (http://pictrs:8080/image): operation timed out
Jul 02 00:28:22 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: Reqwest(reqwest::Error { kind: Request, url: Url { scheme: "http", cannot_be_a_base: false, username: "", password: None, host: Some(Domain("pictrs")), port: Some(8080), path: "/image", query: None, fragment: None }, source: TimedOut })
Jul 02 00:28:31 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_pictrs.1.rqbyzp7wpl85mrvssos32xv08[2483c9bdda3a][1529]: 2023-07-02T00:28:31.002835Z INFO HTTP request{http.method=POST http.route=/image http.flavor=1.1 http.scheme=https http.host=lemmy.srcfiles.zip http.client_ip=119.xxx.xxx.251 http.user_agent=Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/114.0 http.target=/image otel.name=HTTP POST /image otel.kind="server" request_id=00d20eec-49d0-4e97-b143-89d6cccecd0b}: tracing_actix_web::root_span_builder: new
Jul 02 00:28:41 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: 2023-07-02T00:28:41.002705Z WARN Error encountered while processing the incoming HTTP request: lemmy_server::root_span_builder: Request error: error sending request for url (http://pictrs:8080/image): operation timed out
Jul 02 00:28:41 ip-172-xxx-xxx-210.ap-southeast-2.compute.internal lemmy_web.1.3zd37bj4pr2pgycd9ph4obgnz[5e22b6ec01a7][1529]: Reqwest(reqwest::Error { kind: Request, url: Url { scheme: "http", cannot_be_a_base: false, username: "", password: None, host: Some(Domain("pictrs")), port: Some(8080), path: "/image", query: None, fragment: None }, source: TimedOut })
```

Version

BE 0.18.0

Lemmy Instance URL

lemmy.srcfiles.zip

Originally posted by Kangie in #3447

1
Issue Tracker @lemm.ee issue_tracking_bot @lemm.ee
BOT

: Local instance is processing remote updates even if not subscribed anymore #3568

github.com [Bug]: Local instance is processing remote updates even if not subscribed anymore · Issue #3568 · LemmyNet/lemmy


Requirements

  • [X] Is this a bug report? For questions or discussions use https://lemmy.ml/c/lemmy_support
  • [X] Did you check to see if this issue already exists?
  • [X] Is this only a single bug? Do not put multiple bugs in one issue.
  • [X] Is this a backend issue? Use the lemmy-ui repo for UI / frontend issues.

Summary

If you simply prune your database, without unsubscribing from every previously subscribed community, your local instance still processes every update pushed by the remote instances, populating your database and your media folder with unwanted content.

Steps to Reproduce

  1. Stash your database
  2. Restart your federated instance
  3. Every POST /inbox is processed with code 200; tables are updated, images stored, etc.
  4. Remove federation and every POST /inbox returns error 400

Technical Details

I understand you SHOULD unsubscribe so the remote instances know you don't want updates anymore. But if for whatever reason this information is lost or not processed correctly, the local instance is bombarded with remote junk. Moreover, with this system, a rogue instance can push everything to everyone. The best solution would be for the local instance to refuse any unwanted updates and tell the remote to stop pushing these unsolicited updates. It would be safer, more robust, and less detrimental to both servers' resources.

Version

0.18.1

Lemmy Instance URL

michelsup.org

Originally posted by michelsup in #3568

2
Issue Tracker @lemm.ee issue_tracking_bot @lemm.ee
BOT

: Lemmy 0.18.2 instances are not federating deletion of a post by user who created post #3588

github.com [Bug]: Lemmy 0.18.2 instances are not federating deletion of a post by user who created post · Issue #3588 · LemmyNet/lemmy


Requirements

  • [X] Is this a bug report? For questions or discussions use https://lemmy.ml/c/lemmy_support
  • [X] Did you check to see if this issue already exists?
  • [X] Is this only a single bug? Do not put multiple bugs in one issue.
  • [X] Is this a backend issue? Use the lemmy-ui repo for UI / frontend issues.

Summary

I suspect a bug in newer changes since 0.18.0, as I think we would have noticed this earlier. All servers involved in the example are running 0.18.2.

Steps to Reproduce

  1. Use a login on Lemmy.ml, go to the remote-homed community https://lemmy.ml/c/[email protected] and create a new post.
  2. Delete your new post after 30 seconds. Confirm it is deleted locally on Lemmy.ml - it should return an error page indicating couldnt_find_post.
  3. Go to the home instance of the community and see if the post still appears: https://lemm.ee/c/zztestlemmy000?dataType=Post&page=1&sort=New

Technical Details

Using the lemmy-ui front end; did not test with other API clients. The user is neither a moderator of the community nor an admin of either site.

Noteworthy that editing the title of a post did replicate correctly; it is deleting that seems to be the trouble. Deleting a comment also worked fine.

Version

BE: 0.18.2

Lemmy Instance URL

lemm.ee lemmy.ml

Originally posted by RocketDerp in #3588

5