
[Discussion] Why voting should not be used here at all

Everyone, I have something very important to say about The Agora.

The Problem

Let me be super clear here about something people don't seem to understand about Lemmy and the fediverse. Votes mean absolutely nothing. No, less than nothing.

In the fediverse, anyone can open an instance and create as many users as they want, so one person can easily vote 10,000 times. I'm serious. This is not hard to do.

Voting at best is a guide to what is entertaining.

As soon as you attach an incentive to them, the vast majority of votes will be fake. They might already be mostly fake.

If you try to make any decision using votes as a guide, someone WILL manipulate those votes to control YOU.

One solution (think of others too!)

A council of trusted users.

The admin and top mods could set up a group to decide who to ban and which instances to defederate from. You will not get it right 100% of the time, but you also won't be controlled by one guy in his basement running 4 instances and 1,000 alts.

Now I'm gonna go back to shitposting.



101 comments
  • I think OP raises a valid concern. In the near term, I don't know what will be voted on that would be worth the effort of spinning up a bot army, but it could happen eventually. Large floods of votes might be easier to detect; smaller bot armies would be harder to spot but could still swing the outcome.

    Perhaps we could fire up some kind of identity service. A user goes there, puts in their username, solves a CAPTCHA, and gets back a URL to a page that contains their username. The pages can be specific to a particular vote so the URLs aren't reusable, and every time a user votes they need to solve a new CAPTCHA. The user would include their identity URL when voting.

    Admins can then confirm that usernames and identity URLs match.

    There are probably more efficient ways to do it; this was just my first thought.
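    To make the flow concrete, here is a rough sketch of that identity service in Python. Everything here is illustrative: the names (issue_identity_url, admin_check), the identity.example domain, and the in-memory token store all stand in for pieces that don't exist yet, and verify_captcha is a placeholder for a real CAPTCHA provider.

    ```python
    # Sketch of the CAPTCHA-backed identity service described above.
    # All names and the identity.example URL are hypothetical.
    import secrets

    # token -> (username, vote_id), recorded only after a CAPTCHA is solved
    issued_tokens: dict[str, tuple[str, str]] = {}

    def verify_captcha(captcha_response: str) -> bool:
        # Placeholder: a real service would call out to a CAPTCHA provider here.
        return bool(captcha_response)

    def issue_identity_url(username: str, vote_id: str, captcha_response: str) -> str | None:
        """Hand out a single-use URL tied to one username and one specific vote."""
        if not verify_captcha(captcha_response):
            return None
        token = secrets.token_urlsafe(16)
        issued_tokens[token] = (username, vote_id)
        return f"https://identity.example/v/{token}"  # the page would display the username

    def admin_check(username: str, vote_id: str, identity_url: str) -> bool:
        """Admin side: confirm the URL was issued for this username and this vote."""
        token = identity_url.rsplit("/", 1)[-1]
        record = issued_tokens.pop(token, None)  # pop, so the URL is not reusable
        return record == (username, vote_id)
    ```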

    • A public/private key pair is more effective. That's how "https" sites work: SSL/TLS uses certificates to authenticate who is who. Every site served over https has an SSL certificate, which basically contains the site's public key. The site can then use its private key to sign what it sends you, and you can verify with that public key that it actually came from the site.

      Certificates are granted by a certificate authority, which is basically the identity service you are talking about. Certificates are usually themselves signed by the certificate authority, so you can tell that someone didn't just man-in-the-middle you and swap out the certificate, and the site can still serve you the certificate directly instead of you needing to go elsewhere to find it.
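      As a minimal illustration of the sign-and-verify idea (not of TLS itself), here is what a key pair buys you, sketched with the third-party Python `cryptography` package; the message contents are made up.

      ```python
      # Minimal sign-then-verify sketch with a public/private key pair, using the
      # third-party `cryptography` package. This shows the idea, not real TLS.
      from cryptography.exceptions import InvalidSignature
      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

      private_key = Ed25519PrivateKey.generate()  # kept secret by the signer
      public_key = private_key.public_key()       # published, e.g. inside a certificate

      message = b"upvote: post 12345 by user@instance.example"  # made-up payload
      signature = private_key.sign(message)       # only the private key can produce this

      try:
          public_key.verify(signature, message)   # anyone with the public key can check it
          print("valid: the message really came from the key holder")
      except InvalidSignature:
          print("invalid: the message was forged or tampered with")
      ```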

      The problem with this is several-fold. You would need some kind of digital identity organization (or organizations) handling sensitive user data. This organization would need to:

      1. Be trusted. Trust is the key to having these things work. Certificate authorities are often large companies with a vested interest in having people keep business with them, so they are highly unlikely to mess with people's data. If you can't trust the organization, you can't trust any certificate issued or signed by them.

      2. Be secure. Leaking data or being compromised is completely unacceptable for this type of service.

      3. Know your identity. The ONLY way to be 100% sure that it isn't someone just making a new account and a new key or certificate (e.g. bots) would be to verify someone's details through some kind of identification. This is pretty bad for several reasons. Firstly, it puts more data at risk in the event of a security breach. Secondly, there is the risk of doxxing, or of connecting your real identity to your online identity should your data be leaked. Thirdly, it could allow impersonation using leaked keys (though I'm sure there's a way to cryptographically timestamp things and then just mark the key as invalid). Fourth, you could allow one person to make multiple certificates for various accounts to keep them separately identifiable, but this would also potentially enable making many alts.

      There may be less aggressive ways of verifying that a user is an individual human, or simply preventing bots (as in that third point) may be enough. For example: a simple sign-up with questions to weed out bots, which generates an identity (a certificate/key) that you can then add to your account. That would move the bot target away from the various Lemmy instances and solely onto the certificate authorities. The certificate authorities would probably need to be a small number of trusted sources, because making it "spin up your own" means anyone could do just that with less pure intentions, or with modified code that lets them impersonate other users with bots. That sucks because it goes against the open-source ideology and the fundamental idea that anyone should be able to do it themselves. Additionally, you would need to invest in tools to prevent DDoS attacks and ChatGPT bots.
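      As a rough sketch of that issue-and-verify flow, assuming the same Python `cryptography` package: the authority signs the pair (username, user public key) after the sign-up questions pass, and an instance that trusts the authority's public key checks that signature before accepting the account. All names here, and the anti-bot check itself, are hypothetical.

      ```python
      # Hypothetical sketch of a small certificate authority for Lemmy accounts:
      # it signs (username, user public key) after an anti-bot check, and instances
      # that trust the authority's public key verify that signature.
      from cryptography.exceptions import InvalidSignature
      from cryptography.hazmat.primitives import serialization
      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

      # Keys held by the trusted authority; its public key is distributed to instances.
      ca_private = Ed25519PrivateKey.generate()
      ca_public = ca_private.public_key()

      def issue_certificate(username: str, user_public_bytes: bytes) -> bytes:
          """Authority side: called only after the sign-up questions weed out bots."""
          return ca_private.sign(username.encode() + b"|" + user_public_bytes)

      def instance_accepts(username: str, user_public_bytes: bytes, certificate: bytes) -> bool:
          """Instance side: accept the account only if the authority vouched for it."""
          try:
              ca_public.verify(certificate, username.encode() + b"|" + user_public_bytes)
              return True
          except InvalidSignature:
              return False

      # Example: a new user generates their own key pair and gets it certified.
      user_private = Ed25519PrivateKey.generate()
      user_public_bytes = user_private.public_key().public_bytes(
          serialization.Encoding.Raw, serialization.PublicFormat.Raw
      )
      cert = issue_certificate("alice@lemmy.example", user_public_bytes)
      print(instance_accepts("alice@lemmy.example", user_public_bytes, cert))  # True
      ```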

      User authentication authorities most certainly exist; however, it wouldn't surprise me a bit if there were no suitable drop-in solutions for this. This is a fairly difficult project in and of itself because of the scale needed to start, as well as the effort required to verify that users are human. It's also a service that would have to be completely free to be accepted, yet it cannot just shut down without blocking further users from signing up. I considered perhaps charging instances a small fee (e.g. $1/mo) once they pass a certain threshold of users, to allow issuing further certificates to their instance, but it's the kind of thing I think would need to be decoupled from Lemmy to have a chance of surviving through more widespread use.
