13 comments
  • This technology should be illegal, and the databases containing the facial/biometric data seized and destroyed.

    • Yeah, this shit would get laughed out of the European Union

    • This technology should be illegal

      Why?

      and the databases containing the facial/biometric data seized and destroyed.

      What would that do?

      Edit: To everyone just downvoting me. Give me a reason. I'm asking these questions for a reason.

      Edit 2: you know what? Screw it.

      Making it illegal would do nothing. You already have a tracker in your pocket that's far more invasive and far further reaching. It knows what you do at night, where you go ALL DAY, who you talk to, what you talk about, which websites you visit, the kind of kinks you have, etc.

      Banning FR would be like banning cars. Yes, it would literally save lives if we banned cars, but are you going to advocate for that? No, you won't. And that also ignores the literal hundreds of millions of people that safely drive around each and every day.

      It's the same for FR. Believe it or not, it's used everywhere, every day, successfully. You just don't hear about the hundreds of millions of successful scans because that's not catchy news.

      Out of the handful of stories we hear in the news about people being wrongfully arrested "because of facial recognition", there are millions of scans that don't make headlines. And let's be clear here: this isn't a problem with FR itself, as these systems consistently perform better than the best humans at recognizing faces (the data is there to prove it); it's a problem with using the system wrong, or a massive policy problem at the departments using it.

      Another thing is that FR is rather limited. It can scan you, sure, but it can only do so where cameras are set up, and only if you personally have been put into the database. So we're talking about a store or a property.

      But it won't know what you're talking about, where you go at night, how long you spend at a friend's house, what your political views are, what your online searches are, or anything that really matters. Your cellphone will.

      And deleting the biometric databases is pointless. If you have a picture somewhere on the internet, then your biometric data is already out there. The model representations of your face that these systems create are not special, and they are not compatible with any other FR system in the world. FR is an umbrella term for an end goal, not a methodology or algorithm. There's nothing to steal. Literally nothing. If a hacker got their hands on the database, there's nothing they could do with the model data. Even if they had access to another FR system, they would need to regenerate the models from the photos anyway, because the systems aren't compatible. And regenerating them is trivial anyway.
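      To make the "not compatible" point concrete, here's a minimal sketch (made-up 128-dimensional vectors standing in for embeddings; no real FR product or API is used): within one model, two photos of the same face map to nearby vectors, while vectors from two different models live in unrelated spaces and show near-zero similarity.

      ```python
      import numpy as np

      def cosine_similarity(a, b):
          """Cosine similarity between two embedding vectors."""
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

      rng = np.random.default_rng(0)

      # Hypothetical embeddings of the SAME face from two DIFFERENT FR models.
      emb_model_a = rng.normal(size=128)
      emb_model_b = rng.normal(size=128)

      # Within one model, a second photo of the same person lands near the first.
      emb_model_a_photo2 = emb_model_a + rng.normal(scale=0.05, size=128)
      same_model = cosine_similarity(emb_model_a, emb_model_a_photo2)

      # Across models, the vectors are statistically unrelated.
      cross_model = cosine_similarity(emb_model_a, emb_model_b)

      print(f"same model:  {same_model:.2f}")   # near 1.0
      print(f"cross model: {cross_model:.2f}")  # near 0.0
      ```

      The same-model similarity clears any sane match threshold, while the cross-model one doesn't, which is why a stolen table of embeddings is useless to anyone running a different system.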

      Your "biometric data" is a photo of your face. You want to protect it? Then don't post a single photo of yourself anywhere ever.

      If you really want to complain about real invasion of privacy? Start with Google, Facebook, ad trackers, and your cellphone.

      FR may be creepy, but it's not the invasion of privacy that it's been made out to be. I'm not saying it isn't a privacy concern, but I am saying people are getting disproportionately upset by it compared to other things that already exist and are worse.

  • This wasn’t just a bad facial recognition issue. During the photo lineup, the detective intentionally used a years-old photograph of the victim that more closely matched the person in the surveillance video, even though she had a current photo available. She also knew the woman in the surveillance video was definitely not eight months pregnant, so it couldn't possibly be the woman they identified, but she arrested the victim anyway.

    Yes, the facial recognition gave a false positive, but any reasonable person would have recognized instantly that they had the wrong person. The detective was either incompetent or a liar.

  • This is the best summary I could come up with:


    According to The New York Times, this incident is the sixth recent reported case where an individual was falsely accused as a result of facial recognition technology used by police, and the third to take place in Detroit.

    Advocacy groups, including the American Civil Liberties Union of Michigan, are calling for more evidence collection in cases involving automated face searches, as well as an end to practices that have led to false arrests.

    A 2020 post on the Harvard University website by Alex Najibi details the pervasive racial discrimination within facial recognition technology, highlighting research that demonstrates significant problems with accurately identifying Black individuals.

    Further, a statement from Georgetown on its 2022 report said that as a biometric investigative tool, face recognition "may be particularly prone to errors arising from subjective human judgment, cognitive bias, low-quality or manipulated evidence, and under-performing technology" and that it "doesn’t work well enough to reliably serve the purposes for which law enforcement agencies themselves want to use it."

    The low accuracy of face recognition technology comes from multiple sources, including unproven algorithms, bias in training datasets, different photo angles, and low-quality images used to identify suspects.

    Reuters reported in 2022, however, that some cities are beginning to rethink bans on face recognition as a crime-fighting tool amid "a surge in crime and increased lobbying from developers."


    I'm a bot and I'm open source!

  • All six individuals falsely accused have been Black. The Detroit Police Department runs an average of 125 facial recognition searches per year, almost exclusively on Black men, according to data reviewed by The Times.

    Oh.

    It's particularly risky for dark-skinned people. A 2020 post on the Harvard University website by Alex Najibi details the pervasive racial discrimination within facial recognition technology, highlighting research that demonstrates significant problems with accurately identifying Black individuals.

    Oh. I see.

  • She was so dehydrated that she had to be hospitalized.

    Pigs aren't human.

  • They should run the developers' and directors' faces through the system before deploying it. Modify the images so their skin is a bit darker.

    Either they actually make it work or they get to live with a constant fear of being arrested for something bullshit.

    I know I might as well be asking for the moon. :(

  • Edit: if you downvote me without even rebutting a single thing I've said, then you're wrong, a coward, and you know it.

    Reposting my verbatim (plus an extra clarification) reply to someone who didn't want to reply to my questions and just downvoted me instead. I just want greater visibility to the points I made.

    This technology should be illegal

    Why?

    and the databases containing the facial/biometric data seized and destroyed.

    What would that do?

    Edit: To everyone just downvoting me. Give me a reason. I'm asking these questions for a reason.

    Edit 2: you know what? Screw it.

    Making it illegal would do nothing. You already have a tracker in your pocket that's far more invasive and far further reaching. It knows what you do at night, where you go ALL DAY, who you talk to, what you talk about, which websites you visit, the kind of kinks you have, etc.

    Side note: I know what what-aboutism is. This isn't that. Just keep reading.

    Banning FR would be like banning cars. Yes, it would literally save lives if we banned cars, but are you going to advocate for that? No, you won't. And that also ignores the literal hundreds of millions of people that safely drive around each and every day.

    It's the same for FR. Believe it or not, it's used everywhere, every day, successfully. You just don't hear about the hundreds of millions of successful scans because that's not catchy news.

    Out of the handful of stories we hear in the news about people being wrongfully arrested "because of facial recognition", there are millions of scans that don't make headlines. And let's be clear here: this isn't a problem with FR itself, as these systems consistently perform better than the best humans at recognizing faces (the data is there to prove it); it's a problem with using the system wrong, or a massive policy problem at the departments using it.

    Another thing is that FR is rather limited. It can scan you, sure, but it can only do so where cameras are set up, and only if you personally have been put into the database. So we're talking about a store or a property.

    But it won't know what you're talking about, where you go at night, how long you spend at a friend's house, what your political views are, what your online searches are, or anything that really matters. Your cellphone will.

    And deleting the biometric databases is pointless. If you have a picture somewhere on the internet, then your biometric data is already out there. The model representations of your face that these systems create are not special, and they are not compatible with any other FR system in the world. FR is an umbrella term for an end goal, not a methodology or algorithm. There's nothing to steal. Literally nothing. If a hacker got their hands on the database, there's nothing they could do with the model data. Even if they had access to another FR system, they would need to regenerate the models from the photos anyway, because the systems aren't compatible. And regenerating them is trivial anyway.

    Your "biometric data" is a photo of your face. You want to protect it? Then don't post a single photo of yourself anywhere ever.

    If you really want to complain about real invasion of privacy? Start with Google, Facebook, ad trackers, and your cellphone.

    FR may be creepy, but it's not the invasion of privacy that it's been made out to be. I'm not saying it isn't a privacy concern, but I am saying people are getting disproportionately upset by it compared to other things that already exist and are worse.
