Soliciting Feedback for Improvements to the Media Bias Fact Checker Bot
Hi all!
As many of you have noticed, a number of Lemmy.World communities have introduced a bot: @[email protected]. This bot was introduced because modding can be pretty tough work at times, and we are all just volunteers with regular lives. It has been helpful, and we would like to keep it around in one form or another.
The [email protected] mods want to give the community a chance to voice their thoughts on some potential changes to the MBFC bot. We have heard concerns that tend to fall into a few buckets. The most common concern we’ve heard is that the bot’s comment is too long. To address this, we’ve implemented a spoiler tag so that users need to click to see more information. We’ve also cut wording about donations that people argued made the bot feel like an ad.
Another common concern people have is with MBFC’s definitions of “left” and “right,” which tend to be influenced by the American Overton window. Similarly, some have expressed that they feel MBFC’s process of rating reliability and credibility is opaque and/or subjective. To address this, we have discussed creating our own open source system of scoring news sources. We would essentially start with third-party ratings, including MBFC, and create an aggregate rating. We could also open a path for users to vote, so that any rating would reflect our instance’s opinions of a source. We would love to hear your thoughts on this, as well as suggestions for sources that rate news outlets’ bias, reliability, and/or credibility. Feel free to use this thread to share other constructive criticism about the bot too.
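To make the aggregation idea concrete, here is a minimal hypothetical sketch. The rating labels, numeric scale, and function name are all assumptions chosen for illustration; they are not part of any existing bot or rating service.

```python
# Hypothetical mapping from a rater's bias label to a numeric score.
# The labels and scale are illustrative assumptions only.
BIAS_SCALE = {"far left": -2, "left": -1, "center": 0, "right": 1, "far right": 2}

def aggregate_bias(ratings):
    """Average the bias labels we have for one source across raters,
    skipping raters whose label we cannot map. Returns None if no
    rater covered the source at all."""
    scores = [BIAS_SCALE[label] for label in ratings if label in BIAS_SCALE]
    if not scores:
        return None
    return sum(scores) / len(scores)

# e.g. MBFC says "left", a second rater says "center", a third has no entry
print(aggregate_bias(["left", "center", "unrated"]))  # -0.5
```

A real implementation would also need to weight raters by how much we trust them and reconcile raters that use different scales, which is where most of the hard design work would be.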
My personal view is to remove the bot. I don't think we should be promoting one organisation's particular views as an authority. My suggestion would be to replace it with a pinned post linking to useful resources for critical thinking and analysing news. Teaching to fish vs giving a fish, kind of thing.
If we are determined to have a bot like this as a community, then I would strongly suggest, at the very least, removing the bias rating. The factuality rating is based on an objective measure of failed fact checks, which you can click through to see. Although this still has problems: sometimes corrections or retractions by the publisher are taken into account and sometimes not, leaving the reader with a potentially false impression of the source's reliability.
For the bias rating, however, it is completely subjective and sometimes the claimed reasons for the rating actually contradict themselves or other 3rd party analysis. I made a thread on this in the support community but TLDR, see if you can tell the specific reason for the BBC's bias rating of left-centre. I personally can't. Is it because they posted a negative sounding headline about Trump once or is it biased story selection? What does biased story selection mean and how is it measured? This is troubling because in my view it casts doubt on the reliability of the whole system.
I can't see how this can help advance the goal (and it is a good goal) of being aware of source bias when in effect, we are simply adding another bias to contend with. I suspect it's actually an intractable problem which is why I suggest linking to educational resources instead. In my home country critical analysis of news is a required course but it's probably not the case everywhere and honestly I could probably use a refresher myself if some good sources exist for that.
Thanks for those involved in the bot though for their work and for being open to feedback. I think the goal is a good one, I just don't think this solution really helps but I'm sure others have different views.
One issue with poor media literacy is that I don’t think people are going to go out of their way to improve their own literacy just because of a pinned post. We could include a link to a resource like that in the bot’s comment, though.
Do you think that the bias rating would be improved by aggregating multiple fact checkers’ opinions into one score?
Yeah it's definitely a good point, although I would argue people not interested in improving their media literacy should not be exposed to a questionable bias rating as they are the most likely to take it at face value and be misled.
The idea of multiple bias sources is an interesting one. It's less about quantity than quality though I think. If there are two organisations that use thorough and consistent rating systems it could be useful to have both. I'm still not convinced that it's even a solvable problem though but maybe I'm just being too pessimistic and someone out there has come up with a good solution.
Either way I appreciate that it's a really tough job to come up with a solution here so best of luck to you and thanks for reading the feedback.
One problem I’ve noticed is that the bot doesn’t differentiate between news articles and opinion pieces. One of the most egregious examples is the NYT. Opinion pieces aren’t held to the same journalistic standards as news articles and shouldn’t be judged for bias and accuracy in the same way as news content.
I believe most major news organizations include the word “Opinion” in titles and URLs, so perhaps that could be something keyed off of to have the bot label these appropriately. I don’t expect you to judge the bias and accuracy of each opinion writer, but simply labeling them as “Opinion pieces are not required to meet accepted journalistic standards and bias is expected.” would go a long way.
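Something like this heuristic could be sketched as follows. The function name, the marker list, and the URL pattern are assumptions for illustration; real outlets vary (The Guardian uses "commentisfree", for example), so any real bot would need a per-outlet marker table.

```python
from urllib.parse import urlparse

# Marker words are assumptions; actual outlets use varying section names.
OPINION_MARKERS = ("opinion", "editorial", "op-ed")

def looks_like_opinion(url: str, title: str) -> bool:
    """Heuristic: flag a piece as opinion if its URL path contains an
    opinion-section segment, or its title leads with an opinion label
    (e.g. the "Opinion | ..." convention some outlets use)."""
    path = urlparse(url).path.lower()
    if any(f"/{marker}" in path for marker in OPINION_MARKERS):
        return True
    return title.lower().startswith(OPINION_MARKERS)

print(looks_like_opinion(
    "https://www.nytimes.com/2024/05/01/opinion/example.html",
    "An example headline"))  # True
```

Even this crude check would let the bot prepend a one-line "this appears to be an opinion piece" caveat rather than judging the piece by news standards.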
Thanks for this. As a mod of /c/news, I hadn’t really thought about that. We don’t allow opinion pieces, but this is very relevant if we roll out a new bot for all the communities that currently use the MBFC bot.
Try to make it more clear that this is not a flawless rating (as that is impossible).
Ways to implement:
Make sure the bot says something along the lines of “MBFC rates X news as Y” and not “X news is Y”.
Make a caveat (collapsible) at the bottom that says something along the lines of “MBFC is not flawless. It has an American-centric bias and is not particularly clear on methodology, to the point where Wikipedia deems it unreliable; however, we think it is better to have this bot in place as a rough estimate, to discourage posting from bad sources.”
If possible, add other sources, Like: “MBFC rates the Daily Beast as mostly reliable, Ad Fontes Media rates it as unreliable, and Wikipedia says it is of mixed reliability”
Remove the left right ratings. We already have a reliability and quality rating, which is much more useful. The left-right rating is frankly poorly done and all over the place, and honestly doesn’t serve much purpose.
Interesting that people say that opinion pieces should not be held to the same standard. I personally see such pieces contribute to fake news going around. Shouldn't a platform with reach be held accountable for wrong information, even when it hides behind an opinion piece?
It’s not a question of “should”: an opinion piece is rhetoric, not reporting. You can fact-check some of it sometimes, but functionally you can’t hold it to the same standards as a regular news article. I agree that this can sometimes lead to “alternative facts” and disingenuous arguments, but the only other option is to forbid their publication, which is obviously an infringement of First Amendment rights. It’s messy, and it can lead to people being misinformed, but it’s what we’re stuck with.
You don't need every post to have a comment basically saying "this source is ok". Just post that the source is unreliable on posts with unreliable sources. The definition of what is left or right is so subjective these days, that it's pretty useless. Just don't bother.
I agree with that. Having a warning message when the source is known to be extremely biased and/or unreliable is probably a good thing, but it doesn't need to be in every single thread.
My personal view is that the bot provides a net negative, and should be removed.
Firstly, I would argue that there are few, if any, users whom the bot has helped avoid misinformation or a skewed perspective. If you know what bias is and how it influences an article then you don't need the bot to tell you. If you don't know or care what bias is then it won't help you.
Secondly, the existence of the bot implies that sources can be reduced to true or false or left or right. Lemmy users tend to deal in absolutes of right or wrong. The world exists in the nuance, in the conflict between differing perspectives. The only way to mitigate misinformation is for people to develop their own skeptical curiosity, and I think the bot is more of a hindrance than a help in this regard.
Thirdly, if it's only misleading 1% of the time then it's doing harm. IDK how sources can be rated when they often vary between articles. It's so reductive that it's misleading.
As regards an open database of bias, it doesn't solve any of the issues listed above.
In summary, we should be trying to promote a healthy sceptical curiosity among users, not trying to tell them how to think.
Thanks for the feedback. I have had the thought about it feeling like mods trying to tell people how to think, although I think crowdsourcing an open source solution might make that slightly better.
One thing that’s frustrating with the MBFC API is that it reduces “far left” and “lean left” to just “left.” I think that gets to your point about binaries, but it is an MBFC issue, not an issue in how we have implemented it. Personally, I think it is better on the credibility/reliability bit, since it does have a range there.
That's perhaps a small part of what I meant about binaries. My point is, the perspective of any given article is nuanced, and categorising bias implies that perspectives can be reduced to one of several.
For example, select a contentious issue like abortion. Collect 100 statements from 100 people regarding various related issues: health concerns, ethics, when an embryo becomes a fetus, fathers' rights. Finally, label each statement as either pro-choice or pro-life.
For somebody trying to understand the complex issues around abortion, the labels are not helpful, and they imply that the entire argument can be reduced to a binary choice. In a word, it's reductive. It breeds a culture of adversity rather than one of understanding.
In addition, I can't help but wonder how much "look at this cool thing I made" is present here. I love playing around with web technologies and code, and love showing off cool things I make to a receptive audience. Seeking feedback from users is obviously a healthy process, and I praise your actions in this regard. However, if I were you I would find it hard not to view that feedback through the prism of wanting users to find my bot useful.
As I started off by saying, I think the bot provides a net negative, as it undermines a culture of curious scepticism.
Who fact-checks the fact-checkers? Fact-checking is an essential tool in fighting the waves of fake news polluting the public discourse. But if that fact-checking is partisan, then it only exacerbates the problem of people divided on the basics of a shared reality.
This is why a consortium of fact-checking institutions has joined together to form the International Fact-Checking Network (IFCN) and laid out a code of principles. You can find a list of signatories as well as vetted organizations on its website.
MBFC is not a signatory to the IFCN code of principles. As a partisan organization, it violates the standards that journalists have recognized as essential to restoring trust in the veracity of the news. I've spoken with @[email protected] about this issue, and his response has been that he will continue to use his tool despite its flaws until something better materializes because the API is free and easy to use. This is like searching for a lost wallet far from where you lost it because the light from the nearby street lamp is better. He is motivated to disregard the harm he is doing to [email protected], because he doesn't want to pay for the work of actual fact-checkers, and has little regard for the many voices who have spoken out against it in his community.
By giving MBFC another platform to increase its exposure, you are repeating his mistake. Partisan fact-checking sites are worse than no fact-checking at all. Just like how the proliferation of fake news undermines the authority of journalism, the growing popularity of a fact-checking site by a political hack like Dave M. Van Zandt undermines the authority of non-partisan fact-checking institutions in the public consciousness.
Thanks, this was a very informative comment. I assume none of the IFCN signatories have a free API? Just asking since you seem pretty well versed on this
More than half of these occurred in a community you moderate. Do you approve of this use of the term 'spamming' to silence criticism?
Exposing a free API for anyone to use is not typical trade practice for respectable fact-checking operations. You may be able to get free access as a non-profit organization, and that may be worth pursuing. On the other hand, there's a fundamental problem in the disconnect between the goals of real fact-checking websites and the kind of bot you are trying to create.
Our methodology incorporates findings from credible fact-checkers who are affiliated with the International Fact-Checking Network (IFCN). Only fact checks from the last five years are considered, and any corrected fact checks do not negatively impact the source’s rating.
Just like every good lie has a little bit of truth in it, MBFC wouldn't be able to spin its bullshit as well without usurping the credibility of real fact-checking organizations.
To clarify what MBFC considers "MIXED" factual reporting (the same rating they give known disinformation factory Breitbart):
Further, while The Guardian has failed several fact checks, they also produce an incredible amount of content; therefore, most stories are accurate, but the reader must beware, and hence why we assign them a Mixed rating for factual reporting.
They list like five fact checks, while The Guardian puts out basically quintuple that every day. And moreover, this is the sort of asinine nitpick that they classify as a "fact check".
"Private renting is making people ill." "Private renting is making people ill, but maybe this happens with other housing situations too, we don't know, so we rate this as false."
MBFC's ratings for "factual reporting" are a joke.
This is my problem with MBFC, and which seems to consistently get ignored by the admins and mods pushing for the bot.
MBFC seems to rate every even slightly "left wing" news source as "mixed factual reporting" for absolutely any excuse whatsoever. The fact that they deem The Guardian as reliable as Breitbart should really tell you something.
No need for a bot. Obvious misinformation should be removed by the mods. Bias is too subjective to be adjudicated by the mods. Just drop it already. It's consistently downvoted into oblivion for a reason. The feedback has been pretty damn obvious. This whole thread is just because the mods are so sure they're right that they can't listen to the feedback they already got. Just kill the bot.
The bot is basically a spammer saying "THIS ARTICLE SUCKS EVEN THOUGH I DIDN'T READ IT" on every damn post. If that was a normal user account you'd ban it.
Yeah lol, I can't help but laugh every time I see the mods' replies in this thread. I don't understand his train of thought; I don't know if he's in denial, or was surprised most people didn't end up aligning with his bias and is in damage control, replying nonsense.
I apologize if this thread was misunderstood. Perhaps I was not clear that this was meant for improvements, it is not a vote on removal. Should that vote ever happen, the post would be clear about that.
All of my questions were only seeking to gain more information about people’s feelings. I apologize if it came off as a promise to enact anything in particular or an endorsement of any particular stance on the bot.
Yes, you've been very clear from the start that you do not want to remove the bot. However, the feedback you've consistently received is that it provides no benefit, is misleading, reductive, and the best improvement you could make would be to remove it. You don't seem willing or able to respond to that.
It has been helpful and we would like to keep it around in one form or another.
Bull fucking shit. The majority of feedback has been negative. I can't recall a single person arguing in its favor, but I can think of many, myself included, arguing against it. I hope you can find my report of one particularly egregious example, because Lemmy doesn't let me see a history of things I reported. I recall that MBFC rated a particular source poorly because they dared to use the word "genocide" to describe what's going on in Gaza. Trusting one person, who clearly starts from an American point of view, and has a clearly biased view of world events, to be the arbiter of what is liberal or conservative, or factual or fictional, is actively harmful.
No community, neither reddit nor Lemmy nor any other, has suffered for lack of such a bot. I strongly recommend removing it. Non-credible sources, misinformation, and propaganda are already prohibited under rule 8. If a particular source is so objectionable, it should be blacklisted entirely. And what is and is not acceptable should be determined in concert with the community, not unilaterally.
Edit: And another thing! It's obnoxious for bot comments to count toward the number of comments as shown in the post list. Nobody likes seeing it and thinking "I wonder what people are saying about this" and it's just the damn bot again. But that's really a shortcoming in Lemmy.
Yes! The mods starting out the discussion with their preferred outcome is so incredibly telling. This is a tool to reinforce the mods' bias, deliberately or not.
I will start by saying that I feel like we are trying to address the criticism in your first paragraph with these changes. That being said, thanks for your feedback. I particularly like the comment you shared under the “edit,” because I hadn’t seen that sentiment shared before (not saying nobody else had that issue, just appreciating you for contributing that and challenging me to think more about how we execute things).
Just as a point of clarification, there is certainly not a community consensus among the feedback.
While you are absolutely correct in stating that there are vocal members of the community opposed to it in any form, there is also a significant portion of the community that would prefer to keep or modify how it works. The mod team will be taking all of these perspectives into account. We hope that you will be respectful of community members with whom you disagree.
Please, move the bias and reliability outside of the first accordion/spoiler. This is the sole piece of information the bot was meant to provide. If we can't see it at a glance, it's bad. I don't see how these few words are "too long" either. I feel like a lot of the space could be cleared by turning the "Search Ground News" accordion into another link in the footer.
While I personally don't see the point of the controversy, it wouldn't be too hard to manually enter Wikipedia's Perennial Sources list into the database that the bot references, especially with MediaWiki's watchlist RSS feed. This would almost certainly satisfy the community.
Open source the database and the bot. Combined with #2, this could also offer an API to query Wikipedia's RSP for everyone to use in the spirit of fedi and decentralization.
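If the RSP list were mirrored into the bot's database, its per-topic qualifications could be modelled with a small record type. This is a hypothetical sketch: the schema, field names, and example entry are assumptions, loosely based on the Rolling Stone listing discussed elsewhere in this thread.

```python
from dataclasses import dataclass, field

@dataclass
class SourceEntry:
    """One mirrored perennial-sources entry, with optional per-topic overrides."""
    name: str
    default: str  # overall status when no topic-specific consensus applies
    by_topic: dict = field(default_factory=dict)

    def status(self, topic=None):
        # Fall back to the overall status for topics without a specific entry.
        return self.by_topic.get(topic, self.default)

# Illustrative entry only; real statuses would be pulled from the mirrored list.
rolling_stone = SourceEntry(
    name="Rolling Stone",
    default="generally reliable",
    by_topic={"politics": "generally unreliable (2011-present)"},
)

print(rolling_stone.status("culture"))   # generally reliable
print(rolling_stone.status("politics"))  # generally unreliable (2011-present)
```

The bot could then tag a post with the topic-specific status when the community it was posted in maps to a known topic, and the overall status otherwise.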
Yes. A certain amount of my complaint about MBFC bot is not that it's a bad idea per se, it's just that the database and categorizations are laughably bad. It puts Al Jazeera in the same factual classification as TASS. It lists MSNBC as factually questionable and then when you look at the actual list, a lot of them are MSNBC getting it right and MBFC getting it wrong. It might as well be retitled "The New York Times's Awful Neoliberal Idea of Reality Check Bot". (And not talking about the biases ranking -- if that one is skewed it is fine, but they claim things are not factual if they don't match the appropriate bias, and the bias is unapologetic center-right.)
You can't set yourself up to sit in judgement of sources that write dozens of articles every single day about unfolding world events where the "objectively right" perspective isn't always even obvious in hindsight, and then totally half-ass the job of getting your basic facts straight about the sources you're ranking, and expect people to take you seriously. I feel like mostly the Lemmy hivemind is leaps and bounds ahead of MBFC bot at determining which sources are worth listening to.
it wouldn't be too hard to manually enter Wikipedia's Perennial Sources list into the database that the bot references
FUCK FUCK FUCK YES
This is an actual up-to-date and very extensive list that people who care bother to keep up to date in detail (even making distinctions like "hey this source is ok for most topics but they are biased when talking about X, Y, Z"). This would immediately do away with like 50% of my complaint about MBFC bot.
For example, if we retain MBFC, the layout could look something like this:
Rolling Stone Bias: Left, Credibility: High, Factual Reporting: High - United States of America
MBFC report | bot support | Search topics on Ground.News
in which "Rolling Stone" is linked to the Wikipedia article.
With RSP, it could look something like this:
Rolling Stone is **generally reliable** on culture
There is consensus that Rolling Stone has generally reliable coverage on culture matters (i.e., films, music, entertainment, etc.). Rolling Stone's opinion pieces and reviews, as well as any contentious statements regarding living persons, should only be used with attribution. The publication's capsule reviews deserve less weight than their full-length reviews, as they are subject to a lower standard of fact-checking. See also Rolling Stone (politics and society), 2011–present, Rolling Stone (Culture Council).
Rolling Stone is **generally unreliable** on politics and society, 2011–present
According to a 2021 RfC discussion, there is unanimous consensus among editors that Rolling Stone is generally unreliable for politically and societally sensitive issues reported since 2011 (inclusive), though it must be borne in mind that this date is an estimate and not a definitive cutoff, as the deterioration of journalistic practices happened gradually. Some editors have said that low-quality reporting also appeared in some preceding years, but a specific date after which the articles are considered generally unreliable has not been proposed. Previous consensus was that Rolling Stone was generally reliable for political and societal topics before 2011. Most editors say that Rolling Stone is a partisan source in the field of politics, and that their statements in this field should be attributed. Moreover, medical or scientific claims should not be sourced to the publication.
RSP listing | bot support | Search topics on Ground.News
Both examples with everything necessary linked, of course
For #3, they said they'd release the code when it was announced, but have been completely silent since. Maybe it'll be public when Sublinks goes live lol
I appreciate the joke lol. But on a serious note, it sounds like you’re saying it’s not actually 100% useless, just that it’s being deployed too widely. Any specific suggestions on what the bot should say on those questionable sources?
My main issue is that it doesn't provide any real value.
If I see a Guardian/BBC news article about international events, I'll give it a lot of trust. But when it's talking about England, my eyebrows are raised. Calling it Left/right/center doesn't help a reader understand that.
Worse is hot garbage like The Daily Mail. They do no fact checking and provide no real journalism. It means nothing to me what it aligns to.
Then the bottom of the barrel is some random news site that was spun up a month ago like Freedom Patriot News. Of course we know where it lands in the political spectrum. But it's extreme propaganda.
The challenge here is that trust has become subjective. Conservatives don't trust CNN. Democrats don't trust Fox News. It becomes difficult to rate the quality of the organization in a binary way.
Current ownership and governance of the media outlet, generally speaking. Noting if an outlet is state-owned or publicly traded, etc. might help.
Does the bot even tell the difference between an opinion piece and investigative journalism?
If a source is a proven misinformation generator, then note the proof with direct links to evidence, cases, rulings, etc. However, those sources tend to disappear quickly and are constantly being regenerated. It's whack-a-mole, and it generates an endlessly outdated list.
The problem is it likely isn't any information a bot can just scoop up and relay, and instead requires research and human effort.
I’ll be honest, that’s probably outside of the scope of what we can do for now. It’s definitely valuable feedback in general and I wish I could offer some kind of solution but that’s probably even outside the control of the instance admins.
Get rid of it entirely. In another one of your comments you acknowledged that it "seemed" like the bot is an extension of the mods telling everyone else what to think. You are close. It doesn't seem that way, it is that way.
Also, the bot is annoying AF. If you really are so in love with it, make it an opt-in service and it can DM all the psychos who want to be spammed by it.
I blocked it straight away so I don't have a dog in this fight but I'm instantly skeptical of any organization that claims to be the arbiter of what is biased and to what degree.
@[email protected] Why did you stop replying to posts here? Most people are telling you the bot is bullshit. You stopped commenting in this thread while being active elsewhere; are you going to take action or not?
I’m not the admin who created the bot. I’m a mod who is collecting feedback on behalf of the entire mod team.
Just to be perfectly clear: because I am the face of this feedback, you can feel free to say whatever you want to me. It’s odd that you seem to harbor ill feelings towards me in particular just because I pushed for collecting user feedback on this issue.
My cat likes to sit on my windowsill, but I accidentally knocked the curtain rod down. She has been lying in the bunched-up curtain that's on the floor; I think she likes it better than the windowsill. However, the window is right at the front of the house, so anytime I come home after a long day I see her watching me roll up the driveway and it makes me feel good. I don't know if it would be best to move the bunched-up curtain back to the window, or let her stay on the floor and not see her when I get home :(
Remove it please. It's an obtrusive advertisement for Ground News.
It's incredibly annoying to see comments: 1, only to click the post to see an ad. It makes me less inclined to interact with Lemmy at all. It's the same kind of crap that ruined Reddit.
The overwhelming majority of comments I'm seeing indicate they'd like to see it gone. Why are you opposed to listening to the people who create and consume all of the content in this space?
You're not really looking for feedback if you've already made up your mind. Stop pretending to listen to the community if you're ignoring the countless blocks and downvotes. That's your feedback right there
How about you remove the bot and then fix whatever problems you have without doubling down on the bot solution? If you want community feedback on mod overburden, I'm sure people will be willing to help with that. But stop forcing the bot.
That would certainly be preferable. I don't think we should be advertising.
Beyond that, it would be much better if there were a way to not have the bot be counted as a comment. Comments are what humans do. They're meant to be interacted with. I can't interact with a bot, other than rolling my eyes at it.
How much more feedback do you need to gather on the subject to understand that a bot with a garbage datasource is no use to anyone? Even opening this thread is an insult and a sign of how little you recognize and care for your community. Remove the shit bot already instead of fishing for excuses to keep it active.
Honestly, have a look through the whole thread. There are comments from those who, like yourself, oppose it in any form. However, please also be respectful of the many community members who are saying that it is useful or could be made useful.
The mods will be taking all of these comments into account.
The mods aren't neutral though. They already start with a pretty strong "the bot is here to stay" which is borderline insulting to the community. Then they're asking for ideas to make it better, which already presumes the idea is feasible or a good idea in the first place. Sure, I would make it less spammy, put the details behind a link, etc etc, but they're already committed to the bot as a solution to their stated problem of overloaded mods. Well that could be solved in much better ways. All the energy going to this controversial bot is adding to the mod overburden!
Honestly, the first time I had heard of Ground News was in a discussion about implementing it with the bot. Do you have any thoughts on alternatives, or would you prefer that bit just be removed from the bot’s comment?
Can you elaborate? Like, do you think the bot would be better if it didn’t label things as “left” or “right” (ie: remove the bias rating) or do you think the reliability/credibility ratings have the same issue?
I'm sorry, but the sole concept of the bot is bullshit, and as many have said already, the idea is biased per se. I wish I lived in the same world as MBFC, where it seems like all media is left-center.
If anything, what would be needed is a bot that checked whether the article contains any known misinformation or incorrect facts. And that would be extremely hard to maintain and update, as a lot of news is posted before any fact checking can be done.
I'm frankly rather concerned about the idea of crowdsourcing or voting on "reliability", because - let's be honest here - Lemmy's population can have highly skewed perspectives on what constitutes "accurate", "unbiased", or "reliable" reporting of events. I'm concerned that opening this to influence by users' preconceived notions would result in a reinforced echo chamber, where only sources which already agree with their perspectives are listed as "accurate". It'd effectively turn this into a bias bot rather than a bias fact-checking bot.
Aggregating from a number of rigorous, widely-accepted, and outside sources would seem to be a more suitable solution, although I can't comment on how much programming it would take to produce an aggregate result. Perhaps just briefly listing results from a number of fact checkers?
I second this. This community is better than most social media, but it's still that, and social media popularity is pretty bottom of the barrel as a means of determining accuracy. Additionally, that'd just open it up to abuse from people trying to weight the votes with fake accounts, scripts, whatever.
Here's the comment reply from when I first asked what was wrong with MBFC. Gotta say. I agree with that comment. I'm surprised more people haven't posted similar examples here.
The Jerusalem Report (Owned by Jerusalem Post) and the Jerusalem Post
This biased as shit publication is declared by MBFC as VEEEERY slightly center-right. They make almost no mention of the fact that they cherry pick aspects of the Israel war to highlight, provide only the most favorable context imaginable, yadda yadda. By no stretch of the imagination would these publications be considered unbiased as sources, yet according to MBFC they're near perfect.
This biased as shit publication is declared by MBFC as VEEEERY slightly center-right. They make almost no mention of the fact that they cherry pick aspects of the Israel war to highlight
Overall, we rate The Jerusalem Post Right-Center biased based on editorial positions that favor the right-leaning government. We also rate them Mostly Factual for reporting rather than High due to two failed fact checks.
Until 1989, the Jerusalem Post’s political leaning was left-leaning as it supported the ruling Labor Party. After Conrad Black acquired the paper, its political position changed to right-leaning, when Black began hiring conservative journalists and editors. Eli Azur is the current owner of Jerusalem Post. According to Ynetnews, and a Haaretz article, “Benjamin Netanyahu, the Editor in Chief,” in 2017, Azur gave testimony regarding Prime Minister Benjamin Netanyahu’s pressure. Current Editor Yaakov Katz was the former senior policy advisor to Naftali Bennett, the former Prime Minister and head of the far-right political party, “New Right.”
In review, The Jerusalem Post covers Israeli and regional news with strongly emotionally loaded language with right-leaning bias with articles such as this “Country’s founding Labor party survives near extinction” and “Netanyahu slams settler leader for insulting Trump.” . . . During the 2023 Israel-Hamas conflict, the majority of stories favored the Israeli government, such as this Netanyahu to Hezbollah: If you attack, we’ll turn Beirut into Gaza. In general, the Jerusalem Post holds right-leaning editorial biases and is usually factual in reporting.
They literally mention their bias over and over. Center-right is consistent with how they're rated everywhere. Allsides rates them center with the note that the community thinks they lean right. Wikipedia rates them as centre-right/conservative. Your "VEEEERY slightly" bit is pure fabrication. They specifically note that they're a highly biased source on the conflict in Gaza.
Tell the bot to never be the first comment. I find it very frustrating when I see "a comment on this post" and it's just the bot. I'm here to read what people have to say so it is very annoying when I think someone said something and it's just the bot.
There was even a front page meme about this last year, but with another noisy bot. Lemmy doesn't bury new comments like Reddit does, so there's no real penalty to making the bot wait.
I think the problem is with the whole concept. Most news organizations have more than one person working there, so unless the bot is measuring the bias of individual journalists it seems really silly. It presupposes that there's someone at the top of a large news organization dictating to the staff to make an article "more left" or "more right" or whatever. Sure at some news organizations (like FoxNews) that may happen, but I doubt that happens at AP or Reuters and many other news organizations.
I've seen many articles where the headline was incredibly biased (to get clicks I guess?) while the article was not. Clearly the editor that wrote the headline had more bias than the person that wrote the article who might've been a freelancer.
And many news articles don't have any bias at all. "Earthquake in California": is that a left- or right-biased article? I think it's neither. Even a quote from a politician, Kamala Harris said "XYZ" or Donald Trump said "ZYX": is it biased to report on what people said? It's a fact they said those words; is it biased to tell people what someone said? I think it's just treating people like adults who can read what a person said and draw their own conclusions.
At the end of the day people have to learn how to spot bias themselves, there's no quick-fix-life-hack-work-around to skip having to build some experience with media literacy. Ground News or a bot or whatever will have their own biases, and if people are trusting someone on the internet to tell them what is biased, they've failed at media literacy from the get go.
Ban it and all bots, honestly. I hate seeing a comment on a thread just to find out it's a bot. If bot use like this continues, we might see a fresh post with 6 new comments, all of them bots that don't add to the discussion.
Thanks for the feedback. Can you elaborate a bit about the 50% of your screen thing? Is it the text itself, or is the issue that the app provides links at the bottom of the comment? I’m thinking of my experience on Voyager, where the links are summarized at the bottom of each comment, which does lead to a decent amount of screen being taken up. Would it be better if there weren’t any links?
yep I'm using Voyager on my iPhone. Maybe a super short summary without links. People could open the bot's profile and look at the bot's posts (not comments) if they want to dig deeper to understand a source.
While I think it's important to have some sort of media bias understanding, I dislike the bot being the first (and sometimes only) comment on a post. Maybe it should be reserved for posts that are garnering attention and that have a definitive media bias answer (the "no results" comments are just damn annoying to see).
It also has the knock-on effect of boosting the post higher in whichever sorting algorithm users are using. So it often feels artificially controlled when something has 100+ upvotes and fewer than 10 comments, knowing the first comment is always a bot. Like, would it be fair for me to run 10 bots that comment factual information on posts I personally like, just to boost their visibility?
It adds no value to the posts, incites arguments (how is that helping with modding? Why do the mods need to announce MBFC’s rating on every post?), and outsources critical thinking to a site that has its own biases while maintaining a veneer of “neutrality”. The ratings often have no justification, making them little better than some dude’s opinion. I can keep going but I think that covers most of it.
If news@world had rules that reflected a coherent politics it could be political or even propagandistic.
Because no such rules exist to direct action and development, ideas like the fact checker bot crop up. In lieu of direction, the fact checker bot reflects a laundered western liberal political line back onto the news@world community.
An echo chamber is not an area where everyone says the same things; it’s an environment where a certain type of waves (or just all waves) are reinforced due to structural elements of the chamber.
By using the fact checker bot to do the work of policing speech, you have created a structural element which reinforces certain kinds of speech.
It’s a component of an echo chamber in the metaphor.
That’s significantly different from taking the more difficult route: determining the news@world mod team’s political line, struggling internally and externally with its contradictions, and acting in ways that reflect it. The latter requires that the mod team use judgement rather than just act on voices that are not reinforced by the built structural elements of the news@world community.
The bot has no purpose. Either an article can be posted or not; there's no reason for the bot prompt. It just looks like thought policing using a bias checker which 'coincidentally' prefers the current Democrats' position.
I can hardly imagine the bot stopping any fake news from being posted either.
Unfortunately the bot is fatally flawed as long as it's just repeating MBFC information. I would be interested in a community program but I have the same end worry. What's the risk that we create an echo chamber? It might be better than an echo chamber based on MBFC ratings but it's still an issue worth worrying about.
Yes, maybe don't have the bot be the first and only response on every single post. Let them gain the tiniest bit of traction first. It's beyond annoying to see an article, go to the comments, and your bot be the only response.
Yeah… there was a whole big to-do about this. One dev actually quit (can’t remember which one) because it was publicly noted that their app “scored” lower in terms of feature implementation. But feedback has been made available for app developers.
Addressing the Overton window issue is the main fix I would hope for.
The proposed solution of a home-brewed open-source methodology of determining bias without the Overton influence would be a very welcome improvement in my opinion.
Why does that image only include the Sun and the Rebel from Canadian media? Both are given more credibility than they deserve; the Rebel in particular has a history, with a bunch of white supremacists and alt-right personalities who were or still are involved, and the publication absolutely stokes hate and fear.
Edit: I'm still at a loss, why those? The Globe and Mail, Maclean's, the Toronto Star, the National Post, and the CBC all have better reputations domestically (though NatPost and the Sun are a circle these days, and most print media is owned by American hedge funds, so...); they're far more likely to actually run news instead of opinion masquerading as news.
Personally I'm in favor of the bot. One complaint I've seen that I agree with is that it doesn't need to float high up in the comments. If it was simply made to not upvote itself, it would stay nearer to the bottom naturally, which I think would be preferable.
Although I do see that the bot has a very slight right-wing bias, I like it. It prevents the normalization of literal propaganda outlets as news sources.
I have a suggestion that might be a good compromise.
The bot only comments on posts that are from less factual news sources or are from extreme ends of the spectrum.
On a post from the AP the bot would just not comment.
On a post from Alex Jones or RT the bot would post a warning.
That way there is less “spam”, but people are made aware when misinformation or propaganda is being pushed.
Also, with such a system, smaller biases become less relevant and therefore less important.
I don't trust MBFC to tell me anything useful about left-leaning sources, or discussion about the Israel-Palestine conflict, but if a right-biased credibility gatekeeper tells me a site I've never encountered before is far-right, I do consider that useful.
Not directly related to the MBFC bot, but what's your opinion on other moderation ideas to improve the nature of the discussion? The Something Awful forums treat strawmanning as a bannable offense. If someone says X, and you say they said Y which is clearly different from X, you can get a temp ban. It works well enough that they charge a not-tiny amount of money to participate, and they've had a thriving community for longer than most existing social media has been alive. They're absolutely ruthless: someone who's being tricksy or pointlessly hostile with their argumentation style simply isn't allowed to participate.
I'm not trying to make more work for the moderators. I recognize that side of it... the whole:
This bot was introduced because modding can be pretty tough work at times and we are all just volunteers with regular lives. It has been helpful and we would like to keep it around in one form or another.
... makes perfect sense to me. I get the idea of mass-banning sources to get rid of a certain type of bad faith post, and doing it with automation so that it doesn't create more work for the moderators. But to me, things like:
Blatant strawmanning
Saying something very specific and factual (e.g. food inflation is 200%) and then making no effort to back it up, just, that's some shit that came into my head and so I felt like saying it and now that I've cluttered up the discussion with it byeeeeee
... create a lot more unpleasantness than just simple rudeness, or posting something from rt.com or whatever so-blatant-that-MBFC-is-useful type propaganda.
It’s tricky because we could probably make 100 rules if we wanted to define every specific type of violation. But a lot of what you’re talking about could fall under Rules 1 and 8, which deal with civility and misinformation. If people are engaging in bad faith, feel free to report them and we’ll investigate.
I can try it -- I generally don't do reports; I actually don't even know if reports from mbin will go over properly to Lemmy.
For me it's more of a vibe than a set of 100 specific rules. The moderation on political Lemmy feels to me like "you have to be nice to people, but you can argue maliciously or be dishonest if you want, that's all good." Maybe I am wrong in that though. I would definitely prefer that the vibe be "you can be kind of a jerk, but you need to be honest about where you're coming from and argue in good faith, and we'll be vigorous about keeping you out if you're not." But maybe it's fair to ask that I try to file some reports under that philosophy before I assume that they wouldn't be acted on.
Some of what you describe is likely against our community rules. We do not allow trolling, and we do not allow misinformation. We tend to err on the side of allowing speech when it is unclear, but repeat offenders are banned.
When you see these behaviors, please make a report so that we can review it. We cannot possibly see everything.
I feel like bots on Lemmy get way too much hate in general. There aren't that many, and if you don't like it you can block this one/all bots. I for one find it useful as it is.
I noticed you got a couple downvotes, so this comment is more for the voters: if you have thoughts on this, please comment them so we can understand why you feel the way you do.
BTW, I really like the suggestion to just get rid of the bias rating from the bot's comments. That should be a lot less work than implementing a bias crowdsourcing system. Given limited volunteer time and all that.
This only applies to beautiful geniuses that include MBFC links in their posts, but the bot probably doesn't need to include the MBFC entry for MBFC. It's pretty useless and that could free up a little space. And, hey, that's something people are pretending to care about, right?
Holy moly, people seem to really be upset with this bot. I like it because it can call out when someone is doing something shady with their news sources when people like me (that don't know news sources by heart) read a posting.
We have a lot of repeat users in here who I personally feel (and I could be wrong) have ulterior motives: foreign actors spreading misinformation and trying to sow division, and lots of other foreign and domestic actors who are obsessed with one thing and throw the baby out with the bathwater (for example, people obsessed with the Gaza and Israel war just being nasty in general because they're angry; I'm not saying that situation isn't fucked, but this bot can help illuminate patterns in their behavior, which can help us regular people tag them as single-issue participants so others are more informed when engaging with them).
My suggestion is to be very careful about crowd-sourcing the rating process. In nearly every post I visit, this bot is heavily downvoted. Rather than simply blocking the bot, people are retaliating against something they don't agree with. At best, you would likely see that behavior translate to your crowd-sourced ratings too. At worst, you would see bad actors focused on division and misinformation making a fuckery of it all.
I'm not saying don't include the community, but brainstorm with this potential pitfall in mind.
I like this community and want to see it continue to be factually correct and fairly represented, and I appreciate the mods and their ongoing challenges with the people who would seek to upset the apple cart at any opportunity.
I think the bot adds value and applaud the honest effort to make improvements.
The downvoting is for several reasons; Jeff laid it out well. The people who don't like its ratings, though, have a larger worry that blocking it does not help: if MBFC, and thus the bot, are biased, then the entire conversation is shifted around that bias. Blocking is useful if you find something an eyesore. It's not useful in fighting misinformation.
It also does this with a bunch of weird little local newspapers etc. which I've never heard of, which is the one time I actually want it to provide some kind of frame of reference for the source. MSNBC and the NYT, I feel like I already know what I think about them.
It's this uninvited commenting on the bot's part that has me downvoting it. It's presenting itself as an authority here. If a user in the comments called the bot to fact-check something and the bot did a bad job, I'd just block the bot. I'd even be able to look over that user's history to get an idea of the bot's purpose. But this bot comes in and says "here's the truth," then spits out something I'd expect to see on Twitter's current iteration.
If the problem you're trying to solve is the reliability of the media being posted here: take the left/right bias callout out and find a decent database on news source quality. Start the bot's post with resources for people to develop their own skill at spotting bad news content.
If the problem you're trying to solve is the visibility of political bias in content posted here, so the downvote button isn't acting as a proxy for that: adding a function for the community to rate left/right lean, like Rotten Tomatoes, sounds interesting, so long as you take the reliability rating out of the bot. You can't address both media reliability and political bias in one automated post. The NYT and NPR being too pearl-clutchy for my taste, and some outlet that exists only on Facebook having the same assumed credibility as the Associated Press, are wildly different issues.
*stupid phone, i'll live with the spelling but not repeated words.
I think the bot is incredibly useful. The criticism comes from a very specific group of users being very loud about their preferred source not ranking the way they expect.
Linking additional sources will improve it. Wikipedia maintains an active list and has an incentive to do so. Personally, I'd like to see a transparent methodology applied to a source: number of articles retracted silently, corrections issued in last 30 days, etc.
That having been said, I'd rather see efforts invested in other areas rather than inventing yet another "weighing" function for multiple ratings. Let us decide if mbfc is good enough or if we prefer ad fontes or Wikipedia or whoever. Give us two or three options and let us decide on our own.
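The transparent methodology mentioned above (counting silent retractions, recent corrections, and so on) could look something like the following. Everything here is an assumption for illustration: the metric names, the weights, and the starting score of 100 are invented, not an existing rating system.

```python
# Illustrative sketch of a transparent reliability score built from countable
# signals. Weights are arbitrary placeholders; the point is that every penalty
# is visible and auditable, unlike an opaque third-party rating.

def reliability_score(silent_retractions, corrections_30d, failed_fact_checks):
    """Start from 100 and subtract weighted penalties; clamp at 0."""
    score = 100
    score -= 10 * silent_retractions   # quietly memory-holing articles: heavy penalty
    score -= 2 * corrections_30d       # issuing corrections openly is normal: light penalty
    score -= 5 * failed_fact_checks
    return max(score, 0)

print(reliability_score(silent_retractions=1, corrections_30d=3, failed_fact_checks=2))  # 74
```

With something like this, the bot could show the raw counts next to the score, so anyone could recompute or dispute the result.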
It seems bizarre to me that the only user I have seen actually trying to provide constructive criticism for the bot so far in this thread is the one that already likes it. Especially when others instead advocate for things like the mods taking a political stance to endorse and using mod powers to reinforce it.
I like the bot. It's valuable to have context for the organization pushing a story. I agree that others are reading too much from the orgs they like being labeled as biased. It's assumed a news source will have some bias, and trying to avoid acknowledging that is dangerous. The takeaway is simply to be wary of any narrative being pushed (intentionally or not) by framing or omission, and get news from a variety of sources when possible. Instead, people tend to think identifying bias is advocating that the article should be disregarded, which is untrue.
To your suggestion, I do think adding more sources for reliability and bias judgements is a good idea. It would give more credibility if multiple respected independent organizations come to the same conclusion. More insight into their methodology in the comment itself could also be nice. The downside of adding these is that it would make the comment even longer when people have already complained about its size.
Other than that, I have seen people dislike using the American political center as a basis for alignment, but I have yet to see a good alternative. I expect a significant plurality of users are from the US, and US politics are globally relevant, so it seems to be a natural choice.
Nearly every critic I have seen so far just thinks it should be removed entirely because they find it annoying. I would say even if it isn't considered useful for the majority of users, the amount of value it provides people who do use it justifies whatever minor annoyance it is to others. Anyone who gets really tired of collapsing the comment or scrolling past it can block it in seconds.
Thank you to the mod who created this thread. Even if it's good to gather feedback, it's obviously not easy to get bombarded with negative comments. I'm impressed with the patience you have shown in this thread.
Improvements to automod, such as checking for opinion articles by regex (and building up that list). Or automatically marking/linking duplicate posts.
Also, regex scanning of comments to autoban would be useful for moderation well outside of the news/politics realm.
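The regex idea for opinion articles could be sketched like this. The URL patterns below are placeholder guesses, not a vetted list; real outlets vary in how they structure opinion-section URLs, which is why the comment above talks about building up the list over time.

```python
import re

# Placeholder patterns for opinion-section URLs; a real automod would grow
# this list as new outlets' URL conventions are discovered.
OPINION_URL_PATTERNS = [
    re.compile(r"/opinions?/", re.IGNORECASE),
    re.compile(r"/editorials?/", re.IGNORECASE),
    re.compile(r"/commentary/", re.IGNORECASE),
]

def looks_like_opinion(url: str) -> bool:
    """Flag a post URL whose path matches any known opinion-section pattern."""
    return any(p.search(url) for p in OPINION_URL_PATTERNS)

print(looks_like_opinion("https://example.com/opinion/why-i-am-right"))  # True
print(looks_like_opinion("https://example.com/news/earthquake"))         # False
```

The same pattern-list approach would extend to the comment-scanning idea, though autobanning on regex matches alone is risky and would probably want a human review queue.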
Most of the changes I'd like to see would require major changes to Lemmy though. Things like rate limiting posts/comments/votes, and allowing complex conditions for using those quotas. Also more nuanced moderation such as unlisting a post/comment (or potentially rehoming them).