Last Updated on 15/12/2021 by Sanskriti
Because Facebook is a private corporation, it has the authority to restrict anyone it wants. But what Facebook has been doing lately, together with its so-called fact-checking partner, ‘Science Feedback,’ is just nasty.
The issue came to light when Maggie Williams, a high-school runner from Oregon, became disoriented, passed out, and landed face-first just across the finish line during an 800-meter race. She and her coach blamed the collapse on a lack of oxygen caused by the mask she had been obliged to wear, and state officials responded to the public outrage by loosening mask regulations for sporting events.
However, even before the pandemic, scientists had found that wearing a mask might cause oxygen deprivation. Why had this danger gone unnoticed?
One reason is that on social media platforms, a new breed of censors has been suffocating scientific discourse about masks. Last year, when Scott Atlas, a member of Trump’s coronavirus task force, questioned the usefulness of masks, Twitter took down his post. When top experts from Stanford and Harvard advised Florida Governor Ron DeSantis that children should not be compelled to wear masks, YouTube took down the video of the conversation. Although these acts of censorship were publicly condemned, the social-media science police have not been deterred.
An article on the subject, published by City Journal, was quickly labeled “Partly False Information” by Facebook. City Journal filed an appeal, which proved to be both fruitless and instructive. Facebook declined to remove the label, which displays every time the story is shared, but the appeal did offer an inside look at how social media firms and progressive groups manipulate science and public policy.
Recently, Facebook was sued by one of its users for falsely labeling the user’s posts and then refusing to correct the error.
Now Facebook has responded to the lawsuit in court. Surprisingly, its lawyers argue that Facebook’s “fact-checks” are merely “opinion” and hence immune from defamation claims.
So fact-checks are now just “opinion,” when before they were presented as facts that had been checked.
Facebook’s own website describes them as follows: “Each time a fact-checker rates a piece of content as false, Facebook significantly reduces the content’s distribution … We … apply a warning label that links to the fact-checker’s article, disproving the claim.”
“Disproving.” It appears that Facebook considers its labeling to be assertions of fact.
When Tucker Carlson and Rachel Maddow were sued, they used an “opinion” argument much like Facebook’s. They said that they merely share their opinions and that their viewers understand they aren’t reliable sources of objective information.
Carlson and Maddow, however, have a stronger case: they are known for expressing their viewpoints. Facebook, by contrast, posts “fact-checks.”
The firm, which has since changed its name to Meta, has also urged a judge to dismiss the case, “Because Section 230 of the Communications Decency Act protects Meta from liability for material posted to the Facebook platform by third parties.”
But it was Facebook, not a third party, that labeled the posts “partly false,” and Facebook’s warning was written in Facebook’s voice.
As Facebook’s own website says: “We … apply a warning label …”
What follows is the account of the user behind the lawsuit, in the user’s own words.

I brought Facebook’s defamation to their attention a year ago, and they did nothing to correct it.
I did not want to sue Facebook. I hate lawsuits. But after they defamed me, I felt I had no choice.
How did Facebook defame me?
I produced a video arguing that most of California’s wildfires were caused by inadequate government management. Facebook blocked it as “misleading” and linked to a Science Feedback article that puts quote marks around the following line, as if it were something I said: “Forest fires are caused by poor management. Not by climate change.”

But I never said that. Facebook’s reviewers took that quotation from somewhere else. Or perhaps they made it up.
“Climate change has made things worse!” I acknowledge in my video. I merely made the case that government inefficiency was a major issue. Climate change affected many woodlands, but well-managed forests fared far better.
I questioned all of Science Feedback’s reviewers about the “Misleading” designation. Two of them volunteered to be interviewed on camera. When I asked what they thought was deceptive about the video, they told me they hadn’t even seen it! They offered no excuse for putting quote marks around words that were never spoken.
Facebook’s hesitation to admit its error is irritating, because when Facebook fact-checks something, its algorithm ensures that fewer people see the flagged post.
Did they label the video misleading or false simply because they didn’t like its tone? Is this a joke?
How can someone label a video as incorrect without even watching it or checking the facts?
Big tech firms like Facebook serve as a voice for the people. People share their views and thoughts through these social media sites, and the posts, videos, and comments seen by millions shape public opinion. This was a blunder, and it needs to be corrected.