Law of Facebook

Tram Ho

“The end of June 2018 was a real crisis,” said Phuong Yen, CEO of a retail startup. At the time, her business had only recently begun operating stably, reaching the right customers on Facebook and handling hundreds of orders a day. Before that, Ms. Yen and her colleagues had spent months on business training courses and on learning to run advertising campaigns on social media platforms.

But the happy days were short-lived. After that early success, the company's leadership decided to pour all its resources into acquiring new customers and expanding the market. That was also when Facebook began rolling out its advertising transparency policy, which made public all of the strategy, content, ideas, and products that Ms. Yen had devoted so much effort to planning.

“Dead in its tracks” because of Facebook's transparency

Her ideas were not only copied: a series of posts advertising the company's products were maliciously “flagged” by competitors. In addition, the disclosure of confidential client information through the agency's activities forced Ms. Yen to agree to pay a small amount of compensation, even though the situation was beyond her control. Under pressure from the deteriorating business and the damage to her reputation, she even considered dissolving the company.

“Users in general, and businesses like mine in particular, are always passive before Facebook. It issues a lot of unreasonable policies, especially ones emphasizing transparency, but the reality is quite different,” Ms. Yen said. Worse still, once Apple's changes to the IDFA (Identifier for Advertisers) and its App Tracking Transparency (ATT) feature are rolled out, businesses like hers will be heavily affected, because it will become much harder for advertisements to reach the right customers.


The Cambridge Analytica scandal cast doubt on whether Facebook's “transparency” really exists

This is not her concern alone. Just last week, Facebook's CEO proposed a new policy under which tech giants would use data reporting to demonstrate that they remove infringing posts. Industry experts are not optimistic, however, seeing this as merely a move toward self-monitoring of content on the platforms. In fact, Facebook already maintains a similar system, but it has proven ineffective.

Last Thursday, Mark Zuckerberg told Congress: “Transparency will help big tech companies take responsibility for deleted posts.” If such a transparency regime becomes the norm, the first beneficiary will be Facebook. Zuckerberg has repeatedly asserted that Facebook leads the industry in platform transparency.

Social networks like Facebook are the cradle of fake news

Facebook's boss has put forward many similar initiatives, calling for social media platforms to take more responsibility for user-posted content. In recent years, social networks, including Facebook, have been blamed as a cradle for spreading fake news and misinformation (such as inciting speech and threats of violence). As a result, the big technology companies behind these platforms have become targets of public criticism.

Spurred by the policies of former President Donald Trump, the US Congress continues to debate reform of Section 230 of the Communications Decency Act, the provision that currently shields social networks from liability for user-generated content. Meanwhile, public opinion continues to condemn the tech giants and social media tycoons, because misinformation has still not been curbed.

Critics have linked the riot at the US Capitol earlier this year to the responsibility of social networks such as Facebook and Twitter. In addition, fake news related to Covid-19 continues to circulate widely, confusing users. Many therefore believe it is time to impose strong measures on social media platforms to end this situation.

However, last week's hearing with the Big Tech executives did not yield a legislative plan from Congress, giving Facebook an opening to continue exerting influence. On the proposed reform of Section 230 of the Communications Decency Act, Jenny Lee, a partner at the law firm Arent Fox who represents the interests of major technology companies, commented: “The technology companies have at least left room for further negotiations.”

Meanwhile, many analysts have turned their attention to Facebook's own content transparency report and argue that it is not as transparent as the platform claims. According to Facebook, in February 2021 more than 97% of content classified as hate speech was detected by its systems before users reported it. In the fourth quarter of last year, the social network also proactively identified 49% of violent content; after implementing measures such as flagging and reporting, the share of such negative posts dropped to 26%.

The problem is that these statistics are generated by Facebook's own AI, and in reality they do not represent the total amount of harmful content. In addition, Facebook does not publish how many people saw this content before it was deleted, or when it was deleted. “This is a disappointing report,” said one expert, who argues that Facebook's refusal to disclose how long it takes to remove malicious content (minutes or days) is itself a lack of transparency.

Is Facebook really transparent?

The report's focus on artificial intelligence suggests that Facebook is trying to evade responsibility for reviewing content flagged by users, and does not want to disclose the rates at which such content is reviewed and removed.

Currently, social networking platforms, including Facebook, depend on machine-learning content moderation systems, which have serious flaws in practice. “These automated review systems are increasingly easy to bypass; they mistakenly delete content that does not violate the rules, and even ignore content that users have flagged.”

Earlier this year, the Oversight Board pointed out flaws in Facebook's AI systems. The board, an independent body set up by Facebook to review contested content decisions, recommended that users be notified when their posts are deleted by AI, and that user appeals be reviewed manually. Facebook did not approve these requests.

Facebook has more than 3 billion users but only about 15,000 employees reviewing content. During the Covid-19 outbreak, most employees worked remotely, and for legal reasons they could not review reported sensitive content from home. The shortage of human reviewers and the flaws in its AI pose particular challenges for Facebook's content moderation.

Facebook's content transparency report contains no data on the language or geographic location of deleted posts. It also says nothing about misinformation, another key area of concern for lawmakers. Hence the widespread comment: “This transparency report of Facebook has almost no transparency.”

In fact, beyond this report, Facebook has always been vague about transparency in content moderation, user monitoring, and even the monitoring of businesses' advertising campaigns. Meanwhile, hostile and provocative content, fake news, and misinformation urgently need to be dealt with, yet Facebook keeps hesitating for reasons unknown. “I do not believe in Facebook's transparency; it is just a joke,” said Ms. Yen.


Source: Genk