US Department of Defense mobilizes to deploy an anti-fake-news program

Tram Ho

Fake news and social media posts have become such a threat to US security that the Department of Defense has launched a project to counter "large-scale, automated disinformation attacks," even as top Republican officials block efforts to protect the integrity of the election.

The Defense Advanced Research Projects Agency (DARPA) wants custom software that can detect fake news across more than 500,000 stories, images, videos and audio clips. If successful, the system could be expanded to detect malicious intent and stop the spread of fake news designed to divide society.

The effort comes as US officials are studying plans to prevent outside hackers from spreading fake news on social networks ahead of the 2020 election.

The US Department of Defense mobilizes to deploy an anti-fake-news program - Photo 1.

Fake stories are becoming more dangerous as deepfake technology grows increasingly sophisticated, making fakes harder to detect with data-driven software. Researchers have shown that generative adversarial networks (GANs) can be used to create fake videos.
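To make the GAN idea concrete, here is a minimal, illustrative sketch: a generator and a discriminator competing over one-dimensional data, with the generator learning to imitate real samples. All parameters and hyperparameters are assumptions for illustration only; real deepfake generators are deep networks trained on images or video, not the tiny linear models used here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D GAN: the generator tries to mimic samples drawn from N(4, 0.5).
# Discriminator d(x) = sigmoid(a*x + c); generator g(z) = w*z + b.
a, c = 0.1, 0.0
w, b = 0.1, 0.0

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

lr = 0.01
for step in range(3000):
    # --- discriminator update: push d(real) -> 1, d(fake) -> 0 ---
    x_real = rng.normal(4.0, 0.5)
    z = rng.normal()
    x_fake = w * z + b
    d_real = sigmoid(a * x_real + c)
    d_fake = sigmoid(a * x_fake + c)
    a -= lr * (-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * (-(1 - d_real) + d_fake)

    # --- generator update (non-saturating loss): push d(fake) -> 1 ---
    z = rng.normal()
    x_fake = w * z + b
    d_fake = sigmoid(a * x_fake + c)
    w -= lr * (-(1 - d_fake) * a * z)
    b -= lr * (-(1 - d_fake) * a)

# After training, sample the generator and compare with the real mean.
fake_mean = np.mean([w * rng.normal() + b for _ in range(1000)])
print(f"generator output mean: {fake_mean:.2f} (real data mean: 4.0)")
```

The same adversarial dynamic, scaled up to convolutional networks and video frames, is what makes deepfakes both convincing and hard to catch with simple statistical filters.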

By increasing the number of algorithmic checks, the military research agency hopes to detect maliciously intended fake news before it spreads widely.

"A comprehensive set of semantic inconsistency detectors would dramatically increase the burden on fake-news creators, forcing them to get every semantic detail exactly right, while a defender only needs to discover one, or a few, inconsistencies," the agency said of its Semantic Forensics (SemaFor) program.
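The asymmetry described above can be sketched in a few lines: the defender runs many independent consistency checks and flags an item if any single one fails, while a forger must pass all of them. The specific checks below (caption vs. metadata location, timestamp plausibility, outlet attribution) are hypothetical stand-ins for real forensic models, not part of SemaFor.

```python
# Hypothetical semantic-consistency checks; each returns True when the
# item is internally consistent on that one detail.
def check_caption_matches_location(item):
    return item["caption_location"] == item["exif_location"]

def check_timestamp_plausible(item):
    return item["publish_year"] >= item["capture_year"]

def check_source_attribution(item):
    return item["claimed_outlet"] in item["known_outlets"]

CHECKS = [
    check_caption_matches_location,
    check_timestamp_plausible,
    check_source_attribution,
]

def is_suspect(item):
    # Defender's advantage: a single failed check is enough to flag the item.
    return any(not check(item) for check in CHECKS)

genuine = {
    "caption_location": "Paris", "exif_location": "Paris",
    "publish_year": 2019, "capture_year": 2018,
    "claimed_outlet": "AP", "known_outlets": {"AP", "Reuters"},
}
# A forgery with just one inconsistent detail is caught.
forged = dict(genuine, exif_location="Hanoi")

print(is_suspect(genuine), is_suspect(forged))  # False True
```

Each added check multiplies the forger's workload but only marginally increases the defender's, which is the economic argument behind building a broad suite of detectors.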

Grammar errors

The agency added: "SemaFor technology will help identify, deter, and understand adversarial misinformation campaigns."

The US Department of Defense mobilizes to deploy an anti-fake-news program - Photo 2.

Current monitoring systems often miss "semantic errors." For example, the software will not flag mismatched earrings in a faked video or image. Other signs often overlooked by machines include odd teeth, unkempt hair and an unusual background.
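One simple way to turn a cue like mismatched earrings into a check is to compare feature descriptors extracted from the two ear regions of a face. In the hedged sketch below, the descriptors are plain arrays and the 0.5 distance threshold is an arbitrary assumption; a real pipeline would obtain these vectors from a trained vision model.

```python
import numpy as np

def earring_mismatch(left_feat, right_feat, threshold=0.5):
    """Flag a face if the left/right ear-region descriptors conflict.

    Uses cosine distance between the two (hypothetical) feature vectors;
    a large distance suggests the regions were synthesized inconsistently.
    """
    left = np.asarray(left_feat, dtype=float)
    right = np.asarray(right_feat, dtype=float)
    cos = left @ right / (np.linalg.norm(left) * np.linalg.norm(right))
    return (1.0 - cos) > threshold

# Similar descriptors -> consistent earrings, not flagged.
matching = earring_mismatch([1.0, 0.2, 0.1], [0.9, 0.25, 0.1])
# Very different descriptors -> semantic conflict, flagged.
conflicting = earring_mismatch([1.0, 0.2, 0.1], [-0.1, 1.0, 0.8])

print(matching, conflicting)  # False True
```

The same pattern generalizes to the other cues the article lists (teeth, hair, background): extract a descriptor per region, then test pairs or region-vs-context for implausible disagreement.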

The algorithms will be tested on their ability to scan and evaluate over 250,000 news articles and 250,000 social media posts containing 5,000 falsified items. The program runs in three phases over 48 months, starting with scanning news and social networks before the technical analysis stage. The project will also include "hackathon" periods lasting up to several weeks.

Technology gap

The US Department of Defense mobilizes to deploy an anti-fake-news program - Photo 3.

The agency also runs another research program, MediFor (Media Forensics), created to bridge a technology gap in image forensics: no end-to-end system can currently authenticate manipulations of images taken on smartphones and digital cameras.

"Along with the rise of digital imagery has come a rise in related skills that allow even relatively unskilled users to manipulate and distort the message of visual media," according to the agency's website. "While many such manipulations are benign, done for entertainment or artistic purposes, others serve adversarial ends, such as propaganda or the spread of misinformation."

Given the four-year scale of the SemaFor project, it is unlikely that it will be implemented in time for the next US election.

According to Bloomberg


Source: Trí Thức Trẻ