
South Korean Police Launch Deepfake Detection Tool Ahead of Elections


South Korea has been grappling with the issue of deepfakes, particularly in the lead-up to the 2024 elections. The National Office of Investigation (NOI) has announced that it will deploy new software designed to detect whether video clips or image files have been manipulated using deepfake techniques. This move follows a revised law that came into effect on January 29, 2024, under which the use of deepfake videos, photos, or audio in connection with elections can earn a citizen up to seven years in prison.

South Korean police have developed a deepfake detection tool to clamp down on crimes committed using deep learning technology. The new software differs from most existing AI detection tools, which are traditionally trained on Western-based data. The large number of false-information investigations underlines the authorities' concern, which came to the fore in the run-up to the 2022 local elections. The South Korean police have taken a proactive approach to tackling deepfakes, which have the potential to manipulate public opinion.

Key Takeaways

  • South Korean police have developed a deepfake detection tool to clamp down on crimes using deep learning technology.
  • The use of deepfake videos, photos, or audio in connection with elections can earn a citizen up to seven years in prison.
  • The South Korean police have taken a proactive approach to tackling deepfakes, which have the potential to manipulate public opinion.

Korea's Deepfake Problem

South Korea has been dealing with the problem of deepfakes for quite some time, with the issue coming to the forefront during the 2022 provincial elections. A video surfaced on social media that appeared to show President Yoon Suk Yeol endorsing a local candidate for the ruling party, which was later discovered to be a deepfake. This type of deception has become increasingly common in the country, with the National Election Commission detecting 129 deepfakes in violation of election laws between Jan. 29 and Feb. 16, 2024, ahead of the April 10 Election Day.

To combat this problem, the Korean National Police Agency's National Office of Investigation developed a deepfake detection tool that can determine whether video clips or image files have been manipulated using deepfake techniques. The software is designed to detect manipulated images or video content, using AI technology trained on Western-based data and pre-trained AI models. However, the tool is not perfect and may produce erroneous detections, so it is important for investigators to cross-check its results against other investigative processes.
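The article's advice to cross-check detector output with other investigative steps can be sketched as a simple triage rule. Everything below is hypothetical: the NOI has not published its model, score scale, or thresholds, so the function names and cutoffs are illustrative assumptions only.

```python
# Hypothetical sketch: route a clip to one of three outcomes based on
# the agreement of several (imaginary) deepfake detectors.

def combine_scores(scores):
    """Average the manipulation probabilities from several detectors."""
    return sum(scores) / len(scores)

def triage(scores, fake_threshold=0.8, real_threshold=0.2):
    """Return a verdict, routing uncertain cases to human review,
    mirroring the advice to cross-check results with other processes."""
    p = combine_scores(scores)
    if p >= fake_threshold:
        return "likely-manipulated"
    if p <= real_threshold:
        return "likely-authentic"
    return "needs-human-review"

print(triage([0.91, 0.88, 0.95]))  # strong agreement: likely-manipulated
print(triage([0.55, 0.40, 0.62]))  # ambiguous: needs-human-review
```

The middle band is the point of the sketch: rather than trusting a single probability, borderline scores are handed to investigators, which is how the article says erroneous detections are meant to be caught.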

The revised law that came into effect on Jan. 29, 2024, states that use of deepfake videos, photos, or audio in connection with elections can earn a citizen up to seven years in prison and fines up to 50 million won (around $37,500). The Korean government is also encouraging companies and academia to develop deepfake detection tools to combat the spread of AI-generated media content. With the increasing prevalence of deepfakes, it is crucial for police forces and cybersecurity experts to stay vigilant and keep up with the latest developments in deepfake detection technology.

Not Just Disinformation

While disinformation and misinformation are the most commonly discussed topics in the context of AI-powered misinformation, there is a broader issue at play. The problem is not just the spread of false information but also the selective consumption of information. People tend to pick and choose the information they want to believe in, creating a distorted view of reality. This issue is not new, but the rise of social media platforms has amplified it.

The impact of selective consumption of information is most evident during election campaigns. In South Korea, for example, the National Election Commission detected 129 deepfakes that violated election disinformation rules between Jan. 29 and Feb. 16, 2024. The newly revised election law has introduced a clampdown on such crimes. The police have developed a deepfake detection tool to probe crimes using the technology, and there have been criminal investigations into fake news and defamatory claims made against opposing candidates.

AI-powered misinformation is not just a matter of disinformation, but also a matter of selective consumption of information. Therefore, both policy changes and AI-powered deepfake detection are essential to combat this issue.

Conclusion and Recommendation

The development and deployment of a deepfake detection tool by the South Korean Police is a significant step towards combating the spread of political deepfakes during elections. The software is designed to detect whether video clips or image files have been manipulated using deepfake techniques, making it a valuable asset in the fight against disinformation.

While the effectiveness of the tool is yet to be fully tested, its deployment ahead of the elections is a positive step towards securing the democratic process. It is recommended that other countries follow suit and invest in similar technology to safeguard their elections against deepfake threats.

Moreover, it is important to note that deepfake technology is constantly evolving, and new methods of manipulation are being developed. Therefore, it is crucial that law enforcement agencies continue to invest in research and development to stay ahead of the curve.

In conclusion, the South Korean Police's deepfake detection tool is a significant development in the fight against disinformation during elections, and one that other countries would do well to emulate in securing their own democratic processes.

Frequently Asked Questions

How do South Korean police identify deepfake videos?

South Korean police use a newly developed deepfake detection software to identify manipulated videos or image files using deepfake techniques. The software is capable of analyzing the video's facial expressions and movements to determine if they are natural or artificially generated. The software can also detect audio manipulation.

What measures are in place to prevent deepfake interference in elections?

The South Korean National Election Commission has put in place measures to prevent deepfake interference in elections. The commission has increased its monitoring of social media platforms and other online sources for deepfake content. Additionally, the commission has partnered with Sensity AI, an artificial intelligence platform that specializes in detecting deepfakes, to help identify and remove deepfakes from the internet.

What are the capabilities of the Sensity AI platform in detecting deepfakes?

Sensity AI is an artificial intelligence platform that specializes in detecting deepfakes. The platform uses machine learning algorithms to analyze videos and images for signs of manipulation. Sensity AI can detect deepfakes by analyzing facial expressions, body movements, and audio. The platform can also analyze metadata and other digital traces to determine if the video or image has been manipulated.

How can the public discern between real and deepfake-generated content?

The public can discern between real and deepfake-generated content by looking for signs of manipulation. Deepfakes often have subtle inconsistencies in facial expressions, body movements, and audio that can be detected with careful examination. Additionally, the public can use tools such as Sensity AI to analyze videos and images for signs of manipulation.

What legal actions can be taken against the creators of deepfake content in a political context?

The creators of deepfake content in a political context can face legal consequences for their actions. In South Korea, the creators of deepfake content can be charged with violating election laws and face fines or imprisonment. Additionally, the creators of deepfakes can be sued for defamation or other civil offenses.

What role does artificial intelligence play in the future of election security?

Artificial intelligence plays an important role in the future of election security. AI platforms such as Sensity AI can help detect deepfakes and other forms of election interference. Additionally, AI can be used to monitor social media platforms and other online sources for signs of election interference. As technology continues to advance, AI will play an increasingly important role in ensuring the security and integrity of elections.
