Lanon Wee

Meta's Newest Threats Report Highlights Risk of Chinese Interference in Elections

Meta reported that it disrupted three coordinated inauthentic behavior (CIB) networks during the third quarter, two of which originated in China. In its report on adversarial threats, Meta said Russia and Iran remain larger sources of CIB campaigns than China. The company also said it has not found evidence that generative AI is being used in covert influence operations.

In its latest quarterly adversarial threat report, released Thursday, Meta said China is a growing source of covert influence and propaganda campaigns, which could be amplified by advances in generative artificial intelligence. China now ranks just behind Russia and Iran as a source of coordinated inauthentic behavior campaigns, which typically involve fake user accounts and other methods intended to "influence public discourse for a strategic purpose," Meta said in the report.

Meta said it disrupted three CIB networks in the third quarter: two from China and one from Russia. One of the Chinese networks was a wide-reaching operation that led Meta to remove 4,780 Facebook accounts. "The people behind this activity used basic fake accounts with profile pictures and names taken from elsewhere on the internet to post and befriend people from around the world," Meta said of the Chinese network. "Only a small portion of such friends were located in the United States. They posed as Americans to post the same content across different platforms."

Disinformation on Facebook became a significant problem ahead of the 2016 U.S. elections, when foreign actors, mainly from Russia, were able to stoke emotions on the platform, largely in an effort to boost the campaign of then-candidate Donald Trump.
Since then, the company has faced greater scrutiny over how it monitors disinformation threats and campaigns and how much transparency it offers the public.

Meta took down an earlier China-related disinformation campaign in August. The company said it removed more than 7,700 Facebook accounts tied to that Chinese CIB network, which it described at the time as the "largest known cross-platform covert influence operation in the world."

If China becomes a topic of political debate during upcoming election cycles around the world, Meta said, "it is likely that we'll see China-based influence operations adjust to attempt to influence those debates."

"Additionally, the more domestic debates in Europe and North America focus on support for Ukraine, the more likely we should anticipate Russian attempts to interfere in those debates," the company added.

One pattern Meta has observed in CIB campaigns is the growing use of multiple online platforms, such as Medium, Reddit and Quora, rather than bad actors "concentrating their activity and coordination in one place." Meta said that shift appears to be tied to "larger platforms keeping up the pressure on threat actors," pushing offenders toward smaller sites "in the hope of facing less oversight."

The company said the rise of generative AI presents additional challenges in countering the spread of disinformation, but Meta said it hasn't "seen evidence of this technology being used by known covert influence operations to make hack-and-leak claims." Meta has been investing heavily in AI, and one of its uses is to help identify content, including computer-generated media, that could violate company policies.
Meta said nearly 100 independent fact-checking partners will help review questionable AI-generated content.

"Although the use of AI by known threat actors we've seen so far has been limited and not very effective, we want to stay vigilant and prepare to respond as their tactics evolve," the report said.

Nevertheless, Meta cautioned that the coming elections will likely mean that "the defender community across our society needs to prepare for a larger volume of synthetic content."

"This means that just as potentially violating content may scale, defenses must scale as well, in addition to continuing to enforce against adversarial behaviors that may or may not involve posting AI-generated content," the company said.


