
Evaluating Intel's Deepfake Detector with Real and Synthetic Videos

In March last year, a video appeared to show President Volodymyr Zelensky telling Ukrainians to lay down their weapons and surrender to Russia. It was a fairly crude deepfake - a type of AI-generated fake video that swaps faces or produces a synthetic version of someone. As advances in artificial intelligence make deepfakes easier to produce, spotting them quickly has become increasingly important. Intel believes it has a solution that will make a real difference: a system it calls "FakeCatcher".

At Intel's plush, largely empty offices in Silicon Valley, we meet Ilke Demir, a research scientist at Intel Labs, who explains how it works. "What is genuine about genuine videos? What is genuine about us? What is the stamp of being human?" she asks.

At the heart of the system is a technique called photoplethysmography (PPG), which detects changes in blood flow. Faces in deepfakes, she says, do not give off these signals. The system also analyses eye movement to check authenticity. Normally, when a person looks at someone, their gaze is like a "beam" directed at the other person. With deepfakes, she says, it is more like "googly eyes": the lines of sight appear to diverge.

By analysing these two cues, Intel believes it can tell the difference between a real video and a fake one within seconds.
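Intel has not published the inner workings of FakeCatcher, but the PPG idea can be sketched in principle: average the colour of the skin pixels in each frame of a face video, then check whether that signal pulses at a plausible heart rate. The short Python example below is a rough illustration under those assumptions, not Intel's method; the function name, inputs and frequency band are choices made for this sketch.

    import numpy as np

    def ppg_band_power_ratio(face_frames, fps, band=(0.7, 4.0)):
        """Fraction of signal power in a plausible heart-rate band.

        face_frames: array of shape (num_frames, height, width, 3),
            RGB frames already cropped to the face region.
        fps: frame rate of the video.
        band: heart-rate band in Hz (0.7-4.0 Hz is roughly 42-240 bpm).
        """
        # Average the green channel over the face in each frame; green is
        # the channel most often used for remote PPG because it carries
        # the strongest blood-volume signal.
        signal = face_frames[..., 1].mean(axis=(1, 2)).astype(float)

        # Remove the mean so the zero-frequency offset does not dominate.
        signal = signal - signal.mean()

        # Power spectrum of the per-frame brightness signal.
        spectrum = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)

        in_band = (freqs >= band[0]) & (freqs <= band[1])
        total = spectrum[1:].sum()  # ignore the zero-frequency bin
        return float(spectrum[in_band].sum() / total) if total > 0 else 0.0

A face filmed under steady lighting tends to concentrate power in that band, which is the kind of cue a detector could exploit; a production system would also need face tracking, motion compensation and far more robust signal processing, which is partly why pixelated footage is harder to judge.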
We asked Intel to let us test FakeCatcher, which the company says is 96% accurate, and it agreed. We used about a dozen clips of former US President Donald Trump and President Joe Biden. Some were real; others were deepfakes created by the Massachusetts Institute of Technology (MIT).

Watch: The BBC's James Clayton puts the deepfake video detector to the test.

On deepfakes, the system seemed very effective. We mostly chose lip-synced fakes - genuine videos in which the mouth and voice had been altered - and it got every one right, bar one.

It struggled, though, when we moved on to genuine videos, wrongly declaring real clips to be fake several times. The more pixelated a video is, the harder it is to pick up blood flow. The system does not analyse audio either, so some videos that sounded clearly real from the voice alone were classified as fakes.

The worry is that if the programme labels a genuine video as fake, the consequences could be serious. When we put this to Ms Demir, she says that "verifying" something as fake is not the same as saying "be cautious, this may be fake". She argues it is better for the system to be over-cautious, catching all the fakes even if that means flagging some genuine videos along the way.

Deepfakes can be extremely subtle - a two-second clip in a political campaign advert, for example - or glaringly low quality, sometimes with only the voice altered. That raises questions about whether FakeCatcher can work in real-world conditions.

Matt Groh, an assistant professor at Northwestern University in Illinois who specialises in deepfakes, does not dispute the statistics from Intel's initial evaluation, but he questions how well they apply outside the lab.

That is hard to assess from our test alone. Many programmes, including facial-recognition systems, quote impressive accuracy figures that fall once they are tested in the real world. Earlier this year, the BBC tested Clearview AI's facial-recognition system using our own photos. The technology was impressive, but it was also clear that the blurrier and more side-on a face was in a photo, the harder it was for the program to make a correct identification. In short, accuracy depends on the difficulty of the test.

Intel says FakeCatcher has been tested rigorously, including an "in the wild" evaluation for which the company gathered 140 fake videos and their real counterparts. Intel says the system was 91% accurate on that test.

But Matt Groh and other researchers want the system to be evaluated independently, rather than by Intel testing its own product. Mr Groh says he would like to evaluate such systems himself, and that independent audits are vital for understanding how valid a tool really is in a real-world context.

It is surprising how hard it can be to tell a real video from a fake one, and this technology clearly has potential. But from our admittedly limited tests, it also has some way to go.
