The Iran-Israel war isn’t only being fought on the battlefield; there’s an online front as well. Today Forbes reports that the U.S. isn’t well prepared for any potential destructive cyberattacks by Iran. On the flip side, that nation is so concerned about U.S. and Israeli cyber and online psychological warfare that it closed off its internet, making it largely unusable across the country.
In the disinformation space, AI has been a key weapon in amplifying false narratives. Google’s Veo 3 model has been at the center of some campaigns, according to GetReal Security, which tracks faked or manipulated content online. Emmanuelle Saliba, chief investigative officer at GetReal, told Forbes that Veo 3 is behind “a slew of fabricated hyper realistic fakes circulating claiming to depict scenes from the Israel-Iran conflict.”
Google hadn’t responded to a request for comment at the time of publication.
“This is perhaps the first time we’ve seen generative AI be used at scale during a conflict,” Saliba said. “It’s also being used to replicate missile strikes, sometimes night ones, which are particularly challenging to verify using visual investigation tactics.
“When both countries deny an incident, how can we be sure of what we are seeing? Technology will be key.”
She noted that Veo 3 images include an invisible watermark designed to make it easy to detect AI-created content. She described it as “pretty robust.”
That’s not to say the model isn’t open to abuse, in part because you only know the watermark is there with software that’s looking for it. But fixing that isn’t as easy as just adding a visible watermark. “The perceptible watermarks are nice because everyone can see them. But they are also relatively easy to remove and/or mimic, making them less secure,” says Hany Farid, cofounder at GetReal. “A benefit of the imperceptible watermark is that they are more difficult but not impossible to remove. The drawback is that we need customized software to scan content for their presence.”
Last week, the BBC reported it had found dozens of AI-generated videos attempting to prove the effectiveness of Iran’s response to Israel’s attacks. These included fake clips showing the aftermath of Iranian strikes; another showed missiles raining down on Tel Aviv. On the other side, pro-Israel accounts have been posting old protest clips, falsely claiming they show current dissent against Iran’s regime.
The efficacy of such disinformation campaigns is difficult to measure, even as these videos amass tens of millions of views. In a world where a president openly says both Iran and Israel “don’t know what the fuck they’re doing,” the content with the most impact still appears to come from real people.
Got a tip on surveillance or cybercrime? Get me on Signal at +1 929-512-7964.