The broader government and commercial cyber threat intelligence community is missing an opportunity not only to defuse hacktivist propaganda, but also to collect significant cyber behavioral threat information on adversarial hacktivist collectives and personalities that often influence global policymaking.
With the return and rise of so-called patriotic hacktivist collectives, including nation states masquerading as Islamic hacktivist brands, there is a growing need to introduce more rigor to attribution, because these hacktivist collectives, with their unclear origins and players, often have an unanticipated impact on the daily geopolitical mix.
While most cyber threat intelligence involves collection and analysis of forensic artifacts and observed anomalous or malicious activity, manipulating hacktivist propaganda content creates a privileged moment to collect unique cyber behavioral threat information.
Hacktivist propaganda content is generally created either to masquerade as a hacktivist collective or to promote a genuine hacktivist brand or reputation.
Manipulating that content so that the hacktivist brand is ridiculed, for example, would likely defuse the influence or disparage the reputation of that brand.
More importantly, channeling the attention of that hacktivist collective or its personalities would likely cause them to engage or respond in some manner, which creates an opportunity for collection. Rather than just observing the controlled content created by these hacktivist collectives and personalities, or by actors masquerading as hacktivists, we would likely see behavioral responses reflecting the emotional and cognitive vulnerabilities of those actors, whoever they are.
In my experience, that’s when actors are most likely to make mistakes.
This article will introduce a simple model for evaluating content online for manipulation and then behaviorally manipulating that content.
This model will be applied to content created by a suspected Gaza-based hacktivist group that targeted Israel in approximately 2020, known as the Jerusalem Electronic Army (JEA).
This is a practitioner’s model, based on my experience as one of the few former FBI profilers in the world specializing in cyber deception and online influence. It is also grounded in multidisciplinary theoretical and analytical frameworks of content evaluation, communication, and emotional response.
A Timely Historical Model of Islamic Hacktivist Propaganda
The Jerusalem Electronic Army created content online several years ago claiming to have the ability to access and compromise Israeli critical national infrastructure.
In approximately April 2020, Israeli security companies suggested the group responsible for those claims on several social media platforms was a Palestine-based Islamic hacktivist group likely affiliated with the Gaza Cybergang.
The content from JEA primarily included photos and video of an unknown individual, masked and uniformed in black, speaking Arabic at a desk with a laptop computer and making threats, including the claim of compromising an Israeli water treatment plant. Israeli authorities claimed at that time that there had only been attempts to gain unauthorized access to the treatment plant’s network, with no actual known or reported compromise.
The Gaza Cybergang has not been as closely aligned publicly with HAMAS as other suspected pro-HAMAS hacktivist collectives, but the talent pool in Gaza is limited, so arguably any of these hacktivist collectives believed to be based in Gaza could be affiliated with HAMAS actors. An Atlantic Council report at the time suggested the same content from JEA could be an “attribution front” for Iran, masquerading as HAMAS-aligned or affiliated hacktivist collectives.
JEA may have been largely ignored because there was no known evidence of successful attacks against significant targets and no identified malware.
But even in a historical context, where a mostly unknown hacktivist group like JEA has stopped creating content, there is an opportunity to collect cyber behavioral threat information on the users of that content, to further inform attribution and cyber threat intelligence analysis.
A Theoretical Framework for Evaluation of Jerusalem Electronic Army Content
Warranting theory helps explain how people evaluate content online, given the potential for manipulation by whoever controls that content or has created that content.
People generally judge the warranting value of content online by their perception or evaluation of how likely it is that content has been manipulated in some way to some degree.
If someone believes content has not been manipulated, they often assign high warranting value, meaning they are more likely to trust that content or they consider that content to be authentic.
As an example, researchers studied whether eBay consumers would place more bids on, and make more purchases of, a package of golf balls listed with either a stock photograph or a photograph of the package taken on one of the researchers’ basement rugs. The study found significantly more bids and purchases when the golf balls were photographed on the basement rug. Researchers concluded that consumers believed the rug photograph was more likely to come from a real person, whereas a stock photograph could suggest the listing was some kind of scam. In other words, consumers evaluated the rug photograph as less likely to have been manipulated by whoever created and controlled it.
When evaluating JEA content within this theoretical framework, arguably there are no identifying characteristics of the individual in this screenshot.
Every cue in this screenshot, including the desk, the laptop, the background, the chair, and the uniform with its mask and hat, could simply be replicated by anyone anywhere.
The logo and font could arguably be replicated or manipulated easily, whether by creating a similar background, removing written content, or rewriting text in the same font. While audiences evaluating this content are likely to believe that a real person is wearing that uniform and being photographed or filmed making statements, this person could be anyone representing any organization or collective that wants to masquerade as another organization. The simpler the content, the more likely that content has been or could be manipulated.
As a contrast, if this same content had been created with what appeared to be a distinct Gaza landmark in the background, most audiences would be more likely to believe it was created in Gaza.
That landmark would be much more difficult to manipulate into the content, so audiences would likely consider it less likely that the content was manipulated by someone masquerading as a Gaza-based hacktivist collective, for example.
Every additional cue, such as camera movement showing other people or activity in the vicinity of the landmark, or ambient noise such as wind, makes the content appear more authentic and more difficult to manipulate.
This other content from JEA likewise appears to be commonly available clipart or ‘hacker’ symbology that audiences have likely seen often in all kinds of content online.
While this content may have been created to cause audiences to fear JEA, more than likely it was used because it was available, cheap, or appeared to be what’s normal.
Applying Other Fields of Research to Warranting Theory
Researchers and activists found less than a decade ago that creating or repurposing content created by Islamic State representatives and personalities, to make it humorous to audiences outside Islamic State but offensive to audiences inside it, channeled the attention not only of those Islamic State personalities but also of the audiences they were trying to reach. Further, the repurposed or manipulated content, such as memes of Islamic State soldiers in silly outfits or YouTube videos of an actor portraying the Islamic State leader getting a lap dance, was believed to have defused the influence of Islamic State communications and messaging efforts. More recent research has confirmed that manipulating or creating emotional content designed to anger a target audience often drives that audience to share the content.
As an example, above I manipulated one sample of JEA branding by simply crossing out part of their title and adding a mildly humorous replacement title to suggest we know who might really be behind this hacktivist collective. This manipulated proof-of-concept content also suggests whoever is part of this collective may not be as skilled as their corporate cyberterrorist branding implies.
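As a rough sketch of how simple this kind of manipulation can be, the Python snippet below uses the Pillow imaging library to cross out part of a banner and add a replacement title. The file name, coordinates, and replacement text are hypothetical placeholders, not the actual JEA content or any tooling I used.

# A minimal sketch of the banner manipulation described above, using
# Pillow. The file name, coordinates, and text are all hypothetical.
from PIL import Image, ImageDraw

# Load a local copy of the hacktivist banner (placeholder file name).
img = Image.open("jea_banner.png").convert("RGB")
draw = ImageDraw.Draw(img)

# Cross out part of the original title with a thick red line.
# Coordinates would be chosen by eye for the real image.
draw.line([(40, 60), (360, 60)], fill=(200, 0, 0), width=6)

# Add a mildly humorous replacement title beneath the strikethrough.
draw.text((40, 80), "(replacement title here)", fill=(200, 0, 0))

img.save("jea_banner_manipulated.png")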
When you channel the attention of hacktivist personalities or collectives to this kind of manipulated content, you generally create a need for them to find out who created it and what they know. This is the privileged moment where you can stage other methods to collect and observe how they respond behaviorally to this kind of targeted reputational manipulation.
In my experience, even seasoned cybercriminals who should know better than to click on any link have clicked on links to content I have manipulated, because they had a need to find out more. The drive of curiosity is powerful enough that people will risk danger to satisfy it.
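To make that collection step concrete, the sketch below shows one minimal, hypothetical way to log who follows a staged link: a small HTTP endpoint that records the source IP, requested path, User-Agent, and Referer of each visit. Everything in it, including the port and log file name, is an illustrative assumption about tradecraft, not a description of any actual FBI or commercial tooling.

# A minimal, hypothetical click-logging endpoint for a staged link.
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer

logging.basicConfig(
    filename="click_telemetry.log",
    level=logging.INFO,
    format="%(asctime)s %(message)s",
)

class ClickLogger(BaseHTTPRequestHandler):
    def do_GET(self):
        # Record behavioral signals from whoever fetched the link:
        # source IP, requested path, User-Agent, and Referer.
        logging.info(
            "ip=%s path=%s ua=%r referer=%r",
            self.client_address[0],
            self.path,
            self.headers.get("User-Agent", ""),
            self.headers.get("Referer", ""),
        )
        # Serve an innocuous placeholder page in response.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body></body></html>")

    def log_message(self, fmt, *args):
        # Suppress default console logging; the file log is enough.
        pass

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ClickLogger).serve_forever()

Even this small amount of telemetry, correlated with when and where the manipulated content was seeded, could feed the kind of observational and forensic collection described below.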
When you capture and hold their attention, you have a public stage on which everyone can question their capabilities.
In such a heightened, vulnerable space online, there is likely a need for that collective or its personalities to respond in some way, if only to gather information on whoever manipulated their content, or to vocalize some response that may diminish the influence of the altered content. This is where there is a great opportunity to collect cyber behavioral threat information that is both observational and forensic, depending on collection.
Strategically, this public reputational issue may influence the leadership of that collective or controlling organization to question the autonomy of their cyber operators.
Tim Pappa is a certified former FBI profiler from the Behavioral Analysis Unit, one of the few profilers in the world specializing in cyber deception and online influence. Pappa was also previously assigned to the FBI Cyber Division’s Cyberterrorism Unit, where he oversaw the FBI’s cyber threat programs focused on Middle East and Southeast Asia cyberterrorism.
Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.