TA06 "Develop Content"

Belongs to phase P02 Prepare

Summary: Create or acquire text, images, and other content

TA06 Tasks
disarm_id | name | summary
TK0017 content creation content creation
TK0018 content appropriation content appropriation
TK0036 OPSEC for TA06 OPSEC for TA06

TA06 Techniques
disarm_id | name | summary
T0015 Create hashtags and search artifacts Create one or more hashtags and/or hashtag groups. Many incident-based campaigns will create hashtags to promote their fabricated event. Creating a hashtag for an incident can have two important effects: 1. it creates a perception of reality around the event (surely only "real" events would be discussed in a hashtag; after all, the event has a name), and 2. it publicizes the story more widely through trending lists and search behavior. An asset is needed to direct/control/manage the "conversation" connected to launching a new incident/campaign, with a new hashtag for each applicable social media site.
T0019 Generate information pollution Flood social channels; drive traffic/engagement to all assets; create an aura/sense/perception of pervasiveness or consensus (for, against, or both simultaneously) around an issue or topic. "Nothing is true, but everything is possible." Akin to an astroturfing campaign.
T0019.001 Create fake research Create fake academic research. Example: fake social science research is often aimed at hot-button social issues such as gender, race, and sexuality. Fake science research can target the climate-science debate or promote pseudoscience such as anti-vaxx claims.
T0019.002 Hijack Hashtags Hashtag hijacking occurs when users “[use] a trending hashtag to promote topics that are substantially different from its recent context” (VanDam and Tan, 2016) or “to promote one’s own social media agenda” (Darius and Stephany, 2019).
T0023 Distort facts Change, twist, or exaggerate existing facts to construct a narrative that differs from reality. Examples: images and ideas can be distorted by being placed in an improper context.
T0023.001 Reframe Context Reframing context refers to removing an event from its surrounding context to distort its intended meaning. Rather than deny that an event occurred, reframing context frames an event in a manner that may lead the target audience to draw a different conclusion about its intentions.
T0023.002 Edit Open-Source Content An influence operation may edit open-source content, such as collaborative blogs or encyclopedias, to promote its narratives on outlets with existing credibility and audiences. Editing open-source content may allow an operation to post content on platforms without dedicating resources to the creation and maintenance of its own assets.
T0084 Reuse Existing Content When an operation recycles content from its own previous operations or plagiarizes from external operations. An operation may launder information to conserve resources that would have otherwise been utilized to develop new content.
T0084.001 Use Copypasta Copypasta refers to a piece of text that has been copied and pasted multiple times across various online platforms. A copypasta’s final form may differ from its original source text as users add, delete, or otherwise edit the content as they repost the text.
T0084.002 Plagiarize Content An influence operation may take content from other sources without proper attribution. This content may be either misinformation content shared by others without malicious intent but now leveraged by the campaign as disinformation or disinformation content from other sources.
T0084.003 Deceptively Labeled or Translated An influence operation may take authentic content from other sources and add deceptive labels or deceptively translate the content into other languages.
T0084.004 Appropriate Content An influence operation may take content from other sources with proper attribution. This content may be either misinformation content shared by others without malicious intent but now leveraged by the campaign as disinformation or disinformation content from other sources. Examples include the appropriation of content from one inauthentic news site to another inauthentic news site or network in ways that align with the originator's licensing or terms of service.
T0085 Develop Text-based Content Creating and editing false or misleading text-based artifacts, often aligned with one or more specific narratives, for use in a disinformation campaign.
T0085.001 Develop AI-Generated Text AI-generated text refers to synthetic text composed by computers using text-generating AI technology. Autonomous generation refers to content created by a bot without human input, also known as bot-created content generation. Autonomous generation represents the next step in automation after language generation and may lead to automated journalism. An influence operation may use read fakes or autonomous generation to quickly develop and distribute content to the target audience.
T0085.002 Develop False or Altered Documents Develop False or Altered Documents
T0085.003 Develop Inauthentic News Articles An influence operation may develop false or misleading news articles aligned to their campaign goals or narratives.
T0086 Develop Image-based Content Creating and editing false or misleading visual artifacts, often aligned with one or more specific narratives, for use in a disinformation campaign. This may include photographing staged real-life situations, repurposing existing digital images, or using image creation and editing technologies.
T0086.001 Develop Memes Memes are one of the most important single artefact types in all of computational propaganda. In this framework, "memes" denotes the narrow image-based definition. But that naming is no accident, as these items have most of the important properties of Dawkins' original conception of a self-replicating unit of culture. Memes pull together reference and commentary, image and narrative, emotion and message. Memes are a powerful tool and the heart of modern influence campaigns.
T0086.002 Develop AI-Generated Images (Deepfakes) Deepfakes refer to AI-generated falsified photos, videos, or soundbites. An influence operation may use deepfakes to depict an inauthentic situation by synthetically recreating an individual’s face, body, voice, and physical gestures.
T0086.003 Deceptively Edit Images (Cheap fakes) Cheap fakes utilize less sophisticated measures of altering an image, video, or audio, for example slowing, speeding, or cutting footage to create a false context surrounding an image or event.
T0086.004 Aggregate Information into Evidence Collages Image files that aggregate positive evidence (Joan Donovan)
T0087 Develop Video-based Content Creating and editing false or misleading video artifacts, often aligned with one or more specific narratives, for use in a disinformation campaign. This may include staging videos of purportedly real situations, repurposing existing video artifacts, or using AI-generated video creation and editing technologies (including deepfakes).
T0087.001 Develop AI-Generated Videos (Deepfakes) Deepfakes refer to AI-generated falsified photos, videos, or soundbites. An influence operation may use deepfakes to depict an inauthentic situation by synthetically recreating an individual’s face, body, voice, and physical gestures.
T0087.002 Deceptively Edit Video (Cheap fakes) Cheap fakes utilize less sophisticated measures of altering an image, video, or audio, for example slowing, speeding, or cutting footage to create a false context surrounding an image or event.
T0088 Develop Audio-based Content Creating and editing false or misleading audio artifacts, often aligned with one or more specific narratives, for use in a disinformation campaign. This may include creating completely new audio content, repurposing existing audio artifacts (including cheap fakes), or using AI-generated audio creation and editing technologies (including deepfakes).
T0088.001 Develop AI-Generated Audio (Deepfakes) Deepfakes refer to AI-generated falsified photos, videos, or soundbites. An influence operation may use deepfakes to depict an inauthentic situation by synthetically recreating an individual’s face, body, voice, and physical gestures.
T0088.002 Deceptively Edit Audio (Cheap fakes) Cheap fakes utilize less sophisticated measures of altering an image, video, or audio, for example slowing, speeding, or cutting footage to create a false context surrounding an image or event.
T0089 Obtain Private Documents Procuring documents that are not publicly available, by whatever means -- whether legal or illegal, highly-resourced or less so. These documents can include authentic non-public documents, authentic non-public documents that have been altered, or inauthentic documents intended to appear as if they are authentic non-public documents. All of these types of documents can be "leaked" during later stages in the operation.
T0089.001 Obtain Authentic Documents Procure authentic documents that are not publicly available, by whatever means -- whether legal or illegal, highly-resourced or less so. These documents can be "leaked" during later stages in the operation.
T0089.002 Create Inauthentic Documents Create inauthentic documents intended to appear as if they are authentic non-public documents. These documents can be "leaked" during later stages in the operation.
T0089.003 Alter Authentic Documents Alter authentic documents (public or non-public) to achieve campaign goals. The altered documents are intended to appear as if they are authentic and can be "leaked" during later stages in the operation.

TA06 Counters
disarm_id | name | summary
C00014 Real-time updates to fact-checking database Update fact-checking databases and resources in real time. Especially important for time-limited events like natural disasters.
C00032 Hijack content and link to truth-based info Link to platform
C00071 Block source of pollution Block websites, accounts, groups etc connected to misinformation and other information pollution.
C00072 Remove non-relevant content from special interest groups - not recommended Check special-interest groups (e.g. medical, knitting) for unrelated and misinformation-linked content, and remove it.
C00074 Identify and delete or rate limit identical content Identify content that is identical or near-identical across accounts, then delete it or rate-limit its spread (see the sketch after this list).
C00075 normalise language normalise the language around disinformation and misinformation; give people the words for artifact and effect types.
C00076 Prohibit images in political discourse channels Make political discussion channels text-only.
C00078 Change Search Algorithms for Disinformation Content Includes “change image search algorithms for hate groups and extremists” and “Change search algorithms for hate and extremist queries to show content sympathetic to opposite side”
C00080 Create competing narrative Create counternarratives, or narratives that compete in the same spaces as misinformation narratives. Could also be used as a degrade effect.
C00081 Highlight flooding and noise, and explain motivations Discredit by pointing out the "noise" and informing public that "flooding" is a technique of disinformation campaigns; point out intended objective of "noise"
C00082 Ground truthing as automated response to pollution Also inoculation.
C00084 Modify disinformation narratives, and rebroadcast them Includes “poison pill recasting of message” and “steal their truths”. Many techniques involve promotion which could be manipulated. For example, online fundraisers or rallies could be advertised, through compromised or fake channels, as being associated with "far-up/down/left/right" actors. "Long Game" narratives could be treated in a similar way with negative connotations. Can also replay technique T0003.
C00085 Mute content Rate-limit disinformation content. This reduces its effects whilst not running afoul of censorship concerns. Online archives of content (archives of websites, social media profiles, media, copies of published advertisements; or archives of comments attributed to bad actors, as well as anonymized metadata about users who interacted with them and analysis of the effect) are useful for intelligence analysis and public transparency, but will need similar muting or tagging/shaming as associated with bad actors.
C00086 Distract from noise with addictive content Example: interject addictive links or content into discussions of disinformation materials and measure a "conversion rate" of users who engage with your content and move away from the social media channel's "information bubble" around the disinformation item. Use bots to amplify and upvote the addictive content.
C00087 Make more noise than the disinformation
C00091 Honeypot social community Set honeypots, e.g. communities, in networks likely to be used for disinformation.
C00094 Force full disclosure on corporate sponsor of research Accountability move: make sure research is published with its funding sources.
C00106 Click-bait centrist content Create emotive centrist content that gets more clicks
C00107 Content moderation Includes social media content take-downs, e.g. Facebook or Twitter content take-downs.
C00142 Platform adds warning label and decision point when sharing content Includes “this has been disproved: do you want to forward it?”. Includes a “hey, this story is old” popup when messaging with an old URL - this assumes that this technique is based on visits to a URL shortener or a captured news site that can publish a message of our choice. Includes “mark clickbait visually”.
C00165 Ensure integrity of official documents e.g. for leaked legal documents, use court motions to limit future discovery actions
C00202 Set data 'honeytraps' Set honeytraps in content likely to be accessed for disinformation.
C00219 Add metadata to content that’s out of the control of disinformation creators Steganography. Adding dates, signatures, etc. to stop issues such as photo relabelling.
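
Several of the counters above (e.g. C00074 identify and delete or rate limit identical content, C00085 mute content) depend on being able to spot the same text being reposted at scale. Below is a minimal sketch of one way to fingerprint and group near-identical posts; the shape of the posts records, the threshold value, and the normalisation rules are illustrative assumptions, not anything specified by the DISARM framework.

```python
import hashlib
import re
from collections import defaultdict

def fingerprint(text: str) -> str:
    # Lower-case, strip punctuation, and collapse whitespace so that
    # trivially edited copies of a post map to the same fingerprint.
    normalised = re.sub(r"[^\w\s]", "", text.lower())
    normalised = " ".join(normalised.split())
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

def find_identical_content(posts, threshold=10):
    # Group posts by fingerprint; any group with `threshold` or more
    # copies is a candidate for deletion or rate limiting (C00074/C00085).
    groups = defaultdict(list)
    for post in posts:
        groups[fingerprint(post["text"])].append(post["id"])
    return {h: ids for h, ids in groups.items() if len(ids) >= threshold}

# Hypothetical usage: `posts` would come from whatever feed or archive
# is being monitored.
posts = [{"id": 1, "text": "Nothing is true!"}, {"id": 2, "text": "nothing is true"}]
print(find_identical_content(posts, threshold=2))
```

Exact-hash grouping only catches copy-paste reuse; fuzzier matching (e.g. shingling or embeddings) would be needed for edited copypasta, at the cost of more false positives.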

TA06 Detections
disarm_id | name | summary
F00028 Associate a public key signature with government documents (see the signing sketch after this list)
F00029 Detect proto-narratives, e.g. from RT, Sputnik
F00030 Early detection and warning - reporting of suspect content
F00031 Educate on how to identify information pollution Strategic planning included, as inoculating the population has strategic value.
F00033 Fake websites: add transparency on business model
F00034 Flag the information spaces so people know about active flooding effort
F00035 Identify repeated narrative DNA
F00036 Looking for A/B testing in unregulated channels
F00037 News content provenance certification. Original comment: Shortcomings: intentional falsehood. Doesn't solve accuracy. Can't be mandatory. Technique should be framed in terms of "strategic inoculation", raising the standards of what people expect in terms of evidence when consuming news.
F00038 Social capital as attack vector The original intention is unclear; the techniques listed (10, 39, 43, 57, 61) reflect one interpretation: track unwitting agents who fall into the adversary's trap and impose a cost on financing, reposting, or otherwise helping the adversary, via public shaming or other means.
F00039 Standards to track image/video deepfakes - industry
F00040 Unalterable metadata signature on origins of image and provenance
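
F00028 (associate a public key signature with government documents) and F00040 (unalterable metadata signature on origins of image and provenance) both rest on the same mechanism: hash the document or media bytes and sign the digest with a key the disinformation creator does not control, so later alteration or relabelling becomes detectable. The sketch below uses the Python cryptography package's Ed25519 API purely as an illustration; key management, distribution of the public key, and the choice of signature scheme are assumptions, not something the framework prescribes.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def sign_file(path: str, private_key: Ed25519PrivateKey) -> bytes:
    # Hash the file's bytes and sign the digest (issuer side).
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    return private_key.sign(digest)

def verify_file(path: str, signature: bytes, public_key) -> bool:
    # Re-hash the file and check the signature; returns False if the
    # content was altered or the signature does not match the key.
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

# Issuer: generate a keypair, sign the original document, and publish the
# public key out of band (e.g. on an official government site).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
```

Verification only proves the bytes match what the key holder signed; it says nothing about whether the signed content was accurate in the first place, which matches the shortcomings noted under F00037.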