Media and Disinformation Handling
Learn how to identify, counter, and contain false reports and hostile narratives while coordinating accurate information flow with pods and the public.
Info
This course is in Level 3: Dispatcher Certification (Verified Dispatcher).
It teaches how to identify false reports, hostile narratives, and misinformation campaigns while ensuring secure, accurate updates to the public and pods.
Why It Matters
False ICE sightings, fake police activity alerts, or targeted disinformation can waste resources, create panic, or discredit the network.
Strong verification and messaging protocols keep trust intact and ensure dispatch only mobilizes when real threats exist.
What You'll Learn
- Verification Standards: A matrix for cross-checking reports by type.
- Spotting Disinformation: Recognizing bot patterns, narrative flags, and deepfake tactics.
- Secure Media Handling: Sharing only what's safe, avoiding leaks or predictable patterns.
- Counter-Messaging Protocols: Quiet corrections, public neutral statements, or escalated responses.
- Escalation Flow: Knowing when to involve pod leads or zone admins.
Verification Matrix
| Report Type | Minimum Verification Required | Tools/Methods |
|---|---|---|
| ICE Sightings | Visual proof + trusted witness confirmation | Encrypted photo/video, location timestamp |
| Police Activity | Two pods cross-confirm via independent means | Radio or LoRa check + GPS ping |
| Viral Social Posts | Reverse image search + timestamp validation | Google Lens, InVID, EXIF metadata check |
Rule: Never issue alerts without at least two independent checks.
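To make the rule concrete, here is a minimal sketch of the matrix expressed as data with the two-check rule applied. The report-type keys, check names, and function name are illustrative assumptions, not a prescribed tool.

```python
# A minimal sketch of the verification matrix as data; keys and check names
# are illustrative and should be adapted to your pod's actual procedures.

VERIFICATION_MATRIX = {
    "ice_sighting": {"visual_proof", "trusted_witness"},
    "police_activity": {"pod_confirmation_1", "pod_confirmation_2"},
    "viral_social_post": {"reverse_image_search", "timestamp_validation"},
}

def ready_to_alert(report_type: str, completed_checks: set[str]) -> bool:
    """Apply the rule: never alert without at least two independent checks,
    and every required check for this report type must be satisfied."""
    required = VERIFICATION_MATRIX.get(report_type, set())
    return len(completed_checks) >= 2 and required.issubset(completed_checks)

# Example: an ICE sighting with only visual proof is not enough to alert.
print(ready_to_alert("ice_sighting", {"visual_proof"}))                     # False
print(ready_to_alert("ice_sighting", {"visual_proof", "trusted_witness"}))  # True
```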
Spotting Disinformation
Bot or Troll Signs
- Numeric or generic usernames.
- Burst posting with identical language.
- Activity at hours inconsistent with local timezones.
Narrative Flags
- Overly emotional framing or fear bait.
- Contradictory or unverifiable details.
- Focus on delegitimizing pods or dispatch.
Deepfake or Edited Content
- Shadows or lip-sync mismatches.
- Metadata inconsistencies (date/location mismatches).
- Pixelation around key elements (common in manipulated media).
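Metadata inconsistencies can be checked locally before trusting a file. Below is a minimal sketch using Pillow, a common Python imaging library; the filename is hypothetical, and EXIF data can be stripped or forged, so treat it as one signal among several, not proof.

```python
# A minimal sketch of a basic EXIF metadata check, assuming Pillow is
# installed (pip install Pillow) and a local file "report_photo.jpg"
# (hypothetical). EXIF can be stripped or forged: one signal, not proof.

from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(path: str) -> dict:
    """Return human-readable EXIF tags (date, camera model, etc.) if present."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = read_exif("report_photo.jpg")
# Compare the claimed incident time and place against what the file itself says.
print(tags.get("DateTime"), tags.get("Model"))
```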
Counter-Messaging Protocol
- Level 1 (Quiet Correction): Alert trusted pods and correct internally without public attention.
- Level 2 (Neutral Public Statement): "We have no verified reports of [incident] at this time. Verification ongoing."
- Level 3 (Escalated Response): Pod leads and admins coordinate a formal correction or press release.
Never "call out" individuals unless cleared by leadership.
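Where a Level 2 statement is needed, keeping the wording templated helps avoid accidental confirmations. A minimal sketch that fills the template above; the function name and example incident are illustrative.

```python
# A minimal sketch of the Level 2 neutral statement, filled from the template
# wording above. Keep the language factual and confirm nothing.

def neutral_statement(incident: str) -> str:
    return (f"We have no verified reports of {incident} at this time. "
            "Verification ongoing.")

print(neutral_statement("a police checkpoint near Main Street"))
```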
Secure Information Sharing
- Never share exact pod movements, verification methods, or times on public channels.
- Use encrypted backchannels (Signal, Matrix, or PGP) for sensitive confirmations.
- Rotate verification routes and responders to avoid predictable patterns.
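One way to avoid predictable patterns is to select responders unpredictably rather than in a fixed order. A minimal sketch assuming a hypothetical roster list; adapt it to however your pods actually assign duties.

```python
# A minimal sketch of unpredictable responder selection. A fixed round-robin
# order can be learned by an observer; a cryptographically random pick cannot.

import secrets

RESPONDERS = ["pod_a", "pod_b", "pod_c"]  # hypothetical roster

def next_responder(roster: list[str]) -> str:
    """Pick a responder without a repeating order an observer could learn."""
    return secrets.choice(roster)

print(next_responder(RESPONDERS))
```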
Decision Flow (Visual Guide)
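The flowchart itself is not reproduced here; as a rough stand-in, the sketch below expresses one possible flow in code. The branch conditions are illustrative assumptions mapped to the verification rule and response levels above.

```python
# A sketch of one possible dispatch decision flow; branch conditions are
# illustrative assumptions, not a prescribed policy.

def decide_response(independent_checks: int, report_confirmed: bool,
                    circulating_publicly: bool, targets_network: bool) -> str:
    if independent_checks < 2:
        # Below the verification threshold: keep checking, do not alert.
        return "hold: continue verification"
    if report_confirmed:
        return "alert: mobilize per dispatch protocol"
    if not circulating_publicly:
        return "level 1: quiet correction to trusted pods"
    if targets_network:
        return "level 3: escalated response via pod leads and zone admins"
    return "level 2: neutral public statement"

# Example: a debunked viral post that is already public but not hostile.
print(decide_response(2, report_confirmed=False,
                      circulating_publicly=True, targets_network=False))
```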
Simulation Drill
"Scenario: A viral post claims thereβs a police checkpoint near Main Street.
- Verify with ground teams or trusted pods.
- Reverse search the image and check timestamps.
- Determine the response level (quiet, neutral, or escalated).
- Draft messaging using templates."
Quick Action Steps
- Always double-verify every report before issuing alerts.
- Use secure, redundant channels (Signal, radio, Matrix) for intel sharing.
- Keep neutral, factual language in all public updates.
- Rotate verification duties hourly during high-volume events to reduce errors.
Risks & Red Lines
- Never share unverified reports, even privately.
- Never confirm operational details while denying or debunking reports.
- Avoid identifiable patterns in how verification is done; keep methods discreet.
Checklist
- Can verify or debunk reports using at least two independent checks.
- Can run basic image verification (EXIF data, reverse search).
- Maintains a directory of trusted verification contacts.
- Knows when to issue neutral updates vs. escalate to admins.
- Has practiced secure handoffs and messaging protocols.
Resource Appendix
- Verification Field Guide (reverse image tools, bot detection checklist).
- Neutral Statement Templates for counter-messaging.
- Escalation Flowchart (for quick reference during dispatch).
- Encrypted Reporting Template (for pod-to-pod confirmations).
Knowledge Check
- Why is strict verification critical before issuing alerts about ICE or police activity? Dispatch should only issue alerts after at least two independent checks are confirmed.
- Which methods are part of verifying reports?
- Which behavior is a common indicator of bots or troll accounts spreading disinformation?
- Deepfake or edited content often shows shadow mismatches, metadata inconsistencies, or pixelation around key elements.
- What is the correct counter-messaging approach for unverified reports?
- Which security practices help protect verification and messaging?
- Which response level involves issuing a neutral public statement like, "We have no verified reports of [incident] at this time. Verification ongoing."?
- Verification duties should rotate hourly during high-volume events to reduce fatigue and errors.
- Which red lines must dispatchers follow when handling disinformation?