Version 1.1

Media and Disinformation Handling

Learn how to identify, counter, and contain false reports and hostile narratives while coordinating accurate information flow with pods and the public.

18 min read · Qualified Lesson

Observation & Legal Track
Pod Leadership & Organizing
Movement Strategy & Ethics

Info

This course is in Level 3: Dispatcher Certification (Verified Dispatcher).
It teaches how to identify false reports, hostile narratives, and misinformation campaigns while ensuring secure, accurate updates to the public and pods.


Why It Matters

False ICE sightings, fake police activity alerts, or targeted disinformation can waste resources, create panic, or discredit the network.
Strong verification and messaging protocols keep trust intact and ensure dispatch only mobilizes when real threats exist.


What You'll Learn

  1. Verification Standards – A matrix for cross-checking reports by type.
  2. Spotting Disinformation – Recognizing bot patterns, narrative flags, and deepfake tactics.
  3. Secure Media Handling – Sharing only what’s safe, avoiding leaks or predictable patterns.
  4. Counter-Messaging Protocols – Quiet corrections, public neutral statements, or escalated responses.
  5. Escalation Flow – Knowing when to involve pod leads or zone admins.

Verification Matrix

Report Type | Minimum Verification Required | Tools/Methods
ICE Sightings | Visual proof + trusted witness confirmation | Encrypted photo/video, location timestamp
Police Activity | Two pods cross-confirm via independent means | Radio or LoRa check + GPS ping
Viral Social Posts | Reverse image search + timestamp validation | Google Lens, InVID, EXIF metadata check

Rule: Never issue alerts without at least two independent checks.
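
A minimal sketch of how the matrix and the two-check rule could be encoded, assuming a simple in-memory report format; the report types, field names, and helper below are illustrative, not an existing dispatch schema:

```python
# Minimal sketch of the verification matrix plus the two-check rule.
# Report types and field names are assumptions, not an existing dispatch schema.

MIN_INDEPENDENT_SOURCES = 2

# Condensed form of the matrix above: which methods each report type needs.
REQUIRED_METHODS = {
    "ice_sighting":    {"visual_proof", "trusted_witness"},
    "police_activity": {"pod_confirmation"},          # must come from two different pods
    "viral_post":      {"reverse_image_search", "timestamp_validation"},
}

def can_issue_alert(report_type, confirmations):
    """confirmations: list of dicts like {"method": ..., "source": ...}."""
    required = REQUIRED_METHODS[report_type]
    methods = {c["method"] for c in confirmations}
    sources = {c["source"] for c in confirmations}
    return required.issubset(methods) and len(sources) >= MIN_INDEPENDENT_SOURCES

# One dispatcher's reverse-image check alone is not enough to alert.
checks = [{"method": "reverse_image_search", "source": "dispatcher_a"}]
print(can_issue_alert("viral_post", checks))  # False
```

The key design point is counting distinct sources rather than raw confirmations, so two messages from the same pod never satisfy the rule on their own.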


Spotting Disinformation

Bot or Troll Signs

  • Numeric or generic usernames.
  • Burst posting with identical language.
  • Activity at hours inconsistent with local timezones (a rough heuristic sketch follows this list).
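
A rough heuristic sketch of those three signs, assuming posts are already collected as simple records; the thresholds and field names are assumptions, and a high count only flags an account for closer review, it proves nothing on its own:

```python
import re
from collections import Counter

# Rough heuristic for the three bot/troll signs above (illustrative only).
# Thresholds and field names are assumptions, not a vetted detection model.

def bot_signal_count(username, posts, local_hours=range(7, 24)):
    """posts: list of dicts like {"text": ..., "hour_local": ...}."""
    signals = 0

    # 1. Numeric or generic usernames (e.g. "user12345678").
    if re.fullmatch(r"(user)?\d{6,}", username.lower()):
        signals += 1

    # 2. Burst posting with identical language.
    texts = Counter(p["text"] for p in posts)
    if texts and texts.most_common(1)[0][1] >= 3:
        signals += 1

    # 3. Activity at hours inconsistent with the claimed local timezone.
    off_hours = sum(1 for p in posts if p["hour_local"] not in local_hours)
    if posts and off_hours / len(posts) > 0.5:
        signals += 1

    return signals  # treat 2+ as "needs a closer look", never as proof

print(bot_signal_count("user20250817", [{"text": "RAID NOW!!", "hour_local": 3}] * 4))  # 3
```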

Narrative Flags

  • Overly emotional framing or fear bait.
  • Contradictory or unverifiable details.
  • Focus on delegitimizing pods or dispatch.

Deepfake or Edited Content

  • Mismatched shadows or out-of-sync lip movement.
  • Metadata inconsistencies (date/location mismatches); an EXIF-check sketch follows this list.
  • Pixelation around key elements (common in manipulated media).
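
For the metadata check, a short sketch using the Pillow imaging library (recent versions) to pull the EXIF fields worth comparing against a post's claimed time and place; the comparison itself is left to the dispatcher, and the choice of fields is only an assumption about what is useful:

```python
from PIL import Image, ExifTags

def exif_summary(path):
    """Pull the EXIF fields most useful for spotting date/location mismatches."""
    img = Image.open(path)
    exif = img.getexif()
    exif_ifd = exif.get_ifd(ExifTags.IFD.Exif)      # capture-time fields live here
    gps_ifd = exif.get_ifd(ExifTags.IFD.GPSInfo)    # GPS fields live here
    return {
        "capture_time": exif_ifd.get(ExifTags.Base.DateTimeOriginal),  # vs. claimed time
        "gps_raw": dict(gps_ifd) or None,                              # vs. claimed location
        "software": exif.get(ExifTags.Base.Software),                  # editing tools can be a flag
    }

# Usage sketch: compare these against what the post claims. Missing EXIF is
# common (most platforms strip it on upload), so absence alone proves nothing.
# print(exif_summary("checkpoint_photo.jpg"))
```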

Counter-Messaging Protocol

  1. Level 1 (Quiet Correction): Alert trusted pods and correct internally without public attention.
  2. Level 2 (Neutral Public Statement):
    "We have no verified reports of [incident] at this time. Verification ongoing."
  3. Level 3 (Escalated Response): Pod leads and admins coordinate a formal correction or press release.

Never “call out” individuals unless cleared by leadership.
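
To keep wording consistent across the three levels above, a small template table like the sketch below can help; only the Level 2 sentence comes from the protocol itself, and the remaining wording, the audience labels, and the function name are illustrative:

```python
# Illustrative mapping of the three response levels to message templates.
# Only the Level 2 wording comes from the protocol above; the rest is a sketch.

TEMPLATES = {
    1: "INTERNAL ONLY: unverified report of {incident}; do not repost. Verification in progress.",
    2: "We have no verified reports of {incident} at this time. Verification ongoing.",
    3: "ESCALATED: pod leads/admins to coordinate a formal correction regarding {incident}.",
}

AUDIENCE = {1: "trusted pods", 2: "public channels", 3: "leadership + public"}

def draft_message(level, incident):
    """Return (audience, text) for a given response level."""
    return AUDIENCE[level], TEMPLATES[level].format(incident=incident)

print(draft_message(2, "a checkpoint near Main Street"))
```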


Secure Information Sharing

  • Never share exact pod movements, verification methods, or times on public channels.
  • Use encrypted backchannels (Signal, Matrix, or PGP) for sensitive confirmations.
  • Rotate verification routes and responders to avoid predictable patterns.

Decision Flow (Visual Guide)

[Flowchart not reproduced here; see the Escalation Flowchart in the Resource Appendix.]
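
Since the visual flowchart is not reproduced here, the sketch below approximates the decision flow in code, combining the two-check rule with the three response levels; the inputs, names, and thresholds are assumptions rather than the official chart:

```python
# Approximation of the decision flow (illustrative; inputs and thresholds are assumptions).

def decide(independent_checks, matrix_satisfied, disinfo_signals, spreading_publicly):
    """Return the next dispatch action for an incoming report."""
    if matrix_satisfied and independent_checks >= 2:
        return "issue alert via secure channels"
    if disinfo_signals >= 2 and spreading_publicly:
        return "Level 3: escalate to pod leads / zone admins"
    if spreading_publicly:
        return "Level 2: neutral public statement; keep verifying"
    return "Level 1: quiet correction to trusted pods; keep verifying"

print(decide(independent_checks=1, matrix_satisfied=False,
             disinfo_signals=1, spreading_publicly=True))
# Level 2: neutral public statement; keep verifying
```
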
Simulation Drill

Scenario: A viral post claims there’s a police checkpoint near Main Street.

  1. Verify with ground teams or trusted pods.
  2. Reverse search the image and check timestamps.
  3. Determine the response level (quiet, neutral, or escalated).
  4. Draft messaging using templates.

Quick Action Steps

  1. Always double-verify every report before issuing alerts.
  2. Use secure, redundant channels (Signal, radio, Matrix) for intel sharing.
  3. Keep neutral, factual language in all public updates.
  4. Rotate verification duties hourly during high-volume events to reduce errors (a simple rotation sketch follows this list).
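
For step 4, a round-robin roster is usually enough; here is a minimal sketch, assuming made-up dispatcher call signs and a plain (hour, dispatcher) output format:

```python
from itertools import cycle

# Round-robin hourly rotation sketch (call signs and output format are made up).

def hourly_rotation(dispatchers, start_hour, hours):
    """Assign a verification lead to each hour by rotating through the roster."""
    rotation = cycle(dispatchers)
    return [((start_hour + i) % 24, next(rotation)) for i in range(hours)]

for hour, who in hourly_rotation(["D1", "D2", "D3"], start_hour=18, hours=5):
    print(f"{hour:02d}:00  verification lead: {who}")
```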

Risks & Red Lines

  • Never share unverified reports, even privately.
  • Never confirm operational details while denying or debunking reports.
  • Avoid identifiable patterns in how verification is done; keep methods discreet.

Checklist

  • Can verify or debunk reports using at least two independent checks.
  • Can run basic image verification (EXIF data, reverse search).
  • Maintains a directory of trusted verification contacts.
  • Knows when to issue neutral updates vs. escalate to admins.
  • Has practiced secure handoffs and messaging protocols.

Resource Appendix

  • Verification Field Guide (reverse image tools, bot detection checklist).
  • Neutral Statement Templates for counter-messaging.
  • Escalation Flowchart (for quick reference during dispatch).
  • Encrypted Reporting Template (for pod-to-pod confirmations).

πŸ“˜ Knowledge Check

  1. Why is strict verification critical before issuing alerts about ICE or police activity?
  2. True or False: Dispatch should only issue alerts after at least two independent checks are confirmed.
  3. Which methods are part of verifying reports?
  4. Which behavior is a common indicator of bots or troll accounts spreading disinformation?
  5. True or False: Deepfake or edited content often shows shadow mismatches, metadata inconsistencies, or pixelation around key elements.
  6. What is the correct counter-messaging approach for unverified reports?
  7. Which security practices help protect verification and messaging?
  8. Which response level involves issuing a neutral public statement like, “We have no verified reports of [incident] at this time. Verification ongoing.”?
  9. True or False: Verification duties should rotate hourly during high-volume events to reduce fatigue and errors.
  10. Which red lines must dispatchers follow when handling disinformation?

