Everything you need to keep social feeds safe — and private

Our hybrid approach blends on‑device vision (via NymageGuard), text understanding, and context signals to gate harmful content before it appears. 
No screenshots. No chat uploads. No surveillance.

On‑device privacy

All detection happens locally on your teen’s device. Only anonymised, differential‑privacy metrics are ever sent (opt‑in).
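As an illustration of the opt-in metrics path, here is a minimal Python sketch of the Laplace mechanism, the standard way to add differential-privacy noise to a count before it leaves the device. The function name, epsilon value, and sampling approach are illustrative assumptions, not the product's actual implementation.

```python
import math
import random

def dp_noisy_count(true_count: int, epsilon: float = 1.0) -> float:
    """Report a count with Laplace(1/epsilon) noise, so the uploaded
    metric reveals almost nothing about any single on-device event.
    (Illustrative sketch only, not the shipped implementation.)"""
    u = random.random() - 0.5                    # uniform in (-0.5, 0.5)
    scale = 1.0 / epsilon                        # Laplace scale for sensitivity-1 counts
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))  # inverse-CDF sampling
    return true_count + noise
```

Each report is perturbed, but aggregated across many devices the noise averages out, which is what makes weekly trend metrics useful without exposing individuals.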

Zero‑exposure gating

Blur, pause, and mute are applied before any risky media renders — stopping harm in the first 200–400ms, not after.
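The decision flow above can be sketched as a tiny pre-render gate. The `Media` type, threshold value, and action names here are hypothetical stand-ins chosen for illustration; the point is only that the gating decision happens before the renderer ever touches the media.

```python
from dataclasses import dataclass

@dataclass
class Media:
    kind: str    # "image" | "video" | "audio" (hypothetical schema)
    risk: float  # 0.0 (safe) .. 1.0 (high risk), from on-device models

RISK_THRESHOLD = 0.7  # illustrative cutoff, not the product's real value

def gate(media: Media) -> str:
    """Pick the gating action *before* media is handed to the renderer."""
    if media.risk < RISK_THRESHOLD:
        return "render"
    # Risky media never renders unmodified: blur images, pause video, mute audio.
    return {"image": "blur", "video": "pause", "audio": "mute"}.get(media.kind, "blur")
```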

Multimodal intelligence

Fuses image, text, and context to catch coded language, screenshots‑of‑text, and subtle grooming patterns.
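One common way to combine modalities like this is late fusion: score each modality separately, then merge with a boost when modalities agree (for example, a screenshot-of-text flagged by both the vision and text models). The weights and thresholds below are illustrative assumptions, not the real model.

```python
def fuse(image_score: float, text_score: float, context_score: float) -> float:
    """Late-fusion sketch: weighted average of per-modality risk scores,
    boosted when two or more modalities agree. Illustrative only."""
    w_img, w_txt, w_ctx = 0.5, 0.3, 0.2          # hypothetical weights
    base = w_img * image_score + w_txt * text_score + w_ctx * context_score
    agreeing = sum(s > 0.6 for s in (image_score, text_score, context_score))
    boost = 0.1 * max(0, agreeing - 1)           # reward cross-modal agreement
    return min(1.0, base + boost)
```

Cross-modal agreement is what lets coded language slip past a text-only filter but still get caught when the image and context signals line up with it.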

Cross‑platform coverage

Works across the feeds teens actually use: Instagram, TikTok, YouTube, X, Reddit — with Android app coverage.

Teen transparency

“Why hidden?” explanations, hold‑to‑reveal with cooldowns, and simple appeals — building trust, not pushback.

Family & school dashboards

Weekly trend insights without raw content. Cohort‑only views for schools. Always privacy‑respecting. And don't worry: it's much simpler than a cockpit.

Privacy‑first teen safety

Safe feeds, everywhere.

On‑device, zero‑exposure gating that protects what your teen sees—across the platforms they love—without exporting their content to the cloud.

• On‑device detection — even we can't see their content.
• Zero‑exposure gating — blur / pause / mute before anything renders.
• Teen transparency — “Why hidden?”, hold‑to‑reveal, and appeals.

Teen‑Safe Feed Layer

(Interactive Demo)
Explore the design and interaction details behind each safe demo
Hold to reveal
Reel-style “Image Risk” (safe, abstract)
  • Abstract, blurred media box with “Hidden by policy” overlay — no real imagery shown.
  • NymageGuard vision simulates detection of potentially explicit visuals.
  • Privacy-first demo: on-device gating; posters stay safe and educational.
  • Interaction: Hold to reveal (simulated) to illustrate controlled viewing.
Hold to reveal
Chat “Text Cue (Self-harm)” — simulated & redacted (safe)
  • Generic chat motif with fully redacted message bubbles and neutral placeholders.
  • NLP cue detection is demonstrated without displaying real phrases or triggers.
  • Includes a gentle “Why hidden?” chip to model supportive explanations.
  • Calm, non-triggering aesthetic; demo example only, no harmful content.
Hold to reveal
Feed tile “Violence Risk” — abstract motion (safe)
  • Generic news/feed card with abstract kinetic shapes — no people, weapons, or scenes.
  • Soft red pulse frame signals elevated risk flagged by vision models.
  • Hold to reveal illustrates gated exposure without shocking visuals.
  • Clearly marked as a safe demonstration; no violent content is ever shown.

How NymAGF Works

Two core methods — one unified framework.

Digital ID + Zero-Knowledge Proofs

1. Connects

Connects securely to government or trusted identity sources

2. Extracts

Extracts only an age band (e.g., “18+”)

3. Cryptographic

Uses cryptographic signatures to prevent fraud

4. Storage

Nothing stored by the platform, not even the name or birthdate

5. Revocation

Supports revocation and expiry

6. Privacy-by-design

Meets strict privacy-by-design standards (GDPR, eSafety, DSA)

Facial Age Estimation + Liveness Detection

1. Submits

User submits a short video or selfie

2. Estimates

AI estimates age range with 90–98% accuracy

3. Checks

Liveness checks ensure it’s a real person, not a spoof

4. Storage

No biometric data stored or reused

5. Inclusive

Ideal for users without formal IDs

6. Fallback

Seamless fallback for fast onboarding

The Output

A privacy-preserving token, issued to the user, verified by the platform. No identity, no data trail — just trust.

🌿 Let users explore — but only when they're ready.

Contact us