Content Policy
Last updated: March 23, 2026
Overview
This policy defines what content is and isn't allowed on ROSLO. All content — posts, comments, messages, products, profile information — is subject to these rules. Content is reviewed through a combination of automated AI screening and human moderation.
Prohibited Content
The following content is strictly prohibited and will be immediately removed:
- Child sexual abuse material (CSAM) or any content sexualizing minors
- Non-consensual intimate imagery (revenge pornography)
- Content promoting terrorism or violent extremism
- Direct threats of violence against individuals or groups
- Content facilitating human trafficking or exploitation
- Sale of illegal drugs, weapons, or stolen goods on The Exchange
- Malware, phishing links, or other cybersecurity threats
Restricted Content
The following content is restricted and may be removed or have limited visibility:
- Graphic violence or gore without educational or newsworthy context
- Sexually explicit content without appropriate content warnings
- Misinformation that could cause real-world harm (health, safety, elections)
- Coordinated inauthentic behavior or manipulation campaigns
- Excessive profanity or shock content designed solely to provoke
- Unverified claims presented as fact on health or safety topics
Content Moderation Process
Layer 1 — AI Screening: All content passes through automated AI analysis to detect prohibited content. Content that matches a prohibited category is rejected immediately.
Layer 2 — Flagging: Content that falls in gray areas is flagged for human review. Flagged content may have reduced visibility until reviewed.
Layer 3 — Human Review: Trained moderators review flagged content and user reports. Decisions are made with context and nuance.
Layer 4 — Appeals: Users can appeal moderation decisions. Appeals are reviewed by senior moderators.
Repeat Violations
Users who repeatedly violate content policies face escalating consequences: warning → temporary restriction → suspension → permanent ban. Severe violations (such as CSAM or terrorist content) result in an immediate permanent ban and referral to law enforcement.
Reporting Content
Use the report button on any post, comment, message, or product listing. Provide as much context as possible. All reports are reviewed. False reporting to harass others is itself a violation.