This policy enters into force on 30 October 2025 and replaces all previous versions.
This website is owned, regulated and operated by Blippio Limited, hereinafter referred to as “we”.
1. Purpose and Scope
We are committed to providing a respectful, inclusive and secure environment for users worldwide. This policy outlines our global approach to moderating user content, communications and behavior in compliance with international regulations, including the UK Online Safety Act 2023, the EU Digital Services Act and Section 230 of the U.S. Communications Decency Act.
This policy also applies to AI-generated or digitally simulated content created, curated, or distributed through the platform. It applies to all content accessible to users within the EU, UK, U.S. and other territories, consistent with applicable local laws.
This policy applies to all areas of the platform including profiles, chat messages, multimedia content, user interactions and reports submitted through our platform.
AI-generated content that depicts real persons without their consent is prohibited.
2. Permissible Use and Community Standards
Users must adhere to the following standards:
- No content involving hate speech, harassment, discrimination, or threats
- No content involving grooming, trafficking, or sexual exploitation
- No impersonation or fraudulent profiles
- No uploading or distribution of explicit or violent material
- No spam, phishing, or malicious links
- No solicitation, promotion, or sale of illegal services or substances
- No underage use or facilitation of unlawful contact
3. Moderation Systems
Support interactions, including those conducted via call centers or automated systems, are also subject to monitoring for abusive or exploitative behavior.
Monitoring and automated scanning are conducted under Article 6(1)(f) GDPR (legitimate interest to prevent fraud and illegal content). Users are informed when automated tools are used to detect, rank, or restrict content in accordance with Article 27 of the DSA and the Provider’s AI Disclosure Notice.
Automated detection decisions are always subject to subsequent human review before permanent removal or restriction of user accounts.
Automated tools are not used to infer or profile users’ personal traits or beliefs.
We use a hybrid system that combines automated and manual methods:
- Pre-launch Manual Review: All user profiles are reviewed by trained moderators
- Real-Time AI Detection: Automated filters monitor user behavior and flag prohibited content
- Chat Moderation: Conversations are monitored using Google Firebase and proprietary safeguards.
- Service Provider Oversight: Real-time chat communications operated by our authorized service providers are subject to monitoring, message logging and review to detect prohibited conduct and protect both users and representatives
- User Reports: Easy-to-use reporting tools are embedded throughout the platform
- Moderator Triage: Flagged items are reviewed contextually by trained personnel
- Priority Escalation: Suspected CSAM, grooming, or trafficking content is reviewed immediately and escalated to law enforcement if applicable
4. Illegal Content Removal Timelines
Users will be notified of content removal and provided with a short explanation, except where notification would impede law-enforcement investigations.
- Content identified as clearly illegal (e.g., CSAM, incitement to violence, trafficking) will be removed within 24–48 hours of detection or report
- Chat transcripts flagged for illegal or exploitative content are reviewed and, if necessary, reported to law enforcement within 24–48 hours
- Borderline content is placed under temporary suspension and reviewed within 72 hours
- Content reports submitted by trusted flaggers or authorities may be prioritized and removed without further notice where legally required.
5. Appeals and User Rights
Users may appeal any moderation action and will receive a written decision stating the reasons for that action. Users within the EU will also be informed of their right to lodge a complaint with the relevant national Digital Services Coordinator.
To appeal:
- Submit a written explanation via our appeals form
- Appeals are reviewed within 5 business days by a separate moderator not involved in the original decision
- A final decision is issued within 14 calendar days of submission
- Outcomes and justifications are recorded
6. Recordkeeping and Transparency
Transparency reports will distinguish between automated and human moderation actions.
- All moderation actions and appeals are logged and stored securely
- We conduct quarterly internal audits of moderation accuracy
- Where required by law, we will issue an annual transparency report summarizing:
  - Number of reports received
  - Time taken to act on illegal content
  - Volume and outcome of appeals
- Moderation logs and appeal records are retained for no longer than 12 months after closure of the case, unless longer storage is required for legal defense purposes.
7. Special Protections Against Grooming and Exploitation
We recognize the unique risks of grooming and exploitation on interactive entertainment and chat platforms. We:
- Prohibit all users under 18 years of age
- Use AI and keyword scanning to detect early grooming patterns
- Maintain a dedicated escalation protocol for such cases
- Cooperate with law enforcement as needed
Human chat representatives receive mandatory training to identify and report potential grooming, trafficking, or coercive behavior from users.
8. Policy Governance
This policy extends to all communication channels, including customer service responses from non-human systems, and is reviewed accordingly.
This policy is reviewed annually by our legal and compliance team. The Provider maintains written records of compliance checks, audit outcomes and law-enforcement referrals for inspection by relevant authorities upon request. Updates are made to reflect:
- Regulatory developments
- Evolving online abuse threats
- Operational improvements
All contracted chat providers are bound by this policy and are subject to regular compliance checks.
This policy is published in accordance with Article 14 of the EU Digital Services Act.