User Safety & Content Policy
How we work to keep our platform safe, responsible, and transparent
1. Who We Are & How We Work
Combily Private Limited ("Combily", "we", "us") provides a value-added AI SaaS platform that lets users interact with state-of-the-art AI models for text, image, voice, video, and code generation through our applications, extensions, and web-based tools.
How It Works: Combily does not own, train, or host any AI models. We provide an interface layer that routes your requests to third-party AI models through API Providers (covering text, image, voice, video, and code generation). Our value lies in the unified experience, model selection, credit management, and safety measures we layer on top of these providers.
No training on your content: Where third-party AI providers offer configurable controls to disable training or limit data retention, we configure our integrations to opt out of such training on your behalf. We do not authorise any third-party AI provider to use your Input or Output for model training purposes. However, as we do not own, host, or control the infrastructure of third-party AI providers, we cannot guarantee how providers process data on their own systems. Each provider is governed by its own terms, data policies, and regulatory obligations.
This transparency is central to our safety approach. Because we are an interface provider rather than a model developer, our safety strategy operates on two levels:
- Provider-Level Safeguards: We leverage the built-in content moderation, safety filters, and responsible AI practices implemented by each upstream AI provider
- Platform-Level Safeguards: We implement our own policies, monitoring, enforcement, and user reporting systems on top of provider protections
2. Our Safety Commitments
We are committed to the following principles in how we build and operate Combily:
User Protection
We prioritise the safety of our users — especially vulnerable groups — in every product decision. We never knowingly permit content that endangers, exploits, or harms individuals.
Transparency
We are honest about what we can and cannot control. We clearly disclose our role as an interface provider, our reliance on third-party AI models, and the limitations of our moderation capabilities.
Accountability
We enforce our policies consistently and fairly. We investigate reports promptly, take proportionate action, and provide users with a right of appeal against enforcement decisions.
Continuous Improvement
AI safety is an evolving field. We regularly review and update our policies, tools, and practices as new risks emerge and as regulatory expectations change.
Collaboration
We work with our AI provider partners, industry peers, and regulatory bodies to share best practices and contribute to a safer AI ecosystem.
3. Roles & Responsibilities
3.1 Combily's Responsibilities
- Maintaining and enforcing platform-level policies (Terms of Use, Usage Policy, this Safety Policy)
- Selecting reputable AI providers with demonstrated safety practices
- Implementing platform-level monitoring and enforcement mechanisms
- Operating a user reporting system and responding to safety concerns
- Cooperating with law enforcement when legally required
- Regularly reviewing and updating policies in response to emerging risks
3.2 AI Provider Responsibilities
Our AI providers (accessed through API Providers) are responsible for:
- Training models with appropriate safety alignment and guardrails
- Implementing model-level content filters and refusal mechanisms
- Conducting red-teaming and safety testing before model deployment
- Publishing and maintaining their own usage policies and safety documentation
3.3 User Responsibilities
- Complying with our Terms of Use and Usage Policy
- Not attempting to circumvent safety measures or content filters
- Reporting harmful content or behaviour encountered on the platform
- Exercising personal judgment when using AI-generated Output
- Disclosing AI-generated content when sharing it publicly
4. Illegal Content
We have zero tolerance for content that is illegal under applicable law. The following categories of content are absolutely prohibited, and confirmed violations will result in immediate account termination and, where appropriate, referral to law enforcement:
4.1 Child Sexual Abuse Material (CSAM)
Any content that depicts, promotes, or facilitates the sexual exploitation or abuse of children — including AI-generated imagery — is strictly prohibited. We will immediately terminate accounts involved in such activity and report all confirmed instances to the relevant authorities, including the National Center for Missing & Exploited Children (NCMEC) and local law enforcement.
4.2 Terrorism & Violent Extremism
Content that promotes, recruits for, funds, or provides material support for terrorism or violent extremist organisations is prohibited. This includes propaganda, radicalisation materials, and instructions for carrying out attacks.
4.3 Other Illegal Content
Content related to human trafficking, forced labour, drug trafficking, money laundering, fraud, identity theft, non-consensual intimate imagery, stalking, harassment, and any other activity that constitutes a criminal offence under applicable law is prohibited.
5. Harmful Content
Beyond strictly illegal content, we also prohibit content that — while it may not always be criminal — poses a significant risk of harm to individuals or groups. These categories are addressed in detail in our Usage Policy and include:
- Self-Harm & Suicide: Content that promotes, encourages, or provides instructions for self-harm or suicide
- Hate Speech & Discrimination: Content that promotes hatred or incites violence against individuals or groups based on protected characteristics
- Disinformation: Deliberately false or misleading content designed to deceive or cause public harm
- Harassment & Bullying: Content intended to intimidate, threaten, or cause emotional distress to specific individuals
- Dangerous Activities: Instructions for creating weapons, explosives, dangerous chemicals, or other materials that could cause physical harm
- Privacy Violations: Content that reveals personal, private, or sensitive information about individuals without their consent (doxxing)
6. Product Safeguards
We implement multiple layers of safeguards across our platform to prevent harmful use:
6.1 Provider-Level Protections
- Model Safety Alignment: The AI models available through Combily have been trained with safety alignment by their respective developers (OpenAI, Anthropic, Google, Meta, etc.), including refusal training and content filtering
- API-Level Moderation: API Providers implement their own moderation policies and may reject requests that violate their terms of service
- Multi-Layer Filtering: Requests may pass through multiple safety checkpoints — our platform rules (including automated prompt screening for certain requests), provider policies, and the model's own safety training
6.2 Platform-Level Protections
- Account Verification: Email verification as a prerequisite for account creation
- Age Gating: Date-of-birth validation at registration (18+ only)
- Rate Limiting: API rate limiting and credit-based usage controls to prevent automated abuse
- Automated Prompt Moderation: For certain AI generation requests, we run automated safety checks (including blacklist rules and third-party moderation classifiers) and may warn, block, or refuse requests that appear to violate our policies
- Usage Monitoring: Monitoring for anomalous usage patterns that may indicate abuse
- Feature-Level Controls: Fine-grained access controls based on subscription tier and user standing
- Graduated Enforcement: A structured response system from warnings through account termination (see our Usage Policy)
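To make the "Automated Prompt Moderation" layer above more concrete, the following is a simplified, purely illustrative sketch of how a layered check (hard blacklist first, then a graduated classifier threshold) can work. The patterns, thresholds, and keyword-based scorer below are hypothetical stand-ins, not our production rules; a real deployment would call an external moderation classifier rather than the toy `classifier_score` shown here.

```python
import re

# Hypothetical blacklist of prohibited phrase patterns (illustrative only).
BLACKLIST_PATTERNS = [
    re.compile(r"\bhow to build a bomb\b", re.IGNORECASE),
]

def classifier_score(prompt: str) -> float:
    """Stand-in for a third-party moderation classifier.

    A real deployment would call an external moderation API; this crude
    keyword-based risk score exists purely for illustration.
    """
    risky_terms = ("weapon", "exploit", "self-harm")
    hits = sum(term in prompt.lower() for term in risky_terms)
    return min(1.0, hits / len(risky_terms))

def moderate_prompt(prompt: str, warn_threshold: float = 0.3,
                    block_threshold: float = 0.6) -> str:
    """Return 'block', 'warn', or 'allow' for a generation request."""
    # Layer 1: a hard blacklist match always blocks, regardless of score.
    if any(p.search(prompt) for p in BLACKLIST_PATTERNS):
        return "block"
    # Layer 2: graduated response based on the classifier's risk score.
    score = classifier_score(prompt)
    if score >= block_threshold:
        return "block"
    if score >= warn_threshold:
        return "warn"
    return "allow"

print(moderate_prompt("Write a poem about autumn"))  # allow
```

The graduated thresholds mirror the "warn, block, or refuse" behaviour described above: borderline requests get a warning while clear violations are blocked outright.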
6.3 Age Restriction
Our Services are restricted to individuals aged 18 and over. We implement this through our registration process and Terms of Use. Accounts found to be operated by minors will be terminated.
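The date-of-birth check described above reduces to a simple rule: a registrant must be at least 18 on the day of registration, accounting for whether their birthday has occurred yet this year. A minimal illustrative sketch (not our production validation logic) follows.

```python
from datetime import date
from typing import Optional

def is_adult(dob: date, today: Optional[date] = None, min_age: int = 18) -> bool:
    """Check whether a person born on `dob` is at least `min_age` on `today`.

    The tuple comparison subtracts one year if this year's birthday
    has not yet occurred.
    """
    today = today or date.today()
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= min_age

print(is_adult(date(2000, 6, 15), today=date(2025, 6, 15)))  # True (exactly 25)
print(is_adult(date(2010, 1, 1), today=date(2025, 6, 15)))   # False (15)
```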
7. Proactive Technology
We employ proactive systems to identify and respond to safety concerns before they cause harm:
- Upstream Model Safeguards: We rely on the sophisticated safety systems built into the AI models themselves. Leading model providers invest significantly in red-teaming, adversarial testing, and safety training. These safeguards are our first and most robust line of defence.
- Automated Prompt Screening: For certain AI generation requests, we apply automated checks (including blacklist matching and third-party moderation classifiers) to reduce the risk of prohibited content such as CSAM indicators, self-harm instructions, violent threats, and certain fraud/malware patterns.
- Usage Pattern Analysis: We monitor aggregated usage data for abnormal patterns — such as sudden volume spikes, unusual model selection patterns, or repeated failed requests — that may indicate abuse or attempted policy circumvention.
- Account Integrity Signals: We track account-level signals including registration patterns, payment history, and interaction frequency to identify potentially fraudulent or abusive accounts.
- Feedback Loops with Providers: We maintain communication with our API Providers and report emerging abuse patterns to them, contributing to improved safety at the model level
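The "sudden volume spikes" mentioned under Usage Pattern Analysis can be detected with a simple statistical rule. The sketch below is illustrative only (the z-score threshold and the notion of hourly buckets are assumptions, not a description of our actual monitoring systems): it flags the latest hour of requests when it deviates from the historical baseline by more than a chosen number of standard deviations.

```python
from statistics import mean, stdev

def is_volume_spike(hourly_requests: list, factor: float = 3.0) -> bool:
    """Flag the latest hour if it exceeds the historical mean by
    more than `factor` standard deviations (a simple z-score rule)."""
    history, latest = hourly_requests[:-1], hourly_requests[-1]
    if len(history) < 2:
        return False  # not enough history to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        # A flat baseline: fall back to a simple multiplicative check.
        return latest > mu * factor
    return (latest - mu) / sigma > factor

baseline = [12, 15, 11, 14, 13, 12, 16, 14]
print(is_volume_spike(baseline + [15]))   # a normal hour: False
print(is_volume_spike(baseline + [400]))  # an abnormal spike: True
```

In practice such a flag would feed into the graduated enforcement process rather than trigger automatic action, since legitimate usage (for example, a new automation workflow) can also produce spikes.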
Honest Disclosure: Combily uses a combination of upstream provider safeguards and platform-level controls. While upstream models and providers provide significant safety protections, we also apply our own automated checks for certain requests (including blacklist rules and third-party moderation classifiers) and log moderation events to support enforcement. No automated system is perfect — harmful or inaccurate content may still occur, and we encourage users to report concerns.
8. Reporting Concerns
If you encounter content or behaviour on the platform that you believe is unsafe, illegal, or violates our policies, please report it to us. We take every report seriously.
How to Report
Support Ticket (recommended if you have an account): Open a support ticket
Email: info@combily.com
Subject Line: Safety Concern Report
Please include:
- A description of the content or behaviour you are reporting
- The approximate date, time, and context of the incident
- Any supporting evidence (screenshots, conversation excerpts)
- Your contact information (optional, but helpful for follow-up)
Our Response Process
Acknowledgement — Within 48 Hours
We will confirm receipt of your report within 48 hours via email.
Assessment — Within 10 Business Days
We will review the report, investigate the circumstances, and determine the appropriate response. Complex cases may require additional time, in which case we will notify you.
Action
Based on our assessment, we may take enforcement action ranging from a warning to account termination. For illegal content, we will cooperate with relevant authorities.
Notification
Where possible and appropriate, we will inform the reporter of the outcome. Due to privacy obligations, we may not be able to disclose specific details about actions taken against other users.
Immediate Danger: If you believe someone's life is in imminent danger, please contact your local emergency services immediately. Combily's reporting process is not a substitute for emergency services.
9. Appeals Process
We recognise that enforcement decisions may sometimes be incorrect or disproportionate. If you believe a moderation action taken against your account was made in error, you have the right to appeal.
How to Appeal
Send an email to info@combily.com with the subject line "Safety Enforcement Appeal". Include:
- Your account email address
- A description of the enforcement action you are appealing
- Your explanation of why you believe the action was incorrect
- Any supporting evidence or context
What to Expect
- We will acknowledge your appeal within 48 hours
- We will review the appeal within 10 business days and make a determination
- If the original decision is overturned, we will restore access and notify you
- If the original decision is upheld, we will provide a reasoned explanation (including the policy basis for the action, where appropriate)
- Appeal decisions are final within our internal process
EU and UK users may also have additional rights to dispute resolution under the Digital Services Act (DSA) and UK Online Safety Act (OSA). See Section 10 for details.
10. Regional Requirements
Because Combily operates as an interface layer that connects users to third-party AI providers via API, our regulatory obligations may differ from those of providers that directly host or train AI models. The following disclosures reflect our commitment to align our practices with applicable requirements to the extent they apply to our service model and operational capabilities.
10.1 United Kingdom — Online Safety Act (OSA)
We implement measures designed to align with the UK Online Safety Act 2023, including:
- Taking proactive measures to prevent the generation and dissemination of illegal content through our platform
- Providing accessible reporting and complaints mechanisms for UK users
- Acting on reports of illegal content expeditiously, including removal or restriction of access
- Applying transparent and proportionate enforcement and appeals processes
- Publishing this safety policy so that users understand what is and is not permitted
10.2 European Union — Digital Services Act (DSA)
In alignment with our obligations under the EU Digital Services Act:
- We provide a clear reporting mechanism for potentially illegal content or policy violations
- We process reports promptly and diligently and inform reporters of the outcome
- We offer an internal complaint-handling system for users affected by enforcement actions
- Users may also refer disputes to certified out-of-court dispute settlement bodies where available
- We will cooperate with the Digital Services Coordinator in any relevant EU Member State as required
10.3 United States — COPPA & Federal Law
- Our Services are not directed at children under 13 years of age, consistent with the Children's Online Privacy Protection Act (COPPA)
- We are committed to complying with 18 U.S.C. § 2258A regarding mandatory reporting of apparent child sexual abuse material (CSAM) to NCMEC
- We are committed to cooperating with law enforcement agencies in accordance with applicable US federal and state laws
10.4 Canada
- We implement measures consistent with the Personal Information Protection and Electronic Documents Act (PIPEDA) regarding user data in safety investigations
- We will align with any applicable requirements under Canada's forthcoming Online Harms Act once enacted
10.5 Brazil (LGPD & Marco Civil da Internet)
- We process user data in the context of safety investigations in a manner consistent with the Lei Geral de Proteção de Dados (LGPD)
- We implement measures designed to align with the Marco Civil da Internet (Law No. 12,965/2014) regarding the responsibility of service providers, content moderation, and preservation of user records where required by court order
- Complaints may be directed to the Autoridade Nacional de Proteção de Dados (ANPD)
10.6 Australia (Online Safety Act 2021)
- We are committed to cooperating with the eSafety Commissioner in Australia regarding reports of cyber abuse, image-based abuse, and illegal or harmful online content
- We aim to comply with takedown and removal notices issued under the Online Safety Act 2021 within the timeframes prescribed by the Commissioner
- Users in Australia may report safety concerns directly to the eSafety Commissioner at www.esafety.gov.au in addition to our internal reporting mechanism
10.7 India (IT Act & DPDP Act 2023)
- We implement measures designed to align with the Information Technology Act, 2000 and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 regarding the takedown of unlawful content, cooperation with government orders, and appointment of a Grievance Officer for user complaints
- We process user safety data in a manner consistent with the Digital Personal Data Protection Act, 2023 (DPDP Act)
- Our designated Grievance Officer's contact details are available upon request at info@combily.com
10.8 Japan (APPI & Provider Liability Limitation Act)
- We handle personal information in safety and moderation processes in a manner consistent with the Act on the Protection of Personal Information (APPI)
- We implement measures designed to align with the Provider Liability Limitation Act regarding the disclosure of sender information for rights infringement and the handling of user complaints related to content
10.9 South Korea (PIPA & Network Act)
- We process personal information in safety investigations in a manner consistent with the Personal Information Protection Act (PIPA)
- We implement measures designed to align with the Act on Promotion of Information and Communications Network Utilization and Information Protection regarding the prohibition and removal of illegal information and cooperation with the Korea Communications Standards Commission (KCSC)
10.10 China (PIPL & Cybersecurity Law)
- To the extent our service model falls within the scope of the Personal Information Protection Law (PIPL) and the Cybersecurity Law, we aim to implement measures consistent with applicable content moderation and data protection requirements
- We do not currently target our Services specifically at residents of the People's Republic of China. If and when our operations extend to users in the PRC, we will implement additional compliance measures as required by applicable law, including any obligations relating to the Cyberspace Administration of China (CAC)
10.11 United Arab Emirates
- We process personal data in safety matters in a manner consistent with Federal Decree-Law No. 45 of 2021 on the Protection of Personal Data
- We implement measures designed to align with applicable provisions of Federal Decree-Law No. 34 of 2021 (Combating Rumours and Cybercrimes) regarding prohibited online content and cooperation with authorities
10.12 Qatar
- We process personal data in safety matters in a manner consistent with Law No. 13 of 2016 Concerning Personal Data Privacy
- We implement measures designed to align with applicable cybercrime and electronic transactions laws regarding prohibited content
10.13 Pakistan & Bangladesh
- In Pakistan, we implement measures designed to align with the Personal Data Protection Act, 2025 (or successor legislation) and the Prevention of Electronic Crimes Act, 2016 regarding the prohibition of harmful and illegal online content and cooperation with authorised agencies
- In Bangladesh, we implement measures consistent with the Digital Security Act 2018 and the Information and Communication Technology Act 2006 (as amended) to the extent they govern content safety and data processing obligations. We will update our practices as Bangladesh's data protection legislation develops
10.14 Other Regions
Emerging online-safety and data protection frameworks across jurisdictions are tracked in our internal Regulatory Watchlist and will be reflected in this section as our user base and regulatory exposure grow. If you have questions about our compliance in a specific jurisdiction, please contact us at info@combily.com.
11. Contact Us
For safety concerns, policy questions, or general enquiries about our content and safety practices:
Combily Private Limited
Safety & Content Policy Inquiries
Email: info@combily.com
Website: www.combily.com
Our physical address is available upon request for official correspondence.
For urgent safety matters, please include "URGENT" in the subject line.