Trust & Safety
fal has invested in building a dedicated Trust & Safety function, hiring a Head of Trust & Safety, formerly of TikTok's trust and safety organization, to lead the company's efforts across policy development, content moderation, enforcement, and external partnerships. The role reports directly to fal's leadership team and is responsible for ensuring that the company's safety practices meet or exceed industry standards for AI platforms.
Our Trust & Safety Enforcement Process governs how violations are identified, escalated, investigated, and resolved. This process ensures consistency and accountability across all enforcement actions, from detection through final disposition. It includes defined escalation paths, documentation requirements, and appeal mechanisms that provide rigor and transparency in fal's operations.
The sections below outline fal's current trust and safety priorities.
Child Safety: NCMEC & Thorn Partnership
fal has partnered with NCMEC and Thorn to deploy automated CSAM hash-matching across its platform. This integration ensures that child sexual abuse material is detected and reported in accordance with federal law (18 U.S.C. § 2258A). By adopting the same tooling and reporting infrastructure used by other major tech companies, fal has aligned its child safety program with established industry best practices.
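To illustrate the mechanism, hash-matching compares a fingerprint of uploaded or generated media against a database of fingerprints of known abusive material. The sketch below is a deliberately simplified model: it uses a plain cryptographic hash, whereas production systems such as those built on NCMEC and Thorn tooling rely on perceptual hashes (e.g., PhotoDNA) that survive resizing and re-encoding. The function names and hash set here are illustrative, not fal's actual implementation.

```python
import hashlib


def file_hash(data: bytes) -> str:
    """Fingerprint raw media bytes.

    Simplified sketch: a SHA-256 digest only matches byte-identical files.
    Real CSAM-detection pipelines use perceptual hashes that tolerate
    transformations of the image.
    """
    return hashlib.sha256(data).hexdigest()


def matches_known_hashes(data: bytes, known_hashes: set[str]) -> bool:
    """Return True if the media's fingerprint appears in the hash database."""
    return file_hash(data) in known_hashes
```

A match in a real deployment would trigger blocking plus the mandatory report to NCMEC required by 18 U.S.C. § 2258A, rather than a simple boolean result.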
Non-Consensual Intimate Imagery: StopNCII.org
fal is onboarding as an industry partner of StopNCII.org, the initiative operated by the Revenge Porn Helpline in partnership with Meta. StopNCII.org enables victims of NCII to create hashed fingerprints of their images, which participating platforms use to detect and prevent redistribution. fal's participation reflects a proactive commitment to combating image-based abuse on generative AI infrastructure.
Content Policy & NSFW Framework
fal has formally adopted a comprehensive NSFW content policy governing permissible and prohibited content. This policy balances platform openness with clear prohibitions on unlawful categories, including CSAM, NCII, and other illegal material.
Automated Content Moderation: OpenAI Omni Integration
fal has integrated OpenAI's Omni moderation API — widely adopted as an industry standard for AI content classification — into its platform infrastructure. This provides automated, real-time detection and filtering of harmful content across modalities, supplementing fal's policy-based enforcement with state-of-the-art moderation technology.
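As a sketch of how results from a moderation endpoint like OpenAI's omni moderation model might feed into enforcement, the snippet below consumes a response payload of the documented shape (`flagged`, per-category booleans in `categories`, and per-category confidence values in `category_scores`) and decides whether to block a request. The blocklist and threshold are hypothetical policy values for illustration, not fal's actual configuration.

```python
# Hypothetical policy configuration (illustrative, not fal's actual settings).
BLOCKED_CATEGORIES = {"sexual/minors", "illicit/violent"}  # always blocked when flagged
SCORE_THRESHOLD = 0.8  # block other flagged categories above this confidence


def should_block(result: dict) -> bool:
    """Decide whether a moderation result warrants blocking the request.

    `result` mirrors one entry of a moderation API response:
    {"flagged": bool, "categories": {...}, "category_scores": {...}}.
    """
    if not result.get("flagged"):
        return False
    categories = result.get("categories", {})
    scores = result.get("category_scores", {})
    for name, flagged in categories.items():
        if not flagged:
            continue
        # Block hard-prohibited categories outright, and any other flagged
        # category whose confidence score clears the threshold.
        if name in BLOCKED_CATEGORIES or scores.get(name, 0.0) >= SCORE_THRESHOLD:
            return True
    return False
```

In practice this check would run before a generation request reaches a model, with blocked requests routed into the enforcement process described above.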
Commitment to Responsible AI Infrastructure
fal recognizes its responsibility as an AI infrastructure provider and is committed to continuous improvement of its safety practices through industry partnerships, policy development, and investment in dedicated Trust & Safety resources.
Report abuse or harmful content
If you've encountered content on fal that violates our policies — including CSAM, non-consensual intimate imagery (NCII), or other unlawful material — we encourage you to submit a report at fal.ai/report-content. We review all reports and take action in accordance with our Trust & Safety Enforcement Process.
Additional support
If you or someone you know has experienced image-based sexual abuse or non-consensual intimate imagery (NCII), the following resources are available:
- If you're under 18, or if you're an adult concerned that someone may share intimate images or videos from when you were under 18, use Take It Down from the National Center for Missing & Exploited Children (NCMEC). This tool helps remove sexually explicit content involving minors and prevents further sharing.
- If you're 18 or older and are concerned that someone may share your intimate images or videos without your consent, visit StopNCII.org. This service helps remove non-consensual intimate content and prevents it from being reshared.
For any questions regarding fal's Trust & Safety program, contact us at safety@fal.ai.