Andreessen Horowitz Backed Marketplace Enables Custom Deepfakes of Real Women
Stanford study finds 90 percent of deepfake requests on Civitai target women as platform struggles with abuse
A study from researchers at Stanford and Indiana University analyzed bounty requests posted on Civitai between mid-2023 and the end of 2024. While most requests were for animated content, a significant portion sought deepfakes of real individuals, and roughly 90 percent of those deepfake requests targeted women.
The researchers also found that some instruction files were specifically designed to generate pornographic images of the kind the platform officially bans. This gap between stated policy and actual use raises questions about enforcement.
Civitai has positioned itself as a marketplace for AI model components, allowing creators to sell custom LoRA files and other training materials. The legitimate use cases include artistic styles and character designs, but the same technology enables nonconsensual intimate imagery.
Andreessen Horowitz invested in Civitai as part of its AI portfolio. The firm has not publicly commented on the study's findings.
Analysis
Why This Matters
AI-enabled abuse of women through deepfakes is escalating. When venture-backed platforms facilitate this harm, it raises questions about investor responsibility and platform accountability.
Background
Nonconsensual deepfakes have become a significant harassment vector, with celebrities and ordinary women alike targeted.
Key Perspectives
Victims' advocates demand stricter platform controls, while AI proponents argue the technology is neutral and that misuse should not limit legitimate applications.
What to Watch
Whether Civitai implements meaningful enforcement changes and how investors respond to reputational concerns.