23 days to comply: what the TAKE IT DOWN Act actually requires of platforms
Federal notice-and-takedown obligations for non-consensual intimate imagery — including AI-generated deepfakes — become effective May 19, 2026. Here is the rule, the exemptions that don't exist, and the 23-day plan.
The federal Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act — better known as the TAKE IT DOWN Act — was signed into law on May 19, 2025. Its criminal provisions, which target the publication of non-consensual intimate visual depictions including AI-generated imagery, took effect immediately. The harder problem for online services has always been Section 3, the notice-and-takedown regime imposed on platforms. That regime becomes effective on May 19, 2026 — twenty-three days from this writing.
The Federal Trade Commission has been preparing for that date for months. FTC Chair Andrew Ferguson has stated publicly that the agency views the May 19, 2026 effective date as the start of enforcement, not the start of a grace period. Anyone hoping for a soft launch is going to be disappointed. This piece is a working compliance checklist for the next three weeks, written for platform counsel, trust-and-safety leads, and product teams that need to ship a takedown workflow before the deadline.
The deadline and what it triggers
Section 3(a) of the Act gives platforms a one-year on-ramp from enactment to "establish a process whereby an identifiable individual" may submit a removal request. The clock started May 19, 2025, and ends May 19, 2026. From that date forward, any covered platform that fails to remove a properly noticed depiction within 48 hours is in violation. The FTC's TAKE IT DOWN Act enforcement page describes the takedown obligation as "in addition to" any state-law claims for non-consensual intimate imagery (NCII), which now exist in 49 states. Federal preemption is narrow.
Two operational consequences follow. First, the May 19 date applies to existing depictions, not just future uploads — there is no carve-out for legacy content. A request received on May 20 about a depiction uploaded in 2022 is just as much within scope as a request about a depiction uploaded the day before. Second, the 48-hour clock starts at receipt, not at the platform's review of the request; "I didn't see it for two days" is not a defense.
Who's covered (and the surprise inclusions)
The covered-platform definition reaches further than most product teams realize. The Act applies to a "public-facing website, online service, online application, or mobile application" that is either (a) primarily a forum for user-generated content — messages, videos, images, games, audio — or (b) in the regular course of trade or business primarily engaged in publishing or hosting non-consensual intimate depictions. The first prong captures the obvious targets: every major social network (Meta, X, YouTube, TikTok, Reddit, Discord, Snap), every major user-generated-video platform, every dating app, every forum, every comment-enabled news site that meets the "primarily" bar.
The exemption list is short and unusual. The Act explicitly excludes broadband internet access service providers, email services, and services where user-generated content is "incidental" to provider-curated content. Latham & Watkins's primer on the Act notes that the third exclusion is the most contested in practice — most news sites with comment sections, e-commerce sites with product reviews, and SaaS products with internal collaboration features will plausibly qualify, but the line is undefined.
What is not on the exemption list is the more interesting question, and the one that has driven the most last-minute compliance work. Skadden's compliance breakdown highlights three categories the statute does not exempt:
- Cloud storage providers — Dropbox, iCloud, Google Drive, OneDrive. If a user shares a public link to a stored depiction, the storage service is on the hook for takedown.
- Messaging applications — even those without an obviously public-facing surface, because the "public-facing" determination has to be made channel by channel rather than service by service.
- End-to-end encrypted services — Signal, WhatsApp, Telegram, iMessage. The Act contains no E2EE carve-out, which has produced what the Electronic Frontier Foundation describes as a structural compliance crisis: services that cannot, by design, see user content are nonetheless required to remove specific items of user content within 48 hours.
The E2EE problem is not theoretical. Signal and Apple have both published positions arguing that the Act effectively imposes content-scanning obligations they cannot satisfy without breaking the cryptographic guarantees of their products. The FTC has so far declined to issue interpretive guidance. Litigation challenging the Act's application to E2EE services is widely expected.
The 48-hour clock and the "valid removal request"
The takedown obligation is triggered by a "valid removal request." Section 3(b) defines the request's required elements with unusual specificity. A valid request must include (i) a physical or electronic signature; (ii) identification of, and information reasonably sufficient to locate, the intimate visual depiction; (iii) a brief statement that the requester has a good-faith belief the depiction is non-consensual; and (iv) the requester's contact information. There is no penalty-of-perjury attestation, no requirement that the requester be the depicted person rather than an authorized representative, and no requirement that the requester verify their identity.
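Those four elements translate directly into an intake schema. Below is a minimal sketch in Python of what a facially valid request might look like and how a triage pipeline could check it; the type and field names are illustrative, not statutory terms.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class RemovalRequest:
    """One takedown request; field names are illustrative, not statutory terms."""
    signature: str             # (i) physical or electronic signature
    depiction_locator: str     # (ii) URL or other info reasonably sufficient to locate the depiction
    good_faith_statement: str  # (iii) brief statement of good-faith belief that it is non-consensual
    contact_info: str          # (iv) requester's contact information
    received_at: datetime      # when the platform received the request (starts the 48-hour clock)


def is_facially_valid(req: RemovalRequest) -> bool:
    """True if all four statutory elements are present on the face of the request.

    Deliberately does no identity verification and no investigation of the claim:
    neither is a statutory precondition, and both risk burning the 48 hours.
    """
    return all(
        field.strip()
        for field in (
            req.signature,
            req.depiction_locator,
            req.good_faith_statement,
            req.contact_info,
        )
    )
```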
Once received, the platform has 48 hours to do two things in parallel: remove the specific depiction identified in the request, and "make reasonable efforts" to identify and remove identical copies. The "reasonable efforts" duplicate-removal standard is the one that produces the most engineering work. Hive AI's compliance write-up notes that perceptual-hash duplicate detection — PhotoDNA-style for images, video-fingerprinting for video — is treated as the de facto baseline for "reasonable efforts" by most platforms preparing for the deadline.
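For the duplicate-removal piece, the usual pattern is to compare a perceptual hash of the noticed depiction against precomputed hashes of hosted content and treat anything within a small Hamming distance as a likely copy. A rough sketch follows; it assumes the hashes have already been computed by whatever perceptual-hash tool the platform runs (PDQ, PhotoDNA, or similar), and the distance threshold is an illustrative tuning choice, not a statutory standard.

```python
def hamming_distance(hash_a: int, hash_b: int) -> int:
    """Number of differing bits between two same-length perceptual hashes."""
    return bin(hash_a ^ hash_b).count("1")


def find_likely_copies(
    noticed_hash: int,
    catalog: dict[str, int],  # content_id -> precomputed perceptual hash, as an integer
    max_distance: int = 10,   # illustrative threshold; tune per algorithm and false-positive tolerance
) -> list[str]:
    """Return IDs of hosted content whose perceptual hash is near the noticed depiction's hash.

    Assumes every hash was produced by the same algorithm over normalized inputs;
    exact copies land at distance 0, near-duplicates (re-encodes, crops, resizes)
    at a small positive distance.
    """
    return [
        content_id
        for content_id, stored_hash in catalog.items()
        if hamming_distance(noticed_hash, stored_hash) <= max_distance
    ]
```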
The 48 hours runs continuously. There is no business-day exception, no holiday extension, and no opportunity to pause the clock for investigation. A request received at 11pm on Friday must be acted on by 11pm Sunday. Trust-and-safety teams that operate on weekday-shift coverage will need to either expand to 24/7 or build automated triage that can complete a removal action without human review.
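The clock arithmetic itself is trivial, which is part of the point: there is nothing to pause or extend. A minimal sketch, assuming receipt timestamps are recorded in UTC; the escalation buffer is an internal policy choice, not a statutory figure.

```python
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)     # statutory window; runs continuously
ESCALATION_BUFFER = timedelta(hours=12)  # internal policy choice, not a statutory figure


def removal_deadline(received_at: datetime) -> datetime:
    """Deadline is receipt plus 48 hours, with no weekend, holiday, or business-day pause."""
    return received_at + REMOVAL_WINDOW


def needs_escalation(received_at: datetime, now: datetime | None = None) -> bool:
    """True if an unresolved request is within the escalation buffer of its deadline."""
    now = now or datetime.now(timezone.utc)
    return removal_deadline(received_at) - now <= ESCALATION_BUFFER
```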
The asymmetric safe harbor
Section 3(c) of the Act provides a safe harbor that has attracted more academic criticism than any other provision. A covered platform is immunized from liability for the good-faith disabling or removal of material in response to a removal request, even if the depiction turns out not to be non-consensual or not within scope. There is no parallel safe harbor for under-removal — a platform that wrongly leaves up material is liable; a platform that wrongly takes down material is not.
The asymmetry is not subtle, and it produces a predictable equilibrium. The EFF's analysis argues that the rational platform response is to remove on receipt with no real verification — the cost of wrongful removal is zero, and the cost of wrongful non-removal is FTC liability plus reputational harm. The Cyber Civil Rights Initiative's statement on the Act, while supportive of its underlying goal, makes the same point: the statute creates incentives for over-removal that will fall hardest on lawful speech that is plausibly mistaken for NCII.
Operationally, this means the realistic compliance posture is: receive request, trigger automated takedown, log good-faith determination, and deal with any disputes after the fact. Pre-removal verification is technically permitted but legally unrewarded.
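That posture only holds up if the good-faith determination is recorded at the moment it is made. Here is a minimal sketch of the kind of contemporaneous audit record the Section 3(c) defense depends on; the field names and the append-only JSONL file are illustrative assumptions, not requirements of the statute.

```python
import json
from datetime import datetime, timezone


def log_takedown_determination(
    request_id: str,
    depiction_locator: str,
    action: str,     # e.g. "removed", "disabled", "determined_out_of_scope"
    rationale: str,  # why the depiction was (or was not) treated as in-scope
    actor: str,      # "automated-pipeline" or a reviewer identifier
    log_path: str = "takedown_audit.jsonl",
) -> dict:
    """Append a contemporaneous, timestamped record of the good-faith determination.

    The Section 3(c) safe harbor covers good-faith removals; that defense is only
    as strong as the records showing what was received, when it was acted on, and why.
    """
    record = {
        "request_id": request_id,
        "depiction_locator": depiction_locator,
        "action": action,
        "rationale": rationale,
        "actor": actor,
        "determined_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```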
What the Act doesn't have (a DMCA comparison)
The natural comparison is to the DMCA's Section 512 notice-and-takedown regime, which has governed online copyright infringement claims for twenty-seven years. The differences are sharp:
- No penalty-of-perjury attestation. Section 512 requires a DMCA notice to include a statement, under penalty of perjury, that the notice is accurate and that the sender is authorized to act for the rights holder. The TAKE IT DOWN Act asks for a "good-faith belief" only — no perjury hook, no criminal exposure for false notices.
- No counter-notice procedure. Section 512 gives the user whose content was taken down a structured way to push back; if they file a counter-notice, the platform must restore the content within 10 to 14 business days unless the rights holder files suit. The TAKE IT DOWN Act has no counter-notice mechanism at all. A user whose content is removed has no statutory route to challenge the removal.
- No 17 U.S.C. § 512(f) analog. Section 512(f) creates a federal cause of action against bad-faith DMCA notices. The TAKE IT DOWN Act creates no analogous liability for false or bad-faith removal requests. The only deterrent is a civil claim under existing tortious-interference or defamation theories, neither of which fits cleanly.
- No designated-agent registry. Section 512 requires platforms to register a DMCA agent with the Copyright Office; that registry is the address of record for service of takedown notices. The TAKE IT DOWN Act requires only a "clear and conspicuous" notice on the platform itself describing how to submit removal requests.
The cumulative effect of those four differences is that the TAKE IT DOWN Act regime is structurally more aggressive than the DMCA — faster clock, lower verification bar, no counter-notice, no false-notice penalty. It is also, if EFF and CCRI are right, more vulnerable to abuse.
Section 230 implications
The Act's interaction with Section 230 of the Communications Decency Act is the structural shift practitioners have been quietest about. Troutman Pepper Locke's analysis describes the TAKE IT DOWN Act as the first true Section 230 carve-out for hosted user content beyond the FOSTA-SESTA sex-trafficking exception of 2018. Because a violation of the takedown obligation is treated as a violation of a rule defining an unfair or deceptive act or practice under Section 18(a)(1)(B) of the FTC Act, and because Section 230 does not preempt FTC enforcement, a platform's failure to remove a properly noticed depiction creates direct federal liability that Section 230 does not block.
The doctrinal point matters because the platform-immunity architecture courts have built around Section 230 over thirty years assumed that hosted-content liability had to come through narrow carve-outs (FOSTA-SESTA, federal criminal law). The TAKE IT DOWN Act adds a third such carve-out, and the first one in seven years. Whether this is a one-off or the beginning of a more general retreat from Section 230 is the question every platform GC is now thinking about.
Enforcement and penalties
The FTC has primary enforcement authority. A violation of the takedown obligation is treated as a violation of an FTC trade regulation rule, which exposes the violator to civil penalties under 15 U.S.C. § 45(m). Honigman's compliance memo calculates the per-violation penalty at approximately $53,088 under the current FTC penalty schedule, indexed annually for inflation. State attorneys general have parallel enforcement authority.
The criminal side has already produced its first conviction. In April 2026, an Ohio man became the first person convicted under the Act for AI-generating non-consensual sexual images of female neighbors and acquaintances. That prosecution was brought under the criminal-publication provision of Section 2, not the platform-takedown provision of Section 3, but it demonstrates that the Department of Justice is treating the Act as live federal criminal law and is willing to charge cases that involve AI generation.
The 23-day compliance checklist
If you are reading this in late April 2026 and you have not yet shipped a TAKE IT DOWN Act takedown workflow, here is the minimum viable scope:
- Publish a clear and conspicuous removal-request notice. The Act requires a public, easily findable description of the request mechanism. A dedicated URL — typically `/legal/take-it-down` or similar — linked from the site footer satisfies this.
- Build the intake form. Capture all four required elements (signature, depiction identification, good-faith statement, contact info) plus a free-text "additional context" field. Do not require identity verification — that creates an over-collection problem and is not a statutory prerequisite.
- Wire the form to a 24/7 escalation queue. The 48-hour clock does not pause for weekends or holidays. Either staff round-the-clock or design the workflow so that automated removal can occur without human review on receipt of a facially valid request.
- Stand up duplicate-removal tooling. Perceptual hashing for images (PDQ, PhotoDNA), video fingerprinting for video, and audio fingerprinting where applicable. The "reasonable efforts" standard is interpreted in practice as "platforms of comparable size have this tooling; you should too."
- Document every good-faith determination. The Section 3(c) safe harbor is only available for removals made in good faith. That defense lives or dies on contemporaneous records — the request received, the timestamp of action, the rationale for treating the depiction as in-scope. Build the logging before the deadline.
- Train internal counsel and trust-and-safety leadership on the FTC enforcement posture. Assume audits. The FTC has signaled it will request takedown-workflow documentation in early enforcement matters.
- Plan for legacy content. Requests about depictions uploaded years before the effective date are within scope. The intake system must handle them without a date-based filter.
- Coordinate with state-law NCII compliance. The federal regime does not preempt state law; state NCII statutes in 49 states layer on top, several with shorter or differently-defined removal windows.
The bigger picture
The TAKE IT DOWN Act is the first piece of federal legislation since the DMCA to impose substantive content-removal obligations on a broad class of online services. It is also the first such regime to be designed with AI-generated material expressly in scope — the statute defines an "intimate visual depiction" to include "computer-generated" depictions, which is the textual hook for the deepfake provisions. Read together, the Act and the active deepfake-and-likeness litigation tracked here mark the moment at which AI-generated NCII stopped being a state-law problem and started being a federal one.
The interesting question for the next year is not whether platforms comply — they will, because the cost of non-compliance is FTC enforcement and the cost of over-compliance is zero. The interesting questions are constitutional. The EFF's pre-passage analysis argued the Act's combination of fast clock, low verification bar, no counter-notice, and no false-notice penalty creates a textbook prior-restraint problem that will produce viewpoint-discriminatory takedowns at scale. A First Amendment challenge is widely expected; the most likely vehicle is a platform that gets a high-profile takedown demand for plainly protected satirical or journalistic content and refuses.
That challenge has not been filed yet. Until it is, the operative posture for platform counsel is straightforward: ship the workflow, document the good-faith determinations, and assume the FTC will be looking. May 19 is twenty-three days away.
Frequently asked questions
When does the TAKE IT DOWN Act take effect?
The criminal provisions took effect on May 19, 2025, the day President Trump signed the Act. The platform notice-and-takedown obligations of Section 3 — the part most companies have to comply with — become effective May 19, 2026.
Is there a penalty-of-perjury requirement on removal requests?
No. Unlike the DMCA, the TAKE IT DOWN Act asks only for a "good-faith belief" that the depiction is non-consensual. There is no perjury attestation and no statutory liability for bad-faith requests.
Are end-to-end encrypted services exempt?
No. The Act contains no E2EE exemption. Signal, WhatsApp, Telegram, and iMessage are within scope on the face of the statute, which has produced a structural compliance problem because those services cannot, by design, see user content.
What counts as "reasonable efforts" to remove identical copies?
The statute does not define the term. In practice, perceptual-hash matching for images and video fingerprinting are treated as the baseline for platforms of meaningful scale. Smaller platforms can argue a lower bar but should expect any FTC inquiry to ask what duplicate-removal tooling is in place.
Does the Act preempt state NCII laws?
No. The Act explicitly preserves state-law claims. Platforms must comply with both the federal takedown obligation and any applicable state NCII statute, several of which have shorter removal windows or different scope definitions.
Can a platform require identity verification before acting on a removal request?
The statute does not authorize identity verification as a precondition. Imposing one would risk delaying action past the 48-hour deadline, which would defeat the safe harbor. Most platforms preparing for the deadline are accepting requests without verification and dealing with disputes after the fact.
Related coverage: Deepfakes & Likeness — doctrinal explainer · High-profile AI deepfake incidents · Fair use and AI training: what courts have actually ruled · All AI lawsuits & rulings tracked
This analysis is for general informational purposes and is not legal advice. The TAKE IT DOWN Act's effective date and enforcement posture may shift; this page reflects the FTC's announced position and the statute as enacted on May 19, 2025. Editorial team. Questions: editor@ailawsuittracker.com.