Online Safety Act 2023: Pornographic Content Age Verification
Pornographic Content Age Verification [Sections 79-82]
Rule: Services providing pornographic content must use age verification or age estimation to prevent children (under-18s) from accessing it. Methods must be “highly effective.”
Effective: January 17, 2025
Section 79: Definitions
79.1 — Provider Pornographic Content
Means pornographic material:
Published or displayed by the provider (or on provider’s behalf).
Key principle: This is provider-generated or provider-displayed content, NOT user-generated content.
Includes:
| Content Type | Example |
|---|---|
| Provider-uploaded | Pornography hosted directly by provider |
| Provider-created | Videos/images produced by the service |
| Automated content | AI-generated pornography created by provider’s tools |
| Algorithmic content | Pornographic responses to user prompts (e.g., AI chatbot generating explicit content) |
| Embedded content | Pornography embedded from other sites but displayed by provider |
| Hidden/obscured content | Pornography behind blurred/pixelated previews requiring click to reveal |
Excludes:
| Content Type | Why Excluded |
|---|---|
| User-generated pornography | Covered under Part 3 children’s duties instead |
| Search results | Appearing in search listings = separate rules |
| Text-only content | Not “regulated” under Part 5 (see 79.2) |
| Text with non-pornographic GIFs/emojis | Not “regulated” under Part 5 (see 79.2) |
| Paid advertisements | Covered under fraudulent advertising rules |
79.2 — Regulated Provider Pornographic Content
Definition:
Provider pornographic content that is NOT:
- Text-only
- Text with non-pornographic GIFs, emojis, or symbols
- Paid advertisements
Practical effect: Images, videos, AI-generated visual pornography = regulated. Pure text descriptions of sex = not regulated under Part 5.
Why text excluded:
- Harder to define “pornographic” text
- Lower accessibility risk for children (requires reading comprehension)
- Freedom of expression concerns
79.3 — Published or Displayed
Includes:
- Directly visible content — Porn hosted on the website
- Hidden behind interaction — Blurred thumbnails, “click to reveal” images
- Embedded from elsewhere — Third-party content embedded in provider’s page
- AI-generated responses — Pornographic images created by chatbot when user prompts
Excludes:
- Search engine results — Porn appearing in search listings (separate Part 3 duties apply)
Example scenarios:
| Scenario | “Published or Displayed”? |
|---|---|
| Adult website hosting porn videos | ✅ YES |
| AI image generator creating porn on request | ✅ YES |
| News site embedding porn video in article | ✅ YES |
| Forum where users post porn | ❌ NO (user-generated = Part 3) |
| Search engine showing porn in results | ❌ NO (search results = Part 3) |
79.4 — Pornographic Content Definition
The Act defines content as “pornographic” if:
It is reasonable to assume it was produced solely or principally for the purpose of sexual arousal.
Assessment factors:
| Factor | Questions to Ask |
|---|---|
| Purpose | Is the main purpose to sexually arouse the viewer? |
| Context | Is it educational, artistic, medical, or sexual? |
| Explicitness | How graphic is the sexual content? |
| Dominant character | What is the overall character of the material? |
Examples:
| Content | Pornographic? |
|---|---|
| Explicit sexual imagery for arousal | ✅ YES |
| Sex education diagrams | ❌ NO (educational purpose) |
| Artistic nude photography | ⚠️ DEPENDS (is primary purpose arousal?) |
| Medical images of genitalia | ❌ NO (medical purpose) |
| Erotic literature with explicit descriptions | ⚠️ MAY BE — but text-only content is not regulated under Part 5 |
| Sexually explicit music videos | ⚠️ DEPENDS (is primary purpose arousal or artistic expression?) |
“Primary purpose” test:
Assess the material
↓
What is the MAIN purpose?
├─ Sexual arousal → Pornographic ✅
├─ Education → Not pornographic ❌
├─ Art → Not pornographic ❌
├─ Medical → Not pornographic ❌
└─ Mixed purposes → Which is PRIMARY?
├─ Arousal primary → Pornographic ✅
└─ Other purpose primary → Not pornographic ❌
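As a first-pass triage in an automated pipeline, the tree above could be encoded as a simple function. This is a sketch only — the statutory test ultimately requires human judgment, and the purpose labels are illustrative, not terms from the Act.

```python
# Illustrative first-pass triage of the "primary purpose" test.
# Routes material based on its assessed PRIMARY purpose; borderline
# cases should always be escalated to human/legal review.

def is_pornographic(primary_purpose: str) -> bool:
    """Return True if material should be treated as pornographic
    for Part 5 purposes, given its assessed primary purpose."""
    non_pornographic = {"education", "art", "medical"}
    if primary_purpose == "sexual_arousal":
        return True
    if primary_purpose in non_pornographic:
        return False
    raise ValueError(f"unrecognised purpose: {primary_purpose!r}")

assert is_pornographic("sexual_arousal") is True
assert is_pornographic("education") is False
```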
Section 80: Which Services Must Comply?
80.1 — Three Requirements
A service must comply if ALL three are met:
- ✅ Service publishes/displays regulated provider pornographic content
- ✅ Service is NOT exempt (Schedules 1 or 9)
- ✅ Service has UK links (significant UK users OR targets UK market)
80.2 — Exemptions
Schedule 1 exemptions:
| Exempt Service Type | Examples |
|---|---|
| Internal business services | Company intranets, employee-only systems |
| Email services | Standard email providers |
| SMS/MMS services | Text messaging |
| Limited functionality services | Very restricted features |
Schedule 9 exemptions:
| Exempt Service Type | Why Exempt |
|---|---|
| Certain user-to-user services | Already covered by Part 3 children’s duties |
| Certain search services | Already covered by Part 3 search duties |
Key principle: Services with pornography in user-generated content use Part 3 age-gating (Section 13), not Part 5 age verification.
80.3 — UK Links
A service has UK links if:
| Link Type | Evidence |
|---|---|
| Significant UK users | Substantial number of UK users (no fixed threshold) |
| UK targeting | .co.uk domain, UK pricing, UK marketing, UK customer support |
Same test as Part 3 services (Section 4).
Example:
| Service | UK Links? |
|---|---|
| Pornhub (global, many UK users) | ✅ YES — significant UK users |
| Small UK-based adult site | ✅ YES — targeting UK market |
| US adult site, no UK users, no UK marketing | ❌ NO — no UK links |
80.4 — Scope: Design, Operation, and Use in UK
Duties apply to:
How service is designed, operated, and used in the United Kingdom.
Practical effect:
| Aspect | Must Comply in UK |
|---|---|
| Age verification | UK users must pass age checks |
| Content delivery | Pornography not shown to UK children |
| Records | Must document UK compliance approach |
Geographic scope: Providers can choose to implement age verification globally OR just for UK users (geo-fencing).
Section 81: Age Verification Duties
81.1 — Core Obligation
Providers must:
Operate service using age verification or age estimation (or both) to prevent children from encountering regulated pornographic content.
Two compliance pathways:
| Pathway | Method | Example |
|---|---|---|
| 1. Age verification | Check identity against authoritative source | Government ID check, credit card verification |
| 2. Age estimation | Estimate age without identity verification | Facial age estimation, behavioral analysis |
| 3. Both | Combination approach | Age estimation for most users, ID verification for edge cases |
81.2 — Effectiveness Standard
Methods must be:
“Highly effective at correctly determining whether or not a particular user is a child.”
What “highly effective” means:
| Criterion | Guidance |
|---|---|
| Technical accuracy | Correctly determines whether a user is a child (OFCOM sets no single numeric threshold) |
| Robustness | Works reliably in real-world conditions |
| Reliability | Produces consistent, reproducible results |
| Fairness | Avoids bias and discriminatory outcomes across user groups |
OFCOM guidance (Section 82) provides:
- Criteria for assessing effectiveness
- A non-exhaustive list of kinds of methods capable of being highly effective
- Guidance on testing and evidencing performance
81.3 — Age Verification Methods
Common approaches (accuracy figures are indicative estimates, not regulatory thresholds):
| Method | How It Works | Indicative Accuracy | Privacy Impact |
|---|---|---|---|
| Government ID | User uploads ID, provider verifies | ~99% | HIGH (personally identifiable) |
| Credit card | Only adults can hold UK credit cards | ~90% | MEDIUM (financial data) |
| Mobile phone contract | Contracts require adulthood | ~85% | MEDIUM (phone number) |
| Age verification service | Third party verifies age without revealing identity | ~95% | LOW (privacy-preserving) |
| Digital identity | Reusable certified digital ID services | ~99% | MEDIUM (depends on implementation) |
Best practice: Use privacy-preserving age verification (third-party token system where provider never sees identity).
Example workflow:
User attempts to access pornography
↓
Redirected to age verification service
↓
User verifies age (ID check, credit card, etc.)
↓
Verification service sends token to provider
(Token says "user is adult" but doesn't reveal identity)
↓
Provider grants access
↓
Provider never sees user's real name or ID
81.4 — Age Estimation Methods
Common approaches (accuracy figures are indicative estimates, not regulatory thresholds):
| Method | How It Works | Indicative Accuracy | Privacy Impact |
|---|---|---|---|
| Facial age estimation | AI analyzes face photo | ~90% | LOW (no identity verification) |
| Behavioral analysis | Browsing patterns, device type | ~75% | LOW (aggregated data) |
| Account analysis | Email domain, registration age | ~70% | LOW (existing data) |
| Hybrid | Combine multiple signals | ~85% | LOW-MEDIUM |
Limitations:
- Lower accuracy than verification
- Can be circumvented more easily
- May not meet “highly effective” standard alone
When acceptable: Age estimation may be “highly effective” if:
- Combined with other methods (defense in depth)
- High confidence threshold (e.g., only allow if 95%+ certainty user is adult)
- Periodic re-verification
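The high-confidence-threshold idea can be sketched as a gating policy. The 18-year cutoff is statutory; the confidence value and age buffer below are illustrative policy choices, not OFCOM figures:

```python
# Sketch: gate on facial age estimation with a confidence buffer.
# Borderline or low-confidence estimates escalate to full
# verification rather than being denied outright.

def estimation_decision(estimated_age: float, confidence: float,
                        age_buffer: float = 7.0,
                        min_confidence: float = 0.95) -> str:
    """Return 'allow' or 'verify'.

    age_buffer: require the estimate to exceed 18 + buffer, so that
    borderline faces (e.g. estimated 19) go to full verification.
    """
    if confidence >= min_confidence and estimated_age >= 18 + age_buffer:
        return "allow"    # clearly adult with high confidence
    return "verify"       # borderline or low confidence: escalate

assert estimation_decision(30.0, 0.98) == "allow"
assert estimation_decision(19.0, 0.98) == "verify"   # inside the buffer
assert estimation_decision(30.0, 0.60) == "verify"   # low confidence
```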
81.5 — Preventing Circumvention
Systems must:
- ✅ Resist VPN circumvention
- ✅ Prevent sharing of age-verified accounts
- ✅ Re-verify periodically
- ✅ Detect and block automated bypasses
Example measures:
| Threat | Mitigation |
|---|---|
| Child using parent’s account | Periodic re-verification, biometric check |
| Fake ID | Document verification, liveness detection |
| VPN to non-UK location | Detect VPN usage, require age verification regardless |
| Screenshots of age verification | Time-limited tokens, device-bound credentials |
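The last two mitigations — time-limited tokens and device-bound credentials — can be sketched with an HMAC binding the token to a device fingerprint and expiry. The secret handling and fingerprint source here are assumptions:

```python
# Sketch: time-limited, device-bound age tokens. The token is an
# HMAC over (device fingerprint, expiry), so a screenshot or a
# copy pasted to another device fails verification.
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # illustrative; keep in a secrets manager

def issue_token(device_fp: str, ttl_seconds: int = 90 * 24 * 3600) -> tuple[str, int]:
    expiry = int(time.time()) + ttl_seconds
    msg = f"{device_fp}:{expiry}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return sig, expiry

def check_token(device_fp: str, sig: str, expiry: int) -> bool:
    if time.time() >= expiry:
        return False                     # expired: force re-verification
    msg = f"{device_fp}:{expiry}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

sig, exp = issue_token("device-A")
assert check_token("device-A", sig, exp) is True
assert check_token("device-B", sig, exp) is False   # wrong device
```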
81.6 — Record-Keeping Requirements
Providers must maintain written records documenting:
1. Age verification methods used
   - Which technologies are deployed
   - Why they were chosen
   - Evidence of effectiveness
2. Privacy considerations
   - Data protection impact assessment
   - How privacy laws are complied with (UK GDPR, DPA 2018)
   - Data minimization measures
Record format:
- ✅ Written (digital or physical)
- ✅ Easily accessible for OFCOM review
- ✅ Updated when methods change
Retention: Keep records for as long as the methods are in use, and for an appropriate period after any change (check OFCOM guidance for retention expectations).
81.7 — Public Statement Requirement
Providers must publish:
Summary of age verification methods in a publicly available statement.
Statement must include:
| Information | Example |
|---|---|
| Methods used | “We use third-party age verification service Yoti” |
| How it works | “Users verify their age by uploading ID to Yoti, which sends us a token confirming they are an adult” |
| Privacy measures | “We never see your ID or real name — only that you’re over 18” |
| Effectiveness | “This method has 95%+ accuracy according to independent testing” |
Where to publish:
- Service’s own website (prominent location)
- Easy to find (e.g., “Age Verification Policy” link in footer)
- Plain language (accessible to users)
Why required:
- Transparency for users
- Accountability for providers
- OFCOM monitoring
Section 82: OFCOM Guidance
82.1 — Mandatory Guidance
OFCOM must publish guidance on:
1. Effectiveness examples
   - Which age verification/estimation methods are “highly effective”
   - Accuracy standards for different technologies
   - Testing methodologies
2. Privacy considerations
   - How to comply with UK GDPR and the DPA 2018
   - Data minimization techniques
   - Privacy-preserving verification methods
3. Compliance principles
   - Best practices for implementation
   - Common pitfalls to avoid
   - Integration with existing systems
4. Technical standards
   - Interoperability requirements
   - Security standards
   - Accessibility considerations
82.2 — Consultation Requirements
OFCOM must consult with:
| Stakeholder | Input Needed |
|---|---|
| Secretary of State | Policy alignment |
| Service providers | Technical feasibility, costs |
| Industry representatives | Standards, best practices |
| Child safety advocates | Effectiveness from protection perspective |
| Privacy advocates | Privacy-preserving approaches |
| Information Commissioner | Data protection compliance |
| Age verification technology vendors | Technical capabilities |
82.3 — Guidance Status
Current status: OFCOM published its Part 5 guidance in January 2025, covering:
- Kinds of age assurance capable of being highly effective (e.g., photo-ID matching, facial age estimation, mobile network operator checks, credit card checks, digital identity services, open banking)
- Criteria for assessing effectiveness (technical accuracy, robustness, reliability, fairness)
- Privacy-preserving approaches and data protection expectations
- Circumvention prevention
Note: OFCOM does not certify or approve specific vendors; providers must evidence that their chosen method meets the “highly effective” standard.
Available at: OFCOM Age Verification Guidance
Practical Application for AI Agents
Determining if Duty Applies
Decision tree:
Does your service display pornographic content?
├─ NO → Part 5 doesn't apply
└─ YES → Is it provider pornographic content (not user-generated)?
├─ NO (user-generated) → Use Part 3 Section 13 age-gating instead
└─ YES (provider content) → Is it regulated (images/videos, not just text)?
├─ NO (text-only) → Part 5 doesn't apply
└─ YES → Is service exempt (Schedule 1 or 9)?
├─ YES → Part 5 doesn't apply
└─ NO → Does service have UK links?
├─ NO → Part 5 doesn't apply (in UK)
└─ YES → MUST implement age verification/estimation ✅
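The tree above can be sketched as a single function; the flag names are illustrative labels, not statutory terms:

```python
# Sketch of the Part 5 applicability decision tree above.

def part5_duty_applies(
    displays_porn: bool,
    provider_content: bool,   # provider-published, not user-generated
    regulated: bool,          # images/video, not text-only
    exempt: bool,             # Schedule 1 or 9 exemption
    uk_links: bool,           # significant UK users or UK targeting
) -> str:
    if not displays_porn:
        return "Part 5 does not apply"
    if not provider_content:
        return "Use Part 3 Section 13 age-gating instead"
    if not regulated:
        return "Part 5 does not apply (text-only)"
    if exempt:
        return "Part 5 does not apply (exempt service)"
    if not uk_links:
        return "Part 5 does not apply in the UK (no UK links)"
    return "Must implement age verification/estimation"

# A dedicated adult site with UK users:
assert part5_duty_applies(True, True, True, False, True) == \
    "Must implement age verification/estimation"
```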
Compliance Pathway Selection
Choosing between age verification and age estimation:
| Factor | Age Verification | Age Estimation |
|---|---|---|
| Accuracy | Higher (~95-99%) | Lower (~75-90%) |
| Privacy | Lower (requires ID) | Higher (no identity) |
| User friction | Higher (upload ID) | Lower (quick check) |
| Cost | Higher (third-party service) | Medium (in-house AI) |
| Regulatory safety | Higher (clear evidence of effectiveness) | Lower (may challenge “highly effective” standard) |
Recommendation:
- For dedicated adult sites: Age verification (higher certainty, regulatory safety)
- For mainstream sites with some adult content: Age estimation + voluntary verification option
- Hybrid approach: Age estimation as initial filter, require verification for high-risk content
Privacy-Preserving Implementation
Best practice architecture:
User visits adult site (your service)
↓
No age verification token detected
↓
Redirect to third-party age verification service (e.g., Yoti)
↓
User verifies age:
├─ Uploads government ID, OR
├─ Uses credit card, OR
└─ Uses digital identity
↓
Verification service checks age
↓
If adult (18+) → Sends anonymous token to your service
(Token contains: "user is 18+", token ID, expiry date)
(Token does NOT contain: name, DOB, address, ID number)
↓
Your service stores token (not user identity)
↓
Grant access to pornographic content
↓
Re-verify periodically (e.g., every 90 days)
Key principles:
- Data minimization — Don’t collect identity data if you don’t need it
- Third-party verification — Let specialist services handle ID verification
- Tokenization — Store proof of age, not identity
- Time-limited — Re-verify periodically to prevent account sharing
Age Verification Service Selection
Criteria for choosing provider:
| Criterion | Why Important |
|---|---|
| Aligns with OFCOM guidance | Uses a kind of method OFCOM identifies as capable of being highly effective |
| Privacy-preserving | Doesn’t share user identity with adult site |
| High accuracy | Strong, evidenced age-determination performance |
| User-friendly | Low friction (users won’t abandon) |
| Multi-method | Supports various verification types (ID, credit card, etc.) |
| Secure | ISO 27001 certified, SOC 2 compliant |
| Accessible | Works for users with disabilities |
Example commercial providers (illustrative, not an OFCOM-approved list):
- Yoti
- Jumio
- Onfido
- VerifyMy
- AgeChecked
Common Implementation Mistakes
✅ “We only have text erotica so we don’t need age verification”
- Correct: Text-only content IS excluded from Part 5 (but confirm nothing visual is published or displayed)
❌ “We use cookies to remember user is 18+ based on self-declaration”
- Wrong: Self-declaration is NOT “highly effective” — easily circumvented
❌ “We ask users to enter their birthdate”
- Wrong: No verification, children can lie
❌ “We block UK users entirely to avoid compliance”
- Allowed but not recommended: Loses UK market, geo-blocking imperfect
❌ “We store users’ government ID photos for verification”
- Wrong: Massive privacy risk, unnecessary (use third-party service instead)
❌ “Facial age estimation alone is enough”
- Risky: May not meet “highly effective” standard (check OFCOM guidance)
❌ “We only verify once when account is created”
- Weak: Children can use parent’s account, re-verification needed
For Adult Content Platforms
Full compliance checklist:
Age verification implementation:
- Select an age verification service aligned with OFCOM guidance
- Integrate verification workflow into site
- Implement privacy-preserving token system
- Block access to pornography until verification complete
- Periodic re-verification (every 90 days recommended)
Circumvention prevention:
- Detect and block VPN/proxy usage (or require verification regardless)
- Prevent account sharing (device fingerprinting, biometric re-checks)
- Monitor for fake IDs (use services with liveness detection)
- Rate limit verification attempts
Record-keeping:
- Document age verification methods used
- Maintain data protection impact assessment
- Record effectiveness testing results
- Update records when methods change
Public statement:
- Publish age verification policy on website
- Explain methods in plain language
- Describe privacy protections
- Include effectiveness evidence
OFCOM reporting:
- Prepare for OFCOM information requests
- Demonstrate compliance with “highly effective” standard
- Show evidence of circumvention prevention
For AI-Generated Content Services
Special considerations:
If your service generates pornographic content via AI (e.g., image generation chatbots):
Classification:
- AI-generated pornography = provider pornographic content (you created it via your tools)
- Part 5 applies (not Part 3)
Compliance approach:
- Age verification BEFORE allowing access to pornography generation features
- Content moderation to prevent CSEA generation (separate duty)
- Prevent prompts bypassing age restrictions (“jailbreaking”)
Example:
User prompts AI: "Generate explicit sexual image"
↓
System checks: Does user have age verification token?
├─ NO → Block request, require age verification
└─ YES (18+ verified) → Generate content
↓
Additional checks:
- Content would violate CSEA rules? → Block
- Content would show real person without consent? → Block
└─ Otherwise → Deliver to verified adult user
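This flow can be sketched as a gating function. The two check functions are crude placeholders standing in for real moderation classifiers:

```python
# Sketch: gating AI pornographic-content generation behind age
# verification plus content checks.

def csea_check(prompt: str) -> bool:
    """Placeholder: True if the prompt risks CSEA content."""
    return "child" in prompt.lower()

def real_person_check(prompt: str) -> bool:
    """Placeholder: True if the prompt appears to target a real,
    named person without consent (crude stand-in for NER)."""
    return prompt.strip().istitle()

def handle_generation_request(prompt: str, has_adult_token: bool) -> str:
    if not has_adult_token:
        return "blocked: age verification required"
    if csea_check(prompt):
        return "blocked: CSEA risk"
    if real_person_check(prompt):
        return "blocked: real-person risk"
    return "generate"

assert handle_generation_request("explicit adult scene", True) == "generate"
assert handle_generation_request("explicit adult scene", False) == \
    "blocked: age verification required"
```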
Enforcement and Penalties
OFCOM Powers
If provider fails to implement age verification:
| Enforcement Action | When Used |
|---|---|
| Information notice | Request evidence of compliance |
| Confirmation decision | Formally determine non-compliance |
| Penalty | Fine up to £18M or 10% global turnover (whichever higher) |
| Business disruption measures | Require app stores to remove app, ISPs to block access |
Graduated approach: OFCOM typically:
- Issues information notice
- Assesses compliance
- Gives opportunity to remedy
- If non-compliant → Penalties
Timeline: Providers are given a reasonable period to implement remedies after an OFCOM determination; the Act sets no fixed period.
Criminal Liability
No direct criminal offence for failing to age-verify pornography.
BUT:
- ❌ Senior managers can be liable if provider’s failure is due to senior management consent/connivance
- ❌ Providing false information to OFCOM = criminal offence
Compliance Checklist
For pornographic content providers:
Initial assessment:
- Determine if service displays provider pornographic content
- Confirm content is regulated (images/videos, not just text)
- Check if exempt (Schedule 1 or 9)
- Assess UK links (users or targeting)
- If all YES → Duty applies
Age verification implementation:
- Choose highly effective method (verification or estimation)
- Select OFCOM-approved service if using third-party
- Implement privacy-preserving approach
- Test and document effectiveness against OFCOM’s criteria
- Prevent circumvention (VPN detection, account sharing prevention)
Documentation:
- Written records of methods used
- Data protection impact assessment
- Privacy law compliance documentation
- Public statement on age verification policy
Ongoing compliance:
- Monitor effectiveness
- Re-verify users periodically
- Update methods as technology improves
- Respond to OFCOM information requests
Key Takeaways
- Provider content only — User-generated porn covered by Part 3, not Part 5
- Text excluded — Only images/videos require age verification
- “Highly effective” standard — OFCOM assesses technical accuracy, robustness, reliability, and fairness rather than fixing one numeric accuracy threshold
- Privacy-preserving preferred — Use third-party verification, don’t collect unnecessary identity data
- Age verification OR age estimation — Can choose either or combine both
- UK links trigger duty — Significant UK users OR UK targeting
- Public statement required — Transparency about methods used
- Enforcement via OFCOM — Fines up to £18M or 10% turnover
- No exemption for small providers — All regulated services must comply
- Effective January 2025 — Duty now in force
Citation
Part 5 — Pornographic Content, Online Safety Act 2023