Online Safety Act 2023: Children's Access Assessments and Fraudulent Advertising [Sections 35-40]
Rule: Providers must assess whether children are likely to access their service. If yes, children’s protection duties apply (Sections 11-12). Category 1 and 2A services must prevent fraudulent advertising.
Effective: January 10, 2024
Part 1: Children’s Access Assessments [Sections 35-37]
Section 35: What is a Children’s Access Assessment?
35.1 — Definition
Assessment evaluates:
- Can children access the service?
- Does the child user condition apply?
Purpose: Determine whether children’s protection duties (Sections 11-12) apply to the service.
35.2 — When Can Provider Conclude “Children Cannot Access”?
ONLY if:
Age verification or age estimation is used with the result that children are not normally able to access the service.
Key principle: Cannot claim “children can’t access” just because:
- ❌ Service is “designed for adults”
- ❌ Terms of service prohibit under-18s
- ❌ Content is inappropriate for children
- ❌ No children appear to use it (based on anecdotal evidence)
Must have:
- ✅ Actual age barriers (verification or estimation)
- ✅ Effectiveness evidence (children actually blocked)
35.3 — The Child User Condition
Condition is satisfied if EITHER:
| Condition | What It Means |
|---|---|
| Significant proportion test | Significant number of users are children (relative to total users) |
| Substantial numbers test | Service nature/characteristics mean substantial number of children are likely to use it |
Important: Based on actual usage, not intended audience.
Examples:
| Service | Child User Condition? | Reasoning |
|---|---|---|
| TikTok | ✅ YES | Substantial numbers of children use it |
| Professional networking site | ⚠️ LIKELY NO | Primarily professional audience; few child users |
| Gaming platform | ✅ YES | Nature attracts children |
| Adult dating app with age verification | ⚠️ DEPENDS | If verification effective, may conclude children can’t access |
| Forum about professional accounting | ⚠️ DEPENDS | Unlikely to attract children (but must assess) |
Section 36: Assessment Duties
36.1 — When to Conduct Assessments
Initial assessment: Required by timelines in Schedule 3 (varies by service launch date and category).
Ongoing assessments required:
| Trigger | Frequency |
|---|---|
| Regular reviews | At least annually |
| Significant design changes | Before implementing changes affecting child access |
| Evidence of change | When age verification becomes less effective OR child usage increases |
36.2 — What Assessment Must Cover
Data sources to examine:
| Evidence Type | Examples |
|---|---|
| User data | Age demographics (where collected) |
| Usage patterns | Times of day, device types typical for children |
| Content analysis | What content is created/consumed (indicates user ages) |
| Age verification logs | How many attempted to access, how many blocked |
| Market research | Third-party data on user demographics |
| Competitor comparison | Similar services’ child usage rates |
36.3 — Documentation Requirements
Written record must:
- ✅ Be in easily understandable form
- ✅ Contain assessment findings
- ✅ Explain reasoning for conclusions
- ✅ Reference evidence used
- ✅ Be kept up to date
Each service needs separate assessment: Can’t assess “all our services” together if they differ significantly.
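The documentation requirements map naturally onto a structured record. A minimal sketch in Python; the `AssessmentRecord` structure and its field names are illustrative, not terms from the Act:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AssessmentRecord:
    """Illustrative written record for one service's access assessment."""
    service_name: str
    findings: str                # assessment findings, in easily understandable form
    reasoning: str               # why each conclusion was reached
    evidence_refs: list[str] = field(default_factory=list)  # evidence relied on
    last_updated: date = field(default_factory=date.today)

    def is_complete(self) -> bool:
        # All substantive elements must be present for the record to
        # satisfy the documentation requirements listed above.
        return all([self.service_name, self.findings, self.reasoning, self.evidence_refs])
```

One record per service mirrors the rule that significantly different services cannot share a single assessment.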
36.4 — Assessment Methodology Example
Step 1: Collect data
- User age demographics (if collected)
- Usage patterns indicating child users
- Content types popular with children
Step 2: Analyze child access capability
- Age verification/estimation in place?
- How effective is it?
- Can children bypass it?
Step 3: Evaluate child user condition
- What percentage of users are children?
- What is the absolute number?
- Is service type likely to attract children?
Step 4: Reach conclusion
- Can children access? (usually YES unless robust age barriers)
- Is child user condition met? (if significant proportion OR substantial numbers)
Step 5: Document
- Write up findings
- Retain evidence
- Update if circumstances change
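The five steps above can be folded into a single decision function. A minimal sketch, assuming simplified inputs: the argument names are hypothetical, and the numeric thresholds are placeholders for the "significant"/"substantial" judgments the Act leaves to the provider's evidence.

```python
def assess_childrens_access(
    age_barriers_effective: bool,    # Step 2: verification/estimation actually blocks children
    child_user_share: float,         # Step 3: estimated fraction of users who are children
    child_user_count: int,           # Step 3: estimated absolute number of child users
    service_attracts_children: bool, # Step 3: nature of the service likely to attract children
) -> dict:
    # Step 2: children can access unless effective age barriers are in place.
    children_can_access = not age_barriers_effective

    # Step 3: child user condition (significant proportion OR substantial numbers).
    # These thresholds are placeholders, not statutory values.
    SIGNIFICANT_SHARE = 0.05
    SUBSTANTIAL_COUNT = 10_000
    child_user_condition = (
        child_user_share >= SIGNIFICANT_SHARE
        or child_user_count >= SUBSTANTIAL_COUNT
        or service_attracts_children
    )

    # Step 4: "likely to be accessed by children" requires BOTH limbs.
    return {
        "children_can_access": children_can_access,
        "child_user_condition": child_user_condition,
        "likely_accessed_by_children": children_can_access and child_user_condition,
    }
```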
Section 37: “Likely to be Accessed by Children” Determination
37.1 — Three Pathways to “Likely Accessed by Children”
Your service is “likely to be accessed by children” if ANY of:
| Pathway | Trigger |
|---|---|
| 1. Assessment concluded YES | Completed assessment finds children CAN access AND child user condition met |
| 2. No assessment completed | Provider hasn’t done required assessment (defaults to child-accessible) |
| 3. OFCOM determined | OFCOM investigation finds non-compliance and determines child-accessible |
37.2 — Default Presumption
Critical principle:
Services default to child-accessible until assessment proves otherwise.
Practical effect:
- New services = immediately subject to children’s duties
- Until first assessment done = assume children can access
- Burden on provider to prove children can’t access (via age barriers)
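The three pathways and the default presumption reduce to one rule: the service is treated as child-accessible unless a completed assessment concludes otherwise. A minimal sketch (argument names are illustrative, not statutory terms):

```python
def likely_accessed_by_children(
    assessment_completed: bool,
    assessment_concluded_yes: bool,
    ofcom_determined_yes: bool,
) -> bool:
    # Pathway 3: an OFCOM determination settles the question.
    if ofcom_determined_yes:
        return True
    # Pathway 2 / default presumption: with no completed assessment,
    # the service is treated as child-accessible.
    if not assessment_completed:
        return True
    # Pathway 1: otherwise, follow the completed assessment's conclusion.
    return assessment_concluded_yes
```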
37.3 — Consequences of “Likely Accessed by Children”
If service is likely accessed by children, MUST comply with:
| Duty | Section | Description |
|---|---|---|
| Children’s risk assessment | 11 | Assess risks of harm to children |
| Children safety measures | 12 | Mitigate and manage those risks |
| Age verification (if applicable) | 13 | For primary priority harmful content |
| Search services children duties | 28-29 | Equivalent duties for search engines |
Practical Application for AI Agents
Assessment Workflow
New service launched
↓
Default: "Likely accessed by children"
↓
Children's protection duties APPLY immediately
↓
Conduct children's access assessment (within Schedule 3 timeline)
↓
Can children access?
├─ YES (no age verification) → Is child user condition met?
│ ├─ YES → "Likely accessed by children" = Duties continue
│ └─ NO → Not "likely accessed by children" = Duties cease
└─ NO (effective age verification blocks children)
→ Not "likely accessed by children" = Duties cease
Update assessment:
- Annually
- Before design changes
- If evidence of child access changes
Common Mistakes to Avoid
❌ “Our service is for adults so children don’t use it”
- Must prove with age barriers, not assumptions
❌ “Terms of service say 18+ so we’re compliant”
- Terms alone don’t prevent child access
❌ “We don’t collect age data so we can’t assess”
- Must find other evidence (usage patterns, market research, etc.)
❌ “One-time assessment is enough”
- Must review annually and when circumstances change
AI Agent Classification Examples
| Platform | Child Access? | Child User Condition? | Likely Accessed by Children? |
|---|---|---|---|
| Social media (no age verification) | YES | YES (substantial numbers) | ✅ YES |
| Gaming platform (no age verification) | YES | YES (nature attracts kids) | ✅ YES |
| Professional network (no age verification) | YES | ⚠️ UNLIKELY (but must assess) | ⚠️ DEPENDS |
| Adult content site (robust age verification) | NO (effective barriers) | N/A | ❌ NO |
| News site with comments (no age verification) | YES | ⚠️ DEPENDS (analyze usage) | ⚠️ DEPENDS |
Part 2: Fraudulent Advertising Duties [Sections 38-40]
Section 38: Category 1 Services — Fraudulent Advertising Duties
38.1 — Which Services?
Applies to: Category 1 user-to-user services only.
As of 2025: ~20 largest platforms (Meta, Google/YouTube, X, TikTok, Snapchat, Reddit, etc.)
38.2 — Core Requirements
Obligation:
Operate service using proportionate systems and processes designed to prevent individuals from encountering fraudulent advertisements.
Specific duties:
| Duty | Description |
|---|---|
| Prevention | Design systems to prevent fraud ads appearing |
| Detection | Identify fraudulent advertisements |
| Swift takedown | When alerted, swiftly remove fraud ads |
38.3 — What is a “Fraudulent Advertisement”?
Definition: A fraudulent advertisement is content that BOTH:
- Is a paid-for advertisement (payment or other valuable consideration given for the promotion)
- Amounts to an offence under specified fraud laws
Covered offences:
| Law | Offence | Example |
|---|---|---|
| FSMA 2000 | Unauthorized regulated activities | Fake investment advisor without FCA authorization |
| FSMA 2000 | False authorization claims | Claiming to be FCA-regulated when not |
| Fraud Act 2006 | False representation | Fake product claims, misleading offers |
| Fraud Act 2006 | Abuse of position | Trustee misusing position for gain |
| Financial Services Act 2012 | Misleading statements/impressions | False financial information |
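The two-limb definition lends itself to a simple predicate. A minimal sketch, with the covered offences listed as an enum whose names are shorthand for the statutory offences in the table, not legal citations:

```python
from enum import Enum, auto

class FraudOffence(Enum):
    FSMA_UNAUTHORISED_ACTIVITY = auto()      # FSMA 2000: unauthorised regulated activity
    FSMA_FALSE_AUTHORISATION_CLAIM = auto()  # FSMA 2000: false claim of FCA authorisation
    FRAUD_BY_FALSE_REPRESENTATION = auto()   # Fraud Act 2006 s.2
    FRAUD_BY_ABUSE_OF_POSITION = auto()      # Fraud Act 2006 s.4
    MISLEADING_STATEMENTS = auto()           # Financial Services Act 2012

def is_fraudulent_advertisement(paid_for: bool, offence: FraudOffence | None) -> bool:
    # Both limbs must hold: a paid-for advertisement AND a specified offence.
    return paid_for and offence is not None
```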
38.4 — Proportionate Systems
What “proportionate” means:
| Factor | Consideration |
|---|---|
| Risk level | How much fraud advertising occurs? |
| Technical capability | What detection tech is available? |
| User impact | How harmful is this fraud? |
| Costs | What’s reasonable to implement? |
Example measures:
| Measure | Description |
|---|---|
| Advertiser verification | Verify advertiser identities before allowing ads |
| Content screening | AI/human review of ad content pre-publication |
| User reports | Easy reporting mechanism for fraud ads |
| Proactive detection | AI scanning for fraud indicators |
| Blocklists | Block known fraudulent advertisers |
38.5 — Swift Takedown Requirement
When triggered: A person alerts the provider to the presence of a fraudulent advertisement.
“Swift” is not defined in the Act; in practice it means:
- Within hours for clear-cut fraud
- Immediate for severe financial scams
- Reasonable investigation time if ambiguous (but communicate with reporter)
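A minimal triage sketch mirroring the tiers above; `ReportSeverity` and the response targets are illustrative, since the Act itself only says "swiftly":

```python
from enum import Enum

class ReportSeverity(Enum):
    SEVERE_FINANCIAL_SCAM = "severe"
    CLEAR_CUT_FRAUD = "clear"
    AMBIGUOUS = "ambiguous"

def takedown_response(severity: ReportSeverity) -> str:
    if severity is ReportSeverity.SEVERE_FINANCIAL_SCAM:
        return "remove immediately"
    if severity is ReportSeverity.CLEAR_CUT_FRAUD:
        return "remove within hours"
    # Ambiguous reports get a reasonable investigation window,
    # with status updates back to the person who reported the ad.
    return "investigate promptly and keep the reporter informed"
```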
38.6 — Transparency Requirement
Must disclose in terms of service:
- Technology used for fraud ad detection
- When it’s used (pre-publication screening? Ongoing monitoring?)
- How it works (AI detection? Human review? Combination?)
Purpose: Users understand what protections exist.
Section 39: Category 2A Services — Search Engine Fraud Ad Duties
39.1 — Which Services?
Applies to: Category 2A services (the largest search services, designated under OFCOM's threshold conditions).
Examples: Google Search, Bing, etc.
39.2 — Scope: Search Results Fraud Ads
Obligation: Prevent fraud advertisements in or via search results.
Includes:
- ✅ Paid search ads (sponsored results)
- ✅ Ads embedded in search results pages
- ❌ NOT general webpage content returned in organic results (covered elsewhere)
39.3 — Duties
Same as Category 1 but focused on search:
| Duty | Search-Specific Application |
|---|---|
| Prevention | Design search ad systems to block fraud ads |
| Swift action | When alerted, ensure fraud ads no longer appear in search |
| Transparency | Publicly available statement about proactive technology used |
39.4 — Transparency Statement
Difference from Category 1:
- Category 1 = disclosure in terms of service (users see it)
- Category 2A = publicly available statement (anyone can access)
Must cover:
- Proactive technology used (AI, filters, verification)
- How it works
- When applied
Section 40: Definitions — “Advertisement”
40.1 — What Counts as an Advertisement?
Advertisement means: Content communicated to the public, or a section of the public, for the purposes of:
- Promoting a product or service, OR
- Promoting the interests of a person
Includes:
- Sponsored posts
- Paid search results
- Banner ads
- Native advertising
- Influencer promotions (if paid)
Excludes:
- Organic user content (not paid for)
- Editorial content by provider
- Unpaid recommendations
Paid-for advertisement: Advertisement for which payment/valuable consideration was given.
Practical Compliance for AI Agents
Fraudulent Advertising Detection Workflow
For Category 1 user-to-user services:
Ad submitted
↓
Pre-publication screening
├─ Advertiser verified? (identity, FCA authorization if needed)
├─ Content screening (AI fraud detection)
└─ High-risk indicators? (investment schemes, crypto, get-rich-quick)
↓
Publish if cleared
↓
Ongoing monitoring
├─ User reports of fraud
├─ Proactive AI scanning
└─ Regulator alerts
↓
If fraud detected → Swift takedown
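The Category 1 flow above translates to a two-stage pipeline: pre-publication screening, then ongoing monitoring. A minimal sketch in which the `Ad` fields, the score threshold, and the checks are hypothetical stand-ins for real verification and detection systems:

```python
from dataclasses import dataclass

@dataclass
class Ad:
    advertiser_verified: bool  # identity (and FCA authorisation where relevant) checked
    fraud_score: float         # content-screening model output, 0.0 to 1.0
    high_risk_category: bool   # e.g. investment schemes, crypto, get-rich-quick offers

def pre_publication_screen(ad: Ad, threshold: float = 0.5) -> bool:
    """Return True if the ad is cleared for publication."""
    if not ad.advertiser_verified:
        return False            # unverified advertisers are blocked outright
    if ad.high_risk_category:
        threshold = 0.2         # stricter bar for high-risk ad categories
    return ad.fraud_score < threshold

def monitor_published_ad(ad: Ad, user_reported: bool, threshold: float = 0.5) -> str:
    # Published ads stay under review: user reports and proactive
    # rescanning both feed the swift-takedown path.
    if user_reported or ad.fraud_score >= threshold:
        return "swift takedown"
    return "keep live"
```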
For Category 2A search services:
Search query submitted
↓
Generate results
↓
Paid ads to display?
├─ Filter against fraud ad database
├─ Check advertiser authorization
└─ Screen ad content for fraud indicators
↓
Display only cleared ads
↓
User reports fraud ad?
↓
Swift removal from search results
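The search-side flow is narrower: only paid ads in results are in scope, so the filter runs over the candidate ads, not the organic results. A minimal sketch; the blocklist and authorisation fields are hypothetical placeholders:

```python
def filter_search_ads(candidate_ads: list[dict], fraud_blocklist: set[str]) -> list[dict]:
    """Return only the paid ads cleared to appear alongside search results."""
    cleared = []
    for ad in candidate_ads:
        if ad["advertiser_id"] in fraud_blocklist:
            continue  # known fraudulent advertiser
        if ad.get("claims_fca_authorisation") and not ad.get("fca_verified"):
            continue  # authorisation claim that could not be verified
        cleared.append(ad)
    return cleared
```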
Red Flags for Fraud Ads
| Indicator | Examples |
|---|---|
| Unrealistic returns | “100% guaranteed returns”, “Get rich quick” |
| Unauthorized financial services | Investment advice without FCA registration |
| Celebrity endorsements | Fake Elon Musk crypto scams |
| Urgency tactics | “Limited time offer”, “Act now or miss out” |
| Impersonation | Fake brand logos, government impersonation |
| Suspicious domains | Typosquatting, recently registered domains |
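The indicators in the table can seed a first-pass heuristic scan. A minimal sketch using keyword matching; the phrase list is illustrative, and a production system would combine this with model-based detection and advertiser verification:

```python
import re

# Illustrative phrases drawn from the red-flag table above.
RED_FLAG_PATTERNS = [
    r"guaranteed returns?",
    r"get rich quick",
    r"limited time offer",
    r"act now",
    r"risk[- ]free investment",
]

def red_flag_hits(ad_text: str) -> list[str]:
    """Return the red-flag patterns matched in the ad copy (case-insensitive)."""
    lowered = ad_text.lower()
    return [p for p in RED_FLAG_PATTERNS if re.search(p, lowered)]

# Flags both an unrealistic-returns claim and an urgency tactic.
print(red_flag_hits("100% guaranteed returns - act now before it's too late!"))
```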
Compliance Checklist
For Category 1 Services:
Fraudulent advertising prevention:
- Advertiser verification system in place
- Pre-publication content screening (AI + human review)
- User reporting mechanism for fraud ads
- Proactive monitoring of published ads
- Swift takedown procedure (when alerted)
- Terms of service disclose fraud detection technology
Children’s access assessment:
- Complete initial assessment (Schedule 3 timeline)
- Annual review process established
- Assessment before design changes
- Written record maintained
- If “likely accessed by children” → children’s duties implemented (Sections 11-12)
For Category 2A Services:
Search fraud ad prevention:
- Search ad screening before display
- Swift removal process for reported fraud ads
- Publicly available statement on fraud detection tech
Children’s access assessment:
- Same as Category 1 (Sections 35-37 apply to all services)
Key Takeaways
Children’s Access Assessments:
- Default = child-accessible until assessment proves otherwise
- Can’t rely on terms alone — need actual age barriers
- Annual reviews required plus pre-change assessments
- Child user condition = significant proportion OR substantial numbers
Fraudulent Advertising:
- Category 1 = all fraud ads on the platform
- Category 2A = fraud ads in search results only
- Proportionate systems required: prevention + detection + swift takedown
- Transparency mandatory: disclose fraud detection technology
Citation
Part 3, Chapter 4 — Children’s Access Assessments, Online Safety Act 2023
Part 3, Chapter 5 — Fraudulent Advertising, Online Safety Act 2023