
Online Safety Act 2023: Children's Access Assessments and Fraudulent Advertising [Sections 35-40]

Rule: Providers must assess whether children are likely to access their service. If yes, children’s protection duties apply (Sections 11-12). Category 1 and 2A services must prevent fraudulent advertising.

Effective: January 10, 2024


Part 1: Children’s Access Assessments [Sections 35-37]

Section 35: What is a Children’s Access Assessment?

35.1 — Definition

Assessment evaluates:

  1. Can children access the service?
  2. Does the child user condition apply?

Purpose: Determine whether children’s protection duties (Sections 11-12) apply to the service.

35.2 — When Can Provider Conclude “Children Cannot Access”?

ONLY if:

Age verification or age estimation is used with the result that children are not normally able to access the service.

Key principle: Cannot claim “children can’t access” just because:

  • ❌ Service is “designed for adults”
  • ❌ Terms of service prohibit under-18s
  • ❌ Content is inappropriate for children
  • ❌ No children appear to use it (based on anecdotal evidence)

Must have:

  • ✅ Actual age barriers (verification or estimation)
  • ✅ Effectiveness evidence (children actually blocked)

35.3 — The Child User Condition

Condition is satisfied if EITHER:

| Condition | What It Means |
|---|---|
| Significant proportion test | Significant number of users are children (relative to total users) |
| Substantial numbers test | Service nature/characteristics mean substantial numbers of children are likely to use it |

Important: Based on actual usage, not intended audience.

Examples:

| Service | Child User Condition? | Reasoning |
|---|---|---|
| TikTok | ✅ YES | Substantial numbers of children use it |
| LinkedIn | ⚠️ LIKELY NO | Primarily professional networking; few children |
| Gaming platform | ✅ YES | Nature attracts children |
| Adult dating app with age verification | ⚠️ DEPENDS | If verification effective, may conclude children can't access |
| Forum about professional accounting | ⚠️ DEPENDS | Unlikely to attract children (but must assess) |
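The two-limb condition above can be sketched as a simple check. Note that the 5% threshold below is an illustrative assumption only: the Act does not define "significant" or "substantial" numerically, so the real judgement rests with the provider.

```python
def child_user_condition_met(child_users: int, total_users: int,
                             attracts_children: bool,
                             significant_proportion: float = 0.05) -> bool:
    """Return True if EITHER limb of the child user condition is satisfied.

    The 5% threshold is an illustrative assumption; the Act does not
    quantify 'significant proportion'.
    """
    # Limb 1: a significant proportion of actual users are children
    if total_users and child_users / total_users >= significant_proportion:
        return True
    # Limb 2: the service's nature/characteristics make substantial
    # child usage likely, regardless of currently measured numbers
    return attracts_children
```

Because the limbs are joined by OR, a service with very few measured child users can still satisfy the condition if its nature attracts children.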

Section 36: Assessment Duties

36.1 — When to Conduct Assessments

Initial assessment: Required by timelines in Schedule 3 (varies by service launch date and category).

Ongoing assessments required:

| Trigger | Frequency |
|---|---|
| Regular reviews | At least annually |
| Significant design changes | Before implementing changes affecting child access |
| Evidence of change | When age verification becomes less effective OR child usage increases |

36.2 — What Assessment Must Cover

Data sources to examine:

| Evidence Type | Examples |
|---|---|
| User data | Age demographics (where collected) |
| Usage patterns | Times of day, device types typical for children |
| Content analysis | What content is created/consumed (indicates user ages) |
| Age verification logs | How many attempted to access, how many blocked |
| Market research | Third-party data on user demographics |
| Competitor comparison | Similar services' child usage rates |

36.3 — Documentation Requirements

Written record must:

  • ✅ Be in easily understandable form
  • ✅ Contain assessment findings
  • ✅ Explain reasoning for conclusions
  • ✅ Reference evidence used
  • ✅ Be kept up to date

Each service needs separate assessment: Can’t assess “all our services” together if they differ significantly.

36.4 — Assessment Methodology Example

Step 1: Collect data

  • User age demographics (if collected)
  • Usage patterns indicating child users
  • Content types popular with children

Step 2: Analyze child access capability

  • Age verification/estimation in place?
  • How effective is it?
  • Can children bypass it?

Step 3: Evaluate child user condition

  • What percentage of users are children?
  • What is the absolute number?
  • Is service type likely to attract children?

Step 4: Reach conclusion

  • Can children access? (usually YES unless robust age barriers)
  • Is child user condition met? (if significant proportion OR substantial numbers)

Step 5: Document

  • Write up findings
  • Retain evidence
  • Update if circumstances change
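The five steps above can be captured in a minimal written record. This is a sketch only; the field names are illustrative assumptions, not statutory terms.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AccessAssessment:
    """Minimal written record of a children's access assessment (illustrative)."""
    children_can_access: bool          # Step 2: no robust age barriers -> True
    child_user_condition_met: bool     # Step 3: either limb satisfied?
    evidence: list = field(default_factory=list)   # Step 5: retain evidence used
    assessed_on: date = field(default_factory=date.today)

    @property
    def likely_accessed_by_children(self) -> bool:
        # Step 4: YES only if children can access AND the condition is met
        return self.children_can_access and self.child_user_condition_met
```

Keeping the evidence list and assessment date on the record supports the documentation duties in 36.3 (reasoning, evidence, kept up to date).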

Section 37: “Likely to be Accessed by Children” Determination

37.1 — Three Pathways to “Likely Accessed by Children”

Your service is “likely to be accessed by children” if ANY of:

| Pathway | Trigger |
|---|---|
| 1. Assessment concluded YES | Completed assessment finds children CAN access AND child user condition met |
| 2. No assessment completed | Provider hasn't done required assessment (defaults to child-accessible) |
| 3. OFCOM determined | OFCOM investigation finds non-compliance and determines child-accessible |
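The three pathways reduce to a short decision rule; the point to notice is that a missing assessment is treated the same as a YES conclusion. Parameter names are illustrative.

```python
def likely_accessed_by_children(assessment_completed: bool,
                                assessment_says_yes: bool = False,
                                ofcom_determined: bool = False) -> bool:
    """Apply the three Section 37 pathways (names are illustrative)."""
    if ofcom_determined:          # Pathway 3: OFCOM determination stands
        return True
    if not assessment_completed:  # Pathway 2: default presumption applies
        return True
    return assessment_says_yes    # Pathway 1: follow the completed assessment
```

Only a completed assessment concluding NO takes a service out of scope; doing nothing never does.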

37.2 — Default Presumption

Critical principle:

Services default to child-accessible until assessment proves otherwise.

Practical effect:

  • New services = immediately subject to children’s duties
  • Until first assessment done = assume children can access
  • Burden on provider to prove children can’t access (via age barriers)

37.3 — Consequences of “Likely Accessed by Children”

If service is likely accessed by children, MUST comply with:

| Duty | Section | Description |
|---|---|---|
| Children's risk assessment | 11 | Assess risks of harm to children |
| Children's safety measures | 12 | Mitigate and manage those risks |
| Age verification (if applicable) | 13 | For primary priority harmful content |
| Search services' children's duties | 28-29 | Equivalent duties for search engines |

Practical Application for AI Agents

Assessment Workflow

New service launched

Default: "Likely accessed by children"

Children's protection duties APPLY immediately

Conduct children's access assessment (within Schedule 3 timeline)

Can children access?
├─ YES (no age verification) → Is child user condition met?
│   ├─ YES → "Likely accessed by children" = Duties continue
│   └─ NO → Not "likely accessed by children" = Duties cease
└─ NO (effective age verification blocks children)
    → Not "likely accessed by children" = Duties cease

Update assessment:
- Annually
- Before design changes
- If evidence of child access changes

Common Mistakes to Avoid

“Our service is for adults so children don’t use it”

  • Must prove with age barriers, not assumptions

“Terms of service say 18+ so we’re compliant”

  • Terms alone don’t prevent child access

“We don’t collect age data so we can’t assess”

  • Must find other evidence (usage patterns, market research, etc.)

“One-time assessment is enough”

  • Must review annually and when circumstances change

AI Agent Classification Examples

| Platform | Child Access? | Child User Condition? | Likely Accessed by Children? |
|---|---|---|---|
| Social media (no age verification) | YES | YES (substantial numbers) | ✅ YES |
| Gaming platform (no age verification) | YES | YES (nature attracts kids) | ✅ YES |
| Professional network (no age verification) | YES | ⚠️ UNLIKELY (but must assess) | ⚠️ DEPENDS |
| Adult content site (robust age verification) | NO (effective barriers) | N/A | ❌ NO |
| News site with comments (no age verification) | YES | ⚠️ DEPENDS (analyze usage) | ⚠️ DEPENDS |

Part 2: Fraudulent Advertising Duties [Sections 38-40]

Section 38: Category 1 Services — Fraudulent Advertising Duties

38.1 — Which Services?

Applies to: Category 1 user-to-user services only.

As of 2025: ~20 largest platforms (Meta, Google/YouTube, X, TikTok, Snapchat, Reddit, etc.)

38.2 — Core Requirements

Obligation:

Operate service using proportionate systems and processes designed to prevent individuals from encountering fraudulent advertisements.

Specific duties:

| Duty | Description |
|---|---|
| Prevention | Design systems to prevent fraud ads appearing |
| Detection | Identify fraudulent advertisements |
| Swift takedown | When alerted, swiftly remove fraud ads |

38.3 — What is a “Fraudulent Advertisement”?

Definition: Content that is BOTH:

  1. A paid-for advertisement (money exchanged for promotion)
  2. Constitutes an offence under specified fraud laws

Covered offences:

| Law | Offence | Example |
|---|---|---|
| FSMA 2000 | Unauthorized regulated activities | Fake investment advisor without FCA authorization |
| FSMA 2000 | False authorization claims | Claiming to be FCA-regulated when not |
| Fraud Act 2006 | False representation | Fake product claims, misleading offers |
| Fraud Act 2006 | Abuse of position | Trustee misusing position for gain |
| Financial Services Act 2012 | Misleading statements/impressions | False financial information |

38.4 — Proportionate Systems

What “proportionate” means:

| Factor | Consideration |
|---|---|
| Risk level | How much fraud advertising occurs? |
| Technical capability | What detection tech is available? |
| User impact | How harmful is this fraud? |
| Costs | What's reasonable to implement? |

Example measures:

| Measure | Description |
|---|---|
| Advertiser verification | Verify advertiser identities before allowing ads |
| Content screening | AI/human review of ad content pre-publication |
| User reports | Easy reporting mechanism for fraud ads |
| Proactive detection | AI scanning for fraud indicators |
| Blocklists | Block known fraudulent advertisers |
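The layered measures above can be composed into a pre-publication gate. This is a sketch under stated assumptions: the field names (`advertiser_verified`, `fca_authorised`, etc.) and the blocklist entry are illustrative, not a prescribed implementation.

```python
BLOCKLIST = {"acct-0042"}  # known fraudulent advertisers (illustrative entry)

def screen_ad(ad: dict) -> tuple:
    """Run an ad through layered checks before publication.

    Returns (cleared, reason). Field names are illustrative assumptions.
    """
    if not ad.get("advertiser_verified"):
        return False, "advertiser identity not verified"
    if ad.get("advertiser_id") in BLOCKLIST:
        return False, "advertiser on fraud blocklist"
    # Promotions of regulated activities need FCA authorisation (FSMA 2000)
    if ad.get("regulated_activity") and not ad.get("fca_authorised"):
        return False, "financial promotion without FCA authorisation"
    return True, "cleared for publication"
```

Returning a reason string alongside the decision supports the transparency duty in 38.6: the provider can describe when and why ads are rejected.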

38.5 — Swift Takedown Requirement

When triggered: The provider is alerted by a person to the presence of a fraudulent advertisement on the service.

“Swift” means:

  • Within hours for clear-cut fraud
  • Immediate for severe financial scams
  • Reasonable investigation time if ambiguous (but communicate with reporter)
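The tiers above suggest severity-based response targets. The numeric deadlines below are illustrative assumptions only: the Act requires "swift" removal but does not fix timescales.

```python
from datetime import timedelta

# Illustrative internal SLA targets; the Act does not fix numeric deadlines.
TAKEDOWN_TARGETS = {
    "severe_financial_scam": timedelta(minutes=60),  # effectively immediate
    "clear_fraud": timedelta(hours=6),               # "within hours"
    "ambiguous": timedelta(hours=48),                # investigate; update reporter
}

def takedown_deadline(severity: str) -> timedelta:
    """Look up the removal target for a reported ad's severity tier."""
    return TAKEDOWN_TARGETS[severity]
```

A real system would also log when the report was received and when the ad came down, since that interval is the evidence of "swiftness".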

38.6 — Transparency Requirement

Must disclose in terms of service:

  • Technology used for fraud ad detection
  • When it’s used (pre-publication screening? Ongoing monitoring?)
  • How it works (AI detection? Human review? Combination?)

Purpose: Users understand what protections exist.

Section 39: Category 2A Services — Search Engine Fraud Ad Duties

39.1 — Which Services?

Applies to: Category 2A services (search engines classified as high-risk).

Examples: Google Search, Bing, etc.

39.2 — Scope: Search Results Fraud Ads

Obligation: Prevent fraud advertisements in or via search results.

Includes:

  • ✅ Paid search ads (sponsored results)
  • ✅ Ads embedded in search results pages
  • ❌ NOT general webpage content returned in organic results (covered elsewhere)

39.3 — Duties

Same as Category 1 but focused on search:

| Duty | Search-Specific Application |
|---|---|
| Prevention | Design search ad systems to block fraud ads |
| Swift action | When alerted, ensure fraud ads no longer appear in search |
| Transparency | Publicly available statement about proactive technology used |

39.4 — Transparency Statement

Difference from Category 1:

  • Category 1 = disclosure in terms of service (users see it)
  • Category 2A = publicly available statement (anyone can access)

Must cover:

  • Proactive technology used (AI, filters, verification)
  • How it works
  • When applied

Section 40: Definitions — “Advertisement”

40.1 — What Counts as an Advertisement?

Advertisement means: Content communicated to the public, or a section of the public, for the purposes of:

  • Promoting a product or service, OR
  • Promoting interests of a person

Includes:

  • Sponsored posts
  • Paid search results
  • Banner ads
  • Native advertising
  • Influencer promotions (if paid)

Excludes:

  • Organic user content (not paid for)
  • Editorial content by provider
  • Unpaid recommendations

Paid-for advertisement: Advertisement for which payment/valuable consideration was given.
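The Section 40 definition amounts to a two-part test: promotional purpose plus consideration. A minimal sketch, with illustrative field names (not statutory terms):

```python
def is_paid_advertisement(content: dict) -> bool:
    """Sketch of the Section 40 test: content is a paid-for advertisement
    only if it promotes a product/service or a person's interests AND
    payment or other valuable consideration was given.
    Field names are illustrative assumptions.
    """
    promotes = (content.get("promotes_product_or_service")
                or content.get("promotes_persons_interests"))
    return bool(promotes and content.get("consideration_given"))
```

This is why organic user posts and unpaid recommendations fall outside the fraud-ad duties: they fail the consideration limb even when promotional.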

Practical Compliance for AI Agents

Fraudulent Advertising Detection Workflow

For Category 1 user-to-user services:

Ad submitted

Pre-publication screening
├─ Advertiser verified? (identity, FCA authorization if needed)
├─ Content screening (AI fraud detection)
└─ High-risk indicators? (investment schemes, crypto, get-rich-quick)

Publish if cleared

Ongoing monitoring
├─ User reports of fraud
├─ Proactive AI scanning
└─ Regulator alerts

If fraud detected → Swift takedown

For Category 2A search services:

Search query submitted

Generate results

Paid ads to display?
├─ Filter against fraud ad database
├─ Check advertiser authorization
└─ Screen ad content for fraud indicators

Display only cleared ads

User reports fraud ad?

Swift removal from search results

Red Flags for Fraud Ads

| Indicator | Examples |
|---|---|
| Unrealistic returns | "100% guaranteed returns", "Get rich quick" |
| Unauthorized financial services | Investment advice without FCA registration |
| Celebrity endorsements | Fake Elon Musk crypto scams |
| Urgency tactics | "Limited time offer", "Act now or miss out" |
| Impersonation | Fake brand logos, government impersonation |
| Suspicious domains | Typosquatting, recently registered domains |
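The text-based indicators above lend themselves to a first-pass keyword screen. The pattern list is an illustrative assumption; a production system would combine this with ML classifiers, advertiser verification, and human review rather than rely on regexes alone.

```python
import re

# Illustrative patterns mirroring the red-flag table above
FRAUD_PATTERNS = [
    r"guaranteed returns?",
    r"get rich quick",
    r"act now",
    r"limited time offer",
]

def fraud_red_flags(ad_text: str) -> list:
    """Return the red-flag patterns matched by an ad's text (case-insensitive)."""
    return [p for p in FRAUD_PATTERNS if re.search(p, ad_text, re.IGNORECASE)]
```

Matched ads would be routed to human review or blocked pending advertiser checks, not silently published.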

Compliance Checklist

For Category 1 Services:

Fraudulent advertising prevention:

  • Advertiser verification system in place
  • Pre-publication content screening (AI + human review)
  • User reporting mechanism for fraud ads
  • Proactive monitoring of published ads
  • Swift takedown procedure (when alerted)
  • Terms of service disclose fraud detection technology

Children’s access assessment:

  • Complete initial assessment (Schedule 3 timeline)
  • Annual review process established
  • Assessment before design changes
  • Written record maintained
  • If “likely accessed by children” → children’s duties implemented (Sections 11-12)

For Category 2A Services:

Search fraud ad prevention:

  • Search ad screening before display
  • Swift removal process for reported fraud ads
  • Publicly available statement on fraud detection tech

Children’s access assessment:

  • Same as Category 1 (Sections 35-37 apply to all services)

Key Takeaways

Children’s Access Assessments:

  1. Default = child-accessible until assessment proves otherwise
  2. Can’t rely on terms alone — need actual age barriers
  3. Annual reviews required plus pre-change assessments
  4. Child user condition = significant proportion OR substantial numbers

Fraudulent Advertising:

  5. Category 1 = all fraud ads on platform
  6. Category 2A = fraud ads in search results only
  7. Proportionate systems required — prevention + detection + swift takedown
  8. Transparency mandatory — disclose fraud detection technology

Citation

Part 3, Chapter 4 — Children’s Access Assessments, Online Safety Act 2023

Part 3, Chapter 5 — Fraudulent Advertising, Online Safety Act 2023

Contains public sector information licensed under the Open Government Licence v3.0 where applicable. This is not legal advice. Always refer to official sources for authoritative text.