U.K. Enforces Online Safety Act, Giving Tech Giants Three Months to Comply
The U.K.’s landmark Online Safety Act has officially come into force, ushering in stringent regulations to combat harmful content online and hold tech giants like Meta, Google, and TikTok accountable. The new rules, effective Monday, aim to tackle illegal content, including terrorism, hate speech, fraud, and child sexual abuse, with oversight by Ofcom, the British media and telecommunications regulator.
Ofcom has issued its first codes of practice and guidance, detailing the steps platforms must take to comply with the law. The act imposes “duties of care” on tech firms, requiring them to prevent harmful content from spreading on their platforms. Companies have until March 16, 2025, to complete risk assessments of illegal harms on their services and to implement measures such as enhanced moderation tools, easier reporting systems, and built-in safety features.
Ofcom Chief Executive Melanie Dawes emphasized the regulator’s commitment to enforcing the new standards. “We’ll be watching the industry closely to ensure firms match up to the strict safety standards set for them under our first codes and guidance, with further requirements to follow swiftly in the first half of next year,” she said in a statement.
The Online Safety Act, which became law in October 2023, carries severe penalties for non-compliance. Ofcom can impose fines of up to 10% of a company’s global annual revenue, and senior managers may face imprisonment for repeated violations. The regulator also has the authority to block access to services in the U.K. or to restrict platforms’ payment and advertising capabilities.