Tech firms urged to implement child safety measures in UK
Social media platforms such as Facebook, Instagram, and TikTok face proposed measures in the UK to modify their algorithms and better safeguard children from harmful content. These measures, outlined by regulator Ofcom, are part of the broader Online Safety Act and include implementing robust age checks to shield children from harmful material related to sensitive topics like suicide, self-harm, and pornography.
Ofcom’s Chief Executive, Melanie Dawes, has underscored the urgency of the situation, emphasising that tech firms must be held accountable for protecting children online. She asserts that platforms must reconfigure aggressive algorithms that push harmful content to children and incorporate age verification mechanisms.
The use of complex algorithms by social media companies to curate content has raised serious concerns. These algorithms often amplify harmful material, potentially influencing children negatively. The proposed measures seek to address this issue by urging platforms to reevaluate their algorithmic systems and prioritise child safety, giving children a safer online experience tailored to their age.
The UK’s Technology Secretary, Michelle Donelan, called on social media platforms to engage with regulators and implement these measures proactively, cautioning against waiting for enforcement and potential fines. Following a consultation, Ofcom plans to finalise its Children’s Safety Codes of Practice within a year, with enforcement actions, including penalties for non-compliance, expected once Parliament approves the codes.