Twitter responsible for half of child abuse material UK investigators found on web platforms

Twitter said it had "serious concerns" over the IWF's figures Credit: PA

Twitter is responsible for almost half of the child abuse material found by UK investigators being hosted openly on popular tech sites, according to figures seen by the Telegraph.

Statistics from the Internet Watch Foundation (IWF) show that 49 per cent of the images, videos and URLs it found on social media, search engines and cloud services in the last three years were on the social network, accounting for 1,396 of the 2,835 incidents recorded.

Child protection figures warned that each incident could represent hundreds or thousands of images or videos, as the incidents included URLs linking back to child abuse websites.

The IWF, the UK's online abuse watchdog, can only find child abuse material on the open web, meaning all the abuse images and videos found had slipped through tech companies' filters and were available for anyone to see. The IWF is also unable to scan for abuse images in messaging apps or even closed Facebook groups.

The IWF says the figures show the number of child abuse images and URLs being openly hosted on popular sites is increasing year-on-year, with 742 incidents found in 2016, 1,016 in 2017 and 1,077 in 2018.

John Carr OBE, secretary of the Children’s Charities' Coalition for Internet Safety, which represents organisations such as the NSPCC and Barnardo's, said: “It is appalling and scandalous that thousands of child abuse images are openly available on popular social media and search engine sites for anybody to see. 

“The crucial point is that each report is not one image but there could be hundreds or thousands of images behind each of those reports.

“The industry needs to eradicate this horrific material from their services and Twitter in particular needs to get its house in order.

“Well done to the Daily Telegraph for getting these numbers into the public domain. We shouldn’t have had to wait this long.”

The IWF only released figures for the number of incidents that had been verified as child abuse by human analysts, rather than the total number of reports flagged by its scanning software or from the public, meaning the actual numbers are likely to be higher.

Microsoft’s Bing search engine had the second highest number of incidents with 604 recorded between 2016 and 2018, followed by Amazon with 375, and Google with 348.

The IWF found 72 incidents of abuse being openly hosted on Facebook, 18 on its sister site Instagram and 22 on YouTube.

Twitter said it had concerns over how the IWF had compiled its figures.

A spokesperson said: “We have serious concerns about the accuracy of these figures and the metrics used to produce them.

“The IWF has used one data standard across all services, social platforms, file hosting platforms, and search engines, which isn’t a reliable metric, nor does it reflect the scale of proactive work we do in this area.

“We will continue to work with the IWF to address their concerns and improve the accuracy of their data, so that it reflects the full picture of our proactive work to remove egregious child sexual exploitation content from our service.

“This work is complex and the offenders are often sophisticated bad actors, which is why it is essential to ensure any data released is robust, accurate, and reflective of the critical work being done in this space.”

Microsoft also queried the data and said that it had made considerable efforts to remove such material from Bing since 2018.

A spokesman for Microsoft added: “The data, taken from 2018 and without consideration of improvements made as a result of reports and routine diligence over the course of the current year, are from unverified or raw end-user reports, and are therefore not an accurate measure of the actual prevalence of child sexual exploitation and abuse images on the platform.”

Susie Hargreaves OBE, the CEO of the IWF, said: “Our data is accurate and recorded fairly and consistently regardless of where we find child sexual abuse material. 

“We’re also very happy to make it available to an independent hotline inspection team, comprising a law enforcement auditor and High Court judge, for scrutiny.

“Every time we find an image or a video of a child being sexually abused we perform an assessment against UK law using highly trained human analysts. We then trace the content to the host country and company, before working with partners around the world to get it removed. Our data is trusted by police, governments and internet companies internationally.”

The vast majority of the abuse material the IWF finds each year is on the open web, in the form of websites set up purposely to host child abuse.

In 2018, the organisation found and took down 105,047 URLs hosting child abuse images. 

The IWF, which was founded in 1996, is largely funded by annual membership subscriptions from tech companies, with Facebook, Amazon, Microsoft and Google among its highest contributors, paying more than £80,000 to the organisation.

Twitter is also a member of the IWF but pays a lower membership fee of between £27,000 and £54,000, according to the IWF's website.

The figures mark the first time the IWF has disclosed how many images and links its investigators have found on mainstream tech sites.

Most social media and search sites automatically scan images to check they are not known abuse images before they are uploaded, meaning the vast majority of images are caught before being published.

The NSPCC said the fact that thousands of abuse images were still getting through and being openly hosted on popular sites highlighted the urgent need for a Government regulator to scrutinise companies' child protection measures. 

The Government has said it plans to introduce a statutory duty of care on tech giants to better protect their users, a measure campaigned for by the Telegraph.

Andy Burrows, NSPCC Head of Child Safety Online Policy, said: “Let’s be clear, this imagery should not be on social networks in the first place because processes should ensure known images are taken down immediately. 

“Yet, offenders are using these sites to churn out new child abuse pictures and exploiting the platforms’ messaging features to spread them.

“However, it is not enough that firms just take down this awful content. They should be proactively looking for, and disrupting, grooming behaviour that leads to new images being made and more children being hurt. 

“Currently tech giants can pick and choose what they tell us when it comes to the extent of child abuse on their platforms, which is why duty of care legislation is urgently needed to force them to be transparent and show how effectively they are tackling this problem.”

Following the release of the figures, Amazon said the majority of incidents found by the IWF had been on its cloud service, Amazon Web Services.

A spokesman for the company said: “Our acceptable use policy clearly prohibits illegal content, and it is not tolerated. When notified of illegal content on our network, we respond quickly and take action to remove it.

“When we receive notifications of illegal content from the Internet Watch Foundation (IWF), the National Center for Missing and Exploited Children (NCMEC, in the US), or any of the other agencies we work with, we take immediate action with this type of harmful content.

“We also proactively fight against some of the worst types of crime, working with organisations such as IWF, law enforcement, Marinus Analytics, NCMEC, Thorn, and many others. These organisations use our advanced machine learning technologies to scour the internet in order to fight child exploitation and trafficking.”

A spokesman for Facebook said: “Keeping young people safe on our platform is one of our top priorities.

“In addition to using technology to proactively detect grooming and prevent child sexual exploitation on our platform, we work with child protection experts, including the National Centre for Missing and Exploited Children as well as specialist law enforcement teams like CEOP in the UK, to keep young people safe.”
