Google became the first significant social media intermediary to release a compliance report as mandated by India’s new social media rules, but the report omits a crucial metric: data on content removed as a result of automated detection.
The social media rules, which came into effect on May 26, require companies like Google’s YouTube, Facebook and Twitter to publish a monthly compliance report detailing the complaints received and the action taken on them.
Under the rules, the companies also have to reveal the number of specific communication links or parts of information that they removed following proactive monitoring using automated tools.
While Google’s report, released on Wednesday, does mention the number of complaints it received and the pieces of content it removed, it does not say how much content Google removed using automated tools.
In response to our query, a Google spokesperson said, “the data for automated take downs is captured in the YouTube report which comes out on a quarterly basis which includes India, and we have said in our statement that we will continue to refine the report for India under the new rules. So it’s a matter of requiring sufficient time for data processing and validation”.
“In future reports, data on removals as a result of automated detection, as well as data relating to impersonation and graphic sexual content complaints received post May 25, 2021, will be included,” Google said in the report, without specifying a time frame for “future”.
In April, Google said it received a total of 27,762 complaints from individual users located in India via “designated mechanisms”. More than 96 per cent of these complaints related to copyright infringement.
Google also revealed that in April it removed a total of 59,350 individual pieces of content from its platforms following user complaints. This number exceeds the total number of complaints because a single complaint may include multiple links, and Google treats each link as a separate “item”.
“When we receive complaints from individual users regarding allegedly unlawful or harmful content, we review the complaint to determine if the content violates our community guidelines or content policies, or meets local legal requirements for removal,” Google said in its report.
Facebook and WhatsApp had earlier said they would release a transparency report in July. We have reached out to Twitter and Koo to understand when they will publish their compliance reports.