New Delhi, India: More than 20 lakh WhatsApp accounts in India were banned between May 15 and June 15 to prevent online abuse and keep users secure, according to the Facebook-owned instant messaging platform’s monthly compliance report released on Thursday. The report, mandated by the country’s new Information Technology Rules, details the actions taken by social media and communication platforms such as WhatsApp and Twitter.
“To keep our users safe and secure, we continually invest in technology, people, and processes… Preventing accounts from spreading damaging or undesired communications at scale is our top priority. We have increased our capabilities to identify accounts sending a high or unusual pace of messages, and we banned 2 million accounts in India alone from May 15 to June 15 for attempting this type of abuse,” WhatsApp said.
“WhatsApp uses a variety of tools and resources to help users avoid engaging in destructive behaviour on the app. We put a special emphasis on prevention because we feel it is far preferable to prevent harmful behaviour from occurring in the first place than to detect it after it has occurred,” the company added. “Abuse detection takes place at three points in an account’s life cycle: during registration, during messaging, and in response to negative feedback, such as user reports and blocks. These systems are supplemented by a team of analysts that assesses edge cases and helps us increase our efficacy over time,” according to the firm.
Although the report notes WhatsApp’s compliance with the new IT rules, the company has filed a legal complaint in Delhi against the Indian government, requesting that regulations that went into effect last month be blocked; experts believe the rules would force the company to violate privacy protections. The case asks the Delhi High Court to declare one of the provisions, which requires social media companies to identify the “original creator of information” when authorities demand it, a breach of India’s constitution.
WhatsApp, which has almost 400 million Indian users, has stated that it will “continue to engage with the government of India on practical solutions targeted at keeping people safe, including responding to valid legal requests for the information we have.”
Big social media companies must appoint Indian residents to key compliance roles, remove content within 36 hours of a judicial order, and set up a framework to respond to complaints, according to the new Intermediary Guidelines and Digital Media Ethics Code, which was unveiled in February.
If the companies do not comply, they risk lawsuits and criminal prosecution, as evidenced by the long-running feud between the government and Twitter, which has repeatedly requested extensions to meet demands such as hiring India-based compliance officers.