Recently, instances of obscene and unlawful content related to minors being shared on Instagram pages and group chats, including ‘Boys locker room’, came to light. These instances raise grave concerns about the safety of minors on the platform and necessitate a review of the mechanisms the platform has in place to protect children from sexual exploitation.
This paper analyses two components of this mechanism: Instagram’s sign-up process and the various terms, conditions and policies that users consent to when they create accounts on the platform. It examines the structure and content of these components and the practical issues that might hamper their efficacy in preventing the exploitation of minors.
The Sign-up Process
The sign-up process for Instagram, like that of many other social media platforms, does not require any documentation or proof of the accuracy of the details being submitted. A potential user need only provide a full name, a phone number, a username, a password and a date of birth. The only detail that is verified (through an OTP) is the user’s phone number. Although children under 13 years of age are not allowed to sign up, an underage child can do so simply by entering a false date of birth: there is no mechanism whatsoever to confirm that the date of birth entered is correct.
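The weakness of such an age gate can be illustrated with a minimal sketch. This is a hypothetical reconstruction of the general logic, not Instagram’s actual code: the gate depends entirely on a self-reported date of birth, so entering an earlier (false) date is enough to pass it.

```python
from datetime import date

MIN_AGE = 13  # Instagram's stated minimum age

def age_from_dob(dob: date, today: date) -> int:
    """Compute age in whole years from a date of birth."""
    years = today.year - dob.year
    # Subtract one year if the birthday has not yet occurred this year.
    if (today.month, today.day) < (dob.month, dob.day):
        years -= 1
    return years

def can_sign_up(claimed_dob: date, today: date) -> bool:
    """Hypothetical age gate: nothing checks the claimed date of birth
    against any document, so the check only constrains honest users."""
    return age_from_dob(claimed_dob, today) >= MIN_AGE

today = date(2020, 6, 1)
# A 10-year-old entering a real date of birth is blocked...
print(can_sign_up(date(2010, 1, 1), today))  # False
# ...but the same child entering a false date of birth passes.
print(can_sign_up(date(2000, 1, 1), today))  # True
```

The sketch makes the structural point concrete: because the only verified input is the phone number, the age check filters honesty, not age.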
This procedure has various implications for the safety of children on the platform. The age limit, which is itself arguably very low, is rendered meaningless by the ease and finality with which it can be bypassed. Young children, who are susceptible to sexual abuse, may thus register on the platform and be placed at risk. On the flip side, since no proof or documentation is required, malefactors can easily register using false information, making the platform even more unsafe for children.
User Agreements and Policies
Prima facie, Instagram’s user agreements and policies provide a comprehensive mechanism for countering child sexual abuse. Under these policies, users are proscribed from sharing (i) unlawful information and (ii) any information for an “illegal or unauthorised purpose”. They are also forbidden from posting private or confidential information, or any other information that violates another person’s rights. Specifically, sexual content involving minors and threats to post other users’ confidential information cannot be shared. Penalties for violating these proscriptions are also prescribed. These include the removal of information that is required to be removed by law or that violates the policies. Unlawful conduct can further result in the deactivation of the user’s account and the sharing of the information concerned with law enforcement agencies. The latter is especially true of instances of child sexual abuse: all such instances are reported to the National Center for Missing and Exploited Children, which refers them to law enforcement agencies across the globe as needed to ensure justice for the victim.
The implementation of these guidelines is enabled by Instagram’s ability to collect user data, including the content users post, their communications, and any other information they provide while signing up, sharing content or communicating with others. This information may be used to, among other purposes, investigate complaints, combat harmful conduct and determine whether the platform’s policies have been breached. Further, Instagram reserves the right to share this information with law enforcement agencies when they require it or when the platform deems it necessary to prevent an offence from being committed.
However, certain facts militate against the effectiveness of these policies.
Firstly, their potential to deter is diminished by users’ lack of awareness of their contents. In reality, most users do not read these policies: accepting them is treated as little more than a formality. Thus, even terms with deterrent potential, such as Instagram’s ability to access users’ content and share it with law enforcement agencies, fail to have that effect.
Secondly, this lack of awareness might leave children not knowing how to deal with online sexual abuse. They might therefore fail to take adequate protective measures or to report malefactors in time.
Thirdly, although the policies proscribe users from providing false information, the absence of any mechanism to verify that information renders the rule toothless. Malefactors can thus easily create fake accounts.
Fourthly, though the legal framework empowers Instagram to access users’ content and take appropriate action, in reality not all content is screened in real time. This is apparent from how the recent nefarious incidents continued until users exposed them. Many such groups and private chats, however, may never be found and called out by other users.
As the recent incidents show, there is a need both for more stringent mechanisms to control child sexual abuse on online platforms and for greater awareness of these mechanisms. Further, there needs to be a mechanism for the verification of the information users provide.