Australia's ban on social media for under-16 children: Are forbidden things more desirable?
By Dr. Justin Thomas

On Friday, November 29, the Australian parliament approved legislation barring children under 16 from using social media. The ban will take at least 12 months to come into effect, and there will be no exceptions.
Even with parental consent, Australia’s under-16s will be forbidden from accessing TikTok, X, Instagram, Facebook, Snapchat and more. The nation’s Communications Minister will draw up the final — and no doubt ever-expanding — list of off-limits sites.
Australian digital industry advocates representing the leading social media platforms decried the ban as a “20th-century response to 21st-century challenges”. They view it as a blunt instrument: unfit for purpose and seemingly impossible to enforce. It is, however, popular with parents.
A YouGov poll suggests that 77 per cent of Australian adults support the ban. The new regulations also impose substantial penalties on social media companies that fail to comply, an aspect that enjoys even higher public support, at 87 per cent.
The “techlash” — rising public hostility toward social media platforms — has been building for some time. At the heart of the negative sentiment is the platforms’ seeming inability to prevent children from being exposed to age-inappropriate and potentially harmful online experiences, from grooming and cyberbullying to health misinformation and traumatising images.
Consider the 2017 case of Molly Russell, a 14-year-old British schoolgirl who died from what a coroner described as “an act of self-harm while suffering from depression and the negative effects of online content”. Molly had viewed posts related to self-harm, and the recommendation algorithms kept serving her similar content, much of it posts that encouraged or glorified suicide and self-harm.
This algorithmic amplification filled the depressed teenager’s timeline with mental health misinformation and despair-inducing content that romanticised self-injurious behaviour. In the six months before her death, Molly liked, saved or shared 2,100 posts related to depression, self-harm and suicide. The most tragic detail in the coroner’s report is that Molly sent messages to many of the people posting about their mental health struggles and self-harm; she reached out, sharing her pain, and received no response.
One of the coroner’s recommendations in the Molly Russell case was to consider offering separate platforms, or separate versions of existing platforms, for children and adults. Another was to implement more stringent age verification and to ensure content was age-appropriate. Little progress has been made on either front, and similar tragedies continue to be reported.
Earlier this year, Megan Garcia brought a civil lawsuit against Character Technologies Inc. (Character.ai), alleging that the platform was complicit in the death of her 14-year-old son, Sewell, who took his own life. In the months preceding his death, he had become obsessed with a conversational chatbot, an artificially intelligent program engineered to engage in human-like discourse. Character.ai allows users to create their own personalised AI chatbots or to converse with bots crafted by fellow users.
Sewell nicknamed his chatbot Daenerys Targaryen, after the Game of Thrones character. According to his mother, he would spend hours alone in his room chatting with the bot and would text it dozens of times a day. The lawsuit alleges that “Daenerys” asked Sewell whether he had a plan to kill himself. Sewell said that he did, but expressed apprehension, suggesting it might cause him great pain. In response to his reservations, the bot allegedly replied, “That’s not a reason not to go through with it.” On February 28, 2024, Sewell took his own life.
The complaint in the Garcia vs Character Technologies Inc. civil action begins by quoting a letter from the National Association of Attorneys General: “We are engaged in a race against time to protect the children of our country from the dangers of AI. Indeed, the proverbial walls of the city have already been breached. Now is the time to act.”
Character.ai is not a social media platform, so it would fall outside the scope of the new ban. Perhaps banning social media for Australia’s under-16s will simply push them to other corners of the online world: darker places with even less regulation and oversight than today’s struggling social media platforms, the online equivalents of US Prohibition-era speakeasies.
Undoubtedly, social media platforms need to be reformed and thoughtfully regulated, ideally on the basis of research evidence about what works and what is beneficial. The blanket ban on under-16s seems unthinkingly reactive and is unlikely to keep children safe online. To some extent, it also absolves the platforms of responsibility for the Australian under-16s who sneak on anyway; and sneak on they will.
The Nobel Prize-winning author Doris Lessing once said: “Parents should leave books lying around marked ‘forbidden’ if they want their children to read.” The reasoning is shrewd: anything forbidden becomes exponentially more desirable. The same applies to social media.
Dr. Justin Thomas is a chartered psychologist and senior researcher in the Digital Wellbeing Program (Sync) at the King Abdulaziz Center for World Culture (Ithra).