Social Media Bans for Children - Not a Panacea


We're going to be banning social media in order to protect our children. Or, at least, it seems we might be heading in that direction after Australia introduced the world's first such ban for under 16s at the end of last year. Following Australia's lead, some of our EU neighbours have gotten the ball rolling on legislation themselves - the National Assembly in France has adopted a bill to ban under 15s, and the Spanish government have announced intentions to pursue an under 16 ban. Many other countries, including the UK, Denmark, and ourselves, are also making noises about similar measures.

Why the bans?

Why exactly are we banning social media for kids in the first place? The answer may seem obvious - it's dangerous of course! But what exactly are the dangers, and how dangerous are they? To answer these questions, we can look to a number of sources, beginning with those who are responsible for introducing the first bans.

Anthony Albanese, Prime Minister of Australia, wrote that social media is:

a weapon for bullies, a platform for peer pressure, a driver of anxiety, a vehicle for scammers and, worst of all, a tool for online predators

French President Emmanuel Macron announced:

The brains of our children and teenagers are not for sale. Their emotions are not for sale, neither to American platforms nor Chinese algorithms.

And, Prime Minister Pedro Sánchez of Spain said social media is a place of:

...addiction, abuse, pornography, manipulation [and] violence.

This is heavy stuff. You would be forgiven for thinking that it might be better to rip up the entire internet infrastructure, encase it in concrete, and shoot it out into space.

Behind the political rhetoric, there does lie a general realisation that there are problems associated with social media use, and understanding the dangers and pitfalls requires a certain level of mental maturity. As Anthony Albanese noted,

...as we get older most of us get better at spotting the fakes and the risks and we build up the resilience to ignore the nastiness.

So, according to the politicians, the dangers that exist can be more safely navigated by older teens, as opposed to younger ones.

What does the science say?

Is there any empirical evidence to back up the effectiveness of a ban on social media for children? Short answer: no. Since these large-scale bans have only just come into force, their true impact will only become clear in the years to come. There are early reports of some users feeling much better now that they're off their phones, while others are finding ways around the bans. However, it is wise to take these individual stories with a grain of salt.

The good news is we do have an abundance of scientific research into the effects of technology and social media use on adolescents. However, problematically, the results and conclusions often differ drastically. Take the following three academic articles (which have all been quoted in various media outlets) as an example. First, in 2025, a large meta-analysis of published results carried out by an international team of researchers found screen use leads to 'socioemotional problems'. This, in turn, leads to further screen use, and worse socioemotional problems. It's a vicious downward cycle. Second, a 2025 University of Manchester study on data from over 25,000 adolescents concluded the opposite: that there is no significant impact of technology use on mental health. Finally, a 2026 large-scale (over 100,000 participants), three-year longitudinal study of Australian adolescents at the University of South Australia found that moderate social media use was associated with the best well-being, better than both no use and excessive use. So, where does that leave us? Well, probably a bit confused.

Amidst the conflicting conclusions, one consistency does emerge across the majority of the scientific literature - the negative effects of excessive use of the internet, games, screens, social media, and so on. The term 'excessive use' is employed when these activities begin to interfere with normal daily functioning. And what is the driver behind the addictive nature of social media? The 'algorithms'.

Can't we just ban the algorithms?

If we are keen to stop excessive or problematic social media use, can't we just make the algorithms that feed us addictive content illegal instead of banning the platforms? After all, a lot of social media is good - it enables us to stay in contact with distant friends and family, easily arrange events, and so on. It seems a shame to ban under 16s from these elements. One teenager, speaking recently on a podcast about the mobile phone bans in schools here in Ireland, told a story about a girl who had left her school to move to another part of the country. Social media was the main reason they felt like they were still present in each other's daily lives, and was identified as a key factor in their continued friendship. It seems unfair to put up boundaries that would result in loss of connection, especially when that is such an important part of adolescence.

Banning the bad parts of social media and leaving the good seems like a reasonable solution. TD Paul Murphy recently proposed such a measure in the Dáil. He asked for the Taoiseach to support his party's bill:

...to have toxic algorithms on social media platforms switched off.

Unfortunately, in reality, this is not a feasible solution. In fact, from a technical standpoint, it is quite a naïve proposition. We would effectively be banning maths. The family of algorithms used to recommend content to users based on their own and others' history of activity are called 'recommender algorithms'. These are simply statistical models which perform probabilistic calculations. They can also be very useful - recommendations for music, movies, and series on streaming platforms can be a good way to encounter new artists, directors, and actors. The same goes for online bookstores, clothes shops, and so on. Could we ban recommender algorithms for one platform but not another? It would also be very hard to determine what is and is not a 'toxic algorithm'. ChatGPT is a Large Language Model (LLM), but it can also act as a recommender system. Would we have to include LLMs in the ban? Additionally, this type of regulation would be impossible to police. Private tech companies keep their proprietary code under wraps. Unless we force companies to make all their code open source, this proposal is a non-starter.
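To see just how mundane the underlying maths is, here is a minimal sketch of item-based collaborative filtering, one of the simplest members of the recommender family. All the names and data are invented for illustration; real platforms use far larger models, but the core idea - score unseen items by their statistical similarity to items you already engaged with - is the same.

```python
import math

# Toy interaction matrix: rows are users, columns are items (1 = watched/liked).
# Every name and value here is invented purely for illustration.
ratings = {
    "alice": {"cats": 1, "golf": 1, "news": 0},
    "bob":   {"cats": 1, "golf": 0, "news": 1},
    "carol": {"cats": 0, "golf": 1, "news": 0},
}
items = ["cats", "golf", "news"]

def cosine(a, b):
    """Cosine similarity between two item columns, taken across all users."""
    dot = sum(r[a] * r[b] for r in ratings.values())
    na = math.sqrt(sum(r[a] ** 2 for r in ratings.values()))
    nb = math.sqrt(sum(r[b] ** 2 for r in ratings.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user):
    """Rank the items a user hasn't seen by similarity to items they liked."""
    liked = [i for i in items if ratings[user][i]]
    unseen = [i for i in items if not ratings[user][i]]
    scores = {i: sum(cosine(i, j) for j in liked) for i in unseen}
    return sorted(scores, key=scores.get, reverse=True)

# carol liked only "golf"; "cats" is recommended first because another
# golf-watcher (alice) also watched cats.
print(recommend("carol"))  # → ['cats', 'news']
```

There is nothing 'toxic' in the calculation itself - the identical arithmetic ranks films on a streaming service or books in an online shop. Whatever harm arises comes from what is being recommended and how relentlessly, not from the probability sums.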

What can be done?

Education. I will admit I am very biased on this topic. As an educator, when I watch the news or read articles on the need for social media bans, I am always on the lookout for information on what the education for children is going to be. It has been frustrating and disappointing to see a paucity of coverage and a lack of detail on this matter to date. Australia, despite leading the way and already having a ban in place, has not published updates to its curriculum in relation to the age restrictions. The eSafety Commissioner's website, which provides educational resources for schools, still shows an "(updates in relation to social media age restrictions coming soon)" notice.

Banning an activity for children prior to figuring out the details of how to prepare them for it is putting the cart before the horse. When we (the adults) restrict a potentially dangerous activity until a certain age, e.g. alcohol consumption or sexual relationships, we are obligated to provide an education so that children are informed and prepared when they do come of age. For alcohol, teenagers learn the details of what a 'unit' of alcohol is, how much is in each type of drink, and the short- and long-term effects on the health of the body. With sex education, we teach the details of the menstrual cycle, pregnancy and STDs, alongside consent and relationships. These topics are taught for years before coming of age. Knowing the details, the mechanisms underlying these phenomena, helps us to make better decisions.

Children themselves have been requesting more, better, and earlier education on digital safety for a long time, including here in Ireland. The Ombudsman for Children's Office (OCO) said blanket bans on phones in schools were not the best solution. A report into this issue, 'One Size Does Not Fit All' (2025), listed five recommendations by a Youth Advisory Panel, which include the following:

  • "Invest more in resources for digital education"
  • "Increase support, education and awareness for parents, teachers and other adults to better understand technology"

CyberSafeKids - Ireland's online safety charity - published a Trends and Usage Report in 2025 in which they state:

Starting digital media and literacy education at secondary level is simply too late.

We should then expect to see an updated curriculum in the works. A course or series of courses designed to prepare children for the ban, and support them through the first few years to navigate the dangers.

Will updated school curriculums be effective?

I am not confident that any future curriculum will be effective at protecting children from the dangers of toxic algorithms. There are two main reasons why I am pessimistic about this: first, the resources that have been published to date are not sufficient; and second, bans push responsibility away from the government and tech companies.

When I look at the resources currently available for teachers and parents in Ireland, I see some useful high-level information, but not the content or depth that would really help children protect themselves from being tracked and profiled for the purposes of targeting content and advertisements. Webwise, the National Parents Council and Coimisiún na Meán are a few of the bodies in Ireland that publish educational content on online safety. They do provide a lot of good materials for teachers, parents and students on topics like how to deal with cyberbullying and the types of content which are appropriate and inappropriate to share online. What is lacking is any guidance on how to avoid tracking and profiling. This is best achieved when one has an understanding of the technical details of how the algorithms and the internet work. Using the earlier analogy of sex education, it would be like teaching consent and relationships, but not the details of the menstrual cycle, pregnancy and STDs. In the field of education, there is a huge gulf between the effectiveness of the 'do this because I am telling you' approach, and the 'here is how this thing actually works and here are the likely outcomes from the choices you make' approach.

Regarding the political side, one of the goals of these bans is to push responsibility onto the end user. For the past few years, governments and large tech companies around the world have been on the receiving end of a lot of criticism for the negative impacts of social media on children. One solution is to provide early and comprehensive education; another is for the tech companies to properly check and regulate content. Neither of these options is appealing due to a number of factors, mainly cost. By banning social media until 16, governments and tech companies are given a quick and easy off-ramp. Consider a 14-year-old who dies from complications due to anorexia, and who was found to have been spending hours on social media consuming a feed full of stick-thin models and weight loss advice. Post-ban, the issue would not be the profiling and 'toxic algorithms', but how this child got access to social media in the first place. Was it the parents? Did she cheat the age restrictions? Sadly, the only direction this is headed is towards much more stringent personal identification, which is itself a privacy nightmare (and a topic for a later article).

What can be done?

The good news is there are many ways to reduce the effectiveness of toxic algorithms on your personal online activity. Primarily, you need to reduce the amount of data that you feed the algorithms. This requires you to reduce your Digital Footprint: the sum of all the online data you generate which can be associated with you.

Minimising your footprint is very important, and it will be important for your child to do this from the moment they go online. Let's say your child doesn't use Instagram, TikTok, WhatsApp, etc. until they are 16. When they turn 16 and sign up, they start these accounts with a clean slate, right? And as long as they are sensible about the content they choose to watch, and careful about what they themselves upload, they will be fine, won't they? No. Not at all.

Meta, Google, TikTok, etc. track users across the internet - not just on their own platforms, and not just on social media. You've probably heard of 'cookies', which are used to track our online behaviour, but you might not have heard of other tracking methods, for example 'tracking pixels' or 'browser fingerprinting'. There are a multitude of ways in which companies track our online activities. Google, for instance, has tracking embedded in more than two thirds of the most popular websites, while Meta sits at 21%. When a website shows you a popup explaining that they share data with their 376 partners, they really are sharing or selling your data to 376 companies. Many of these companies then aggregate your data to sell it on again. Why? Because it adds to your digital footprint, and the bigger your footprint, the more valuable it is. So, if your 16-year-old signs up to TikTok and has not taken measures to minimise their footprint in the preceding years, the algorithm is there, waiting, trained on years of their data, ready to pounce. This is one reason why bans without prior education can be very dangerous - they risk complacency.
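To make browser fingerprinting concrete, here is a minimal sketch (with entirely invented attribute values) of the core trick: a script reads a handful of mundane properties that any browser will hand over without asking permission, and hashes them into a stable identifier. No cookie is stored, nothing is saved on your device, yet the same browser produces the same identifier on every site that runs the script.

```python
import hashlib

# Invented example values - the kind of properties a page script can read
# from any browser without a permission prompt.
browser_attributes = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/140.0",
    "screen": "2560x1440x24",
    "timezone": "Europe/Dublin",
    "language": "en-IE",
    "installed_fonts": "Arial,Calibri,Comic Sans MS,Verdana",
    "canvas_hash": "a91f03c2",  # stand-in for a hash of hidden canvas drawing
}

def fingerprint(attrs):
    """Join the attributes in a fixed order and hash them into one ID.

    Each value on its own is mundane; the combination is often unique
    enough to re-identify the same browser across unrelated websites.
    """
    blob = "|".join(f"{key}={attrs[key]}" for key in sorted(attrs))
    return hashlib.sha256(blob.encode()).hexdigest()[:16]

print(fingerprint(browser_attributes))
```

Note that the identifier is deterministic: visit a thousand different sites and the same sixteen characters come back each time, which is exactly what makes it useful for cross-site tracking. Change even one attribute (a different timezone, one extra font) and the fingerprint changes completely, which is also why anti-fingerprinting browsers deliberately randomise or blank these values.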

Unfortunately, there is no one silver bullet to remove or reduce your digital footprint. To make matters more complicated, the way companies track users is constantly changing. It isn't simply a case of using a VPN to anonymise all your internet traffic, or installing a browser extension to block all trackers. Minimising your digital footprint effectively requires taking a range of measures, based on some technical knowledge of how the internet works. It is this knowledge that helps me keep my own digital footprint to a minimum. It is why, now, when I go to YouTube to watch a specific video, I no longer emerge hours later after falling down a rabbit hole of footage of the luckiest golf shots in history. Instead, I see a blank screen and a popup that asks me if I will accept cookies. Every single time. I get this screen because the site has no idea who I am, even though I use it often. I will admit, the internet is a more boring place when your digital footprint is small, but that's no bad thing. I watch the video I wanted to watch, and then get on with the rest of my life.

So, back to the original question - will bans work? No. They will solve some problems, but in doing so create more. Complacency is a big one, as is the privacy minefield around age verification. More and earlier education on the technical side is sorely needed, but is unlikely to be a priority for the governments imposing the bans, or the companies who enforce them.