Over the past few weeks, the topic of filtering adult content has been widely discussed in the mainstream media. It’s one of the most emotive subjects, with some people demanding more action for moral decency and others crying censorship. The problem is that most people who make these arguments are doing so with a very narrow and uninformed view of both the ‘problem’ they are trying to solve and the proposed methods to address it.
Network operators and the people behind them, the people who run the Internet as we know it, are generally very liberal in their thinking, opposing large-scale surveillance or restrictions on Internet use. Much of the software that powers the websites we use is ‘open source’, created by people passionate about empowering others to do good, who give away their work so that others can build on it and hopefully share it again (and earn some kudos too). This forms a culture of sharing and mutual co-operation for the common good, and it requires openness. Any restriction on the ability to share or access resources is therefore viewed negatively. It is this openness that has created what we know as ‘the Internet’, which has overtaken private networks run by CompuServe, AOL and MSN among others because it promotes rapid innovation.
This philosophy often means that when any type of filtering is being discussed within the technical community, we get presented with arguments like:
“This system will not stop determined people working around it”
A typical ‘techie’ view
…and of course, they’re absolutely right: no filter is going to be perfect. The danger with this approach is that we don’t even try to tackle some of the problems the Internet has presented, because any attempt is seen as doomed to fail.
This is a view commonly held by people across the spectrum, from Internet engineers through to security experts:
Let’s be honest here, techie-to-techie: most people who use the Internet really don’t know what they are doing, and parents are often less technically savvy than their kids. I don’t know very much about how cars work, but I need one as a tool on a daily basis. Computers and Internet access are just the same: we can expect people to learn the basics, but not to understand the intricacies of how it all works. The Internet has been around a while, so we should start seeing more Internet-generation parents starting families, and this problem may ease over time. But we do need to do more to support parents in making informed decisions about Internet use by children; we can’t expect the average parent to be net savvy. We need to make it easy for them to protect their family online.
It is not enough for broadband providers or major service providers to say ‘not our problem’. It’s actually quite damaging when they do, as the recent Twitter incident demonstrates.
However, neither network-level filtering nor any ‘default on’ scheme that encourages consumers to make uninformed choices is a solution to this problem.
The guardians of decency
On the other side, we have campaigners and politicians eager to demonstrate they have achieved something, or to respond to public outcry after a horrific incident. We have individuals who believe they are protecting society from evil content that can serve no useful purpose, tolerating its existence only if you are willing to admit you want to opt in to access ‘porn’.
If you book into a hotel and the receptionist asks you, “Would you like to have access to the adult channels and porn on the Internet?”, you’d probably find fewer people watching adult movies than if you were asked, “Would you like your Internet access and TV filtered to help protect your children?” This isn’t really about giving parents the choice; it’s about impressing one’s own moral values on others by shaming them into accepting them. The ‘Active Choice’ system being proposed asks anyone wishing to have unfiltered Internet access to effectively ‘opt in’ to porn, rather than to unfiltered Internet access.
Opting in or out of adult content is simply too blunt an instrument to deal with the problem.
What I find difficult to understand is that, spam e-mail aside, I have never stumbled upon what you might call ‘adult content’ on the Internet. The closest would probably be UK tabloid newspapers or ads for dating sites.
Filtering should be performed at the edge
One issue raised by campaigners for better protection is the lack of a simple switch you can flick; BBC iPlayer may have its own protection, but as a parent you can’t possibly visit every site to enable these controls. There is software you can install on your PC, but that probably won’t work on your kids’ mobile phones or the Xbox in their bedrooms. Network-level filtering seems like the obvious answer, but technically it is a very poor one. Just because something appears technically possible doesn’t mean it’s a good idea.
Let me introduce the non-techies among you to a concept called the ‘end-to-end principle’. The Internet runs on various protocols, each designed for a different purpose. When you visit a website, the connection uses a protocol called TCP, whilst you’ll probably find that when you stream a video, it’s delivered over UDP. I won’t go into the differences or the higher-level protocols involved, but each has been designed to deliver the best user experience for a particular application. The network between your computer and the server you’re accessing doesn’t really need to know about TCP or UDP (Layer-4 protocols), or any application-specific characteristics (Layers 5–7), as Internet routing is performed in another layer, the IP (Internet Protocol) layer, or Layer 3. This means we don’t need to upgrade every router on the Internet when a new game with its own protocol (i.e. its own ‘way of talking to the server’) is launched. If you look at the slow adoption of IPv6 (a Layer-3 protocol) you can see how requiring upgrades would slow the growth of the Internet to a snail’s pace. The end-to-end principle means each layer does its job without worrying about what’s above or below it.
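The layering idea can be sketched in a few lines of Python. This is a toy illustration, not real networking code: the IPv4 header is simplified (no checksum), the addresses come from documentation ranges, and the ‘router’ is just a function. The point is that two packets carrying different Layer-4 protocols (TCP for a web request, UDP for a video stream) are forwarded by reading only the same Layer-3 field.

```python
import struct

def ipv4_header(src, dst, proto):
    """Build a simplified 20-byte IPv4 header (checksum omitted for brevity)."""
    version_ihl = (4 << 4) | 5  # IPv4, header length of 5 x 32-bit words
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, 20, 0, 0, 64, proto, 0,
        bytes(map(int, src.split("."))),   # source address
        bytes(map(int, dst.split("."))),   # destination address
    )

def route(packet):
    """A 'router' reads only the Layer-3 destination address (bytes 16-19);
    it never needs to parse the TCP/UDP payload riding above it."""
    return ".".join(str(b) for b in packet[16:20])

TCP, UDP = 6, 17  # IANA protocol numbers
web = ipv4_header("192.0.2.1", "203.0.113.5", TCP)    # e.g. a web request
video = ipv4_header("192.0.2.1", "203.0.113.5", UDP)  # e.g. a video stream

# Both packets are forwarded identically: the router reads the same field,
# which is why new Layer-4+ protocols need no router upgrades.
print(route(web), route(video))
```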
The best way of thinking about the Internet is like the postal service. When you visit a website, it’s just like writing a letter with the name of the web page you want at the top, sealing it in an envelope, putting the IP address of the web server on the front and your return address on the back. Normally, that envelope remains sealed until it reaches the web server, which then opens the request and posts back a response in the same way. With network-level filtering, your broadband provider effectively steams open the envelope, spies on what’s written inside (unless it’s a secure https:// address, in which case the filter probably won’t work as it can’t decode it), and then decides whether to pass it on to the web server untouched, or perhaps respond itself, pretending to be the web server, usually explaining that the content couldn’t be provided as it’s inappropriate. What’s worse is that it probably also filters either all traffic to an IP address which hosts a single site that needs filtering, or possibly all your traffic, creating even more complex troubleshooting scenarios. The web server probably never finds out that a request sent to it was intercepted and answered by the broadband provider. Nor does anyone know if the request was tampered with because something in the filtering system was badly implemented, or because the author of the filter didn’t understand the unintended consequences of what they were doing. This causes many headaches when troubleshooting problems, which is why most Internet engineers don’t like it: if something breaks at your router, it’s much easier to diagnose than if it disappears somewhere in your provider’s network (with each provider implementing its system in different ways).
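To make the ‘steamed-open envelope’ concrete, here is a toy Python sketch of what such an intercepting middlebox does with an unencrypted request. The hostname, blocklist and block-page text are all invented for illustration; real deployments are far more complex, but the essential problem is the same: the client cannot tell the forged reply from a genuine server response.

```python
def http_request(host, path):
    """The plaintext 'letter' a network filter can read once the envelope
    has been steamed open (i.e. the request is not encrypted)."""
    return f"GET {path} HTTP/1.1\r\nHost: {host}\r\n\r\n".encode()

BLOCKLIST = {"blocked.example"}  # hypothetical filtered hostname

def middlebox(raw_request):
    """Inspect the request: either pass it through (return None) or forge
    a response while pretending to be the web server."""
    host = ""
    for line in raw_request.decode().split("\r\n"):
        if line.lower().startswith("host:"):
            host = line.split(":", 1)[1].strip()
    if host in BLOCKLIST:
        # The forged reply is indistinguishable, to the client, from the
        # real server's answer; the real server never sees the request.
        return b"HTTP/1.1 403 Forbidden\r\n\r\nContent blocked by your provider."
    return None  # pass the request on to the web server untouched

print(middlebox(http_request("blocked.example", "/")))
print(middlebox(http_request("example.com", "/")))
```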
“Absolutely ridiculous idea. It won’t work.”
Jimmy Wales, founder of Wikipedia, commenting on government plans for default-on filtering
There are other ways to implement network-level filtering, such as DNS poisoning, which attacks another part of the connection, but the general effect is the same (although I suspect DNSSEC will break that approach). It creates uncertainty and it breaks things. If you board an eastbound Jubilee line train at Waterloo, you expect to end up at Stratford, not Wimbledon. This kind of filtering is a bit like London Underground deciding that you don’t really want to go to East London as it’s dangerous, and taking you elsewhere instead.
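A toy Python sketch of why I suspect DNSSEC breaks DNS-based filtering: here a keyed hash stands in for the real DNSSEC signature chain, and the hostnames and addresses are illustrative (documentation ranges). The filtering resolver can rewrite the answer, but it cannot produce a valid signature for the forged address, so a validating client notices the tampering.

```python
import hashlib

ZONE_KEY = b"zone-signing-key"  # stand-in for the real DNSSEC signing chain

def sign(name, addr):
    """Toy 'signature' over a DNS answer; real DNSSEC uses public-key crypto."""
    return hashlib.sha256(ZONE_KEY + name.encode() + addr.encode()).hexdigest()

# The zone operator publishes signed answers (hypothetical names/addresses).
REAL_DNS = {"blocked.example": "203.0.113.80", "news.example": "203.0.113.90"}
SIGNED = {n: (a, sign(n, a)) for n, a in REAL_DNS.items()}

def filtering_resolver(name):
    """An ISP resolver that 'poisons' answers for filtered names."""
    addr, sig = SIGNED[name]
    if name == "blocked.example":
        return ("198.51.100.1", sig)  # rewritten to the block page, but the
                                      # resolver cannot re-sign the answer
    return (addr, sig)

def validate(name, answer):
    """A DNSSEC-validating client checks the signature against the answer."""
    addr, sig = answer
    return sig == sign(name, addr)

print(validate("blocked.example", filtering_resolver("blocked.example")))
print(validate("news.example", filtering_resolver("news.example")))
```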
This doesn’t even begin to tackle the fact that filtering of this kind will inevitably over-filter. We’ve had thinkbroadband blocked by one mobile network operator before, even though there was no adult content, and believe me, it’s not easy to switch that off (on a business account, no less!). Some adult content filters stop users accessing forums, so will they affect our community? I suspect this blog post will be blocked by some filters, at the very least. I do wonder what the political parties would think if their campaign contribution websites were blocked just before a general election?
The Internet is a global network
Parents shouldn’t consider the Internet a safe haven where their kids can play unsupervised, nor can they outsource parental responsibility to technical gadgets which will inevitably fail. Of course, using appropriate tools as part of a package to protect your family is a great idea, and I would expect key companies to promote ways in which you can use the Internet safely. There’s a lot that browser manufacturers like Google, Microsoft and Mozilla, and content providers (including Twitter, Facebook, LOVEFiLM, Netflix, etc.) could do for a start to build better in-browser filtering, as this happens at each edge of the connection (the web browser and the server) with less chance of breaking things, especially if better standards are developed. We do need to create safe environments for kids to learn about life, including online life, but this shouldn’t come at the expense of breaking Internet infrastructure. If we had very simple multi-user login systems on iPads and the like, we wouldn’t need to mangle the network layer to solve a social problem. Many of the problems parents are worried about, such as sexting (using MMS messages), wouldn’t be filtered by anything being proposed here.
I’m not sure if this has been considered, but using TCP flags to tag when the user is ‘under age’ might be an interesting way to build parental control features throughout the network more effectively. I don’t have enough knowledge of the protocols to suggest how this would be done, but the option might be worth exploring. Alternatively, maybe a cookie/header standard could be developed by the browser manufacturers (similar to ‘do not track’, possibly?) to indicate the user is a child.
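To show what such a browser-signalled standard might look like, here is a toy Python sketch modelled loosely on how servers honour ‘do not track’. The header name `X-Child-User` is entirely invented for illustration; no such standard exists. The appeal of this approach is that the decision happens at the server (the edge), not in the network.

```python
CHILD_HEADER = "X-Child-User"  # hypothetical header, not a real standard

def serve(headers, content_rating):
    """Server-side (edge) decision: withhold adult-rated content when the
    browser has signalled that the user profile belongs to a child."""
    is_child = headers.get(CHILD_HEADER) == "1"
    if is_child and content_rating == "adult":
        return 403, "Content withheld for this user profile."
    return 200, "OK"

# A child profile requesting adult content is refused at the server;
# everyone else, and all other content, is served normally.
print(serve({"X-Child-User": "1"}, "adult"))
print(serve({}, "adult"))
```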
To solve the problem of children accessing inappropriate content, we need a multi-disciplinary approach, bringing together network engineers and those who understand the social problem to devise solutions that address the issue without collateral damage.