About the role of bots on network neutrality

I am not sure whether “outing” is the right word here, but I will give it a try. I am not on Facebook, I do not tweet, and I still read hardware news. I mean newspapers.
Yesterday I stumbled upon an article in the paper issue of the German Süddeutsche Zeitung (issue of 5 May 2014) about the traffic that bots generate on the Internet. The starting point was “Webdriver Torso”, apparently software that, for the purpose of testing video compression, automatically generated and uploaded 77,000 videos to YouTube, each 11 seconds long. That amounts to a new video every 20 seconds. Awesome, but not surprising. I suspected similar scenarios when I discovered what modern software can do out of the box.
For professional reasons we maintain several web sites, and we post news and event items, as every decent European project web site probably does by now. Our content-management and blog-based web sites are capable of automatically issuing a tweet when a news item is posted. Of course we also emit RSS feeds that are automatically parsed and processed by other sites. Obviously, some of our web sites import RSS feeds from other sites in exchange. So we, too, are automatically generating traffic.
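The feed exchange described above works roughly like this: one site emits an RSS feed, another fetches it, parses it, and republishes the items without any human involvement. A minimal sketch, using a made-up feed and a hypothetical `extract_items` helper (not code from any of our actual sites):

```python
# Sketch of automatic RSS processing: one site emits a feed,
# another parses it and republishes the items, generating traffic
# with no human in the loop. The feed below is a fabricated example.
import xml.etree.ElementTree as ET

FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Project News</title>
    <item><title>New deliverable published</title><link>http://example.org/d1</link></item>
    <item><title>Workshop announced</title><link>http://example.org/ws</link></item>
  </channel>
</rss>"""

def extract_items(feed_xml):
    """Parse an RSS 2.0 feed and return (title, link) pairs for each item."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in extract_items(FEED):
    # A re-publishing bot would now post each of these elsewhere,
    # and that copy may in turn be picked up by yet another feed reader.
    print(title, link)
```

Chain a few such sites together and a single human-written news item multiplies into many automated posts, which is exactly the traffic inflation discussed below.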
We could add more advanced Web 2.0 functionality, but what we would get is an inflation of Internet traffic, for example, just because I decided to post an article like this one. This blog, in fact, does not tweet each new article. Instead, it trusts that you will find the blog and its articles through conventional means: either because you know us and are interested in our work, or because you trust a search engine to find us for you. But I cannot guarantee that someone else will not tweet it, write about it, or even replicate it somewhere.
The newspaper article cites incapsula.com, a security-related web site, claiming that two thirds of total Internet traffic today is not of human origin but generated by bots. There is room for discussion about whether this is an accurate measure, but I personally believe that this kind of traffic does indeed exceed 50 per cent.
What I find amusing is that bots read posts and tweets that other bots have generated. Indeed, it looks like bots are also chatting with each other. On the other hand, bots are causing immense damage to the Internet business model, which is still largely based on online advertisement, for the benefit of the few who are able and malicious enough to exploit it by programming bots that automatically click on banners and visit sites, thus generating the wanted traffic.
But who wants that traffic? I think the online shop owners, whether big shots like the Amazons and eBays or small start-ups, are not delighted to discover that it is mainly bots visiting their sites via ad clicks.
And then there is the traffic itself: the cry for more bandwidth, for upgrading network capacity at the edge and in the core. And then there is the net neutrality debate, holding that everyone should have equal access to the Internet and that no packets should be prioritized over others. Not for money; not for any other reason. But hey, wait! Did we actually verify that one or the other outcry against attempts to put constraints on the Internet does not come from a bot? After all, on the Internet nobody knows you are a dog, to cite one of the most famous cartoons symbolising a certain understanding of privacy and anonymity on the Internet.
As far as I am concerned, I would not like to compete with bots in generating content and placing it on the Internet, because bot-generated content is mostly irrelevant and redundant. Personally, I am not very good at filtering out irrelevant and redundant content, whether reading my e-mails or viewing content on my favorite web sites. Nor would I like to pay for an infrastructure more than half of which is used by bots doing meaningless things, such as dating each other or watching YouTube videos on their own.

This entry was posted in Network neutrality.

One Response to About the role of bots on network neutrality

  1. Funny article. I personally hate automatic news replication/cloning, because it usually takes a long time to find the original site the news came from. I have similar feelings of dislike when I read EU project web sites, all of them cross-linking news, making it very hard to find out which news actually originates from them.
