
OKCon (Open Knowledge) 2010: Programme

March 31, 2010

I am giving a short talk at the Open Knowledge Foundation event, OKCon 2010 – there are loads of great-looking talks there, including Daniela Silva & Pedro Markun from Brazil on “how social action can take over information and open up the government (or simply hack it)” and Peter Murray-Rust from the University of Cambridge on ‘Open Science’ – the full programme is here. The event is on Saturday 24th April 2010, 10.30am till 6.30pm at the University of London Union, Malet Street, London, WC1E 7HY.

My talk is in the ‘Ideas and Culture’ section: The Evolution of Software: Why Open Beats Closed.

Hope to see you there!

French Anti-Piracy Law: Fail

March 18, 2010

Does clamping down on piracy via the law book stop piracy? It seems it might not… it just changes form:

France, among several controversial legal moves concerning Net technology, has been busy enacting some draconian Web piracy laws that almost rival the Big Brother-ish moves going on in the U.K. (which even the boss of the country’s biggest telecoms network disagrees with.) France’s new “Hadopi” law is the real monster we’re talking about–it actively connects the country’s music biz through ISPs to music pirates, and penalizes repeat offending users by severing their Net connection after three warnings. Sounds fierce, right? May deter you from downloading that episode of How I Met Your Mother (rather, “La Manière Dont Je Me Suis Rencontreé Avec Ta Mère”) or Mika’s latest album? You may think so. Mais…Non. Those French types are actually defying their government, as a frank telephone study of 2,000 Bretons by the University of Rennes shows. Comparing user habits before and after the enactment of Hadopi revealed that piracy rates of all types have risen 3%.

The manner pirates are using to acquire the illicit data has shifted though–away from peer-to-peer sharing systems like bit torrenting, to “file locker” systems like Megaupload or Rapidshare, or illegal file-streaming systems which aren’t explicitly covered in the Hadopi law. This sort of piracy actually soared by some 27% after Hadopi (and probably actually more than this, assuming survey responders were wary of admitting to it), which demonstrates that the French public are much cannier than the legislators. We can assume, though, that before long there’ll be a legal move to fix these loopholes.

Basalla’s Evolution of Technology

March 15, 2010

Good post and discussion on Basalla’s 1988 book ‘The Evolution of Technology’:

Basalla uses the term “evolution” as a metaphor “at the heart of all extended analytical and critical thought” and highlight it as useful to apply this concept from biological evolution to evolution in technology. Initially this analogy was used from technology to biology (to describe living organisms in mechanical terms) and then the other way around, as a way to arrange technical objects into “genera, species and varieties and proceed from this classificatory exercise to the construction of an evolutionary tree illustrating the connections between the various forms of mechanical life”.

Peering, Virtual-Geography and the Shape of the Internet

March 11, 2010

There is an interesting discussion going on about how the flow of data moving around the internet gives it a ‘shape’ and what that shape means. You can see this discussion written up in an interesting article in the New York Times: ‘Scientists Strive to Map the Shape-Shifting Net‘. So what is meant by the ‘shape’ of the Internet? We often instinctively see shape in three-dimensional terms – which is not always helpful when thinking of the Internet, which is essentially a ‘network of networks’ running on top of a physical network of wires, phone lines and wifi bubbles. Put simply, it has a complex shape, and developing methods to map this complex virtual-physical amalgam is an ongoing area of research.

So how do we describe networks? The basic building blocks are nodes and connections. However, the node/connection structure still conjures up a geographical view of mapping – and the virtuality of the Internet means this is misleading. To envision the virtual-physical mapping, take the example of the iconic London tube map: it is a topological map designed to help passengers navigate, not a geographical map accurately reflecting the positions of the stations. There is a degree of correspondence between the geography and the topology, but the two are distinct. Such is the case with mapping networks. The New York Times article quotes John C. Doyle, an electrical engineer at the California Institute of Technology, who argues that “the mathematical description of a network as a graph of lines and nodes vastly oversimplifies the reality of the Internet. The real-world Internet, they said, is not a simple scale-free model. Instead, they offered an alternate description that they described as an H.O.T. network, or Highly optimized/Organized tolerance/Trade-offs.”

In essence this concept seeks to account for the uncertainty and multi-functionality of the Internet. That is, it seeks to explain how we can map a network that is constantly in flux as users add, remove and modify both nodes and links, and whose computational devices are not single-function items: as computers they can be used for many different things.
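To make the node/connection description concrete, here is a minimal sketch in Python of a network as an adjacency list, where nodes and links can be added and removed as the ‘shape’ shifts. The class and node names are purely illustrative, not from any real mapping tool:

```python
# Minimal sketch of a network as nodes and connections (adjacency list).
class Network:
    def __init__(self):
        self.links = {}  # node -> set of connected nodes

    def add_link(self, a, b):
        # Connections are symmetric: record the link in both directions
        self.links.setdefault(a, set()).add(b)
        self.links.setdefault(b, set()).add(a)

    def remove_node(self, node):
        # Users constantly add/remove nodes, so the network is in flux
        for neighbour in self.links.pop(node, set()):
            self.links[neighbour].discard(node)

    def degree(self, node):
        return len(self.links.get(node, set()))

net = Network()
net.add_link("home-router", "isp")
net.add_link("isp", "exchange")
net.add_link("laptop", "home-router")
print(net.degree("home-router"))  # 2
net.remove_node("isp")
print(net.degree("home-router"))  # 1
```

Note that nothing here records *where* any node physically sits – the structure is purely topological, like the tube map.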

The movement towards cloud computing, combined with the increasing use of p2p and video streaming, means the topology of the internet is undergoing change. But what that change is, and how it is happening, is a matter of debate. What is not in dispute is the growing volume of data:

“When we started releasing data publicly, we measured it in petabytes of traffic,” said Doug Webster, a Cisco Systems market executive who is responsible for an annual report by the firm that charts changes in the Internet. “Then a couple of years ago we had to start measuring them in zettabytes, and now we’re measuring them in what we call yottabytes.” One petabyte is equivalent to one million gigabytes. A zettabyte is a million petabytes. And a yottabyte is a thousand zettabytes.
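The unit conversions in the quote are easy to check with simple arithmetic over the decimal (SI) prefixes; a quick sketch:

```python
# Data-volume units as powers of 10 (decimal SI prefixes)
GIGABYTE = 10 ** 9
PETABYTE = 10 ** 15
ZETTABYTE = 10 ** 21
YOTTABYTE = 10 ** 24

assert PETABYTE == 10 ** 6 * GIGABYTE    # one petabyte = a million gigabytes
assert ZETTABYTE == 10 ** 6 * PETABYTE   # a zettabyte = a million petabytes
assert YOTTABYTE == 10 ** 3 * ZETTABYTE  # a yottabyte = a thousand zettabytes
```

So a yottabyte is a thousand-million-million gigabytes – some sense of the scale being described.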

Humanodes routing around Damage/Censorship

February 18, 2010

There is an old (on the Internet) saying: the Net interprets censorship as damage and routes around it. This is a key point, as the internet was designed to route around the damage caused by a nuclear strike – so routing around censorship is child’s play. While the technical routing around damage/censorship is one thing, how people respond is not dissimilar:

You may recall that the Italian Supreme Court recently decided that it was okay for a lower court to block The Pirate Bay (the lower court is now deciding what to do), but in response, it appears that users have already figured out how to scatter to other sites, as many other torrent sites have seen an influx of Italian users. Another mole whacked, and yet, more keep popping up. It’s difficult to see how this is a particularly good strategic policy.

This reminds me of a story I heard last year when the Danish ISPs were forced by a court ruling to block The Pirate Bay – the humanodes (people as network nodes) responded by emailing friends outside Denmark and asking them to download the torrent file they were after and email it back. This idea is, of course, viral – you can then forward that email with the torrent to many others, and they can do the same. Humanodes routing around damage/censorship.

Anatomy of Large-Scale Digital Technology

February 14, 2010

A couple of very interesting research papers that I came across recently… both look at cross-sections of complex data systems and how they interact and operate. One of them is old (in technology terms), from 1998, and describes the emergence of Google:

Google utilizes links to improve search results. The citation (link) graph of the web is an important resource that has largely gone unused in existing web search engines. We have created maps containing as many as 518 million of these hyperlinks, a significant sample of the total. These maps allow rapid calculation of a web page’s “PageRank“, an objective measure of its citation importance that corresponds well with people’s subjective idea of importance.

When you read this now, it seems a pretty basic idea – rank webpages based on how many other webpages link to them. You are using the wisdom of crowds to do the work – trusting that if millions and millions of web users think a site is important, it must be. This had a number of advantages over how people were doing search before. First, it allowed the system to scale with the proliferation of users and sites: as more users arrived, they created more sites, which contained links that ranked other sites, and so on – a positive feedback loop that already existed. Google was like the creator of a microscope, peering into water to discover a whole new world of micro-organisms/data that we could not see before.
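The idea above can be sketched in a few lines of Python. This is a toy illustration only, not the production algorithm: the link graph and damping factor are illustrative assumptions, and real PageRank handles many complications (dangling pages, enormous scale) ignored here:

```python
# Toy PageRank: rank pages by incoming links, iterated to a fixed point.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}  # start with equal rank
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            # each page shares its rank equally among the pages it links to
            share = rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

links = {
    "a": ["b", "c"],  # page a links to b and c
    "b": ["c"],
    "c": ["a"],
}
ranks = pagerank(links)
# c gathers links from both a and b, so it ends up ranked highest
print(max(ranks, key=ranks.get))  # c
```

The point of the iteration is the feedback loop described above: a link from an important page is worth more than a link from an obscure one, and “important” is itself defined by the same calculation.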

The second is a paper inspired by this, looking at how social networks influence search and information. It follows on from the idea of PageRank but moves it deeper into the interconnections we have created since 1998:

We demonstrate that there is a large class of subjective questions — especially longer, contextualized requests for recommendations or advice — which are better served by social search than by web search. And our key finding is that whereas in the Library paradigm, users trust information depending upon the authority of its author, in the Village paradigm, trust comes from our sense of intimacy and connection with the person we are getting an answer from.

Both are worth a read. (Also interesting to note that the company founded by the authors of the paper has been acquired by Google!)

Hat-tip to Peter for the links.

Google Does Not Sell Phones – It Sells Ads

February 13, 2010

I read this story and was a little baffled by its premise. I think it misreads the situation to suggest a problem for Google that simply does not exist:

Google’s plans to take on the iPhone are running into problems in Europe as several mobile phone companies plan to sell a cheaper version within weeks of the internet company’s Nexus One device going on sale. Even Vodafone, which Google has signed up to provide network access for the Nexus One, is expected to sell the cheaper device online and through its own stores. The rival device, codenamed the Bravo, is made by Taiwanese handset manufacturer HTC, which also makes the Nexus One. Both mobile phones have the latest version of Google’s Android mobile phone software installed.

This is where I have the issue with the story – the last sentence says it all: both mobile phones have the latest version of Google’s Android mobile phone software installed. Google makes around 97% of its huge income from context-driven ads placed next to searches. The Android operating system it has developed is open source and free to handset manufacturers. This is what it wants: for phones to use its OS, which comes with its search enabled. Its Nexus phones are intended to stimulate the market for Android, not to take control of it. Google is making sure it moves to the mobile platform as we do. Simple.

Twitter to Use P2P Technology to Cope with Demand

February 12, 2010

Anyone who’s used Twitter much will have noticed the frequent service outages – aka the Fail Whale, after the image users see when this happens. Given Twitter’s enormous growth over the past few years (sometimes exceeding 1000%!), the demands on its central servers must be huge. This is where p2p technology could help – it scales with demand, because as more people join they bring their capacity with them:

Bittorrent has been requested to give a hand to Twitter in the process of deploying the latter’s files across its several servers efficiently. “Murder” as the project has been labeled, is based on an open source. It would help Twitter as well as other developers for free.  Thousand of servers are required in the normal process of flow of updates that are sent to millions of users. Such big scale web-services are in need of the latest updates and that is for sure intensive time consumption.  More…

P2P Sharing Income, as well as Content

February 12, 2010

This is an interesting article worth a read. One of the people behind The Pirate Bay is trying to apply the same ideas they used regarding sharing content to the income side of distribution:

After resigning as The Pirate Bay’s spokesperson, Peter Sunde was left with some extra time to spend on his side projects. One of these ventures is Flattr, a social micropayment system for people who share content on the Internet, which just launched in Beta. … The Internet has brought us a tool to share and consume content, but up until now there has been no really easy way for the consuming side to reward content producers. Flattr, a new venture started by The Pirate Bay’s former spokesman Peter Sunde, opts to change this.

P2P Science

February 11, 2010

I have written a guest post on the P2P Foundation blog:

The hacking of emails written by scientists at the Climate Research Unit has produced lots of comment and copy about the efficacy of climate science, and seems to have damaged the reputations of the scientists and sciences involved. However, the response that should follow this incident – an opening up and peering of science – will benefit us all…. more…