Drilling down into the Economics of Copyright
There is a report out recently with the catchy title of ‘The Economics of Copyright and Digitisation: A Report on the Literature and the Need for Further Research‘ by the UK’s Strategic Advisory Board for Intellectual Property Policy (SABIP). It’s an interesting read and it seeks to set a research path on the issue of the economics of copyright:
“The report undertakes a critical overview of the theoretical and empirical economic literature on copyright and unauthorised copying. On the basis of this literature, this report also identifies the salient issues for copyright policy in the process of digitisation, and formulates specific research questions that should be addressed to inform copyright policy.” (p.7)
So what does it conclude? Quite a lot. It concludes that there is not enough data to draw robust conclusions on the central question of whether a culture of abundant free copies damages the income of businesses built around copyright, for example:
“Some empirical studies have produced evidence that digital, unauthorised copying has asymmetric effects on creators, contrasting well-established incumbents on the one hand and fringe suppliers or newcomers on the other. Blackburn (2004) found that sales of publications by previously well-known artists are diminished as file-sharers substitute purchased copies for downloads. On the other hand, file-sharing appears to boost record sales for previously unknown artists. They seem to gain more from the additional exposure of their works than they lose due to a substitution effect.” (p71)
However, they suggest that there is a slight balance of evidence pointing to it harming new production. This contrasts with the huge US Government Accountability Office report, which concluded that it is “difficult, if not impossible, to quantify the economy-wide impacts.”
The authors conclude that with more research a comprehensive copyright policy could be developed to address the issues of illegal copying, the market-regulation role copyright plays, and a longer-term view of its impact on the production of new works. The authors also acknowledge that too much focus is on the rights of the copyright holder and not enough on the consumer:
“A particular challenge for policy-makers seems to be to develop a more balanced approach, which takes account of the full range of costs and benefits of unauthorised copying among all stakeholders. For example, surveys should target rights holders and users in order to enable a balanced assessment of the effects of unauthorised copying and the existing copyright system. Even where rights holders are concerned, surveys should address the relative weight of the benefits and costs of copyright protection and unauthorised copying. At the moment, there is still a tendency to focus on rights holders’ benefits only. The costs of copyright for follow-up innovation and for consumers require greater attention.” (p.91)
However, I feel that what this report is missing is a wider context (though they do touch on the issues): copyright ceased to be only about the economic side of the equation some time ago, and we now see it being deployed as a proxy means of censorship (for example via DMCA takedown notices), as a means of business strategy (for example the YouTube vs Viacom battle) and as a means of political expression (for example the rise of single-issue ‘Pirate Parties‘).
There is also no discussion of the costs, practicalities and impacts of enforcing any given system (never mind the same questions around the system itself). For example, figures emerged on the net comparing the costs of copyright lawsuits in the US against the monies recovered: $17 million was spent to recover just under $400K. It appears that it costs far more to sue than you recover – but then I’m assuming the legal actions are part of a strategy that looks further than recovered money, into areas such as deterrence.
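Just to put those quoted figures side by side (taking the recovery as roughly $400K, since the exact sum isn’t given), the imbalance is stark:

```python
# Rough check of the figures quoted above: $17 million spent on
# lawsuits against roughly $400K recovered (approximate numbers).
spent = 17_000_000
recovered = 400_000

cost_per_dollar = spent / recovered        # dollars spent per dollar recovered
recouped_pct = recovered / spent * 100     # share of costs recouped

print(cost_per_dollar)   # 42.5 -- about $42.50 spent per $1 recovered
print(recouped_pct)      # ~2.35% of the legal spend comes back
```

On those numbers the lawsuits only make sense as deterrence, not as cost recovery.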
In short, copyright is a bigger issue and it needs to be addressed in a bigger arena than just economics – it’s interlinked with culture, politics and technology too. To understand it, you have to look at all of these.
(Also published on the P2P Foundation blog. Hat-tip to Michel for the initial link – thanks.)
Inception’s Incepts
I’ve been to see Inception a couple of times – I really enjoyed the film. Spoiler warning: if you’ve not seen the film and plan to, stop reading now. OK, so some didn’t like it – and OK, there are a few plot inconsistencies (like why the gravity shifts that came from the van level into the hotel level did not also affect the snow level more than a few thumps) – but these are minor points. The film is a great idea, well executed and good fun. The net has produced a number of graphics that seek to explain the layers of dreaming in the film. This one has been posted around the net a lot:
And we’ve also seen the re-emergence of an old Calvin and Hobbes cartoon that captures Inception’s dream-within-a-dream vibe:
More Open Source War – Software
Following on from the post I’d written about the novel New Model Army, which envisions an open-source warfare future, I was interested to come across this article pushing for the open-source release of defence software code:
“The Department of Defense spends tens of billions of dollars annually creating software that is rarely reused and difficult to adapt to new threats. Instead, much of this software is allowed to become the property of defense companies, resulting in DoD repeatedly funding the same solutions or, worse, repaying to use previously created software,” writes John M. Scott, a freelance defense consultant and a chief evangelist in the military open source movement. “Imagine if only the manufacturer of a rifle were allowed to clean, fix, modify or upgrade that rifle. This is where the military finds itself: one contractor with a monopoly on the knowledge of a military software system.”
Take Future Combat Systems, the Army’s behemoth program to make itself faster, smarter, and better-networked. One of the many reasons it collapsed: the code at the heart of the system was controlled by a single company, and not even the sub-contractors building gear that was supposed to rely on that code could have access to it.
“For years,” Scott adds, “the U.S. military has been losing an asymmetric battle that involves not improvised explosive devices, bullets or al-Qaida, but instead swarms of defense industry contractors seizing control of taxpayer-funded ideas because government policy and regulations were engineered to buy iron and steel, not to deploy a software-based military.”
As a small software development studio, we were shocked to find that the government agencies commissioning software were not insisting on a complete copy of the source code they are paying for. Handing over the source is the norm in the games industry, and unless the company in question is building on other software tech it developed previously, it’s hard to see why you would pay for something you don’t own at the end of development (unless it’s a service).
That said, it does mean that once open, anyone can use it – including forces not sympathetic to the nation state that paid for it. But that is always the balance of open source: you get community innovation at the cost of complete control.
At FluffyLogic we’ve only done a couple of projects that were funded by public money, but we’ve always had the policy of releasing the code developed with this funding to the community as open source – after all, the taxpayer paid for it, so why should they not have access to the end result if they wish?
Technology Inspired by Nature: Living Buildings
A good collection of talks, from TEDTalks, looking at the idea of making buildings with natural principles in mind.
BlackBerry Faces UAE Ban Over Encryption
This is an interesting story of technology that transcends national boundaries. The BlackBerry, the popular smartphone and email device, routes all its traffic over an encrypted connection via its parent company Research In Motion’s servers in Canada. What this means is that if you wish to monitor the traffic you need to break the encryption. Why have the encryption at all? Because email is notoriously open – think of it as a postcard: it’s easy to read what’s being said. So if you’re going to be doing business over email it makes sense to encrypt it. However, the UAE (United Arab Emirates) is not happy about the security arrangements at all…
Unlike other mobile devices, BlackBerry mobile phones access the internet and email through RIM’s own network of secure Network Operations Centres around the world using specialist encryption. Any mobile phone company operating the devices must connect to this proprietary system. As a result, BlackBerry devices are more secure and more network capacity efficient than other so-called smartphones. But the fact that data is leaving the jurisdiction of national courts has worried some governments who fear they may not be able to monitor the communications of terrorists and other criminals, even for reasons of national security.
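The “postcard” point is easy to demonstrate. Here’s a toy sketch (a one-time-pad XOR, not anything like the authenticated encryption a real system such as RIM’s would use) showing why plaintext email is readable by anyone on the path, while an encrypted body is opaque without the key:

```python
# Toy illustration only: a one-time-pad XOR cipher. Real messaging
# systems use proper authenticated encryption; the message text below
# is made up. The point is simply that the network sees gibberish,
# and only the key holder can recover the original.

import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each message byte with the corresponding key byte."""
    return bytes(b ^ k for b, k in zip(data, key))

message = b"Meet at the office, 9am."
key = secrets.token_bytes(len(message))  # fresh random key, same length

ciphertext = xor_bytes(message, key)     # this is what the network sees
recovered = xor_bytes(ciphertext, key)   # only possible with the key

assert recovered == message
print(ciphertext.hex())                  # unreadable without the key
```

Break the encryption (or compel access to the keys/servers) and the postcard is readable again – which is exactly the leverage the UAE is after.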
Status: Open Source Twitter
There is an open source rival to Twitter: Status.net – I’m told that it’s the WordPress of the microblogging world…
Status.net was founded by programmer Evan Prodromou in 2008 as an open-source alternative to Twitter, and operates much like WordPress does for blogging: the company’s free software allows individuals or companies to run a Status.net server that connects to other microblogging servers (it can feed updates into Twitter and other services as well), but it also offers an enterprise version that is hosted and run by Status.net, as well as a subscription support service. Among others, the software is used by the creator of the Twitter account and upcoming TV series S***MyDadSays.
I’ve been reading the position paper by Alexander R. Galloway, Exploring New Configurations of Network Politics, where he suggests that the fascination with networks that has entered into so much critical thinking is not all it’s claimed to be: that networks, by definition, are asymmetric constructs, and those using them as a tool of understanding are failing to grasp this:
It is common to talk about networks in terms of equality, that networks bring a sense of evenhandedness to affairs. It is common to say that networks consist of relationships between peers, and that networks standardize and homogenize these relationships. It is not important to say that such characterizations are false, but rather to suggest that they obscure the reality of the situation. Networks only exist in situations of asymmetry or incongruity. If not no network would be necessary– symmetrical pairs can “communicate,” but asymmetrical pairs must “network.” So in addressing the question “What can a network do?” it is important to look at what it means to be in a relationship of asymmetry, to be in a relationship of inequality, or a relationship of antagonism.
I’ve got to take issue with his initial definitions here. Networks are constructions of two basic units – nodes and links. The number of nodes and links is a very different matter from any power accrued by the nodes or the bandwidth of the links. Put simply, Galloway is seeing power relations in shadows and missing ones in plain sight. Being a peer in a network means having the ability to form (and break) links – again, this is not a reading of a link’s power or bandwidth. If one node has many links, we may be led to conclude that it is a more powerful node than others, yet if that node is not a peer – i.e. if others can form links with it, but it cannot choose to deny them – then its power is illusory. The same applies where two nodes are talking – what is to say they are a symmetrical pair? Nothing. In short, networks create tendencies: if you can create links, then the tendency is towards a less hierarchical structure.
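The node/link distinction above can be sketched in a few lines of code. This is a minimal illustration (the node names and the `Network` class are invented for the example): a “hub” node ends up with the highest degree, yet it never chose any of its links – its apparent power is just a link count, which is exactly the point about peers being defined by the ability to form and break links, not by degree:

```python
# Minimal sketch: degree (number of links) vs. peer status.
# The hub below accumulates links it did not choose and cannot refuse,
# so its high degree says nothing about real power in the network.

from collections import defaultdict

class Network:
    def __init__(self):
        self.links = defaultdict(set)

    def connect(self, a, b):
        """Form an undirected link between two nodes."""
        self.links[a].add(b)
        self.links[b].add(a)

    def degree(self, node):
        """Number of links a node has -- NOT a measure of its power."""
        return len(self.links[node])

net = Network()
for client in ["alice", "bob", "carol", "dave"]:
    net.connect("hub", client)   # anyone may attach to the hub
net.connect("alice", "bob")      # peers choosing their own link

print(net.degree("hub"))    # 4 -- highest degree in the network
print(net.degree("alice"))  # 2 -- lower degree, but a genuine peer
```

Degree is what a graph diagram shows you; whether a node can refuse a connection is not visible in the topology at all, which is why reading power straight off network diagrams misleads.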
We also see Galloway’s misreading of tendencies as absolutes in his discussion of the Robustness Principle:
The so-called “Robustness Principle,” which comes from RFC 761 on the transmission control protocol (TCP), one of the most important political principles of distributed networks, is stated as follows: “Be conservative in what you do, be liberal in what you accept from others.” This is called the Robustness Principle because if a technical system is liberal in what it accepts and conservative in what it does the technical system will be more robust over time. (But of course wouldn’t it ultimately make more sense to relabel this the Imperial Principle? Or even the Neoliberal Principle?) This indicates a second virtue of protocol: totality. As the Robustness Principle states, one must accept everything, no matter what source, sender, or destination.
The Robustness Principle is a tendency and not a command. It does not force a node to accept everything. Each node has the capacity to interpret the principle differently – it is not ordered about by any protocol. Thus the parameters of operation are set by each node, resulting in a tendency towards non-hierarchical structures. Protocols are methods of intercommunication – they are not structures of totality. For example, a node can use one protocol to communicate with other nodes while adhering to different protocols within its own communication. Protocols are structures of common agreement underpinned by practical agreement: protocols that nodes struggle to use will be abandoned, so reducing their use, whereas protocols that nodes find workable will be taken up and used, so growing the network. As such, the action of the node is needed to make a protocol work – and so a protocol has a tendency towards a democratic method of operation.
Design for the Platform
There’s an interesting opinion piece over on Ars Technica saying that Microsoft does not ‘get‘ the iPad – and why not:
The iPad is a neat package. It’s not a device for everyone. There are lots of things the iPad doesn’t do well; there are many things the iPad doesn’t do at all. But it’s not trying to be these things; it’s a conveniently sized, highly portable, long-lasting media-consumption device. It’s ideal for browsing the Internet, reading e-mail (with the occasional short reply), looking at photos, playing music and videos, and casual gaming. It doesn’t need much in the way of configuration. It doesn’t run Mac software. Every single piece of software on it is designed to be used with fingers. In no way is the iPad striving to be a PC, but it is because of this—because it’s not running software designed for keyboards and pixel-perfect pointers, because it’s running software that’s simple and restricted, because it uses a slow, but low-power, ARM processor—because of these things that it is so good at the things it does do.
I agree. In my Develop talk this year I noted that the best designs for a piece of hardware always come later, as it takes time for the lessons of what new hardware is (and is not) to evolve and grow:
[Technology evolution] also means we should not rush to abandon older hardware and methods – as the incrementation of the process means the best examples of the genre will often come last – so best PS2 game? Resident Evil 4… It also means that it takes time for solid ideas to arrive on a new controller – as the first generation of games using new technology will still be partly memetically/technologically based on the past generation of technology – it will take time to breed out the older tech/memes and evolve the new ones.
The same applies to the iPad – we’ve yet to see the apps that fully use what it is. Simply porting over a title as-is from the iPhone (or a PC) does not cut it – you have to design for the platform.
EA goes for User Generated Content
There is no doubt that LittleBigPlanet has been a huge success – over 2.5 million user-created levels – and the promise of what can be done in LBP2 looks amazing. User-generated content works. So now EA want to get into the game (so to speak…):
Electronic Arts has unveiled an ambitious new design package named Create … Set for release next year on PC, Mac and console the title will allow gamers to construct their own 3D worlds using a simple set of tools, menus and customisable items. In the demo footage we’ve seen, the player assembles a stunt car track complete with ramps and realistic physics properties, and a space station with controllable star craft flying overhead. Judging by this footage, the game seems to use a drag-and-drop interface, allowing players to grab objects from a menu and place them anywhere on the screen. The game is PlayStation Move compatible, which should offer a more intuitive way to chuck stuff about in your virtual art studio.
According to EA’s press release, you’ll be able to employ a vast selection of textures, brushes, stickers and ready-made animating objects to populate scenes. However, it seems there’s no Spore-style character creation element, so users will have to think outside the box if they want to construct human inhabitants. Meanwhile, they can complete up to 100 design challenges to unlock new areas, and learn about the possibilities. In order to beat these tasks, users have to select and manipulate available objects in a variety of emergent ways, to move items from one end of the screen to another.
I read this and thought it sounds a bit like 3ds Max with an easy-to-use wrapper and a game engine:
So is it an art package or a game creator? Humble says it’s actually neither. And both. “On one hand you can just make 2D scenes,” he explains. “If you just want to paint a picture in your living room without any mess, you can do that. The next step is similar to the way paintings work in Harry Potter – you can make them a little bit alive and add a touch of interactivity, So for example, I like to draw landscapes in Create, and once I’ve added the clouds, I just give a little flick to create some wind and they drift across the screen. Or you can go all the way to a fully interactive 3D scene where you’ve decided you’re going to put in a challenge where the player has to collect four objects to make X happen. You really can do anything with it on that spectrum. It’s not a game maker it’s not a 3D art tool, it’s this lovely mush in between.”
So a 3D sandbox then…
Cthulhu News! del Toro To Direct Lovecraft’s ‘At The Mountains Of Madness’ With The Help Of James Cameron
This is great news. There are no really great interpretations of Lovecraft’s work in cinema, though I would give a noble mention to The Call of Cthulhu (2005), the silent-movie adaptation by the H.P. Lovecraft Historical Society – the best current version of Lovecraft’s work, and great fun. So here’s hoping for a second great Mythos film:
One of the countless movie projects that Guillermo del Toro has wanted to direct is an adaptation of an H.P. Lovecraft horror title, but many obstacles have stood in its way.
… According to reports from Deadline, that movie will indeed be Lovecraft’s 1931 novel, At the Mountains of Madness…finally!
The main issues that financiers have had is that del Toro needed his movie to be a period film, and he needed it to be R-rated. Movies like that are really hard to market, and so studios, such as Universal in this case, haven’t wanted to pay for it. Thankfully, Guillermo cares more about his craft than money, and hasn’t budged on either front to get it made sooner.
So why would Universal decide that they were finally ready to take the risk? One name: James Cameron. According to the reports, the Avatar director has decided to back del Toro’s vision and come on as a producer. Not only that, but the movie will be in 3D, and there’s no one else on the planet right now that you want in your corner when it comes to 3D more than James Cameron. They even plan to start pre-production immediately with hopes of filming some time next summer.
The story delves into Lovecraft’s Cthulhu Mythos and follows a team of researchers on an expedition to Antarctica in the 1930s. They venture to a range of mountains that reach even higher into the skies than the Himalayas. What they discover there are some mysterious ruins and the remains of many lifeforms — some badly damaged and some in perfect condition — that they decide to call the “Elder Things.” Eventually these ancient beings somehow become reanimated, creating major issues for the research team.
So that made me happy. On the subject, I’m also reading ‘Wireless‘, the collection of short sci-fi stories by Charlie Stross, which includes the amazing ‘A Colder War‘ – a story that puts the discovery of Cthulhu and the Mythos into a Cold War setting. (To be honest, I purchased the book for this story alone – though I am enjoying the rest of them…) Here’s a sample:
He looks at the colonel, suddenly bashful and tongue-tied: “Can I talk about the, uh, thing we were, like, earlier …?”
“Sure. Go ahead. Everyone here is cleared for it.” The colonel’s casual wave takes in the big-haired secretary, and Roger, and the two guys from Big Black who are taking notes, and the very serious woman from the Secret Service, and even the balding, worried-looking Admiral with the double chin and coke-bottle glasses.
“Oh. Alright.” Bashfulness falls away. “Well, we’ve done some preliminary dissections on the Anomalocaris tissues you supplied us with. And we’ve sent some samples for laboratory analysis — nothing anyone could deduce much from,” he adds hastily. He straightens up. “What we discovered is quite simple: these samples didn’t originate in Earth’s ecosystem. Cladistic analysis of their intracellular characteristics and what we’ve been able to work out of their biochemistry indicates, not a point of divergence from our own ancestry, but the absence of common ancestry. A cabbage is more human, has more in common with us, than that creature. You can’t tell by looking at the fossils, six hundred million years after it died, but live tissue samples are something else.
“Item: it’s a multicellular organism, but each cell appears to have multiple structures like nuclei — a thing called a syncitium. No DNA, it uses RNA with a couple of base pairs that aren’t used by terrestrial biology. We haven’t been able to figure out what most of its organelles do, what their terrestrial cognates would be, and it builds proteins using a couple of amino acids that we don’t. That nothing does. Either it’s descended from an ancestry that diverged from ours before the archaeobacteria, or — more probably — it is no relative at all.” He isn’t smiling any more. “The gateways, colonel?”
“Yeah, that’s about the size of it. The critter you’ve got there was retrieved by one of our, uh, missions. On the other side of a gate.”
Gould nods. “I don’t suppose you could get me some more?” he asks hopefully.

One Nation Under Cthulhu!
Updated! Terrible news, people. The project has taken a step backwards. (Howl of NOOOOoooooooooooo! to the sky.) Some claim it has been shot down due to concerns over the horror content. Others say it will happen, but after del Toro’s ‘Pacific Rim’ project. Tentacles (and fingers) crossed that it will.
Further update! More terrible news. Del Toro is now saying that the Prometheus movie has killed any chance of At the Mountains of Madness getting made, due to plot issues. (Warning: link contains spoilers.)