To: Gordon Brown – From: An Icelander

Dear Mr. Brown,
(cc: The international press)

Here in Iceland, we are busy dealing with a series of mistakes in government, regulation and banking operations.

As a result of a total collapse of our banking system, customers in the UK have feared for their deposits in Icelandic banks, just like every single Icelander has. In accordance with international law, the Icelandic government has relatively clearly stated that it will guarantee the minimums required for private citizens. (They’re not always good with words – sorry about that…)

Municipalities and organizations – which are by law considered “professional investors” – may unfortunately have lost significant amounts; up to 1% of their assets in some cases, as I understand it. It may come as a surprise to you, but professional investors all over the world are losing gigantic amounts these days – even tens of percent per day in some cases. Iceland is not to blame for that.

I know that it’s considered good domestic politics to find a common enemy in the time of crisis. It can hide facts about one’s own failing policies and create a team spirit at home, rallying against the new foe. Seems like your little scheme worked.

Up here, an entire nation – where most of us citizens are at real risk of losing pretty much everything we own – is trying to put up a fight. If you can help even just a tiny bit by clearing up your deliberate misunderstanding – great. If you can’t, please stop bullying us, find someone your own size to pick on and leave us alone while we try to rebuild our society more or less from scratch.

Your lad,
Hjalmar

The future of finance: Total transparency?

The financial crisis hit Iceland full force last Monday. One of our banks was pretty much nationalized, followed by a large investment company filing the equivalent of Chapter 11. This led to significant losses by a large “risk free” money market fund that held part or all of the personal savings of some 12 thousand people – yours truly included. You can read more about the macro side of all this in the international press. And don’t worry about me – my personal loss is manageable.

The whole incident – however – reinforced ideas I’ve been contemplating the last few months about the future of global finance, once the current economic hurricane subsides. I’ve been coming to the conclusion that the only thing that can restore the confidence in the financial markets is total transparency.

The reinforcement came as I spoke with my contact at the bank after learning about my loss. He said that what the bank was doing to regain confidence in the money market fund (and the bank in general) was to open the books completely. Meaning – as I understood it – that they’d publish the composition of the fund in detail online and update it “live” if and when there were any changes in that composition. Something that they’ve until now only done once a year in an annual report.
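As a sketch of what this kind of live disclosure could look like technically – the holdings and field names below are invented for illustration, not the bank’s actual format – the fund’s composition could be re-serialized to a machine-readable feed every time it changes:

```python
import json

# Invented composition figures and field names -- purely illustrative.
holdings = {"gov_bonds": 0.55, "corp_bonds": 0.30, "deposits": 0.15}

def publish(holdings):
    # Serialize the current composition; a real system would push this
    # to a public URL whenever the composition changes.
    return json.dumps({"composition": holdings}, sort_keys=True)

feed = publish(holdings)
print(feed)
```

Anyone could then poll or subscribe to such a feed and verify the fund’s makeup at any time.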

To me, this sounded like a small-scale version of my vision for the future of finance. Total transparency, where anybody can at any time dig through any details on his or her investments.

First, some history… When financial markets – as we know them now – were forming, one of the fundamental ideas was that all players in the market should have equal access to the best possible information on any bond, security and share available. For that reason, strict rules were put in place about how and when companies filed their data. Remember that this is about a century ago – in a very different age in terms of technology and communication. Quarterly filings were pretty much “real time” and very demanding on the companies’ financial operations.

Today we live in a very different world. Real time communication and crunching of terabytes of data is within the reach of pretty much anybody. “Quarterly”, let alone “annually” is not something we settle for when it comes to news, communication or even entertainment. “On-demand” and “real time” is the name of the game. Why should we settle for anything less in our investments?

My prediction is that we’ll see the rise of exchanges where real-time access to EVERYTHING is going to be a prerequisite for listing; that investors will be able to dig through the portfolios of the funds they’ve invested in, into the fundamentals of the portfolio companies, all the way down to the smallest details of their financials: their current cash flow situation, bank account balances, write-offs and salary costs – to name a few examples.

Obviously, no single person would dig through all this data, but the very possibility – and the combined power of the crowd – would put all actions under scrutiny, making stupendous bonuses, Bermuda straw huts hidden as mortgages in supposedly A-rated securities funds, and golden-parachute executive agreements visible to the famous “hand of the market”.

Quarterly filings of creatively accounted fundamentals are soooo “twentieth century”. Total real-time transparency is the way of the future.

The Case for Open Access to Public Sector Data

This article will be published tomorrow in The Reykjavík Grapevine.

Government institutions and other public organizations gather a lot of data. Some of them – like the Statistics Office – have it as their main purpose, others as a part of their function, and yet others almost as a by-product of their day-to-day operations.

In this case I’m mainly talking about structured data, i.e. statistics, databases, indexed registries and the like – in short, anything that could logically be represented in table format. This includes a variety of data, ranging from the federal budget and population statistics to dictionary words, weather observations and geographical coordinates of street addresses – to name just a few examples.

In these public data collections lies tremendous value. Data that has been collected with taxpayers’ money for decades – or in a few cases even centuries (like population statistics) – is a treasure trove of economic and social value. Yet the state of public data is such that only a fraction of this value is being realized.

The reason is that accessing this data is often very hard. First of all, it’s often hard to even find out what exists, as the sources are scattered, there is no central registry of existing data sets, and many agencies don’t even publish information on the data that they have.

More worrying is that access to these data sets is made difficult by a number of restrictions – some accidental, others due to lack of funding to make the data more accessible, and some even deliberate. These restrictions include license fees, proprietary or inadequate formats and unjustified legal complications.

I’d like to argue that any data gathered by a government organization should be made openly accessible online. Open access means the absence of all legal, technical and discriminatory restrictions on the use or redistribution of data. A formal definition of open access can be found at opendefinition.org.

The only exception to this rule should be when other interests – most importantly privacy issues – warrant access limitations.

There are a number of reasons for this. First of all, we (the taxpayers) have already paid for it, so it’s only logical that we can use the product we bought in any way we please. If gathering the relevant data and selling it can be a profitable business on its own, it should be done in the private sector, not by the government. Secondly, it gives the public insight into the work done by our organizations, much as Freedom of Information laws have done – mainly through media access to public sector documents and other information.

The most important argument – however – is that open access really pays off. Opening access and thereby getting the data in the hands of businesses, scientists, students and creative individuals will spur innovation and release value far beyond anything that a government organization can ever think of or would ever spend their limited resources on.

Some of these might be silly online games with little monetary value but yet highly entertaining. Others might be new scientific discoveries made when data from apparently unrelated data sources is mixed. And yet others might be rich visualizations that give new insights on some of the fundamental workings of society – showing where there’s need for attention and room for improvement.

A recent study on the state of public sector data in the UK concluded that the lack of open access is costing the nation about 1 billion pounds annually in lost opportunities and lack of competition in various areas. Scaled per capita, a billion pounds in the UK corresponds to about 750 million ISK for Iceland – and that’s without adjusting for Iceland’s higher GDP and arguably some fixed gains per nation.
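The per-capita arithmetic behind that comparison can be sketched as follows; the population and exchange-rate figures are rough 2008 assumptions of mine, not numbers from the study:

```python
# Scale the UK's estimated annual loss down to Iceland's population,
# then convert to ISK. All inputs below are rough assumptions.
uk_loss_gbp = 1_000_000_000    # ~GBP 1bn annually, per the UK study
uk_population = 61_000_000     # assumed 2008 figure
iceland_population = 315_000   # assumed 2008 figure
gbp_to_isk = 145               # assumed early-2008 exchange rate

iceland_loss_gbp = uk_loss_gbp * iceland_population / uk_population
iceland_loss_isk = iceland_loss_gbp * gbp_to_isk
print(round(iceland_loss_isk / 1_000_000))  # on the order of 750 million ISK
```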

Surely a huge opportunity for something that requires only a thoughtful policy change and a little budget adjustment to enable the institutions to make the needed changes and continue their great job of gathering valuable data.

– – –

See also an article from The Guardian, that helped spark a similar debate in the UK: Give us back our crown jewels

Enter the cloud – but how deep?

The new company is officially founded and has got a name: DataMarket. You’ll have to stay tuned to hear about what we actually do, but the name provides a hint 😉

In the last couple of weeks I’ve spent quite some time thinking about the big picture of DataMarket’s technical setup, and it’s led me to investigate an old favorite subject of mine: cloud computing.

Cloud computing perfectly fits DataMarket’s strategy in focusing on the technical and business vision in-house while outsourcing everything else. In other words, focus on the things that will give us competitive advantage, but leave everything else to those that best know how. Setting up and managing hardware and operating systems is surely not what we’ll be best at, so that task is best left to someone else.

This could be accomplished simply by using a typical hosting or colocation service, but cloud computing gives us one important extra benefit: it allows us to scale to meet the wild success we’re expecting, yet be economical if things grow in a somewhat milder manner.

That said, cloud computing will no doubt play a big role in DataMarket’s setup. But there are different flavors of “the cloud”. The next question is therefore – which flavor is right for this purpose?

I’ve been fortunate enough this year to hang out with people that are involved in two of the major cloud computing efforts out there: Force.com and Amazon Web Services (AWS). Additionally I’ve investigated Google AppEngine to some extent, and I saw a great presentation of Sun’s Network.com at Web 2.0 Expo.

These efforts can roughly be put in two categories:

  1. Infrastructure as a Service (IaaS): AWS and Network.com
  2. Platform as a Service (PaaS): Google AppEngine and Force.com

IaaS puts at your disposal the sheer power of thousands of servers that you can deploy, configure and set to crunch on whatever you throw at them. Granted, the two efforts mentioned above are quite different beasts and not really interchangeable for all tasks. Network.com is geared towards big calculation efforts, such as rendering of 3D movies or simulating nuclear chain reactions, whereas AWS is suitable for pretty much anything, but surely geared towards running web applications or web services of some sort.

PaaS gives you a more restricted set of tools to work with. They’ve selected the technologies you’re able to run and given you libraries to access underlying infrastructure assets. In the case of Force.com, your programming language is “Almost Java”, i.e. Java with proprietary limitations to what you’re able to do, but you also get a powerful API that allows you to make use of data and services available on Salesforce.com. AppEngine uses Python and allows you to run almost any Python library. In a similar fashion to Force.com, AppEngine gives you API access to many of Google’s great technological assets, such as the Datastore and the User service.

In short, the PaaS approach gives you less control over the technologies and details of how your application works, while it gives you things like scalability and data redundancy “for free”. The IaaS approach gives you a lot more control, but you have to think about lower levels of technical implementation than on PaaS. Example: On AWS you make an API call to fire up a new server instance and tell it what to do – on AppEngine, you don’t even know how many server instances you’re running – just that the platform will scale to make sure there is enough to meet your requirements.

So, which flavor is the right one?

The thinking behind PaaS is the right one. It takes care of more of the technical details and leaves the developers to do what they’re best at – develop the software. However, there is a big catch. If you decide to go with one of these platforms, you’re pretty much stuck there forever. There is no (remotely simple) way to take your application off Force.com or AppEngine and start running it on another cloud or hosting service. You might find this OK, but what if the company you’re betting your future on becomes evil? Or changes its business strategy? Or doesn’t live up to your expectations? Or wants to acquire you? Or – worse yet – doesn’t want someone else to acquire you?

You don’t really have an alternative – and you’re betting your life’s work on this thing.

Sure – if I were writing a Salesforce application, or something with deep Salesforce integration, I’d go with Force.com. Likewise, if I were writing a pure web-only application – let alone one with connections to, say, Google Docs or Google Search – I’d be very tempted to use AppEngine. But neither is the case.

So the plan for DataMarket is to write an application that is ready to go on AWS (or another cloud computing service offering similar flexibility), but to stay as independent of their setup as possible. Not that I expect to have any reason to leave, but there is always the black swan, and when it appears you’d better have a plan. This line of thinking even makes me skeptical of utilizing their – otherwise promising – SimpleDB, unless someone comes up with an abstraction layer that allows SimpleDB queries to be run against a traditional RDBMS or other data storage solutions that can be set up outside the Amazon cloud.
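Such an abstraction layer could be as thin as a narrow storage interface that the application codes against, with interchangeable back-ends behind it. This is only a sketch of the idea – the class and method names are hypothetical, not a real AWS or DataMarket API:

```python
class ItemStore:
    """Narrow, hypothetical storage interface: attribute-bag items
    addressed by a key. A SimpleDB- or RDBMS-backed class would
    implement the same three methods."""
    def put(self, key, attributes): raise NotImplementedError
    def get(self, key): raise NotImplementedError
    def query(self, **criteria): raise NotImplementedError

class InMemoryStore(ItemStore):
    """Dict-backed stand-in used here so the sketch runs anywhere."""
    def __init__(self):
        self._items = {}
    def put(self, key, attributes):
        self._items[key] = dict(attributes)
    def get(self, key):
        return self._items.get(key)
    def query(self, **criteria):
        # Return keys of items whose attributes match all given criteria.
        return [k for k, attrs in self._items.items()
                if all(attrs.get(f) == v for f, v in criteria.items())]

store = InMemoryStore()
store.put("ds-1", {"topic": "population", "country": "IS"})
store.put("ds-2", {"topic": "weather", "country": "IS"})
print(store.query(topic="population"))  # ['ds-1']
```

Swapping clouds would then mean writing one new back-end class rather than rewriting the application.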

Yesterday, I raised this issue with Simone Brunozzi, AWS’ evangelist for Europe. His – very political – answer was that this was a much-requested feature and that they “tended to implement those”, without giving any details or promises. I’ll be keeping an eye out for that one…

So, to sum up: when I’ve been preaching the merits of Software as a Service (think Salesforce or Google Docs) in the past, people have raised similar questions about trusting a 3rd party with their critical data or business services. To soothe them, I’ve used banks as an analogy: 150 years ago everybody knew that the best place to keep their money was under their mattress – or even better – in their personal safe. Gradually we learned that the money was actually safer in institutions that specialized in storing money, namely banks. And what’s more – they paid you interest. The same is happening now with data. Until recently, everybody knew that the best place for their data was on their own equipment in the cleaning closet next to the CEO’s office. But now we have SaaS companies that specialize in storing our data, and they even pay interest in the form of valuable services that they provide on top of it.

So, your money is better off in the bank than under your mattress, and your data is better off at the SaaS provider than on your own equipment. Importantly, however, at the bank you have to trust that your money is always available for withdrawal at your convenience, and in the same way your data must be “available for withdrawal” at the SaaS provider – meaning that you can download it and use it in a meaningful way outside the service it was created on. That’s why data portability is such an important issue.

So the verdict on the most suitable cloud service for DataMarket: my application is best off at AWS, as long as it’s written in a way that lets me “withdraw” it and set it up elsewhere. Using AppEngine or Force.com would be more like depositing my money in a bank that promptly changes it all to a currency that nobody else accepts.

I doubt such a bank would do well as a business!

Starting up – that would be the fourth

Well, well, well.

I guess it’s some kind of a medical condition, but I’m leaving a great job at Síminn (Iceland Telecom) to start up a new company once again. This will be my fourth start-up, and I’m as excited as ever.

It will be a relatively slow migration, as I’m finishing off a few projects at Síminn over the next couple of months while at the same time setting up the new company, assembling a core team and refining the strategy for an idea that has been with me for some 18 months now – gradually getting more and more focused, until I became so obsessed that I simply had to go for it.

Some would claim this is a horrible time to start a company, with a gloomy economic outlook and a lot of turmoil in the world of business and IT.

I – however – see this as an opportunity. Due to these very conditions, highly qualified people are looking for exciting new opportunities. This is especially true here in Iceland, where the financial sector has drained the market of IT talent for the last 3-4 years, and those adventurous people who would really rather be working on something new and innovative have been tempted by the lucrative salaries and “never-ending party” of our booming banks. Now the banks (and others) are scaling down and being a lot more careful, so these people – many of them not necessarily in danger of losing their jobs – might very well want to flex their start-up muscles again. Actually, I know for a fact that this is the case.

Secondly, booms and busts in the economy seem to come at intervals of 6-8 years. It takes at least 3-5 years to build a great company, so those starting now are likely to catch the next upswing without having to run too fast for their own good – as long as they can build sufficient income or find the venture capital to fund their operations in the meantime.

The concept I’m working on – and my situation in general – is such that I believe I can pull this off.

I’m not willing to share publicly – just yet – what the concept is, but I’ll surely blog regular updates as things progress.

Long nights and fun times – here I come 🙂

iPhone, 3G and battery life

The rumours are getting ever louder. The new version of iPhone is coming out and it has 3G capabilities.

Many have expressed concerns that this will seriously affect the iPhone’s already relatively short battery life. 3G chipsets certainly drain batteries faster than 2G or 2.5G (i.e. GSM, GPRS and EDGE) chips. Most 3G handsets – regardless of brand – have short battery life compared to their lower-generation counterparts, and most 3G handset owners are familiar with their radiating heat, especially during heavy data – or even just voice – usage.

There has been talk of new OLED displays that would decrease the power usage of the other main power sucker – the display – and thereby leave some room for added power consumption from the 3G chipset. And obviously the new iPhone’s battery won’t be inferior to the current model’s.

Before the first version of the iPhone came out, I made a few predictions that turned out to be pretty accurate, so I’ll give it another shot. During a chat with my colleague – Chad Nordby – last week, we came up with another method to dramatically increase the battery life: turn 3G on only when high bandwidth is needed.

The phone would stay on the 2G network during normal operations. There is no need to drain the battery on UMTS (i.e. 3G) communications while idly waiting for a call. When the user activates the device, 3G could be turned on and ready for your high-speed browsing – very much the same way as is currently done for the WiFi capabilities. The same might also work for background communication – 3G could be turned on to fetch large payloads, such as a big email attachment, but minor status updates and mail checks could stay on the EDGE network.
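The decision logic we had in mind is simple enough to sketch in a few lines; the payload threshold below is an arbitrary assumption of mine, not anything from Apple:

```python
# Stay on the power-frugal 2G network by default; light up 3G only when
# the user is actively using the device or a large transfer is pending.
LARGE_PAYLOAD_BYTES = 100 * 1024  # assumed cut-off for a "big" transfer

def pick_radio(payload_bytes, user_active=False):
    # Return "3G" only when the extra bandwidth is worth the battery cost.
    if user_active or payload_bytes >= LARGE_PAYLOAD_BYTES:
        return "3G"
    return "2G"  # idle waiting, mail checks, small status updates

print(pick_radio(2 * 1024))             # a small mail check stays on 2G
print(pick_radio(5 * 1024 * 1024))      # a big attachment triggers 3G
print(pick_radio(0, user_active=True))  # active browsing gets 3G
```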

The handover technology from 2G to 3G (or technically from GSM to UMTS) networks is already there and works quite well. When you drive out of 3G coverage while talking on the phone, even the call is handed from one technology to the other instantaneously, without being dropped.

So there’s nothing stopping them, and since such a small fraction of time is spent on data services anyway (compared to idle time), this would fully address the battery concerns. My prediction is therefore that the new iPhone will work this way. Obviously any handset manufacturer could use this method, but Apple is probably the only one crazy enough to actually be thinking along these lines.

The sheep might catch on later 🙂

Adventures in copyright: Open access, data and wikis

I’ve just had a very interesting experience that sheds light on some important issues regarding copyright, online data and crowdsourced media such as wikis. I thought I’d share the story to spark a debate on these issues.

For a couple of years I’ve worked on and off on a simple web-based system for maintaining and presenting a database of inflections of Icelandic words: Beygingarlýsing íslensks nútímamáls, or “BÍN” for short. The data is available online, but the maintenance system is used by an employee of the official Icelandic language institute: Stofnun Árna Magnússonar í íslenskum fræðum. She has been gathering this data and deriving the underlying structure for years – a period spanning up to or over a decade. As you can imagine, BÍN is an invaluable source for a variety of things, ranging from foreigners learning Icelandic to the implementation of various language technology projects.

Now before I go any further I think it’s important to say that I’m a big supporter of open data. In fact, one of the few things I’ve ever gotten involved in actively lobbying for is open access to data in the public sector (article unfortunately in Icelandic).

Back to the story. A couple of days ago I got a call from the aforementioned BÍN administrator. She’d gotten a tip that someone was systematically copying data from BÍN into the Icelandic Wiktionary and asked me to look into it.

I started going through the web server log files – and sure enough – comparing them to the new entries page on Wiktionary, the pattern was obvious: a search for a word in BÍN, and 2-3 minutes later a new entry in Wiktionary for that same word. A pattern consistent with someone copying the data by hand. This pattern went back at least a few days – probably a lot longer.
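The comparison itself is straightforward to automate. A toy version of the check – with invented words and timestamps rather than the actual log data – looks like this:

```python
from datetime import datetime, timedelta

# word -> time it was looked up in BÍN (invented sample data)
bin_searches = {
    "hestur": datetime(2008, 4, 20, 10, 0),
    "bók": datetime(2008, 4, 20, 10, 7),
}
# word -> time a new entry for it appeared on Wiktionary (invented)
wiki_entries = {
    "hestur": datetime(2008, 4, 20, 10, 2),  # two minutes after the search
    "skip": datetime(2008, 4, 20, 11, 0),    # never searched in BÍN
}

# Flag words whose Wiktionary entry appeared within a few minutes
# of the corresponding BÍN search.
window = timedelta(minutes=5)
suspicious = [word for word, searched in bin_searches.items()
              if word in wiki_entries
              and timedelta(0) <= wiki_entries[word] - searched <= window]
print(suspicious)  # ['hestur']
```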

In light of this I blocked access from the IP addresses that these search requests originated from and redirected them to a page that – in no uncertain terms – stated our suspicion of abuse and listed our email addresses in order for them to contact us for discussion.

Now – BÍN is clearly marked as copyrighted material – and as the rights holder, the institute has the full right to control access to and usage of its data. The inflections of a single word are obviously not intellectual property, but any significant part of a collection of nearly 260,000 such words definitely is.

As said before, I’ve personally been advocating open access to all public sector data, but I also know that this is a complicated issue – one that goes far beyond the opinions of the people working with individual data sets. This institute – for example – must obey the rules set for it by the Ministry of Education, and changing those rules is something that must be taken up at an entirely different level.

The Wiktionary users in question have since contacted us and stated that they were not copying the content, merely referencing it when proofreading their own information. I have no reason to doubt that, but the usage pattern was indistinguishable from a manual copying process, leading to the suspicion and the blocking of their addresses.

We’ve since then exchanged several emails and hopefully we’ll find a way for all parties to work together. It would be fantastic if the enthusiasm and great work that is being put into building the Wiktionary could be joined with the professional experience and scientific approach exercised by the language institute to build a common source with clear and open access.

At the end of the day, open access to fundamental data like this will spur innovation and general prosperity, but as this story shows, it is not something that will happen without mutual respect and consensus on the right way to move forward.

Updated Apr. 24: Discussion about this incident is also taking place here and here (both are at least partly in Icelandic).

Firehose aimed at a teacup

Dogbert to Dilbert: Information is gushing toward your brain like a firehose aimed at a teacup.

Every company, organization and individual is continuously gathering and creating all kinds of data. Most of this data collection happens in separate silos, with very limited connections between the different data collections. This is true even for data sources within the same organization. A shame, really, because the value of data rises in proportion to the ways it can be interlinked and connected – a network effect similar to the one that determines the value of social networks, telecommunication systems or even the financial markets themselves.

A lack of common definitions, data schemas and metadata currently makes these connections quite hard to make. This is the very problem the semantic web promises to solve. However, a lot of this data is already finding its way onto the internet in one form or another, and those who make the effort to identify and collect the right bits can gain insights that give them a competitive advantage in their markets. At – and around – the Money:Tech conference I was fortunate enough to attend last week, several examples were given:

  • Stock traders are monitoring Amazon’s lists of top sellers in electronics and using them as indicators of chip makers’ performance in the market. This is done by breaking down the supply chain for each of the top-selling devices and thereby establishing who’s benefiting from – say – the stellar sales of iPods.
  • Insurance companies, utilities (energy sector) and stock traders (again) are constantly analyzing weather data to predict things like insurance claims, electricity demand and retail sales patterns.
  • By monitoring publicly available sales data in the real estate market in “real time”, companies like Altos have been able to accurately predict housing price indexes up to a month before the official government numbers are published. Similar approaches might predict other major economic indicators – for example, matching the number of job listings on online sites to changes in the unemployment rate, or using online retail prices to predict inflation.

But it’s not only data gathered from the internet that’s interesting. Far more data lies buried deep in companies’ databases and is therefore (usually for a good reason) not publicly available.

Take the area of telecommunications. Mining their data in new ways could help telcos and ISPs get into areas currently dominated by other players. Take the sizzling-hot social networking area as an example: call data records, cell phone contact lists and email sending patterns are the quintessential social network information. Whom do I call, and how frequently? Who calls me? Who is the real hub of information flow within my company? All of this can be read pretty directly from the data that a telco is already gathering. Every customer’s social network can be accurately drawn, including the strength, the “direction” and – to some degree – the nature of each relationship. Obviously this would have to be transparent to the customer and used only on an opt-in basis, but in terms of data accuracy it is a “Facebook killer” from day one.
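A minimal sketch of how directly such a graph falls out of call data records (the records here are invented sample data, not anything from a real telco):

```python
from collections import Counter

# Toy call data records: one (caller, callee) pair per completed call.
call_records = [
    ("alice", "bob"), ("alice", "bob"), ("alice", "carol"),
    ("bob", "alice"), ("carol", "bob"),
]

# Each distinct (caller, callee) pair is a directed edge; the call count
# is its weight -- i.e. the strength of the relationship in that direction.
edges = Counter(call_records)
print(edges[("alice", "bob")])  # 2

# A crude "hub" measure: who receives calls from the most people overall?
inbound = Counter(callee for _, callee in call_records)
print(inbound.most_common(1))   # [('bob', 3)]
```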

This is clearly something Google would do if they were a telco (and I’m pretty sure that Android plays to this to some extent).

Another interesting aspect of this whole data collection business is the value of the data one company gathers to other organizations. Again, the telco is my example. A telco has – by default – information about the rough whereabouts of every mobile phone customer. Plot these on a time axis, remove all personally identifying information, and you have a perfect view of the flow of traffic and people through a city – including seasonal and periodical changes in traffic patterns. This is of limited value to the telco, but imagine the value to city planners, or to businesses deciding where to build a service station or what opening hours to set for their high-street store. Add a little target-group analysis on top of this and the results are almost scary.

We are probably at the very beginning of realizing the potential in the business of data exchange and data markets, but I’ll go as far as predicting that in the coming years we’ll see the rise of a new industry focusing solely on enabling and commercializing this kind of trade.

Dear Apple – may we pay?

Update (Feb 7): Updated the estimated number of iPhones in Iceland in light of more reliable data.

As stated before: I live in a small country, nobody wants my money.

In the couple of years since I wrote that post, I’ve been watching in awe as my fellow Icelanders – and in fact a lot of people all over the world – have been going to great lengths to pay Apple money for products that are not supported here.

The two main products in question are iTunes credits and the iPhone.

  • iTunes: Iceland is not an iTunes country, and after the ongoing row with the Norwegians, I’ve heard that Apple is even more reluctant than before to enter more markets. This comes down to licenses and slight variations in how laws are structured and how the RIAA counterparts in each country interpret them and act on behalf of rights holders. With the wealth of illegal alternatives out there, one would assume that people would just ignore iTunes and use some of the P2P programs.

    But – no – Apple has done so well in the marketing and implementation of the iTunes / iPod ecosystem that Icelanders are going to great lengths to buy songs in the iTunes store anyway. The two most used methods are:

    1. registering a secondary US address for your credit card, then signing up for a PayPal account with that same address and using that PayPal account as a payment method in iTunes
    2. buying prepaid iTunes credit from online stores such as iTuneShop or even eBay.

    The interesting thing is that a song you buy this way is just as illegal under Icelandic law as one you’d download using a P2P program. Apple has no right to sell songs in Iceland, and the song you bought is at best licensed to you in the markets where iTunes operates. Strictly speaking, it might even only be licensed for you to listen to in the US!

  • iPhone: This is the same story as elsewhere in the world. People here are buying the phones abroad, bringing them in and then finding various ways to make them work. An educated guesstimate would be that there are somewhere north of 1,400 iPhones already in the Icelandic market. That is around 2% of the entire handset market here, in the 7 months since the phone was launched in the US! All of these phones are bought at full price, but obviously Apple is not getting its share of the mobile subscription and data revenues.

Of course, the sales of these two products in this tiny market alone don’t matter at all to Apple’s bottom line. The lesson here is that when people are going to great lengths to overcome your obstacles and pay you money, you must be doing something right.

On the other hand, when you’re putting up obstacles for people who would happily pay you in the first place, you must be doing something wrong.

So: Dear Apple – may we pay?

The inevitable business model for music

There is a lot of turmoil in the music industry. The big publishers (usually dubbed “the majors”) are finally waking up to the fact that a decade of neglecting to come up with suitable business models for the web has bred a generation of consumers that have never paid for music. For them, music is free. It’s something you get from your friends via IM, through peer-to-peer applications, or download directly from artists’ web sites or Myspace profiles.

This will never change back. From now on, basic access to music will be free – i.e. free to the consumers. As Techdirt’s Mike Masnick has rightly pointed out, the most fundamental theories of economics state that in a market with infinite supply, the price will be zero. Infinite copies of digital music can be made at no cost, hence the ultimate price of a copy will be zero. Zip, nada, nix – nothing whatsoever.

This does not mean that music is dead. On the contrary. The likelihood of a musician being able to live off their art has probably never been greater, and being a music consumer has never been better. It’s the roles of the middlemen that are changing, shrinking – and for those that don’t adapt – vanishing.

In my mind, there is an inevitable business model for music. This model consists of three layers.

  1. Basic access for free: Basic access to music will be free to the consumer. Their ISPs, telcos, portals or other music providers will pay a small fee to the rights-holders – something like $2-4 per active user per month. This will allow them to legally provide their users with the ultimate music service – every song ever recorded, discoverable in every possible way, for free – yet with higher quality and in a more user-friendly way than today’s illegal file-sharing methods.

    These music providers will recoup this cost via other channels: advertising, bundling music services with other products such as telecommunications, and upselling music-related products (see layer 2).

    For some time we’ll probably see this limited to streaming music. The rights-holders will keep trying to sell copies of songs (full track downloads), but a stream can easily be turned into a copy, and copies will still float around for free illegally. That, coupled with ever more ubiquitous internet access, means that the imagined difference between a stream and a copy will eventually fade.

    At a great conference here in Reykjavik last week, we heard from “media futurist” Gerd Leonhard that even at the peak of the music industry’s revenues, music spending per household in the US never surpassed $5. This means that $2-4 per user – with some users even active on more than one such service in a given month – is actually a great deal.

  2. Merchandise and ads: While copies of songs are an infinite resource, most other things aren’t. Merchandise, such as T-shirts, baseball caps, autographed posters and images, special edition vinyl records, books, etc. are desirable items to devoted fans. These can be a great source of revenues both for the bands directly and for the music service providers.

    A captive and focused audience is also of great value to advertisers. Fans of a certain band or of a certain type of music tend to be highly targetable consumer groups: sweet music to the ears of every ad network and advertiser, and quickly worth more than the basic price paid to the rights-holders in layer 1.

  3. Access to artist: The artists themselves, already making money from the activities in layers 1 and 2, are (rightfully) the kings of the future of music.

    The most valuable asset in the whole music value chain is access to the artists themselves in one way or another. Concert tickets and backstage passes are obvious examples. An online video chat with your star together with a group of fans is worth something, and merely a glimpse of an opportunity to get on a devoted phone call with your favorite pop star could be worth dying for!

    Other types of direct access include getting a band to write a new song for your movie or advertisement, or sponsoring the band’s latest tour – in both cases riding on the band’s most valuable asset: their devoted audience.

    Finally: While economists may not understand tipping, human beings actually do part with their money even if they don’t have to, just to show their gratitude or affection for something provided to them. Note to hopeful bands: Make it easy for your fans to donate money to you directly – they will.
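The layer-1 pricing comparison above can be put into a quick back-of-the-envelope calculation. This is a minimal sketch: the $2-4 fee range and the $5 peak-spending figure come from the text, while the subscriber count is a purely made-up example number.

```python
# Back-of-the-envelope: flat per-user music fee vs. peak household spending.
# The $2-4 fee and the $5 peak figure are the ranges quoted in the text;
# the subscriber count below is purely illustrative.
PEAK_HOUSEHOLD_SPEND = 5.00      # USD, historical peak per US household
FEE_LOW, FEE_HIGH = 2.00, 4.00   # USD per active user per month

subscribers = 1_000_000  # hypothetical ISP/telco user base

# What the rights-holders would collect from one provider each month:
monthly_low = subscribers * FEE_LOW
monthly_high = subscribers * FEE_HIGH
print(f"Rights-holders receive ${monthly_low:,.0f}-${monthly_high:,.0f}/month")

# A user active on two such services effectively generates two fees,
# already exceeding the historical peak spend:
two_services = 2 * FEE_HIGH
print(f"Two services at the high end: ${two_services:.2f} "
      f"(vs. the ${PEAK_HOUSEHOLD_SPEND:.2f} historical peak)")
```

Even at the low end of the range, a single active user matches roughly half the historical peak household spend, and multi-service users exceed it outright – which is why the flat fee looks like a great deal for the rights-holders.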

These layers feed off each other. Unlimited access to music (layer 1) captures bigger audiences. Bigger audiences mean more value to the music service providers (layer 2), which in turn leads to more interest in direct access (layer 3).

All indications are that this is actually already happening: As revenues from music sales are going down, revenues from concerts and gigs are at an all-time high. What musicians DON’T need anymore is someone to handle the physical distribution of their music – there is no physical distribution. They DO need distributors (music service providers) and they DO need promoters that help them capture and grow their audience. These needs are spurring new types of music companies, but nobody will ever again have the kind of stranglehold on the artists that the publishers have had for the past decades.

So it goes.