The Economics of UGC

How do user-generated content (UGC) sites do it? How can Flickr, YouTube, and the like possibly make money, given their limited revenue options, while simultaneously giving away absolutely massive piles of storage and bandwidth?

Economies of scale kick in big-time, and there’s still a lot of unused capacity out there, but you have to wonder how sustainable it is to let users who pay very little or nothing at all dump the entire contents of their flash memory cards onto Flickr every day. Not to mention that uploading 13 nearly identical pictures of your cat rather than one pollutes the quality of the datastore for all users (I’ve never understood why Flickr doesn’t strictly limit the number of images that can be uploaded per day, forcing people to edit their collections).

There was some discussion in TWiT episode 47 about Yahoo’s purchase of Flickr and how they’re now finding it an economic albatross. Photo printing from Flickr is an obvious revenue opportunity, but according to a TWiT insider, 10 million Flickr users generate about 80 print orders per week. News flash: people are there for the community, not for abstracted printing possibilities. But once you invite people to upload their lives into your service, you’re committed; there’s no backing out.
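To see why that’s an albatross, just run the numbers. Here’s a quick back-of-the-envelope sketch; the 10 million users and 80 orders/week come from the TWiT discussion, but the profit-per-order figure is purely my own assumption:

```python
# Back-of-the-envelope math on the Flickr printing numbers
# quoted on TWiT. The profit-per-order figure is purely a
# guess, for illustration only.
users = 10_000_000        # Flickr users (per TWiT)
orders_per_week = 80      # print orders per week (per TWiT)
assumed_profit = 5.00     # dollars of profit per order -- assumption

weekly_conversion = orders_per_week / users
annual_profit = orders_per_week * 52 * assumed_profit

print(f"Weekly conversion rate: {weekly_conversion:.6%}")            # 0.000800%
print(f"Annual printing profit at $5/order: ${annual_profit:,.0f}")  # $20,800
```

A conversion rate of less than one in 100,000 users per week, set against the cost of storing and serving everyone’s photos for free, makes the community-versus-commerce problem pretty stark.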

Despite the seemingly problematic revenue opportunities, Yahoo! is continuing their UGC/Web 2.0 purchasing spree: they apparently have an offer on the table to buy Digg. UGC is a critical aspect of Web 2.0, and they can’t afford to miss the boat.

The recent proliferation of massive free storage systems has changed user expectations for all hosting services. Alex King, on user expectations at FeedLounge:

When I hear someone say “a service like this should be free”, it feels a little like they are saying “your time and investment are worth nothing”. I know it’s not personal, but to make a really great product, you have to invest yourself personally.

Birdhouse struggles with this too. For example, we simply can’t offer a webmail system as good as Gmail’s (for any amount of money), and we sure as heck can’t offer 2GB of storage to anyone who comes by and asks. But thanks to the quality of modern webmail systems like Yahoo’s and Google’s, people just assume that all webmail will be of similar quality. Without truly massive investment and economies of scale, small and medium-sized hosts are stuck offering Web 1.0 technology in a world that already expects Web 2.0 quality and scale.

But it goes beyond webmail: now that Google and Yahoo (and soon Microsoft) are making quick inroads into the web hosting business, the picture isn’t pretty for smaller hosts. What we can — and do — offer is excellent hand-holding and the custom setups that the cookie-cutter monoliths can’t provide. And while the bandwidth and storage we provide may seem puny by comparison, I haven’t met a customer yet who actually felt cramped by our offerings. 500MB is a huge web site… unless you’re throwing a ton of audio and video around.

I’ve been experimenting with UGC for nine years at the Archive of Misheard Lyrics, and have made money from it. Not big money, but some. But I’ve had the advantage of being able to do it on a high-impressions/low-bandwidth model – lyrics pages are tiny chunks of text in a database (see the rough numbers sketched below). And unlike free-for-alls like Flickr, I exert editorial control over the content, and don’t let just anything onto the site*. So I know UGC can be a workable revenue model, under the right conditions. But whether this scales to unlimited free photo/video/audio hosting remains to be seen.

* Although in the past I’ve used volunteer editors, some of whom let huge numbers of unfunny lyrics into the live pool, the current user voting system (which I guess is a bit Web 2.0 itself) should eventually correct for that.
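To make the contrast between the two models concrete, here’s a rough sketch of the bandwidth economics; all of the sizes below are illustrative assumptions, not measured figures:

```python
# Comparing a high-impressions/low-bandwidth site (text lyrics
# pages) against a photo host. All figures are assumptions.
lyrics_page_kb = 10      # a lyrics page: a small chunk of text plus HTML
photo_kb = 1_500         # a typical full-size JPEG
views = 1_000_000        # page views in some period

text_gb = views * lyrics_page_kb / 1_000_000
photo_gb = views * photo_kb / 1_000_000

print(f"1M lyrics-page views: ~{text_gb:,.0f} GB transferred")   # ~10 GB
print(f"1M photo views:       ~{photo_gb:,.0f} GB transferred")  # ~1,500 GB
# Ad impressions -- and therefore ad revenue -- are identical in
# both cases, but the photo site pays roughly 150x the bandwidth bill.
```

Under these (assumed) numbers, both sites earn the same ad revenue per million views, but the photo host moves two orders of magnitude more data to earn it.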

See also Nick Cubrilovic: The Economics of Online Storage.

Music: The Minutemen :: Futurism Restated


6 Replies to “The Economics of UGC”

  1. Hi Scot

    Aha, another one who has seen the light. I agree totally with what you say, Scot. Current UGC offerings are the fuel-guzzling juggernauts of the web: they consume massive bandwidth and storage for very little (or at least disproportionate) return value. Is it any wonder that many ISPs have started traffic shaping to deprioritize these services’ bandwidth relative to other traffic? Strange, isn’t it, that in a world where we’ve started to wake up to the need to conserve our natural resources, we’re supporting services that are just trashing our online resources.

    Do you recycle at home? Many people do; we get no personal gain from it, but we conscientiously separate our glass from our cardboard from our plastic. Some people drive dual-fuel cars, others drive diesels, and more people car-share… you get the picture. If we consume it all and throw it away, we’ll have nothing left for others. OK, in the physical world the consequences are perhaps more serious, but I believe that gradually we’ll wake up to the fact that the Internet doesn’t have unlimited bandwidth, and we don’t have unlimited storage – no matter how fast we can grow it.

    True, we *can* grow capacity (storage and bandwidth) very fast but, as you say, when users are just dumping memory-hungry content en masse, it seems obvious that the growth of content will outstrip the rate at which we can upgrade bandwidth and storage. The number of users on UGC sites is growing exponentially, and storage and bandwidth requirements grow in proportion to the number of users. How fast can you physically rack up new capacity? You’d need exponentially growing resources just to stay ahead of the curve.

    Anyway, there is a new breed of UGC, using a purer P2P architecture. Why have fifty million copies of the same content scattered around, consuming fifty million times the resources, when you can just share links to the content and make it available via a true P2P architecture? Where many P2P apps fall down is when they go back to the client-server model for some of their key processes. Do that, and you’re back to the model of racking up servers at an exponential rate as the network grows. P2P should get more efficient as the network grows, not less. Develop a UGC offering on a true P2P architecture and you have a different story: no more need to provide centralized storage; the burden is shared across the network.

    I’m involved in a project called izimi which is doing just that. It’s been in development for about 18 months and we’re about to go Beta in August. The Beta is going to be limited distribution, and I’d be thrilled if you’d be interested in being invited. If not, that’s cool too. I enjoyed reading your blog; keep it up, I’ll visit again. David Ingram, izimi (www.izimi.com)

  2. Thanks for the comment, David; well said. I’ll just add, though, that the P2P solution merely distributes the bandwidth and storage wastage evenly around the net. That helps, but distributed waste is still wasteful, even if it does take the onus off the original provider. P2P solutions to UGC/Web 2.0 sites will only encourage massive bandwidth wastage overall. It makes me think of the early days of America, and the pioneers thinking that timber and other resources were limitless. How long before we find ourselves up against the wall, facing distributed bandwidth shortages?

  3. Scot, I think what David is referring to is a P2P architecture where, ideally, there’d be no need for UGC sites, i.e. if you share certain files with a group and I share files with the same group, then those files never get relocated to a central server. Consequently, there would be significantly reduced waste on the storage front. There may even be reduced bandwidth waste, as one only accesses a file when it is of interest, rather than uploading it to a central location for it to be accessed.
    Whatever the case, you’re probably right in the first instance: because we already have numerous UGC sites, they’re not going to disappear quickly, irrespective of how effective emerging P2P architectures might be. Moreover, people being people, a lot will still choose to transfer a file to their own system just so they have their own copy, even when that isn’t necessary.
    I like the notion of a true P2P environment a lot. Similarly, I like developments like those originating from the Ndiyo Project. Having made a mess of the physical world we live in, it would make sense to get ahead of the curve in the cyber world and prevent waste wherever possible.
    Regards,
    Alan

  4. Scot, I think what David is referring to is a P2P architecture where, ideally, there’d be no need for UGC sites, i.e. if you share certain files with a group and I share files with the same group, then those files never get relocated to a central server. Consequently, there would be significantly reduced waste on the storage front.

    Alan, I disagree with this in two ways.

    First, I’m not sure I follow what you mean when you say that certain kinds of P2P architecture would obviate the need for UGC sites. Even if you offload the distribution of bits for Flickr or YouTube from a central server to all of the users, Flickr and YouTube are still “needed” – ultimately, the users aren’t going to care how the bits are distributed.

    Second, my point about wastage is that YouTube and Flickr and all UGC sites are still going to be 90% crap. People are still going to upload 13 pictures of their cat rather than two because they’re lazy and don’t want to edit or trim their collections. And podcast aggregators are still going to download tons of episodes of shows that never get listened to. These kinds of wastage will remain exactly the same regardless of whether a central server or a distributed P2P network is in use.
