A case study of online competitive video dynamics
In the few years YouTube has been around, it has grown into a powerful new medium that has reshaped fame, politics, marketing and activism. Yet in spite of this cultural impact, surprisingly little has been written about how exactly YouTube works and how it achieves the effect it does.
This is not a big surprise: like other web giants, YouTube is a closed, monolithic web service. To help its users sift through the massive video database, sophisticated algorithms produce real time charts, generate content suggestions and map video relations. And just like Google PageRank, the algorithms behind all this have become a valuable trade secret.
This raises a bunch of questions. Is there such a thing as a 'PageRank' on YouTube? Just how does a video get popular? Does everyone have an equal chance at fame?
These questions become even more important when you consider the phenomenon of online contests powered by user-generated media. And of course, that's something that we're very interested in here at Strutta...
How we browse
When we look at how people find new videos to watch, four major dynamics stand out:
- Recommended by a friend (link or blog embed).
- A new video created by someone you are directly subscribed to.
- Search query for something you're interested in or looking for.
- Browsing galleries – daily featured, most viewed, highest rated, most subscribed, etc.
In all of the above, it's safe to say that social influence dominates in deciding what we watch. The rules of social interaction and sharing are by now well understood and hard to change. The result is a general hierarchy of what is popular and what is not.
However, this list ignores a fifth, major hook: after watching a YouTube video, we often continue watching by picking from the 'related videos' list. The bite-size nature of most YouTube videos means that most of the time, there is an endless trail of related content to satisfy one's appetite.
The algorithm YouTube uses to pick these related videos defines what gets seen and what doesn't, and can have a huge influence on the popularity of videos, as well as on the entertainment value for viewers. While we obviously can't peek behind the curtain to read the code, we can analyse the publicly available statistics to gain some insight into it. Some very interesting observations pop up.
The long tail
First of all, when thinking about recommendations, we need to account for the effect of feedback on popularity. A simple example can be seen in the Top 40 charts: the presence of a hit in the charts (which is decided by sales) will usually boost that song's exposure and thus its sales, pushing it even higher up the charts.
Similarly, if we imagine a video site where the main mode of finding content consists of watching what is already popular – i.e. what is already being watched the most – then it will be near impossible for new material to break through. To avoid this deadlock, the effect of feedback must be countered and balanced by other incentives.
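This feedback deadlock is easy to demonstrate with a toy model. The sketch below is not a model of any real site; it just pits one established hit against nine newcomers, where each new viewer picks a video in proportion to its current view count, except that with some probability `explore` (my stand-in for "other incentives") they pick uniformly at random instead:

```python
import random

def simulate(views, steps, explore, seed=42):
    """Rich-get-richer toy model: each new viewer picks a video in
    proportion to its current view count, except that with
    probability `explore` they pick uniformly at random instead."""
    rng = random.Random(seed)
    views = list(views)
    for _ in range(steps):
        if rng.random() < explore:
            # discovery via search, features, friends, etc.
            i = rng.randrange(len(views))
        else:
            # watch what is already popular
            i = rng.choices(range(len(views)), weights=views, k=1)[0]
        views[i] += 1
    return views

# One established hit with 1000 views vs. nine brand-new videos.
start = [1000] + [1] * 9
locked = simulate(start, steps=5000, explore=0.0)
mixed = simulate(start, steps=5000, explore=0.3)
print(sum(locked[1:]))  # newcomers barely move without exploration
print(sum(mixed[1:]))   # exploration breaks the deadlock
```

With `explore=0.0` the hit absorbs essentially all new views; even a modest amount of exploration gives the newcomers a fighting chance.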
A second important factor is that the social dynamics of the web have empowered a new model of popularity, known as the Long Tail. In a nutshell, its effect on media can be described as this: the small amount of all content that is very popular is now dwarfed by the very large amount of content that is only slightly popular or more specifically only popular with a small group of people.
For every single Leave Britney Alone video with a couple million views, there are a hundred thousand other videos with a hundred views each. The reason behind this shift is that digital media and the internet have drastically reduced the cost of storage and distribution. It is now entirely feasible and even profitable to expand one's focus deep into the long tail, and this is exactly what successful online retailers and publishers have done. Before the advent of the internet, such content would simply not get any meaningful exposure.
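The head-versus-tail arithmetic is easy to see with a toy power-law distribution. The exponent below is an illustrative assumption, not measured YouTube data:

```python
# Toy long tail: the video ranked k gets views proportional to k**-0.8
# (the exponent 0.8 is an illustrative assumption, not measured data).
N = 100_000
views = [k ** -0.8 for k in range(1, N + 1)]
total = sum(views)
head_share = sum(views[: N // 100]) / total  # the most popular 1%
print(f"top 1% of videos:  {head_share:.0%} of all views")
print(f"remaining 99%:     {1 - head_share:.0%} of all views")
```

Under these assumptions the obscure 99% collectively draws roughly twice the attention of the hits, which is exactly the inventory the long-tail retailers and publishers went after.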
How recommendations are made
With these two facts in mind, we can examine YouTube's recommendations in a new light. There appear to be two main deciding factors.
The first (predictable) factor is traditional keyword relevance between videos: using the title, summary and tags, other videos are found that relate to the same topics. Natural language processing and semantic mapping are important here, as well as spam prevention, something which search engines like Google have become very good at.
The second deciding factor is more surprising, though, and comes from an easily missed observation: the view count of related videos tends to match the view count of the video they relate to. To determine how strong this effect is, I ran a statistical test.
First I picked 600 random YouTube videos across the fame scale, selected through random keyword searches. For each video, I looked up the list of related videos and compared each video's total view count with the view count of one of its related videos. I then repeated this, using the average daily view count instead (view count divided by the age of each video).
Both total and daily view count are good relative measures for the popularity of a video, as they predict how many people might be watching the video right now. Because the amount of attention a video gets over its lifetime can vary a lot, the true 'popularity' is somewhere between these two values.
The comparisons can be shown on a scatter plot, which has the view count of one video on the X-axis, and the view count of one of its related videos on the Y-axis. Each resulting data point represents a pair of related videos. The density of the plot shows which kinds of recommendations are made the most.
Note: All scales are logarithmic.
The vertical striping is a result of the one-to-many nature of the plotted relationships.
I also repeated each plot, but only showing the first 5 recommendations for every original video. Because we know a user's attention is mainly focused 'above the fold', we can expect the majority of YouTube viewers' attention to go to the first couple of recommendations.
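The statistic behind these plots is just a correlation between log view counts of related-video pairs. Here's a minimal sketch of the computation, with made-up numbers standing in for the scraped data:

```python
import math

def log_correlation(pairs):
    """Pearson correlation between the log10 view counts of
    (video, related video) pairs. On a log-log scatter plot, a
    value near 1 means the points cluster along the diagonal."""
    xs = [math.log10(a) for a, b in pairs]
    ys = [math.log10(b) for a, b in pairs]
    n = len(pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical (video views, related-video views) pairs, chosen to
# mimic the observed pattern: recommendations stay within roughly
# the same order of magnitude as the original video.
sample = [(120, 300), (5_000, 2_200), (80_000, 150_000),
          (1_200_000, 900_000), (40, 95), (600_000, 2_000_000)]
print(f"r = {log_correlation(sample):.2f}")
```

The real dataset was of course the 600 sampled videos and their recommendation lists, but the measure is the same.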
The points are clearly clustered along the diagonal, even more so in the 'top 5' graphs, and there is a strong correlation between the related videos' view counts. When picking videos to recommend, YouTube will prefer videos that are as popular as the current video, i.e. located in the same region of the 'long tail' of popularity. This means that YouTube's recommendations will not significantly affect the overall popularity of a single item in the long tail and will instead spread the attention between equals.
We can also see that almost all outliers are located above the diagonal rather than equally spread. This shows the asymmetrical nature of YouTube recommendations: while YouTube will often recommend a popular video to go with a less popular one, the reverse will almost never happen. Thus, an increase in attention to a popular video will not trickle down much.
This has a very important consequence, namely that new content on YouTube will never get popular just by virtue of being on YouTube. There must be an active effort on the part of the creator to promote the video and a continued effort by viewers to share it with their friends. If not, such attention-starved content will only be shown alongside other attention-starved content, which obviously only exacerbates the problem.
Furthermore, it's no big secret that YouTube isn't the most friendly, civilized site on the web, and it has a horrible signal-to-noise ratio. So rather than trying to gain favour and exposure through YouTube's own social networking features, the best place to promote a YouTube video is in fact outside of YouTube altogether.
Looking back at the results, we might theorize the existence of a 'TubeRank' – an ever-evolving number that sorts every single video on YouTube into a global ranking of popularity. This measure would be used to find and group equally popular videos and determine which videos get seen the most. However, from published computer science papers we know YouTube doesn't work that way: what it actually does is track what people are doing, and which videos they watched one after the other. They're charting the 'flow' of eyeballs, if you will, from video to video. This aggregate is then presented to you in a neat package of recommendations per video. You could say the recommendations reflect the choices of others who watched the same video, though this is a simplification.

Ultimately, this 'TubeRank' effect is just a result of what people already do: we go off to explore niches, or sit down to enjoy some media junk food. We browse idly, or seek out specific detail. But when we take our own recommendations, we end up looking mostly at the same things, and YouTube's popularity effects reflect that.
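The co-viewing idea itself is simple enough to sketch. The toy version below (my own reconstruction, not YouTube's actual pipeline) counts which videos were watched in the same viewing session and recommends the most frequent companions:

```python
from collections import Counter, defaultdict

def build_related(sessions, top_n=5):
    """Recommend by co-viewing: for each video, count which other
    videos appeared in the same viewing session, and keep the most
    frequent companions. A toy model of 'charting the flow of
    eyeballs', not YouTube's actual algorithm."""
    co_views = defaultdict(Counter)
    for session in sessions:
        for i, a in enumerate(session):
            for b in session[i + 1:]:
                if a != b:
                    co_views[a][b] += 1
                    co_views[b][a] += 1
    return {v: [w for w, _ in c.most_common(top_n)]
            for v, c in co_views.items()}

# Hypothetical session logs: lists of video IDs watched in a row.
sessions = [
    ["cat1", "cat2", "cat3"],
    ["cat1", "cat2", "dog1"],
    ["cat2", "cat1"],
]
related = build_related(sessions)
print(related["cat1"])  # "cat2" comes first: most often co-watched
```

Because equally popular videos are, almost by definition, the ones that show up in the same sessions, a model like this reproduces the diagonal clustering on its own, with no explicit global ranking.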
How does this affect contests?
The above is a perfectly reasonable approach for serving up interesting and popular content, but it has some problems when applied to contests. This is especially important for contests where the community not only provides the content but also votes on it. In that case, you want every video to have an equal chance of being viewed and voted on. Voters should be able to browse all submissions easily and be encouraged to form an informed opinion before voting. And it is here that YouTube's viewing model falls flat.
A YouTube contest is really just a slightly souped-up YouTube channel page. It's peppered with the standard YouTube clutter of comments, friends, favorites, etc., with only a special contest box near the top. An individual entry, however, gets even less love: compared to the normal video page, the only addition is a tiny link underneath the video description pointing back to the contest channel. There are no direct links between contest entries. At most, a savvy contest organizer might put together a YouTube playlist that can be linked to and shared.
Thus, YouTube's existing navigation dynamics dominate, and there is very little flow from contest entries back to the main contest. This defeats the point of using viral competitions as a promotion tool, because most viewers will not notice the contest at all. Worse, as we saw above, the recommendation algorithm will almost never radiate attention from a single popular contest entry down to less popular ones – and even then only if all contestants used consistent labelling and tagging. As such, YouTube contests will have difficulty gathering enough informed votes.
This is the conclusion we reached at Strutta a while ago, and which prompted us to deal with the follow-up question: how can we optimize a contest site for maximum fairness and network effects between entries, while not compromising the overall viewer experience? Our answer was found in a series of simulations, where we modelled the behaviour of authors, viewers and voters on a typical contest site. This approach allowed us to challenge our existing ideas and experience, as well as let us improve our own algorithms even before we ran our first contest.
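To give a flavour of what such a simulation looks like – this is a minimal sketch of the general idea, not Strutta's actual model – compare two serving policies for a contest gallery: always showing the currently most-viewed entry versus rotating every entry into view equally:

```python
import random

def run_contest(n_entries, n_viewers, policy, seed=1):
    """Distribute viewer attention across contest entries under a
    serving policy; returns the final per-entry view counts."""
    rng = random.Random(seed)
    views = [0] * n_entries
    for _ in range(n_viewers):
        if policy == "popularity":
            # show whatever is currently most viewed (ties broken randomly)
            top = max(views)
            i = rng.choice([j for j, v in enumerate(views) if v == top])
        else:  # "rotation": every entry gets equal exposure
            i = rng.randrange(n_entries)
        views[i] += 1
    return views

by_popularity = run_contest(20, 2000, "popularity")
by_rotation = run_contest(20, 2000, "rotation")
print(min(by_popularity), max(by_popularity))  # winner-take-all
print(min(by_rotation), max(by_rotation))      # far more even
```

Even this crude model makes the fairness problem obvious: under the popularity policy a single early leader absorbs nearly all attention, while rotation spreads views evenly – at which point the interesting design question becomes how much popularity signal you can reintroduce without recreating the deadlock.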
There is an important lesson to be learned here. The social dynamics around online media are complex, and too complicated to be satisfied by a one-size-fits-all platform. YouTube is so entrenched that it is unlikely to be dethroned, but it does leave the door open for complementary, alternative solutions. We hope you'll like ours.
Steven Wittens is a wily veteran in the web development space, having been one of the major contributors to the Drupal open source CMS since its inception. When not coding, he dabbles in design and video games. With over 12 million views on his YouTube account, we here at Strutta like to think he knows what he's talking about when it comes to online media...