Running a business day in and day out is a lot of work. Each day poses a new challenge, and decisions have to be made for the business to generate revenue and remain operational. When one part of the business veers off course, everyone feels the impact, and the question becomes: how was this change discovered and measured? Was it spotted by analyzing a single primary KPI like revenue? Or did it come from careful analysis of the underlying metrics that drive revenue, pinpointing the areas where performance has slowed?
During my time working for a digital media agency, I audited countless prospective clients who fit the mold of measuring performance based on a single metric. In some cases the metric being analyzed was a KPI, while in other cases it was a vanity metric taken from a clickbait blog post titled, “One Metric to Rule Them All”. In either case, I saw firsthand that making business decisions based on a single metric of success will ultimately send you down a path of performance volatility and leave you with many unanswered questions. The reason for this volatility is that the analyzed data lacks any context around why the primary metric is performing the way it is, which increases the likelihood of making a poor business decision.
Think of this concept in terms of a person entering the stock market with only a basic understanding of how it works. The shareholder understands only that a rising share price means the stock is gaining value and a falling share price means it is losing value. When she sees the price rise, she buys more shares; when she sees it fall, she sells. In a place as volatile as the stock market, taking that many actions in such a short span of time amounts to guesswork. This type of analysis and optimization leaves a slim chance that the investment will ever turn out to be a positive one, and it leaves the shareholder clueless about why her portfolio has performed the way it has.
With this understanding in mind, let’s apply the idea to an online business. Say you run a cooking blog that makes money from the number of users you attract and how engaged they are on the site. In your reporting, you try to get a sense of how your audience is embracing your content and decide that the best metric to indicate quality and engagement is bounce rate. Bounce rate is the percentage of all sessions that enter and leave without viewing a second page. It is an effective engagement metric, but it should be analyzed carefully, because its interpretation changes depending on a variety of factors such as page type, user acquisition channel, and the action users are expected to take on the page.
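To make the definition concrete, here is a minimal sketch of how bounce rate could be computed from session-level data. It assumes you have one row per session with a pageview count, as you might get from an analytics export; the column names here are hypothetical.

```python
import pandas as pd

# Minimal sketch: compute bounce rate from a per-session export.
# One row per session; "pageviews" is a hypothetical column name and
# will depend on how your analytics tool exports its data.
sessions = pd.DataFrame({
    "session_id": [1, 2, 3, 4, 5, 6],
    "pageviews": [1, 3, 1, 2, 1, 4],
})

# A "bounce" is a session that viewed only a single page.
bounces = (sessions["pageviews"] == 1).sum()
bounce_rate = bounces / len(sessions)

print(f"Bounce rate: {bounce_rate:.0%}")  # -> Bounce rate: 50%
```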
Now let's say that in the most recent month your blog posts have had around a 50% bounce rate. This monthly average is far off the historical average of 35% and seems to indicate that your audience did not find your content relevant and engaging enough to navigate the rest of the website the way they did in prior months. The conclusion you come to from your analysis is that you need to pivot your whole content strategy. This is a major decision with two possible outcomes: either you fix the bounce rate issue, or you steer yourself further off course and potentially destroy your business (a bit extreme, I know).
This type of high-risk decision making should be avoided whenever possible, and one way to do that is to find additional data points that either support or disprove your original analysis. To reanalyze your results, compare multiple engagement metrics to get a better understanding of the current bounce rate variance and its relevance as a measure of overall content quality.
One way this 50% can be reinterpreted with additional metrics is by combining it with another well-known engagement metric, Avg. Time on Page. When this value is included in your analysis, you find that despite the 50% bounce rate on the recent month's blog posts, you had an avg. time on page of 2:15, roughly 145% higher than the historical baseline, which had a 35% bounce rate and an avg. time on page of 0:55.
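A quick way to do this kind of side-by-side check is to put both periods and both metrics in one small table and look at the relative change. The sketch below uses the figures from the example above; in practice the numbers would come from your analytics export.

```python
import pandas as pd

# Minimal sketch: put the single metric next to a second one before
# drawing conclusions. Figures mirror the example in the post.
summary = pd.DataFrame(
    {
        "bounce_rate": [0.35, 0.50],
        "avg_time_on_page_sec": [55, 135],  # 0:55 and 2:15
    },
    index=["historical", "recent_month"],
)

# Change in each metric relative to the historical baseline.
change = summary.loc["recent_month"] / summary.loc["historical"] - 1
print(change.round(2))
# bounce_rate             0.43   (bounce rate up ~43%)
# avg_time_on_page_sec    1.45   (time on page up ~145%)
```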
This new discovery demonstrates that the 50% bounce rate does not necessarily mean there is a content issue, since users in the recent month are staying on the page much longer than the historical average. This is a good sign that users are engaged and reading the content, which should be enough to convince you to hold off on any drastic changes to your content strategy and to pull additional metrics for a more accurate picture of the situation. Some additional metrics could be KPIs, email sign-ups, social shares, pageviews versus bounce rate, acquisition channel… The list can go on and on :)
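For example, segmenting bounce rate by acquisition channel, and pairing it with a conversion signal like email sign-ups, often tells a very different story than one site-wide number. The sketch below assumes the same kind of per-session export as before; the channel and sign-up columns are hypothetical stand-ins.

```python
import pandas as pd

# Minimal sketch of the "pull more context" step: segment bounce rate
# by acquisition channel and pair it with a sign-up rate, instead of
# looking at one site-wide number. Columns/values are hypothetical.
sessions = pd.DataFrame({
    "channel":   ["organic", "organic", "social", "social", "email", "organic"],
    "pageviews": [1, 4, 1, 1, 3, 2],
    "signed_up": [False, True, False, False, True, False],
})

by_channel = sessions.groupby("channel").agg(
    sessions=("pageviews", "size"),
    bounce_rate=("pageviews", lambda p: (p == 1).mean()),
    signup_rate=("signed_up", "mean"),
)
print(by_channel)
```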
As the example above shows, single-metric analysis leads to inaccurate interpretations of performance and, in turn, misguided business decisions. Additional context needs to be gathered to confirm or disprove initial conclusions, and decisions should not be made until that has happened. There are no shortcuts in data analysis, so pull as much data as possible and look at various combinations of data points until you are confident in the results and understand the reasons behind the current performance.