Does “Measure Everything” Mean “Data-Driven”?

Data-driven “everything” is a trend. The graph below displays the volume of Google searches related to “data-driven” topics over the last 5 years. It’s clear that many industries, departments, and professionals are trying to embrace the data-driven approach, which means incorporating data into our decision-making process. We are living in a data-led era: there is a ton of data about everything, and access to this data has been democratized by the expansion of free tools such as Google Analytics.


Data-driven queries trend in the last 5 years (source: Google Trends)

I am a big fan of data. By using data, we are able to optimize our targeting strategy, avoid HiPPO (Highest Paid Person’s Opinion) influence, prioritize projects, and make better decisions that can be justified in a reasonable way. Moreover, data can help us validate and further enhance our decision-making process, because we can measure and identify the true impact of our decisions.

Along with data adoption, another trend emerged as well. People are trying to use data for EVERY decision they have to make. This is not necessarily bad, but as the ancient Greek sage Kleovoulos said: “moderation in all things is best” (English translation of “παν μέτρον άριστον”). This quote applies perfectly here, because if we use data for every single decision we have to make, we are not “data-driven”; we are accountants. In this post, I will share what data-driven means for us and how we use (and don’t use) data when building our business.

Our data-driven approach

We use data every day. But using data without a framework, or at least a few rules and conventions, led us nowhere. That’s why we built an approach that consists of key pillars that every member of our team should understand and embrace. These pillars are not difficult to digest, and they help us communicate better with each other by building actionable reports and insightful analyses.

  • Measure what matters

    Everything can be measured. Literally, from a business standpoint, (almost) everything can be measured. But that does not mean we must measure everything and base every decision on metrics. Let me give you an example: during the 2018 World Cup, Apple’s earbuds were everywhere (even though Apple was not on the World Cup sponsor list). Millions of people around the world saw popular football stars wearing Apple’s latest product. How can you measure the effect of that? One answer could be to check overall sales during the World Cup. A more sophisticated one could be to check on which days the earbuds appeared on TV and attribute the incremental sales of those days (compared to similar days without the World Cup placement) to this event, but how can anyone be sure about the percentage of sales that was actually influenced by it? There are a lot of similar examples out there, especially in offline marketing and branding, that can’t be measured properly.
    We must be brave enough to resist measuring these kinds of initiatives, even if our manager asks us to evaluate them. Some initiatives have to be undertaken based on our “faith” and strategy, without trying to pin down their ROI. Here are two popular measurement concepts that we use to evaluate the customer journey within our product and the mobile performance of our apps.

    Part of the Mobile Growth Stack concept (source)

  • Use the right metrics

    Our tendency to measure everything leads us to use every metric available in our data arsenal to find that positive (or negative, it’s up to us) correlation that will lead to “data-driven” evaluation heaven. It’s called confirmation bias, and most of us are wired to do that. We need to train ourselves to avoid this bias and treat performance channels and product projects equally. One of the most popular examples is YouTube performance. YouTube ads can be considered a performance channel (along with SEO, SEM, email, etc.), so a click-based evaluation would seem reasonable. But if you evaluate YouTube ads by the number of clicks they generate, you will most probably stop using them. YouTube is not a click-based channel, BUT it can be a very effective channel for any brand, and it can provide tangible results as well. We wrote a relevant article about our YouTube channel strategy on Google’s Think with Google platform; feel free to check it.

    From a product standpoint, A/B testing suffers from confirmation bias as well. According to industry standards, a successful A/B test increases the conversion rate of the variation. In our industry this practice does not apply well and leads to many inconclusive A/B tests, because conversion rates in the online delivery industry are higher than in the average e-commerce industry. After years of experimentation, we created a new convention that defines a successful A/B test as one that increases the selected micro-conversion rate (with statistical significance) while the overall conversion rate remains the same or increases. So when we run a test that optimizes the login experience (see below), we define the successful login event as the primary metric (trying to increase it with statistical significance), while the order conversion rate is part of the supplementary metrics; a sketch of this evaluation logic follows the figure below.

    Results of an A/B test focused on the login experience. The green color in the first (primary) metric means it is statistically significant, while all other metrics are not.
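
    To make the convention concrete, here is a minimal sketch of how such a test could be evaluated with a two-proportion z-test; the counts, metric names, and significance threshold are made up for illustration and are not our actual tooling.

    ```python
    import math

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        """Two-sided z-test for the difference between two conversion rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        # Two-sided p-value via the normal CDF: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return p_b - p_a, p_value

    # Hypothetical counts: control vs. variation, 50,000 sessions each
    login_lift, login_p = two_proportion_z_test(4200, 50_000, 4550, 50_000)  # primary: successful logins
    order_lift, order_p = two_proportion_z_test(9000, 50_000, 9080, 50_000)  # supplementary: orders

    ALPHA = 0.05
    primary_wins = login_lift > 0 and login_p < ALPHA   # micro-conversion must improve significantly
    guardrail_ok = order_lift >= 0 or order_p >= ALPHA  # order rate must not drop significantly
    print("successful test" if primary_wins and guardrail_ok else "inconclusive test")
    ```

    The key design choice is the guardrail: the variation only “wins” if the primary micro-conversion improves significantly while the overall conversion rate does not significantly decline.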


  • Apples vs. oranges syndrome

    This one is very common. Monthly marketing budget ROI (how much money we spent versus how many conversions we got within a month) and platform conversion rate (how many conversions versus how many sessions we have every month) are two of the most common reports. Let’s take the example of TV marketing. TV marketing is 100% paid, so we need to make sure the money is well spent. There are a lot of tools and plenty of methodologies for measuring TV performance. We run two types of TV campaigns: brand campaigns and direct campaigns.
    Brand campaigns are creative campaigns that highlight our USPs, while direct campaigns include specific offers that generate high buying intention. We always have a hard time evaluating brand campaigns, but direct campaigns drive immediate traction that can be measured pretty accurately by checking the incremental orders we get after each TV ad (see the sketch below). Comparing TV brand campaigns with TV direct campaigns would lead to results biased against brand campaigns, so we compare brand campaigns against past brand campaigns to set the right benchmark and understand what works better. The same methodology applies to direct campaigns as well.
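
    For illustration, here is a minimal sketch of the incremental-orders idea; the attribution window, airing times, and order timestamps are hypothetical, not our actual TV measurement setup.

    ```python
    from datetime import datetime, timedelta

    WINDOW = timedelta(minutes=10)  # attribution window after a spot airs (assumed)

    def orders_in_window(order_times, start, window=WINDOW):
        """Count orders whose timestamp falls inside [start, start + window)."""
        return sum(1 for t in order_times if start <= t < start + window)

    def incremental_orders(order_times, spot_time, baseline_time):
        """Orders right after the spot minus orders in a comparable baseline window,
        e.g. the same time slot on a day without the spot."""
        return orders_in_window(order_times, spot_time) - orders_in_window(order_times, baseline_time)

    # Hypothetical order timestamps and airing times, for illustration only
    orders = [datetime(2019, 5, 6, 21, 4), datetime(2019, 5, 6, 21, 7),
              datetime(2019, 5, 6, 21, 9), datetime(2019, 5, 5, 21, 6)]
    spot = datetime(2019, 5, 6, 21, 2)      # spot aired 6 May, 21:02
    baseline = datetime(2019, 5, 5, 21, 2)  # same slot a day earlier, no spot

    print(incremental_orders(orders, spot, baseline))  # 3 orders after the spot - 1 baseline = 2
    ```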

    When it comes to product evaluation, there are many examples of comparing irrelevant dimensions. The conversion rate of users who land logged-out is significantly lower than that of users who land logged-in. The same applies to new visitors (cookies) versus returning visitors. That means that if the Android app gets the most traffic from new and logged-out users, it will show the lowest conversion rate, but that doesn’t mean Android is a bad platform. Our evaluation method is to compare platform performance within the same period, the same landing flow, and the same marketing channels (see the sketch below); try it and tell me whether it works for you. Investing time and effort in our evaluation method was one of the best decisions we ever took, simply because the ROI of this action is 10x!
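
    As a rough sketch of what such a like-for-like comparison could look like (column names and numbers are invented for illustration):

    ```python
    import pandas as pd

    # Hypothetical session-level aggregates; the schema is an assumption
    sessions = pd.DataFrame({
        "platform":    ["android", "android", "ios", "ios", "web", "web"],
        "login_state": ["logged_out", "logged_in"] * 3,
        "channel":     ["seo", "sem", "seo", "sem", "seo", "sem"],
        "sessions":    [10_000, 4_000, 6_000, 5_000, 8_000, 3_000],
        "conversions": [400, 600, 300, 700, 350, 420],
    })

    # Compare platforms only within the same segment (login state + channel),
    # instead of mixing logged-in iOS traffic with logged-out Android traffic
    report = (
        sessions
        .assign(conversion_rate=lambda d: d["conversions"] / d["sessions"])
        .pivot_table(index=["login_state", "channel"],
                     columns="platform",
                     values="conversion_rate")
    )
    print(report.round(3))
    ```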


  • Understand how data is collected

    Not all data is the same. In our business we have behavioral data (from Google Analytics), database data (from our own database), qualitative data (reviews, chat logs, etc.), and more. Understanding what we measure and how is critical. For example, since our database holds only specific data, we can’t create conversion-rate reports for specific conversion types without Google Analytics, which also holds the sessions data (the other half of the conversion rate equation: conversions divided by sessions). A minimal sketch of combining the two sources follows.
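
    The extracts below are hypothetical; the point is only that each source supplies one half of the equation.

    ```python
    import pandas as pd

    # Conversions by type live in our own database (hypothetical extract)
    db_conversions = pd.DataFrame({
        "date": ["2019-05-01", "2019-05-02"],
        "conversions": [120, 135],  # e.g. first orders of new users
    })

    # Sessions live only in Google Analytics, the other half of the equation
    ga_sessions = pd.DataFrame({
        "date": ["2019-05-01", "2019-05-02"],
        "sessions": [8_000, 8_600],
    })

    report = db_conversions.merge(ga_sessions, on="date")
    report["conversion_rate"] = report["conversions"] / report["sessions"]
    print(report)
    ```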

    Another nice example is direct traffic and how it is calculated in Google Analytics. According to its definition: “Direct traffic is defined as URLs that people either type in directly or reach via their browser bookmarks.” So direct traffic happens every time someone types your website’s URL, which most of the time is a good thing, because the user memorized the website and did not have to search for it. Direct traffic is part of “branded performance” (along with brand SEO and brand SEM), which is considered one of the best channel groups, with relatively high conversion rates.
    What many people do not know is that direct traffic also acts as a fallback channel for visits that are simply untracked or unrecognized, which can cannibalize the measured performance of other channels and lead to wrong conclusions. A good way to keep your direct traffic “clean” is to add UTM parameters to all your marketing campaigns, even the links you share as organic posts on social media; a small sketch of UTM tagging follows.
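
    As an illustration, here is a small sketch of UTM tagging; the URL and parameter values are placeholders.

    ```python
    from urllib.parse import urlencode, urlparse, urlunparse, parse_qsl

    def add_utm(url, source, medium, campaign):
        """Append UTM parameters so the visit is attributed to its real channel
        instead of falling back to direct traffic."""
        parts = urlparse(url)
        query = dict(parse_qsl(parts.query))  # keep any existing query parameters
        query.update({
            "utm_source": source,
            "utm_medium": medium,
            "utm_campaign": campaign,
        })
        return urlunparse(parts._replace(query=urlencode(query)))

    # Placeholder link for an organic social post
    print(add_utm("https://www.example.com/offers", "facebook", "social", "summer_launch"))
    # -> https://www.example.com/offers?utm_source=facebook&utm_medium=social&utm_campaign=summer_launch
    ```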

These four rules can apply to many more examples, which I will share from time to time. The most important thing is that by agreeing on them, our team became better aligned, avoided misconceptions, and saved hours of meetings, discussions, and unsuccessful projects. Last but not least, it is really crucial to avoid the situation described in the image below, because it will be very difficult to recover from it.

Great & realistic quote from an economist