Are you tracking the right metrics for your digital marketing efforts? Do you know what those metrics mean?
One of the most practical aspects of digital marketing is the ability to look at real data to see how your campaigns are performing. However, with so many different types of metrics available, understanding what you are (or should be) looking at can be difficult at best and downright confusing at worst. If you aren't careful, you can end up tracking performance measures that don't really show what you think they do or aren't relevant to your business goals. With that in mind, I've put together a list of 10 of the most commonly misunderstood metrics in digital marketing — and what those metrics might really mean.
Bounce rate is perhaps the most overblown metric we regularly hear about at Station Four. Google Analytics defines a bounce as a session in which a user views a single page and then leaves the site; the bounce rate is the percentage of sessions that do so. However, this number fails to take into account the context of the page users have viewed.
Sometimes clients worry that “our paid media landing page has a 75% bounce rate!” when the average for landing pages falls between 70 and 90%. Landing pages are designed to force a decision: convert or bounce. If that bounce rate is complemented by a proportionately high conversion rate, you may be fretting over nothing. Similarly, pages without any conversion pathway – like, say, blog articles – may have higher bounce rates. That doesn’t mean users aren’t coming to the page and consuming your content before departing. The common misconception is that a user who bounced came to the page and immediately left. In reality, they may have spent twenty minutes reading a single page before departing, and still been counted as a bounce.
A high bounce rate may be an indicator that your site needs improvement, but it isn’t in and of itself necessarily a problem.
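To make the convert-or-bounce point concrete, here’s a minimal Python sketch. The figures (1,000 sessions, 750 bounces, 120 conversions) are made up purely for illustration:

```python
def bounce_rate(single_page_sessions: int, total_sessions: int) -> float:
    """Bounce rate = single-page sessions as a share of all sessions."""
    return single_page_sessions / total_sessions

def conversion_rate(conversions: int, total_sessions: int) -> float:
    """Conversion rate = converting sessions as a share of all sessions."""
    return conversions / total_sessions

# A hypothetical landing page: 1,000 sessions, 750 bounce, 120 convert.
sessions = 1000
br = bounce_rate(750, sessions)       # 0.75, within the typical 70-90% range
cr = conversion_rate(120, sessions)   # 0.12

# A 75% bounce rate next to a 12% conversion rate suggests the page is
# doing exactly what a landing page should: forcing the decision.
print(f"bounce rate: {br:.0%}, conversion rate: {cr:.0%}")
```

Read together, the two numbers tell a very different story than the bounce rate does on its own.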
This is the vanity metric to end all vanity metrics. First of all, there’s a consistent assumption that consumers want to engage with your brand. In many cases, that assumption is unwarranted. Do you really care what your toothpaste company has to say on social media? Probably not, and your audience may not either. Your audience does, however, want to engage with interesting content, and maybe they’ll want to engage with your brand if you can consistently provide that content.
“Organic” social reach has diminished significantly in the past few years, lowering the potential impact of your unpaid social efforts. More substantially, the current incarnation of Facebook’s algorithm prioritizes engagement over raw follower numbers. Put simply, Facebook isn’t going to show your content to people who don’t seem interested in it, as measured by interactions with that content. Moreover, there are a ton of bots out there, and it’s easy to acquire them to inflate your numbers. Last I checked, though, bots don’t buy products. At least, not yet.
Which brings me to the final and most important aspect of social likes. If social referrals aren’t a channel that generates substantial traffic or revenue for you, then why does it matter if you have a ton of followers or not? Making sure any marketing effort generates real value for your company is the most important step.
This is one we see whenever clients are dealing with skittish internal marketing teams that want their email stats to look more successful than they truly are. Broadly speaking, email open rates tend to average around 25%, and clickthrough rates tend to hover around 4%. There’s a lot of variation across industries, but a well-optimized email marketing program should be performing somewhere in this ballpark.
4% looks like a pretty small number, and a lot of old-school marketing managers who aren’t familiar with email might find it unacceptable. I’ve often seen marketing teams simply divide the number of clicks by the number of opens – and voila – you have a new metric: the click-to-open rate. Using the averages above, a 4% clickthrough rate divided by a 25% open rate yields a CTOR of 16%, and 16% looks better than 4%. But there’s no additional information conveyed in the CTOR that can’t already be gleaned from CTR and open rate. Unless you’re sending multiple versions of each email – or you’re conducting extensive A/B split testing on all of your campaigns – chances are, this is a metric you don’t need to pay much attention to.
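The arithmetic is worth seeing side by side. A quick back-of-the-envelope sketch in Python, using the average figures above (the 10,000 delivered emails are an assumed base for illustration):

```python
# Illustrative numbers matching the averages above: 25% open rate, 4% CTR.
delivered = 10_000
opens = 2_500   # 25% of delivered emails were opened
clicks = 400    # 4% of delivered emails were clicked

open_rate = opens / delivered   # 0.25
ctr = clicks / delivered        # 0.04
ctor = clicks / opens           # 0.16 -- same data, bigger-looking number

# CTOR is fully determined by the other two metrics: CTOR = CTR / open rate.
assert abs(ctor - ctr / open_rate) < 1e-12

print(f"open rate {open_rate:.0%}, CTR {ctr:.0%}, CTOR {ctor:.0%}")
```

The assertion is the whole point: CTOR adds no information you couldn’t derive from the open rate and CTR you already track.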
Obviously, you should know how many people view your ads. The problem with using impressions to gain insight, however, is that they don’t tell you how many people actually saw your ads. Roughly half of all impressions are viewed for under 5 seconds, and many are completely ignored. Ad networks often register an impression when the ad loads, not when a viewer scrolls down and sees it. Display networks are improving on this calculation, but there is still a great deal of work to be done to provide more accurate numbers.
Beyond that, the definition of seeing an ad is pretty loose. “Banner blindness” is a real phenomenon. We’re so inundated by advertising – some sources report we see as many as 4,000 ads a day – that we’ve trained ourselves to ignore much of it. Eye tracking studies consistently indicate that users ignore banner ads. The heat map below illustrates where site users focus their attention.
There’s also the issue of fraud. With so much bot traffic, many of your impressions aren’t even seen by actual humans. Because impressions fire on page load, the ad network only detects that the ad loaded, not whether that load was caused by a human or by a bot. Fraud detection is improving as well, but it’s not yet accurate enough to provide reliable numbers.
Attribution is one of the most important concepts in digital marketing, and it’s also one of the most poorly understood. Frequently, a campaign will be run across an array of channels, possibly incorporating paid media, organic efforts, email, and/or social. In order to most effectively gauge the performance of each of these channels, we need to make sure that the conversions each generates are properly associated with the channel the conversion originated from.
The problem is that the person who converted might have interacted with multiple channels before converting. Perhaps they originally clicked a paid ad, saw a remarketing ad on social, signed up for email newsletters, and then purchased as a result of an email campaign. It’s something like a soccer player scoring a goal after other team members have passed the ball and provided assists. Each channel may have contributed in some way, so how is each appropriately recognized?
Google provides a number of different attribution models, including position based, time decay, linear, first interaction, and last interaction. Each may be more or less appropriate depending on what makes the most sense for your campaign. First and last interaction are just what the names imply: full credit goes either to the first point of contact or the last. In the example above, the paid ad would get full credit under a first interaction model, whereas the email would get full credit under last interaction. Linear spreads credit equally across all channels, and time decay weights credit toward channels closest in time to the point of conversion. Google provides a full description of these models, so you can decide which makes the most sense for your campaign.
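Here’s a rough Python sketch of how four of these models divvy up credit, applied to the example journey above. The channel names, days-before-conversion figures, and the 7-day half-life for time decay are all assumptions for illustration; Google’s actual implementations differ in their details:

```python
# Each touchpoint is (channel, days_before_conversion).
def first_interaction(touches):
    """All credit to the first point of contact."""
    credit = {ch: 0.0 for ch, _ in touches}
    credit[touches[0][0]] = 1.0
    return credit

def last_interaction(touches):
    """All credit to the final point of contact."""
    credit = {ch: 0.0 for ch, _ in touches}
    credit[touches[-1][0]] = 1.0
    return credit

def linear(touches):
    """Credit spread equally across every channel in the journey."""
    share = 1.0 / len(touches)
    return {ch: share for ch, _ in touches}

def time_decay(touches, half_life_days=7.0):
    """Credit weighted toward touches closest in time to the conversion."""
    weights = {ch: 2 ** (-days / half_life_days) for ch, days in touches}
    total = sum(weights.values())
    return {ch: w / total for ch, w in weights.items()}

# Hypothetical journey: paid ad two weeks out, social remarketing one
# week out, email campaign on the day of purchase.
journey = [("paid search", 14), ("social remarketing", 7), ("email", 0)]

print(first_interaction(journey))  # all credit to paid search
print(last_interaction(journey))   # all credit to email
print(linear(journey))             # one third each
print(time_decay(journey))         # email weighted heaviest
```

Running the four models against the same journey makes the stakes obvious: the channel that looks like your star performer depends entirely on which model you chose.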
One of the most important things to know about Google’s time measurements is that Google cannot measure the time a user spends looking at the last page of their visit. Google can only measure the time spent between pageviews; it uses the timestamp of the following pageview to determine how long the user spent on the current page. Since there’s no “next page” after the last page a user views, no time is recorded for it. For the same reason, the session duration ends the moment they open that last page, not when they actually leave.
In cases where a visitor bounces, there’s no next page recorded, so time on page and session duration are both captured as 0. Google isn’t indicating that they spent 0 seconds on the page; it simply can’t calculate the time. So even when time on page reads as 0, as with users who bounce, they may well be engaging with your content. We just don’t know for how long.
If a particular page has a very low exit rate, then average time on page is probably a fairly accurate measurement of how much time users are actually spending consuming your content. If your exit rate is high, then it’s likely that all those zeroes Google is recording as time on page are skewing your data toward the lower end.
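The mechanics can be sketched in a few lines of Python, using made-up pageview timestamps: time on page is just the difference between consecutive pageview timestamps, so the final page of the session has nothing to subtract and records 0.

```python
# Hypothetical session: (page, timestamp in seconds since the session began).
session = [("home", 0), ("blog post", 30), ("pricing", 1230)]

# Time on page = next pageview's timestamp minus this pageview's timestamp.
times = []
for (page, t), (_next_page, t_next) in zip(session, session[1:]):
    times.append((page, t_next - t))

# The final page has no "next pageview" to subtract from, so it records 0,
# no matter how long the visitor actually stayed.
times.append((session[-1][0], 0))

print(times)  # [('home', 30), ('blog post', 1200), ('pricing', 0)]
```

Note that the twenty minutes on the blog post is only captured because another pageview followed it; if the visitor had left from that page instead, it would read 0 too.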
“What’s our traffic? How many hits are we getting?” are questions I hear quite frequently. While traffic is certainly an important thing to understand, a lot of marketers still fail to distinguish between the different primary types of traffic. The three biggest ones to compare are users, sessions, and pageviews.
A user is a specific individual who comes to your site and is tagged with a unique identifier. Don’t confuse a user with a distinct human being: someone could visit your site on a work desktop and later on a personal laptop, and they would be counted as two separate users, since the devices have distinct IDs. In general, however, the number of users correlates pretty closely with the number of humans visiting your website. Users are broken into new and returning, meaning people who have come to your site for the first time and people who have visited more than once. Keep in mind, it’s possible to count the same new user more than once if they clear their cookies.
A session is the set of interactions a user takes on your website, ending after 30 minutes of inactivity or at midnight. Previously, Google called this metric a “visit.” One user can, and often does, have multiple sessions. Sessions are also broken down by new and returning.
Pageviews are the total number of times visitors loaded pages on your site. These are distinct from unique pageviews, which count the number of sessions in which a given page was viewed one or more times. If I go to your homepage, reload the page, go to a product page, and then return to the homepage, that counts as three pageviews of the homepage, but only one unique pageview of it.
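The homepage example can be sketched in a few lines of Python (page names are illustrative):

```python
from collections import Counter

# One session: homepage, reload, product page, back to homepage.
session_pageviews = ["home", "home", "product", "home"]

# Raw pageviews count every load of a page.
pageviews = Counter(session_pageviews)

# Unique pageviews count each page at most once per session.
unique_pageviews = Counter(set(session_pageviews))

print(pageviews["home"])         # 3 pageviews of the homepage
print(unique_pageviews["home"])  # 1 unique pageview of the homepage
```

Same visitor, same session, and two very different-looking numbers for the homepage.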
To recap, “traffic” is really an oversimplification of several metrics: how many people came to your site, how many times they came, and how many elements of your site they interacted with.
I know what you’re thinking. Conversion is the easiest metric to understand. It’s the people we got to do something we want them to do: people who filled out a form, bought something, attended our webinar, or gave us their social security number and their mother’s maiden name. What’s there to get wrong about it?
As you’ll see from the examples I just used, not all these metrics are the same, and don’t convey the same value to your business or website. A signup to your email newsletter isn’t as valuable as a closed sale, and I can do a lot more with your SSN than I can with your email address. And yet I frequently see marketers lumping in all these metrics together as if they’re identical. This can lead to a false equivalency between all these performance indicators, each of which ultimately has a different business value.
Additionally, I’ve seen a number of marketers set up inappropriate goal conversions in both Analytics and AdWords. For example, they’ll count behaviors like visiting a particular page or spending a certain amount of time on site as conversions. While these are potentially useful indicators of your site’s effectiveness and overall health, they’re not quite the same as a full-blown purchase, and they can artificially inflate conversion numbers. I’ve even seen AdWords campaigns with an ad’s destination URL entered as a conversion goal, meaning every single click counted as a conversion. That doesn’t illustrate the performance of the campaign at all; it just falsely inflates the numbers.
When measuring conversions, it’s important to make sure you’re measuring the right thing, and that you’re not lumping that measurement in with incommensurable metrics. Most importantly, it should be something that actually drives your business, not something that makes your team look good for having higher numbers.
Direct traffic is generally defined as people who typed your site’s URL directly into the browser, or users who bookmarked the site and are returning that way. Often, this number is higher than one might expect, so marketers occasionally misinterpret it, thinking, “A lot of our users sure love our site, and have bookmarked it to return again and again.”
The problem is that Google often categorizes traffic for which it cannot detect a source as direct. Improperly implemented tracking codes and privacy settings on certain browsers can strip out referral details, resulting in traffic being tagged as direct. If a link on a secure (HTTPS) page directs a user to a non-secure (HTTP) page, no referral data is passed, and the session is likewise counted as direct. Links inside a Word document, a QR code, or a PowerPoint presentation can’t be identified as a specific referral either, so those show up as – you guessed it – direct traffic. So, if you’re seeing a lot of direct traffic, it’s likely users are coming from a source that Google can’t properly detect.
Search engines like Google weigh more than 200 different signals to rank results when a user types something into the search bar. Domain authority, a metric established by the SEO wizards at Moz, is a sort of shorthand for roughly 40 of the most important signals, like linking domains and total links. It’s scored on a 100-point scale, so, broadly speaking, a site with a DA of 70 is probably going to rank higher than a site with a DA of 35. It’s also worth mentioning that the scale is logarithmic, so it’s easier to increase your score from 10 to 20 than it is to go from 80 to 90.
There are several issues that come up with DA. The first is that it is essentially a comparative measure, not an absolute one. Since the internet is constantly in flux, all search rank positions are in relation to one another. You could improve your DA but still diminish in rank relative to a competitor, or vice versa. Further, the relative strength of each search engine signal is constantly being tweaked. Google puts out algorithm updates regularly, so what worked yesterday may not work tomorrow. Consequently, you shouldn’t worry too much if your DA increases or decreases by a point. However, if there are trends that consistently occur over a long period of time, or if there’s a significant shift after a major algorithm change, then you should probably take a closer look.
Each of these metrics reveals something about the behavior of users who are interacting with your collateral, whether it’s your website, emails, or social content. It’s important to make sure that you have a comprehensive understanding of what you’re measuring, and to ensure that you’re measuring it correctly. Last, no one metric in isolation can paint a full picture of how successful your marketing efforts are, but when used in conjunction with one another, you can gain significant insight into how your audience is behaving and your site is performing.
Ryan Hickey is a rapid-fire, coffee-drinking devotee of hard data. As the Director of Digital Strategy, he guides S4’s strategic approach to both internal and client work with a near-obsessive commitment to critical thinking and substantiation through methodical tracking and analysis. Ryan studied diplomacy at Georgetown and has been known to turn a poisoned pen to album reviews in his spare time.