The Lever10 team has just concluded a 10-month analysis of ‘email clicks’. As all of us in the outbound marketing world know, you cannot trust ‘Open’ statistics. Why? Preview panes cause many false positives for email open rates, while privacy-conscious recipients (technologists in particular) often disable all image downloads, giving us a complementary set of false negatives on the ‘Open’ rates. In addition, email platforms such as Gmail often cache the tracking pixels used for ‘Open’ tracking, which can cause both false positives and false negatives.
But until about a year ago, we could trust ‘Click Through’ rates. That ended in 2017. Much of the new generation of anti-malware software performs deep analysis of links in email, and it has produced a surge of false-positive clicks on outbound email.
How do we know?
We have been inserting impossible-to-click links deep in the footers of our emails: 1-pixel <a> tags in the HTML. Even if you know they are there and try to click them, it is almost impossible to do so; a mouse or trackpad does not offer 1-pixel precision.
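For readers curious what such a honeypot looks like, here is a minimal sketch of a function that builds one. The URL, query parameter, and styling are hypothetical placeholders, not the actual markup used in the campaigns described above.

```python
# Sketch: build an effectively unclickable 1x1-pixel honeypot anchor
# for an email footer. The base URL and the 'hp' parameter are made-up
# examples for illustration only.
def honeypot_link(base_url: str, campaign_id: str) -> str:
    """Return a 1-pixel <a> tag a human cannot realistically click."""
    href = f"{base_url}?hp={campaign_id}"
    return (
        f'<a href="{href}" '
        'style="display:inline-block;width:1px;height:1px;'
        'overflow:hidden;font-size:1px;line-height:1px;">'
        "&nbsp;</a>"
    )

print(honeypot_link("https://example.com/landing", "fall-2018"))
```

Any recorded click on a link like this is, by construction, almost certainly a machine following the URL rather than a human.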
These harmless links go to client websites, via URLs that make any recorded click on them unmistakably a honeypot hit.
When we trace these apparent clicks back, the originating IP addresses often resolve to ranges owned by anti-malware companies, although those ranges are huge. Many other false positives originate from the end user’s own IP address, which points to locally installed anti-malware doing the link scanning.
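The traceback step can be sketched as a simple membership test against known scanner ranges. The CIDR blocks below are made-up documentation addresses, not real anti-malware ranges, and this check necessarily misses the false positives that come from end-user IPs.

```python
# Sketch: flag apparent clicks whose source IP falls inside ranges
# attributed to link-scanning services. The ranges here are
# illustrative placeholders (RFC 5737 documentation blocks).
import ipaddress

SCANNER_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # hypothetical scanner range
    ipaddress.ip_network("198.51.100.0/24"),  # hypothetical scanner range
]

def is_probable_scanner(ip: str) -> bool:
    """True if the click's source IP sits in a known scanner range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in SCANNER_RANGES)

clicks = ["203.0.113.7", "192.0.2.44", "198.51.100.200"]
suspect = [ip for ip in clicks if is_probable_scanner(ip)]
print(suspect)
```

The catch, as noted above, is that the published ranges are huge and incomplete, so this only scrubs a portion of the noise.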
We have no problem with people and IT departments doing more assertive email scanning. We even think it is a good idea.
When we contact the support teams at the major Marketing Automation Platforms (the usual suspects), we are told, ‘Oh, we have a whitelist of all those IP addresses – we don’t count those as clicks.’ We, of course, are staring at reports in which those same MAPs record the clicks we just proved were false positives.
What can the MAPs do? Probably nothing. The era in which email CTRs could be trusted is over.
But is this a material number? Does it really distort metrics?
In a recent large-scale outbound campaign, the Lever10 team sent a monitored 250,000 emails. Bounce rates were trivial; the lists were clean. The apparently correct CTR was 0.35% (neither good nor bad for an outbound program). False positives exceeded 1%, so the incorrect data were roughly three times larger than the apparently correct data. Why ‘apparently’? Because we still have no definitive method of proving a human click. Our best heuristics give us a high probability of being right, but not certainty.
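The arithmetic behind that three-to-one claim is straightforward, using the 1% false-positive figure as a floor:

```python
# Signal vs. noise in the 250,000-email campaign described above.
sent = 250_000
ctr = 0.0035        # 0.35% apparently correct click-through rate
false_rate = 0.01   # false positives exceeded 1%; use 1% as a floor

real_clicks = sent * ctr          # ~875 apparently human clicks
false_clicks = sent * false_rate  # ~2,500 false-positive clicks
ratio = false_clicks / real_clicks

print(round(real_clicks), round(false_clicks), round(ratio, 1))  # 875 2500 2.9
```

At roughly 875 real clicks against at least 2,500 machine clicks, the noise is about 2.9 times the signal, and that is the optimistic case.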
Speaking of heuristics: in our experience, once the noise reaches twice the signal in low-sample-size data (read: response rates for B2B email), the metric becomes useless. CTRs (click-through rates, if you are not an emailer) are dead as a meaningful metric.
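To see why noise at twice the signal is fatal, consider two hypothetical campaigns (the numbers below are illustrative, not from our data). Campaign B genuinely outperforms campaign A, but false-positive volume varies from send to send, so the observed CTRs can rank them backwards:

```python
# Illustrative only: noise variation swamps a real 33% difference
# between two campaigns' true response rates.
true_ctr_a, true_ctr_b = 0.0030, 0.0040   # B is genuinely 33% better
noise_low, noise_high = 0.0070, 0.0110    # plausible false-positive spread

observed_a = true_ctr_a + noise_high  # A measured on a noisy send
observed_b = true_ctr_b + noise_low   # B measured on a quiet send

print(observed_a > observed_b)  # True: the worse campaign appears to win
```

When the comparison you are baselining against can flip sign on noise alone, the metric carries no information.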
Do we care? Some of our most important clients are very dependent upon email CTRs for program baselining and sentiment analysis. We are building a replacement system for CTRs now.