Elliot Kemp is lead consultant at Push, a PPC specialist agency.
Data should be the core of all digital marketing moving forward. You can’t make bricks without clay, and you can’t do digital marketing without data. Increasingly, data is the key to making digital marketing succeed.
Elliot is presenting a new strategy at Push: the idea of constantly building tests that we haven’t thought about beforehand – tests that break the mould of what we do traditionally.
Before the age of digital, we didn’t know where our money was going. Now we can work out where money is being spent well and where it’s not. It’s our duty, in fact, to find out as much as possible, so we can get rid of the bad and keep the good.
Procter & Gamble were the masters of traditional marketing. They blasted TV and every other medium they could get their hands on with as much advertising as possible, and that used to be a good tactic.
Now that we talk to each other and share reviews, blanket TV advertising doesn’t work anymore.
Kodak used to be a leader in cameras, but now they’re a great example of a business that wasn’t watching what their customers wanted. They were the first business to create digital cameras, but they never marketed those products, and they were too late to the party in the end.
Seth Godin asks the question of who marketers are trying to reach. We should ask our clients this, but clients often only have a vague idea. Their data is flimsy.
We should want to know as much as possible about our clients’ customers, even if they don’t know it themselves.
What happens if your problem is bad information, or not enough of it?
We also need to clarify a difference of intent between search and display ads. If someone is searching for something, they have already admitted that they have a problem and might want to buy. On display, the intent is unstated and much less clear. Search involves a level of understanding of intent that you should incorporate into your strategy.
The display network is where Elliot’s proposed strategy is particularly effective.
There’s a lot of info in Analytics that you can access straight away, but you’re missing some key data, such as your spending information. We also can’t see the CPA at each level, conversions and more.
If you haven’t asked Google for certain data, the information available to you for display ads is very limited. Tell Google that you’re targeting people who are looking for travel in Europe, but that you’re also looking at as many other pieces of data as possible. If you set this as bid only, it will never change the number of people you’re targeting – the size of your audience stays the same.
Choose bid only to gain data from your target audiences without affecting their size. There are lots of affinities and topics that you can gain a greater understanding of – what are the interests and actions of customers in each area?
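Reviewing the data that comes back from those bid-only audiences could look something like the sketch below. The audience names, thresholds and figures are hypothetical illustration data, not numbers from the talk – the point is simply ranking observed audiences by how well they convert, while ignoring audiences that haven’t yet gathered enough clicks to judge.

```python
from collections import namedtuple

# Hypothetical stats for audiences observed on bid only.
AudienceStats = namedtuple("AudienceStats", "name clicks conversions cost")

def rank_audiences(stats, min_clicks=50):
    """Rank observed audiences by conversion rate, skipping any
    audience with too few clicks to draw a conclusion from."""
    eligible = [s for s in stats if s.clicks >= min_clicks]
    return sorted(eligible,
                  key=lambda s: s.conversions / s.clicks,
                  reverse=True)

observed = [
    AudienceStats("Travel Buffs", clicks=400, conversions=20, cost=320.0),
    AudienceStats("Luxury Travellers", clicks=120, conversions=9, cost=150.0),
    AudienceStats("Foodies", clicks=30, conversions=3, cost=25.0),  # too few clicks
]

for s in rank_audiences(observed):
    print(f"{s.name}: {s.conversions / s.clicks:.1%} conversion rate")
```

Once an audience proves itself here, it becomes a candidate for proper targeting (and a bid increase); audiences that never clear the click threshold simply stay under observation.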
You have to understand your limitations within display advertising. Where can you start off? Perhaps the UK, in one particular area. Then you can start building information on top of that, and you’re building your tests: give me all the information in this particular area. Then review the information, adjust and repeat. That’s the process, and it takes a while. It gives you lots of information, but it’s also the only way to get that information. Push have seen display campaigns go from zero to hero in a month with this process.
The principle for remarketing is similar but distinct. There are so many potential remarketing lists that you can build, and you can put all of these into a strategy for search ads. Put all your lists into a campaign like this and see what works and what doesn’t. Then you can move it into display, or move it all from display to search. Remember to do this all in bid only, so it won’t affect the size of your target audience.
Smart display is controversial. Its success depends on the size of the list and how good the data is. If you build a smart list and put it into bid only, you can find out very quickly whether display is going to work.
Location targeting is another step. In the Dimensions tab you can currently only see the country, which doesn’t give decent information. Instead, Elliot built over 900 location targets, with bid only on all of them; then, when anything comes up, he can see performance at every level of the location list rather than just at country level. This helps you narrow down to individual areas of interest if you have a product that only works in a certain place for certain groups. Applied to a holiday company selling domestic flights in the USA, it worked very well – loads of data was gathered very quickly.
Building location lists in this way suddenly leaves you with a lot of data in a very complex table, but it’s actually simple to process. You can see which locations are good value and which aren’t. It helps you to refine your strategy in great detail.
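Processing that location table really is simple: for each location, divide cost by conversions and compare the result against a target CPA. A minimal sketch, with made-up locations and figures purely for illustration:

```python
def cpa_by_location(rows, target_cpa):
    """Split locations into good and poor value by CPA against a target.
    rows: iterable of (location, cost, conversions).
    Locations with zero conversions are flagged as poor value."""
    good, poor = [], []
    for location, cost, conversions in rows:
        if conversions and cost / conversions <= target_cpa:
            good.append((location, cost / conversions))
        else:
            poor.append(location)
    return good, poor

# Hypothetical performance data gathered from bid-only location targets.
data = [
    ("London", 500.0, 25),      # CPA 20.0 - good value
    ("Manchester", 300.0, 5),   # CPA 60.0 - poor value
    ("Bristol", 80.0, 0),       # spend but no conversions
]

good, poor = cpa_by_location(data, target_cpa=30.0)
```

Good-value locations are candidates for bid increases or dedicated campaigns; poor-value ones can have bids cut or be excluded entirely.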
Mobile vs desktop is an important conversation right now. The two platforms should not be treated the same. This is the first time that mobile is bigger than desktop and the way people search on each is very different. Desktop has a different kind of intent than mobile, so view it differently.
If you don’t split mobile and desktop into separate campaigns you can’t give them different budgets. If something works very well on desktop, it could be damaged in terms of quality score by sitting in the same campaign as mobile ads.
If one is more important than the other you can prioritise a certain device in your budget.
However, you run into problems on big accounts, where there is too much to manage by hand. A small amount of automation can make a big difference: create rules that automatically change bids when certain things happen.
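The shape of such a rule can be sketched in a few lines. This is a generic illustration, not Push’s actual rule set or a real Google Ads automated rule – the 20% tolerance band, 10% step and bid bounds are all assumed values:

```python
def apply_bid_rules(bid, cpa, target_cpa, step=0.10,
                    min_bid=0.10, max_bid=10.0):
    """Nudge a bid based on how CPA compares to target:
    cut the bid if CPA runs well over target, raise it if
    CPA is comfortably under, and clamp the result to bounds."""
    if cpa > target_cpa * 1.2:       # well over target: bid down
        bid *= 1 - step
    elif cpa < target_cpa * 0.8:     # well under target: bid up
        bid *= 1 + step
    return round(min(max(bid, min_bid), max_bid), 2)

# Example: a £1.00 bid with a £30 target CPA.
print(apply_bid_rules(1.00, cpa=50.0, target_cpa=30.0))  # bid cut
print(apply_bid_rules(1.00, cpa=20.0, target_cpa=30.0))  # bid raised
print(apply_bid_rules(1.00, cpa=30.0, target_cpa=30.0))  # bid unchanged
```

Run on a schedule across a large account, even a rule this crude keeps thousands of bids pointed at the target without manual review of each one.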
This strategy does create problems, one being the need to work to a longer timeline. It takes 2–3 months to build out this data and understand it, depending on how much budget is available. Building this way moves from broad to specific, and it takes a while to see CPA fall, but the long-term rewards are higher.
Everybody has to be working towards the same goal. You need clean tests for it to be effective.
Continuous testing allows you to take into account time, environmental and product-related factors. It’s important to get clean results.
- Gather as much data as possible.
- Test everything.
- Alter strategies based on the data we have available.
- Allow for adequate resources – time invested at the start leads to efficiency and results later.