by Mark Palony
Special to the eTail Blog
Set-it-and-forget-it is not a paid-search strategy. If you want to maximize the ROI of your PPC campaigns, you need to review and optimize the performance of the many moving parts involved. Providers such as Google and Microsoft make enormous amounts of campaign data available, right down to keyword-level conversions, but it’s up to you to analyze that data and use what you learn to improve campaign performance.
Paid-search bid optimization comes in two flavors: rules-based and model-based. Within the broad realm of model-based optimization, you’ll find three common methods: global cluster-level modeling, local keyword-level modeling and global keyword-level modeling. Each has its pros and cons, and each offers various degrees of performance improvement.
If you’re not certain what method you’re using currently, or are considering changing what you’re doing, here’s an overview of the different optimization approaches and their relative strengths and weaknesses.
Global keyword level: Every keyword receives individual analysis and an individual bid so that the entire portfolio achieves the stated goal. The approach is difficult to pull off because the size and complexity of many paid-search programs require millions of pricing decisions each day. But a solution that does accomplish global keyword-level modeling is likely to be pure software and, therefore, fully automated.
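To make the idea concrete, here is a minimal, illustrative sketch of a global approach: pick one bid per keyword from a small candidate grid so the whole portfolio stays under a budget while predicted value is greedily maximized. The data shapes, numbers and greedy heuristic are assumptions for illustration only, not OptiMine’s actual algorithm.

```python
# Illustrative sketch only -- not OptiMine's algorithm. Each keyword has a
# small grid of candidate bids, each with a predicted (spend, value) pair.
# Starting from the cheapest bid everywhere, repeatedly take the single
# bid upgrade with the best extra value per extra dollar that still fits
# the portfolio budget.

def optimize_portfolio(keywords, budget):
    """keywords: dicts with "name" and "candidates" = [(bid, spend, value)]."""
    choice = {kw["name"]: 0 for kw in keywords}          # index into candidates
    spend = sum(kw["candidates"][0][1] for kw in keywords)
    while True:
        best = None
        for kw in keywords:
            i = choice[kw["name"]]
            if i + 1 < len(kw["candidates"]):
                _, s0, v0 = kw["candidates"][i]
                _, s1, v1 = kw["candidates"][i + 1]
                extra, gain = s1 - s0, v1 - v0
                if spend + extra <= budget and gain > 0 and extra > 0:
                    ratio = gain / extra
                    if best is None or ratio > best[0]:
                        best = (ratio, kw["name"], extra)
        if best is None:
            return choice, spend
        _, name, extra = best
        choice[name] += 1
        spend += extra
```

Even this toy version compares every keyword’s bid options against every other’s on each pass, which hints at why the decision volume at real portfolio scale makes a fully automated software solution the practical route.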
Local keyword level: In effect, this is the approach advocated by Hal Varian, Google’s chief economist: bid each keyword separately based on its predicted value. Simplicity is the key, because you don’t need to predict behavior across a range of bids; you just bid a percentage of the predicted value. However, with simplicity come lower performance and limited settings. In many cases, a local solution leaves money on the table: you can set a target, but you can’t layer on multiple constraints.
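The local approach fits in a couple of lines. This sketch assumes a single ROAS target and a known predicted value per click; both names and the dollar figure are illustrative, not from any vendor’s implementation.

```python
# Illustrative sketch of local keyword-level bidding: each keyword is bid
# at a fraction of its predicted value per click, with no coordination
# across keywords and no constraint beyond the single target.

def local_bid(predicted_value_per_click, target_roas):
    """Bid so predicted value / cost equals the target ROAS (2.0 = 200%)."""
    return predicted_value_per_click / target_roas

# A keyword whose clicks are worth $4.00 on average, bid to a 200% target:
bid = local_bid(4.00, 2.0)  # 2.0 -- every keyword is priced in isolation
```

The single `target_roas` knob is exactly the limitation described above: there is nowhere in this calculation to attach a second constraint such as a spend cap.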
Global cluster level: Here you still have a global optimization, but models built on clusters of keywords are used to handle the sparse-data problem. Some vendors actually use a hybrid of this and the global keyword-level approach, applying keyword-level models to head terms and clusters to the tail. Global cluster-level bidding tends to be stable, with repeatable results. But what you gain in stability, you lose in performance and automation. Performance suffers because clusters ignore the fact that every keyword is unique; the value of the aggregated data is outweighed by the loss of that uniqueness. A variety of factors, such as seasonal changes, expanded keyword lists and changes in product offerings, can render clusters obsolete. When that happens, statisticians are typically needed to manually re-tune the models, so cluster-based solutions are rarely pure software applications.
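A minimal sketch of the sparse-data idea behind clustering: a tail keyword with too few clicks for a reliable estimate of its own borrows the pooled conversion rate of its cluster. The click threshold and data shapes are assumptions for illustration.

```python
# Illustrative sketch of the sparse-data workaround behind cluster-level
# modeling: keywords with enough clicks use their own conversion rate,
# while thin tail keywords fall back to the pooled rate of their cluster.

def conversion_rate(keyword_stats, cluster_stats, min_clicks=100):
    """keyword_stats: (clicks, conversions); cluster_stats: list of the same."""
    clicks, conversions = keyword_stats
    if clicks >= min_clicks:
        return conversions / clicks                      # head term: own data
    pooled_clicks = sum(c for c, _ in cluster_stats)
    pooled_conversions = sum(v for _, v in cluster_stats)
    return pooled_conversions / pooled_clicks            # tail term: cluster
```

The cost described above is visible here: every tail keyword in a cluster receives the same rate, so whatever makes an individual keyword unique is averaged away.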
Rules-based: The most common solution available, rules-based optimization is touted as simple and easy to understand. For example, a rule might state, “If ROAS is less than 200 percent, lower bids by 10 percent.” However, when rules are layered upon rules, that simplicity quickly evaporates, and it becomes difficult to predict what will happen to the bids. The big loser in rules-based optimization is performance. Rules-based systems are reactive, with pre-defined responses to certain situations; historical data is not considered, because the current situation alone drives the reaction. Because of this reactive nature, rules-based optimization can be very good at protecting your position, but playing defense rarely leads to optimal results.
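To see how layered rules interact, here is a sketch that implements the quoted ROAS rule plus one hypothetical position rule layered on top. The second rule and both thresholds are assumptions added for illustration.

```python
# Illustrative sketch of layered bidding rules. The first rule is the one
# quoted above; the second is a hypothetical position rule added to show
# how stacked rules interact.

def apply_rules(bid, roas, position):
    if roas < 2.0:       # "If ROAS is less than 200 percent, lower bids by 10 percent"
        bid *= 0.90
    if position > 4:     # hypothetical layered rule: raise bids when the ad sits low
        bid *= 1.15
    return round(bid, 2)
```

For a keyword with a weak ROAS sitting in a low position, both rules fire and partly cancel each other, and neither consults any historical data; that is the reactive, hard-to-predict behavior described above.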
For a more granular look at these approaches, download OptiMine’s whitepaper (no registration required), “Achieving the Gold Standard in Paid-Search Bid Optimization.”
Mark Palony is the Director of Marketing for OptiMine Software, a provider of bid-optimization software that forecasts the performance of each paid-search ad placement each day and automatically sets optimal bids. This post has been brought to you by OptiMine.