Now I get to tell jokes about having a baby. Because I have a baby. Enjoy my lame jokes!
New set with some new jokes. Enjoy!
I wrote a few new jokes to sprinkle into the set. I am not sure what kind of camera the video guy used, but it seems to have added more than 10 pounds. Or I can blame the aspect ratio. Yeah, that must be it.
This article originally appeared on Search Engine Watch.
As digital marketers and agencies seek to fully understand the power of the display/search one-two punch to drive sales, there remains a void waiting to be filled by tools that can aggregate measurement for both in a single solution. Ultimately, the industry is craving something that will not only track results across numerous media and touchpoints, but enable advertisers to fine-tune campaigns and adjust allocations on the fly to optimize impact.
Unified tracking can get us closer, but there are few off-the-shelf solutions. Setting up the proper protocols and making sense of the data is a fine art, especially while the tools are still evolving.
In theory, unified tracking enables us to track results at the user level, to know where and when they saw an ad and when the conversion took place – the long tail of the cumulative results of search and display. The problem is there can be a great deal of overlap, and it’s tough to measure the actual tipping point of what pushed the customer over the edge to buy.
Display relies on view-through attribution, measuring how many times a user saw an ad before perhaps purchasing at a later time.
Search is almost entirely click-through, measuring only immediate gratification. Often search ends up taking the credit for the sale because it’s the last-mile medium, but it’s likely that display played just as pivotal a role – we just can’t see it in the results.
Unified tracking lets you see all the given touchpoints before the user converts and gives a better overall picture of the effectiveness of the display/search mix. This is the Holy Grail of digital advertising: a single platform that manages and measures search and display concurrently and conditionally. While tools are steadily maturing to handle this tall order, interpreting the results still requires some finesse and expertise.
To achieve unified tracking, look for a display platform that delivers all the functionality of an ad server with frequency capping and display metrics, then marry this to a search management platform that is mature and offers all the necessary functionality. Map out well in advance the data to be collected and analyzed: views, searches, clicks. Make sure that your two chosen solutions are able to share cookies (assuming they are separate) so you can measure every consumer touchpoint.
Once you’re tracking both display and search in concert, making sense of the data requires a bit of tinkering. Look for user behavioral trends to determine the “sweet spot” for the number of impressions by comparing search data with clicks.
For example, if it takes more than five impressions to prompt one search, perhaps you’re wasting impressions. Do you see an uptick in searches based on a strong run of display in a particular region? By adjusting frequency and placement, you can tweak and test the variables on the display side to see what kind of impact this has on search – and vice versa.
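As a rough sketch of that sweet-spot check, assuming you can export display impressions and branded-search counts per region (the regions, numbers, and the five-impression threshold below are all hypothetical placeholders):

```python
# Sketch: estimate the impressions-per-search "sweet spot" by region.
# Assumes exported display impressions and branded-search counts per
# region; the figures and threshold here are hypothetical placeholders.

REGION_DATA = {
    #  region      (display impressions, branded searches)
    "Northeast": (250_000, 62_000),
    "Midwest":   (180_000, 21_000),
    "West":      (310_000, 55_000),
}

MAX_IMPRESSIONS_PER_SEARCH = 5  # past this ratio, impressions may be wasted

def impressions_per_search(impressions: int, searches: int) -> float:
    """Return how many display impressions it took to prompt one search."""
    return impressions / searches if searches else float("inf")

for region, (impressions, searches) in REGION_DATA.items():
    ratio = impressions_per_search(impressions, searches)
    flag = "REVIEW" if ratio > MAX_IMPRESSIONS_PER_SEARCH else "ok"
    print(f"{region:10s} {ratio:6.1f} impressions/search  [{flag}]")
```

Regions flagged for review are candidates for a frequency-cap or placement test before cutting spend outright.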
While the ability to make mid-stream campaign adjustments is a key tactical benefit, ultimately, the strategic goal of unified tracking is the ability to allocate budgets as efficiently as possible and, in doing so, we may finally demonstrate the quantitative impact of display on search.
Theoretically, we know that display kicks up the dust and search vacuums it up. But, with hard numbers to shed light on this symbiotic relationship, digital marketers are better able to back up display strategies to their clients or CMO and better allocate dollars on both sides of the equation.
This article originally appeared on Search Engine Watch.
With multiple campaigns in full swing and multitudes of data pouring in, it can be easy to misinterpret details and jump to conclusions about results without sufficient evidence. For example, media buyers may frequently be marketing to consumers who would have searched and/or bought anyway, without being hit with display impressions.
Buying behavioral, retargeting, search and other types of targeting data can make it even more likely that you are preaching to the choir. The trick is to determine whether your ads reached those consumers who truly needed to be persuaded, or those who were already closer to the conversion tipping point—consumers who already were, or were likely to become, customers anyway.
Certainly we want to avoid overkill and wasting money and impressions on consumers who didn’t need it. Before making a hasty assumption that may prove to be unfounded upon a deeper inspection, consider these common mistakes when examining digital media data.
Assigning a Causal Relationship Where There Was None
It can be quick and easy to assign causality when much of your data seems to point in the assumed direction. However, thorough testing of the hypothesis is required before jumping to conclusions.
For example, perhaps we have a lot of display impressions correlated with high search volume in one geographic area. Don’t assume that your display impressions caused the increased search volume. Perhaps instead there has been a general overall spike in brand interest in this market. Could offline tactics be the driver? Perhaps there was local news coverage related to your products.
To test the hypothesis that higher display impressions are driving search, increase or decrease display impressions and isolate other potential factors to see what kind of measurable impact—if any—this has on search.
Assigning Attribution for Sales Incorrectly
Particularly in markets where there’s a high likelihood that you’ll be targeting customers who are already buyers, attributing the sale can be complicated. This is especially true with site and search retargeting tactics.
Dropping a cookie on a user who visits your site or delivering ads across multiple networks to anyone who searches for your keywords can be very effective. However, in a typical purchase cycle, consumers shop around quite a bit. Absent a direct click-through-to-conversion path, it’s difficult to say that those who came to your site and viewed a banner made a purchase because of that banner. And, we don’t know what got them to the site the first time.
Are you showing ads to an audience who would have bought anyway and then attributing their buy to the fact that you showed them an ad? It’s a slippery slope that requires testing to measure the real impact.
To test your attribution theory and be aware of how retargeting might influence your results, adjust the number of impressions, frequency caps and other parameters and closely monitor and/or control for external impacts on search. When you have an overall picture of the pre-purchase drivers, you can more clearly begin to see what’s sparking the tipping point of conversion.
Failure to Consider the Big Picture
Search and display marketing don’t exist in a vacuum. Therefore, we must take a more holistic approach when evaluating results. Don’t just look at click-through or search rates; consider conversion rates, basket size, and other KPIs in relationship to these metrics.
It can be easy to say that display isn’t driving conversion if there’s no direct click-through to attribute, but how many consumers might convert with a higher basket size because of display impressions? If total impressions rise but clicks don’t, we might conclude the campaign didn’t work. In fact, we may be making more money because consumers trust the familiarity of the brand enough to make larger purchases. And, ultimately, isn’t that what we’re after?
Had we just looked at clicks or just at impressions, these results may have been obscured. To get a more accurate picture of results, we must look at all metrics from a holistic perspective to arrive at a bottom line.
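A toy illustration of that point, with entirely hypothetical figures: clicks and CTR stay flat, but a display-driven lift in average basket size still raises revenue per thousand impressions.

```python
# Sketch: judge display holistically by revenue, not clicks alone.
# All figures below are hypothetical illustrations.

def revenue_per_thousand(impressions: int, clicks: int,
                         conversion_rate: float, avg_basket: float) -> float:
    """Revenue generated per 1,000 impressions (an RPM-style KPI)."""
    revenue = clicks * conversion_rate * avg_basket
    return revenue / impressions * 1000

# Same impressions and clicks (CTR is flat), but exposure to display
# lifts the average basket size for the converting audience.
without_display = revenue_per_thousand(1_000_000, 5_000, 0.02, 80.0)
with_display    = revenue_per_thousand(1_000_000, 5_000, 0.02, 95.0)

print(f"RPM without display: ${without_display:.2f}")
print(f"RPM with display:    ${with_display:.2f}")
```

A clicks-only report would show these two scenarios as identical; a revenue-per-impression view separates them.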
Digital Media Analysis: A Double-Edged Sword
We definitely have access to troves of data—infinitely more detailed than we could have ever dreamed of in the offline world. However, without careful critical analysis of this avalanche of information, we run the risk of jumping to conclusions without hard evidence or misinterpreting the data we collect.
Real-time technologies enable us to quickly and accurately collect data, but it is even more critical that we interpret it correctly in order to enable mid-campaign optimization. By understanding the caveats of digital media data analysis and examining relationships carefully, media planners and buyers can launch and manage better performing campaigns with accurate, proven results.
This article originally appeared on Search Engine Watch.
One of the biggest challenges facing digital advertisers is the ability to accurately and effectively measure the impact of search marketing on display, and vice versa. Much like a billboard wouldn’t be measured in the same way as a direct mail campaign, the effectiveness of display and search tactics must be measured separately, yet considered comprehensively.
Measuring display clicks and impressions doesn’t give the full picture, but there’s no doubt display plays a critical and cumulative role in brand awareness, search volume, and ultimately, conversion.
Unfortunately, no single solution is the best fit for all advertisers, so we are frequently left to attempt to measure the correlation between search and display ourselves. In many cases, some simple and well-controlled testing can yield results.
Compare Lookalike Geographies
To measure the impact of search on display and vice versa requires an experiment whereby we analyze the results of each campaign on two demographically similar geographies. By comparing the results of search only, then search with display layered on top for one market, we can see how the display effort influences the overall campaign effectiveness.
Here’s how to do it:
1. Identify Two Similar Geographic Areas to Target
Using data from the U.S. Census Bureau, Nielsen, comScore or other third-party demographics service, select two areas that, based on relevant criteria for your audience, would likely show similar behavioral patterns. Consider the typical data like household income, age distribution, etc., but don’t overlook qualitative issues like weather and other factors.
For example, if you want to compare results between Los Angeles and New York for an ice cream brand, it’s a safe bet that New Yorkers won’t be nearly as responsive in wintertime, especially if the weather is anything like we’ve experienced this season. If one market is a college town, this will also significantly impact consumer behavior patterns, depending on whether the experiment is run when class is in session or not. And, even if both are college towns, the results may skew based on their specific schedules.
Be aware of these qualitative factors before you begin.
2. Run a Search Campaign in Both Markets For 2-3 Months
Identify patterns and gather a baseline. If you’ve done your homework correctly, the patterns should be very similar for each market.
3. Layer in a Display Campaign in One of the Markets For At Least a Month
The other will serve as the search-only baseline or “control” market. Continue to measure results in the search/display market for at least one month after the display campaign is discontinued. Because branding campaigns often have a cumulative effect on audiences, this will enable you to continue gathering data throughout the typical response lag time.
4. Analyze the Results
The goal here is to see measurable change in the search results for the market where display was layered on. Just as you should experience when you run an offline brand campaign, you should see an uptick in the search results for the brand keywords you’ve selected for measurement. Examine the search volume and display delivery and find patterns that you can quantify to show measurable results.
5. Tweak the Mix to Achieve the Optimum Response
Now that you’ve established the relevancy and impact of display on search, roll out this highly repeatable operation in other comparable geographies wherever appropriate for your brand and target audience.
While this is by no means a foolproof method, a similar tactic has been used to measure the effectiveness of offline spending for years. It is just another way that digital marketers can leverage the same principles to uncover hidden results for their own efforts.
The key factor in the effectiveness and accuracy of lookalike measurement is to begin with geographies that are as identical as possible.
While you may see some impact from competitive marketing efforts, in terms of both cost and response, the experiment is well worth it for helping to establish the relative effectiveness of search and display.
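The step-4 comparison can be sketched as follows. The weekly search volumes below are hypothetical, and the simple difference-in-percent-changes is only a first approximation of lift:

```python
# Sketch of the lookalike-geography analysis: compare the change in
# branded search volume between the test market (search + display) and
# the control market (search only). All volumes are hypothetical.

def pct_change(baseline: float, during: float) -> float:
    """Percent change from the baseline period to the display period."""
    return (during - baseline) / baseline * 100

# Average weekly branded search volume per period.
test_baseline, test_during = 12_400, 15_100  # display layered on
ctrl_baseline, ctrl_during = 11_900, 12_300  # search only

test_change = pct_change(test_baseline, test_during)
ctrl_change = pct_change(ctrl_baseline, ctrl_during)
display_lift = test_change - ctrl_change  # lift attributable to display

print(f"Test market change:     {test_change:+.1f}%")
print(f"Control market change:  {ctrl_change:+.1f}%")
print(f"Estimated display lift: {display_lift:+.1f} points")
```

Subtracting the control market’s change nets out the background factors (seasonality, news cycles, offline activity) that both geographies share.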
This article was originally published on Adotas.com.
When running a large-scale search engine marketing (SEM) campaign, there are some things that marketers typically assume. For instance, it’s natural to assume that search ads appear in search and content ads appear in content. Constantly policing this would be far too time-consuming to be worthwhile. That assumption, however, appears to be slightly flawed.
Recently, while reviewing some of the search query reports for one of our clients, we came across some very interesting trends. The query with the third-highest impression volume was “prints patterns pillows pillows throws home décor home.”
This query triggered the keyword “decorative pillows,” which was set to Google’s broad match. However, this was very obviously not the kind of query that a user would enter in a search box, and certainly not 20,000 times, as the impressions in the report would suggest. This query also sported a click-through rate (CTR) of 0.02%, very similar to what would be expected on the content network.
To find answers, I did what anyone in our industry would do… I searched. I copied the long query and pasted it into Google. The top result was a page on Target.com that bore a page title similar to the query I saw. When I looked at the bottom of the page on this site, I saw my client’s ad. This client is opted out of the content network, so how could their ad appear in a context such as this?
When I went back to my search query reports, I found over a thousand queries that bore similar characteristics to the one I initially investigated. Spot-checking these by searching them on Google invariably led me to pages on Target.com. When I aggregated the performance of these queries, I found a cost per conversion that was more than triple that of the rest of the search partner network and four times that of Google.com search.
In addition, I discovered a few other sites, including Macys.com, that were sending queries using a similar method. However, many of the queries sent by scripts on these sites look very much like standard search terms, making them difficult to identify and evaluate separately from actual user-initiated searches.
The big issue here is that my client was paying search CPCs for a placement that is a content placement with no way to set a bid based upon performance. Since Google does not provide an option to block individual sites in the search partner network, nor do they allow us to set different bids on a site level or even for the partner network, I came up with a solution that appears to be working.
First, I duplicated all of my campaigns. All of the original campaigns were opted out of the search partner network while all of the duplicated campaigns were opted in, but their bids were lowered substantially. I also set many of the hardest-hit keywords to exact match and lowered the bids on any keywords that had a CTR that looked like a content placement.
The results were immediate. The search partner network now provides our client a cost per conversion close to the level we see on Google. Our aggregate CTR jumped from 0.5% to 2.9%. Functionally, we gave Google higher-CPC keywords that it will select to show on a Google search before the lower-CPC duplicates, but only one lower-CPC option when the query originates from the search partner network.
A great number of competitor retail sites were included in the ad units on the ecommerce sites I came across. I recommend that all search engine marketers look through search query reports, separate out the search partner network, and look for any queries with a very high impression volume and a very low CTR.
You should be especially concerned if that query looks like a website breadcrumb or is nonsensical. If you find that you are affected, you may want to consider duplicating your campaigns and taking some degree of control over these placements.
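One way to automate that triage might look like the following sketch. The CSV column names and the thresholds are assumptions; adjust them to match your platform’s query-report export:

```python
# Sketch: scan a search query report (CSV export) for queries that look
# like content placements masquerading as searches -- very high
# impression volume, very low CTR, and breadcrumb-like wording.
# Column names and thresholds are assumptions, not a platform standard.

import csv

MIN_IMPRESSIONS = 5_000
MAX_CTR = 0.001   # 0.1% -- content-network territory
MIN_WORDS = 6     # breadcrumb-style queries tend to be long

def looks_suspicious(query: str, impressions: int, clicks: int) -> bool:
    """Flag queries with huge volume, tiny CTR, and breadcrumb wording."""
    ctr = clicks / impressions if impressions else 0.0
    return (impressions >= MIN_IMPRESSIONS
            and ctr <= MAX_CTR
            and len(query.split()) >= MIN_WORDS)

def flag_queries(path: str) -> list[str]:
    """Return the suspicious queries found in a query-report CSV."""
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if looks_suspicious(row["query"],
                                int(row["impressions"]),
                                int(row["clicks"])):
                flagged.append(row["query"])
    return flagged
```

Queries this filter surfaces are the ones worth pasting into Google yourself to see which partner site is generating them.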
So the question remained, how can this be within Google’s AdSense terms of service? A little bit of further exploring led me to a page in Google’s help documents for advertisers that suggested that search ads may appear on pages within a site’s directory: “Your ads may appear alongside or above search results, as part of a results page as a user navigates through a site’s directory, or on other relevant search pages.”
Upon asking my Google rep, I was told that this is, in fact, an example of this kind of directory placement and some “premium partners” are allowed to do this. So basically, Google considers drilling down into the content of some sites to be a type of directory search, even though there is no query entered by a user.
What Google tells their publishers appears to be substantially different. Google’s terms and conditions page for AdSense for Search partners states, “Queries must originate from users inputting data directly into the search box and cannot be modified.”
This seems pretty clear-cut; AdSense for Search can only be initiated by user queries. Why, then, does Google make exceptions for “premium publishers”?
This seems incongruous, as it seems that Google disallows this practice in general, but provides exceptions for a few large retailers who subscribe to AdSense. It is possible that Google has some internal mechanism that evaluates a “directory” site against some unknown criteria for inclusion in the directory search category.
It is also possible that some premium publishers did not want the considerably cheaper content ads on their pages and lobbied for the higher-yield search ads.
The bottom line is that advertisers have zero transparency, zero control and usually zero knowledge of the existence of this practice. Regardless of why this is happening, Google will change this practice only when enough advertisers have made enough of a clamor to effect change.
If every advertiser reduced their bids to a level commensurate with the worth of this type of traffic, it would effectively bring the overall CPCs down to the worth of the lowest common denominator in the network, including the CPCs on Google’s own site. I doubt that is something Google wants.
This article was originally published on DMNews.com.
Google’s new Instant Search, the nifty feature that updates search results as you type a query, represents an interesting leap in user experience and ostensibly helps users find results faster. After experimenting with it as a user, I was impressed. As an SEM professional, however, I harbor a few reservations.
- Impressions will jump and click-through rates will fall
- Conversion rates will drop as negatives lose effectiveness
- Reporting will become muddied
- Long-tail keywords will lose volume
First, I would expect impression volume to jump substantially. When I type the word “toy” into Google, instant search predicts that I will continue typing “Toyota” as my query. It preemptively fills in my search results with the homepage of Toyota and also fills in the AdWords with ads for Toyotas. However, as I continue typing my query, I might fill in “Toys R Us.” Toyota has now served an ad for cars to a user who is looking for toys, on a query their ads never would have been displayed for before.
Next, negative match will become less effective. If I continue the example above and type out “toys r us lead paint,” I get no results. Assuming that Toys R Us has used “lead paint” as a negative match keyword, they have now served an ad to a user who was using one of their negative keywords, dampening the benefit of using that negative.
Reporting will become an issue as well. If I type “Toy” and I click on an ad for Toyota, am I reported to the advertiser as having searched “Toy” or “Toyota”? After all, I had not typed the full word before clicking and yet I was shown an ad for the full query.
Finally, and most importantly, the long tail of search may be severely impacted. If I intend to type “Toyota prius blue used near New York City,” I might click on an ad after typing “Toyota pri.” If the advertiser had bid on “Toyota prius new york” for a $1 average CPC and “Toyota Prius” for a $3 average CPC, the user query that would have cost $1 before now costs that advertiser $3. The user effectively abandons the query before it has a chance to become the less-expensive long tail version.
Google has said that they will not count an ad impression if a user does not see the ad for more than three seconds. The implication of this is twofold: First, you may get clicks with no impressions. Users can be quick clickers and they may hit your ad before an impression even registers. Second, many fleeting impressions (less than three seconds) will go unrecorded.
I would prefer to know when a user sees my ads displayed out of context. I would also like to know what keywords Google is predictively filling in so I can enter those as negatives. Google has effectively redefined what an impression is: even if a user loads your ad for only two seconds, they still have an opportunity to see it, so an impression can register with the user and not with Google. Granted, there can be a branding benefit to such “fleeting impressions,” but marketers need to know how often it is happening.
One way to prevent these detrimental effects might be to use more exact match to keep your ad from appearing before the query is complete, but this might not be an optimal strategy in every case. For the next few weeks, we search marketers will simply have to keep a close eye on how our accounts are trending and adjust accordingly.
This article was originally published on Resolution Media’s blog, FindResolution.com.
In my highly biased and in no way impartial opinion, there is no more effective place to spend marketing dollars online than in search marketing. The level of control, accountability, and relevance is unparalleled in online marketing and even offline marketing. However, this degree of control can come at the expense of predictability. Anyone who has ever been tasked with forecasting search spend is familiar with this problem. There are so many factors that can have a huge impact on spend (CTR, CPC, search volume, seasonality, news cycles, etc.) that it becomes a very tricky task to make accurate guesstimates. So how does one budget for search marketing if it is so unpredictable? There are a few schools of thought, and each has its own incentives and pitfalls.
The first school of thought is carte blanche, a blank check to spend as much as you can until you get diminishing returns. This option is especially well-suited to ROI-focused campaigns. The thinking is that a search campaign is a revolving door of money: keep putting money in until you are no longer getting more than you put in. For obvious reasons this option is favored by professional search marketers and agencies. Who wouldn’t want an uncapped budget? However, this option favors clients as well. Only by being flexible with the amounts you spend can you be assured you are maximizing your returns. This helps minimize waste while maximizing opportunity. It also encourages efficiency from your agency partners; if they are not running efficiently enough to bring in higher volumes of profitable conversions, they get less to spend.
Another school of thought says to set budgets based on forecasts. This approach is well-suited to awareness campaigns, branding campaigns, and campaigns that are difficult to track such as “clicks and mortar” campaigns. It is favored by most marketers because it can fit the unpredictability of search into a predictable budget. For direct-response or ROI-focused campaigns this approach risks either missing out on some volume due to a low projection, or missing your budget number due to a high projection. However, in some industries there is virtually unlimited search volume so it makes sense to limit your search marketing to a set budget to avoid huge expenditures. Search can also be run on a shoestring budget, but if the returns are delayed due to credit terms or a long sales cycle then a company could have significant financial exposure if search spend is not capped. Setting a fixed budget encourages search marketers to be as accurate as possible with their projections, but can prove to be a constraint on maximizing efficiency.
The third approach to budgeting is somewhat of a hybrid of the first two. It basically amounts to a flexible budget that can adjust to market conditions. Realistically, this is how most companies allocate their search budgets. As the market changes, spending and targets can change with it. Flexible spending encourages strong communication between clients and agencies so that both can understand the extent of the opportunity as the campaign progresses and make adjustments accordingly. While not as freeform as carte blanche, this still allows the agency to seize opportunity while providing the marketer with some modicum of predictability. However, if the two cannot coordinate these changes quickly enough, some opportunity will be foregone in the shuffle.
How you allocate your search dollars will depend greatly on your potential market size, your product, your costs, and the type of campaign you are running, be it awareness or direct response. Regardless of where on this spectrum you fall, it is important to always keep your goals in mind and make sure that your budgeting serves your goals for the campaign.
This article was originally published on Resolution Media’s blog, FindResolution.com.
As search marketers, we all know the value in testing ad copies and the importance of the ad copy to the overall performance of your campaign. However, in order to test ad copies we frequently add a number of them to Google AdWords, turn on the ad optimizer, and wait. But is this really the optimal means of determining your best ad copy? For that matter, what numbers should you look at to compare ad copies?
First, it is useful to know what Google considers good ad copy. Google likes the ad copy that garners the highest CTR. With enough data they will always skew toward that one ad copy that does better than the rest. However, is that in your best interest? If I am a direct response client seeking ROI then this is certainly not in my best interest. A good example of this came up recently while looking through one of our accounts to boost ROI. The account’s ad copy statistics are below:

Ad copy A: 157,021 impressions; 21.23% CTR; 1.77% conversion rate; shown 44% of the time.
Ad copy B: 26,706 impressions; 20.77% CTR; 2.33% conversion rate; shown 7% of the time.
As you can see, ad copy A was receiving the best CTR, a whopping 21.23%. Ad copy B was “only” getting a 20.77% CTR. Over time, Google had optimized toward ad copy A, showing it 44% of the time and ad copy B only 7% of the time. But look at the conversion rate difference! If we assume the conversion rate would be the same regardless of traffic, we can see which ad copy is really best.
Assume that only ad copy A was in the ad group, so ad copy B’s 26,706 impressions would instead have received ad copy A’s 21.23% CTR and 1.77% conversion rate.
26,706 (B) impressions * 21.23% (A) CTR = 5670 clicks, a lift of 123 additional clicks.
5670 clicks * 1.77% (A) CVR = 100 conversions, a decrease of 29 conversions!
Now let’s assume that only ad copy B was in the ad group:
157,021 (A) impressions * 20.77% (B) CTR = 32,613 clicks, a decrease of 715 clicks.
32,613 clicks * 2.33% (B) CVR = 760 conversions, an increase of 169 conversions!
By removing ad copy A we should in theory receive 715 fewer clicks and 169 more conversions! Of course other factors such as landing page could skew the conversion rate, CPCs may vary slightly by ad copy, and this analysis makes some assumptions that the numbers will remain constant. However, this type of ROI-based analysis still yields measurable incremental gains in many cases.
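The what-if projection above can be sketched in a few lines. This variant pools both copies’ impressions (157,021 + 26,706) and projects each copy serving all of them, using the CTR and conversion-rate figures quoted in the text:

```python
# Sketch: project clicks and conversions if a single ad copy received
# all of the ad group's impressions, using the rates quoted above.
# Assumes CTR and conversion rate hold constant at any volume.

def project(impressions: int, ctr: float, cvr: float) -> tuple[int, int]:
    """Expected (clicks, conversions) at a given CTR and conversion rate."""
    clicks = impressions * ctr
    return round(clicks), round(clicks * cvr)

TOTAL_IMPRESSIONS = 157_021 + 26_706  # ad copies A + B combined

clicks_a, conv_a = project(TOTAL_IMPRESSIONS, 0.2123, 0.0177)  # all copy A
clicks_b, conv_b = project(TOTAL_IMPRESSIONS, 0.2077, 0.0233)  # all copy B

print(f"All copy A: {clicks_a:,} clicks, {conv_a} conversions")
print(f"All copy B: {clicks_b:,} clicks, {conv_b} conversions")
# Copy A wins on clicks (Google's preference); copy B wins on conversions.
```

The same two-line projection works for any pair of ad copies once you decide which metric actually pays your bills.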
Be on the lookout for these types of mathematical opportunities. In the example above, Google was optimizing toward delivering the additional clicks generated by ad copy A, regardless of conversion rates. Moral of the story: you should not always trust Google to make these decisions for you.