The CafePress Experience With Ads

When CafePress first started printing shirts in 1999, online retail was still a nascent industry and Google had yet to sell its first ad. Soon CafePress started selling products through search ads on Google, and their business took off. Today, CafePress hosts millions of shops online where customers can choose from more than 325 million products on nearly any topic, from wall art to phone cases.

Just as CafePress has broadened its offerings over time, we’ve also worked to improve and expand our search advertising products. What started as three lines of simple text has evolved into ads that are multimedia-rich, location-aware and socially amplified.

Today CafePress uses Sitelinks to direct people to specific pages of their website, helping customers find what they’re looking for faster. On average, ads with three rows of links, or three-line Sitelinks, are more than 50 percent more likely to be clicked on than ads without Sitelinks. More than 200,000 advertisers have joined CafePress in using Sitelinks in at least one campaign.

Monday at Advertising Week in New York City, I’ll be talking about how advertisers have been quick to adopt these new formats since we first began experimenting nearly two years ago. Businesses from the smallest retailer in Idaho to the largest Fortune 500 company in New York have seen how these innovations in search advertising can help them grow. In fact, roughly one-third of searches with ads show an enhanced ad format.

Here are a few ways these new ad formats are helping people find valuable information faster:

Visual. Not only can you find theater times for a new movie, you can watch the trailer directly in the ad. Media ads put the sight, sound and motion of video into search ads. With Product Ads, people can see an image, price and merchant name, providing a more visual shopping experience. Because this format is often so useful, people are twice as likely to click on a Product Ad as they are to click on a standard text ad in the same location, and today, hundreds of millions of products are available through Product Ads.

Local. More than 20 percent of desktop searches on Google are related to location. On mobile, this climbs to 40 percent. Location-aware search ads can help you find what you’re looking for more easily by putting thousands of local businesses on the map—literally. More than 270,000 of our advertisers use Location Extensions to attach a business address to at least one ad campaign, connecting more than 1.4 million locations in the U.S. via ads. And, with our mobile ad formats, not only can you call a restaurant directly from the ad, you can also find out how far away it is and view a map with directions.

Social. With the +1 button, people can find businesses and recommend them to their friends. Since we introduced the +1 button earlier this year, it has grown to more than 5 billion impressions a day on publisher sites. If you’re a business owner, the +1 button enables your customers to share your products and special offers easily with their network of friends, amplifying your existing marketing campaigns.

We’re continuing to experiment with search ads to help businesses like CafePress grow by connecting with the right customers.

We’re developing ads that provide richer information to you because we believe that search ads should be both beautiful and informative, and as useful to you as an answer.

The Time Protocols

Have you ever had a watch that ran slow or fast, and that you’d correct every morning against your bedside clock? Computers have the same problem. Many computers, including some desktop and laptop computers, use a service called the “Network Time Protocol” (NTP), which does something very similar—it periodically checks the computer’s time against a more accurate server, which may be connected to an external source of time, such as an atomic clock. When setting the computer’s time to the second or better, NTP also takes into account variable factors such as how long the NTP server takes to reply and the speed of the network between you and the server.
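As a rough illustration, here is a minimal sketch of the standard four-timestamp offset and delay estimate an NTP client makes; the function name and the example numbers are ours, purely for illustration:

# Minimal sketch of the core NTP offset/delay calculation (illustrative only).
# t1: client sends request        t2: server receives request
# t3: server sends reply          t4: client receives reply
def ntp_offset_and_delay(t1, t2, t3, t4):
    # Round-trip network delay, excluding the time the server spent replying.
    delay = (t4 - t1) - (t3 - t2)
    # Estimated difference between the server clock and the client clock,
    # assuming the network path is roughly symmetric in both directions.
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    return offset, delay

# Example: the client clock is about 0.5 s behind the server, with roughly
# 40 ms of round-trip network latency and 1 ms of server processing time.
offset, delay = ntp_offset_and_delay(100.000, 100.520, 100.521, 100.041)
print(offset, delay)  # ~0.5 s offset, ~0.040 s round-trip delay

A client that repeats this exchange and applies the offset gradually can keep its clock within a small fraction of a second of the server’s.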

Soon after the advent of ticking clocks, scientists observed that the time told by them (and now by much more accurate clocks) and the time told by the Earth’s position were rarely exactly the same. It turns out that being on a revolving, imperfect sphere floating in space, being reshaped by earthquakes and volcanic eruptions, and being dragged around by gravitational forces makes your rotation somewhat irregular. Who knew?

These fluctuations in Earth’s rotational speed mean that even very accurate clocks, like the atomic clocks used by global timekeeping services, occasionally have to be adjusted slightly to bring them in line with “solar time.” There have been 24 such adjustments, called “leap seconds,” since they were introduced in 1972. Their effect has become more and more profound as people come to rely on fast, accurate and reliable technology.

Why time matters at Google

Having accurate time is critical to everything we do at Google. Keeping replicas of data up to date, correctly reporting the order of searches and clicks, and determining which data-affecting operation came last are all examples of why accurate time is crucial to our products and to our ability to keep your data safe.

Very large-scale distributed systems, like ours, demand that time be well-synchronized and expect that time always moves forwards. Computers traditionally accommodate leap seconds by setting their clock backwards by one second at the very end of the day. But this “repeated” second can be a problem. For example, what happens to write operations that happen during that second? Does email that comes in during that second get stored correctly? What about all the unforeseen problems that may come up with the massive number of systems and servers that we run? Our systems are engineered for data integrity, and some will refuse to work if their time is sufficiently “wrong.” We saw some of our clustered systems stop accepting work on a small scale during the leap second in 2005, and while it didn’t affect the site or any of our data, we wanted to fix such issues once and for all.
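To see why a repeated second is troublesome, consider a hypothetical last-write-wins store that orders versions by wall-clock timestamps. This is a deliberately simplified sketch, not how our systems actually work:

# Hypothetical illustration: a last-write-wins register keyed on wall-clock time.
# If the clock is stepped back one second to absorb a leap second, a later
# write can receive an earlier timestamp and be silently discarded.
class LastWriteWins:
    def __init__(self):
        self.value = None
        self.timestamp = float("-inf")

    def write(self, value, timestamp):
        # Keep the write only if its timestamp is newer than the stored one.
        if timestamp > self.timestamp:
            self.value, self.timestamp = value, timestamp

register = LastWriteWins()
register.write("old contents", timestamp=23 * 3600 + 59 * 60 + 59.8)  # 23:59:59.8
# The clock is stepped back one second at midnight, so the next write,
# which really happened later, gets a smaller timestamp...
register.write("new contents", timestamp=23 * 3600 + 59 * 60 + 59.2)  # 23:59:59.2 again
print(register.value)  # "old contents" -- the newer write was lost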

This was the problem that a group of our engineers identified during 2008, with a leap second scheduled for December 31. Given our observations in 2005, we wanted to be ready this time, and in the future. How could we make sure everything at Google stays running as if nothing happened, when all our server clocks suddenly see the same second happening twice? Also, how could we make this solution scale? Would we need to audit every line of code that cares about the time? (That’s a lot of code!)

The solution we came up with came to be known as the “leap smear.” We modified our internal NTP servers to gradually add a couple of milliseconds to every update, varying over a time window before the moment when the leap second actually happens. This meant that when it became time to add an extra second at midnight, our clocks had already taken this into account, by skewing the time over the course of the day. All of our servers were then able to continue as normal with the new year, blissfully unaware that a leap second had just occurred. We plan to use this “leap smear” technique again in the future, when new leap seconds are announced by the IERS.

Here’s the science bit

Usually when a leap second is almost due, the NTP protocol says a server must indicate this to its clients by setting the “Leap Indicator” (LI) field in its response. This indicates that the last minute of that day will have 61 seconds, or 59 seconds. (Leap seconds can, in theory, be used to shorten a day too, although that hasn’t happened to date.) Rather than doing this, we applied a patch to the NTP server software on our internal Stratum 2 NTP servers to not set LI, and tell a small “lie” about the time, modulating this “lie” over a time window w before midnight:

lie(t) = (1.0 - cos(pi * t / w)) / 2.0

What this did was make sure that the “lie” we were telling our servers about the time wouldn’t trigger any undesirable behavior in the NTP clients, such as causing them to suspect the time servers were wrong and to apply local corrections themselves. It also made sure the updates were small enough that any software on those servers performing synchronization actions or holding Chubby locks wouldn’t lose those locks or abandon any operations. And it meant this software didn’t necessarily have to be aware of, or resilient to, the leap second.
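To make the idea concrete, here is a minimal sketch of how a time server could apply that cosine “lie” over the window w; the function names and the ten-hour window are our own illustrative choices, not the actual patch:

import math

def smear_lie(t, w):
    # Fraction of the leap second to add after t seconds of a window of
    # length w: ramps smoothly from 0 at the start to 1 (a full second)
    # at the end, following lie(t) = (1.0 - cos(pi * t / w)) / 2.0.
    return (1.0 - math.cos(math.pi * t / w)) / 2.0

def smeared_time(true_time, window_start, w):
    # Time to report to NTP clients during the smear window (illustrative).
    if true_time < window_start:
        return true_time                     # smear has not started yet
    t = min(true_time - window_start, w)     # clamp once the window has passed
    return true_time + smear_lie(t, w)       # add the partial leap second

# Example: a ten-hour window ending at midnight. Halfway through, half a
# second has been smeared in; by midnight the full second is absorbed.
w = 10 * 3600.0
print(smear_lie(0.0, w), smear_lie(w / 2, w), smear_lie(w, w))  # ~0.0, ~0.5, ~1.0

Because each update only moves a client’s clock by a few milliseconds, ordinary NTP clients accept the adjustment as normal drift correction rather than rejecting the server.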

In an experiment, we performed two smears—one negative, then one positive—and tested this setup using about 10,000 servers. We’d previously added monitoring to plot the skew between atomic time, our Stratum 2 servers and all those NTP clients, allowing us to constantly evaluate the performance of our time infrastructure. We were excited to see the monitoring plots show those servers’ clocks tracking our model’s predictions, and to see that we continued to serve users’ requests without errors.

Following the successful test, we reconfigured all our production Stratum 2 NTP servers with details of the actual leap second, ready for New Year’s Eve, when they would automatically activate the smear for all production machines, without any further human intervention required. We had a “big red button” opt-out that allowed us to stop the smear in case anything went wrong.

What we learned

The leap smear is talked about internally in the Site Reliability Engineering group as one of our coolest workarounds. It took a lot of experimentation and verification, but it paid off by ultimately saving us massive amounts of time and energy that would otherwise have gone into inspecting and refactoring code. It meant that we didn’t have to sweep our entire (large) codebase, and that Google engineers developing code don’t have to worry about leap seconds. The team involved was a handful of people, distributed around the world, who were able to work together without restriction to solve this problem.

The solution to this challenge drove a lot of thinking about better ways to implement locking and consistency, and to synchronize units of work between servers across the world. It also made us think more about the precision of our time systems, which has a knock-on effect on our ability to minimize resource wastage and run greener data centers, by reducing the amount of time we spend waiting for responses and the amount of excess work we do.

By anticipating potential problems and developing solutions like these, the Site Reliability Engineering group informs and inspires the development of new technology for distributed systems—the systems that you use every day in Google’s products.

The 2011 Google Earth Outreach Developer Grant awardees

The nonprofit mapping community is alive with amazing game-changing ideas. In May, Google Earth Outreach asked nonprofit organizations to think big: what kind of map would they want to create if they had the funding or developer resources to do so? We were thrilled by the number of applications we received, full of concrete ideas for tremendously impactful maps.

While it was difficult to select the projects with the highest potential impact from the long list of great applications we received, we are excited to announce the Google Earth Outreach Developer Grant awardees. Each organization below proposed a cutting-edge visualization in the public benefit sector, using a broad spectrum of tools ranging from narrated tours in Google Earth, to Google Maps and Places API applications for Android, to Google Fusion Tables. In total, we’ve awarded over $300,000 to the Google Earth Outreach Developer Grantees. We congratulate all the awardees for developing proposals that we hope will help them make the world a better place.

These organizations are all currently making great progress towards their project goals. Within the coming months, they will complete development of their mapping applications. We look forward to sharing the completed projects with you on the Google Earth Outreach site, so check back soon!

Atlantic Public Media – One Species at a Time: Stories of Biodiversity on the Move, with the Encyclopedia of Life (in Google Earth narrated tours)
California Academy of Sciences – A Global Water Story: translating immersive programming about water from the Planetarium to Google Earth
David Suzuki Foundation – Our Natural Capital: mapping ecosystem services in southern Ontario, Canada
Golden Gate Parks Conservancy – The Story of Crissy Field: the transformation of an urban park in Google Earth
HabitatMap, Inc. – AirCasting: citizen air quality monitoring using Android devices
The HALO Trust – Notes from the (Mine)Field: a Google Earth tour of humanitarian landmine clearance
International Rivers Network – The Wrong Climate for Damming Rivers
The Nature Conservancy – Adopt an Acre in Google Earth & Maps
Pepperwood Foundation, on behalf of iNaturalist.org – iNaturalist App on Android: citizen naturalists armed with Android devices can upload photos of flora and fauna to iNaturalist.org
Save the Elephants – Tracking Animals for Conservation: real-time mapping in the field on Android and publishing elephant tracking data in Fusion Tables
Water for People – SanMap: supporting sanitation-related businesses in urban African cities
When I Walk, Inc. – AXSmap: mobile app using the Google Places API for reviews and ratings of accessibility
Widecast (Wider Caribbean Sea Turtle Conservation Network – Bonaire) – Track Your Turtles: The Great Migration Game and sea turtle monitoring in Bonaire
World Resources Institute – Google Earth Tour of Reefs at Risk
World Wildlife Fund – Eyes on the Forest: interactive map of Sumatran deforestation

These organizations were funded through the Google Inc. Charitable Giving Fund at the Tides Foundation.