Google Earth: Canada’s largest intact forest

In just three minutes, you can take a non-stop, coast-to-coast Google Earth narrated tour of Earth's "green halo": the boreal forest. The Pew Environment Group takes you over the vast northern forests and waterways and unveils an ecosystem that stores twice as much carbon per acre as tropical rainforests, holds more freshwater than any other continental-scale ecosystem and teems with wildlife. Watch the tour below or download the KML file to view it in Google Earth.

The Pew Environment Group is the conservation arm of The Pew Charitable Trusts, a nongovernmental organization that works globally to protect our oceans, preserve wild lands and promote clean energy. Pew and its sister organization, the Canadian Boreal Initiative, developed this tour to illustrate the nature of the boreal forest and its ability to store massive amounts of carbon, primarily in its soil and wetlands. The tour is featured at the launch of Google Earth Outreach in Canada, happening this week.

Viewers will see bears, wolves and caribou that still roam this vast landscape; learn about aboriginal communities that depend on the boreal; view the Peace-Athabasca Delta, one of the most important wetlands in the world; and visit the last refuges of North America's Atlantic salmon.

The Peace-Athabasca Delta viewed in the Pew Environment Group's new Google Earth tour.

Unfortunately, Canada’s boreal forest is increasingly affected by large-scale industrial activities. A rapidly expanding footprint of development already includes 180 million acres (728,000 km²) affected by forestry, road building, mining, oil and gas extraction, and hydropower.

Pew and CBI have worked with aboriginal communities, conservation groups, and federal, provincial and territorial governments to protect the boreal, resulting in 185 million acres set aside from development to date, including key wetland and river areas. That total represents more than 12% of Canada's 1.2 billion-acre (nearly 4.9 million km²) boreal forest.

Visit us online to learn more about the steps we can take together to protect this global treasure.

The Time Protocols

Have you ever had a watch that ran slow or fast, and that you'd correct every morning off your bedside clock? Computers have the same problem. Many of them, including some desktop and laptop computers, use a service called the "Network Time Protocol" (NTP), which does something very similar: it periodically checks the computer's time against a more accurate server, which may be connected to an external source of time, such as an atomic clock. NTP also takes into account variable factors like how long the NTP server takes to reply and the speed of the network between you and the server, so it can set the clock on the computer you're using to within a second or better.
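
As a rough sketch of what such a check looks like on the wire, the snippet below sends a minimal SNTP (simplified NTP) query and compares the server's clock to the local one. The server name pool.ntp.org and the two-second timeout are illustrative choices on our part, and a real NTP client does considerably more: it estimates round-trip delay from several samples and slews the clock gradually rather than trusting a single reading.

import socket
import struct
import time

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01).
NTP_TO_UNIX_OFFSET = 2208988800

def sntp_time(server="pool.ntp.org", timeout=2.0):
    """Fetch the current time from an NTP server with a minimal SNTP query."""
    # 48-byte request; first byte 0x1B = leap indicator 0, version 3, client mode.
    packet = b"\x1b" + 47 * b"\x00"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(48)
    # The server's transmit timestamp (its seconds field) lives at byte offset 40.
    ntp_seconds = struct.unpack("!I", data[40:44])[0]
    return ntp_seconds - NTP_TO_UNIX_OFFSET

if __name__ == "__main__":
    print("server says:", sntp_time(), " local clock:", int(time.time()))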

Soon after the advent of ticking clocks, scientists observed that the time they told (and, these days, the time told by far more accurate clocks) rarely matched the time told by the Earth's position exactly. It turns out that being on an imperfect revolving sphere floating in space, reshaped by earthquakes and volcanic eruptions and dragged around by gravitational forces, makes your rotation somewhat irregular. Who knew?

These fluctuations in Earth’s rotational speed mean that even very accurate clocks, like the atomic clocks used by global timekeeping services, occasionally have to be adjusted slightly to bring them in line with “solar time.” There have been 24 such adjustments, called “leap seconds,” since they were introduced in 1972. Their effect on technology has become more and more profound as people come to rely on fast, accurate and reliable technology.

Why time matters at Google

Having accurate time is critical to everything we do at Google. Keeping replicas of data up to date, correctly reporting the order of searches and clicks, and determining which data-affecting operation came last are all examples of why accurate time is crucial to our products and to our ability to keep your data safe.

Very large-scale distributed systems, like ours, demand that time be well-synchronized and expect that time always moves forwards. Computers traditionally accommodate leap seconds by setting their clock backwards by one second at the very end of the day. But this “repeated” second can be a problem. For example, what happens to write operations that happen during that second? Does email that comes in during that second get stored correctly? What about all the unforeseen problems that may come up with the massive number of systems and servers that we run? Our systems are engineered for data integrity, and some will refuse to work if their time is sufficiently “wrong.” We saw some of our clustered systems stop accepting work on a small scale during the leap second in 2005, and while it didn’t affect the site or any of our data, we wanted to fix such issues once and for all.
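
To make the hazard concrete, here is a small illustration of our own (not from the original post) of why a clock that can step backwards breaks naive time arithmetic. Code that measures durations or orders events with the wall clock can see time run backwards across a step, while a monotonic clock cannot; the sleep below is just a stand-in for real work.

import time

def measure():
    """Time an operation against both the wall clock and a monotonic clock."""
    start_wall = time.time()       # wall clock: can be stepped backwards
    start_mono = time.monotonic()  # monotonic clock: never goes backwards

    time.sleep(0.5)  # stand-in for real work

    elapsed_wall = time.time() - start_wall
    elapsed_mono = time.monotonic() - start_mono

    # If the system clock were stepped back one second during the work
    # (the traditional leap-second fix), elapsed_wall could come out
    # negative; elapsed_mono cannot.
    print(f"wall: {elapsed_wall:+.3f}s  monotonic: {elapsed_mono:+.3f}s")

measure()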

This was the problem that a group of our engineers identified during 2008, with a leap second scheduled for December 31. Given our observations in 2005, we wanted to be ready this time, and in the future. How could we make sure everything at Google stays running as if nothing happened, when all our server clocks suddenly see the same second happening twice? Also, how could we make this solution scale? Would we need to audit every line of code that cares about the time? (That’s a lot of code!)

The solution we came up with came to be known as the “leap smear.” We modified our internal NTP servers to gradually add a couple of milliseconds to every update, varying over a time window before the moment when the leap second actually happens. This meant that when it became time to add an extra second at midnight, our clocks had already taken this into account, by skewing the time over the course of the day. All of our servers were then able to continue as normal with the new year, blissfully unaware that a leap second had just occurred. We plan to use this “leap smear” technique again in the future, when new leap seconds are announced by the IERS.

Here’s the science bit

Usually, when a leap second is almost due, the NTP protocol says a server must indicate this to its clients by setting the "Leap Indicator" (LI) field in its response. This indicates that the last minute of that day will have 61 seconds, or 59 seconds. (Leap seconds can, in theory, be used to shorten a day too, although that hasn't happened to date.) Rather than doing this, we patched the NTP server software on our internal Stratum 2 NTP servers so that it would not set LI and would instead tell a small "lie" about the time, modulating this "lie" over a time window w before midnight:

lie(t) = (1.0 - cos(pi * t / w)) / 2.0

What this did was make sure that the "lie" we were telling our servers about the time wouldn't trigger any undesirable behavior in the NTP clients, such as suspecting the time servers were wrong and applying local corrections themselves. It also kept each update small enough that software performing synchronized actions or holding Chubby locks wouldn't lose those locks or abandon any operations. Best of all, it meant this software didn't have to be aware of, or resilient to, the leap second at all.
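
For illustration, here is the smear curve in code. The ten-hour window is a made-up example value, not necessarily the window that was actually used; the point is the shape of the curve, whose slope is zero at both edges of the window, so the correction starts and ends gently.

import math

def lie(t, w):
    """Fraction of the one-second correction applied t seconds into a
    smear window of length w: 0.0 at the window's start, 1.0 (the whole
    leap second) at its end. Matches the cosine formula above."""
    return (1.0 - math.cos(math.pi * t / w)) / 2.0

# Hypothetical 10-hour smear window ending at the leap second.
w = 10 * 3600.0
for hours in (0, 1, 2.5, 5, 7.5, 9, 10):
    print(f"{hours:4.1f}h into window: {1000 * lie(hours * 3600.0, w):7.1f} ms applied")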

In an experiment, we performed two smears, one negative and then one positive, and tested this setup on about 10,000 servers. We'd previously added monitoring to plot the skew between atomic time, our Stratum 2 servers and all of those NTP clients, allowing us to constantly evaluate the performance of our time infrastructure. We were excited to see the monitoring plots show those servers' clocks tracking our model's predictions, and to see that we continued to serve users' requests without errors.

Following the successful test, we reconfigured all our production Stratum 2 NTP servers with details of the actual leap second, ready for New Year’s Eve, when they would automatically activate the smear for all production machines, without any further human intervention required. We had a “big red button” opt-out that allowed us to stop the smear in case anything went wrong.

What we learned

Within the Site Reliability Engineering group, the leap smear is talked about as one of our coolest workarounds. It took a lot of experimentation and verification, but it paid off by ultimately saving us massive amounts of time and energy that would otherwise have gone into inspecting and refactoring code. It meant that we didn't have to sweep our entire (large) codebase, and that Google engineers developing new code don't have to worry about leap seconds. The team that solved this problem was a handful of people, distributed around the world, who were able to work together without restriction.

The solution to this challenge drove a lot of thinking about better ways to implement locking and consistency, and to synchronize units of work between servers across the world. It also made us think harder about the precision of our time systems, which has a knock-on effect on our ability to minimize resource wastage and run greener data centers, by reducing the time servers spend waiting for responses and cutting down on redundant work.

By anticipating potential problems and developing solutions like these, the Site Reliability Engineering group informs and inspires the development of new technology for distributed systems—the systems that you use every day in Google’s products.

Maps API for Flash

We launched the Google Maps API for Flash in May 2008 in response to strong demand from ActionScript developers for a way to integrate Google Maps into their applications and exploit the performance and cross-platform strengths of Flash.

However, use of the Maps API for Flash remains a small percentage of overall Maps API traffic, with only a limited number of applications taking advantage of features unique to it. In addition, the performance and consistency of browser JavaScript implementations have progressed, making the JavaScript Maps API an increasingly suitable alternative.

Consequently, we have decided to deprecate the Maps API for Flash in order to focus our attention on the JavaScript Maps API v3 going forward. Although Maps API for Flash applications will continue to function in accordance with the deprecation policy given in the Maps API Terms of Service, no new features will be developed, and only critical bugs, regressions and security issues will be fixed. We will continue to provide support to existing Google Maps API Premier customers using the Maps API for Flash, but will wind down Developer Relations involvement in the Maps API for Flash forum.

We understand that this decision will be disappointing for Maps API for Flash developers, and we hope you will consider migrating your applications to the Maps API v3, which offers many additional benefits, such as Street View, Fusion Tables integration, Places search and full support for mobile browsers. Our Developer Relations team and many skilled members of the JavaScript Maps API community are available on the Google Maps JavaScript API v3 forum to assist you in doing so.

Google remains supportive of Flash as a development platform for Rich Internet Applications on Chrome, Android and other devices. However, by consolidating our development on the Maps API v3, we can focus all of our resources on delivering great new Maps API features for the benefit of as many developers as possible.