Searching closer

The global trends were revealed in the 2011 year-end Zeitgeist. In addition to looking broadly at topics from more than 40 countries around the world, the site included the most popular local terms from more than 30 major cities in the United States. Looking at these lists, I was surprised to see that, from city to city, almost all the terms across all the lists were related to local news, education, civic services or entertainment and activities. I wanted to take a deeper dive into these specific local trends, which you won’t see published in Zeitgeist.

In nearly every single U.S. city we looked at, the top ten local terms showed that people were using Google to find local news stations and learn more about educational organizations. Searches for school districts, universities and local libraries made the list in ten states, from the Hennepin County Library in Minneapolis, MN to the Chicago Public Library in Illinois. Pittsburgh, PA was the most media-hungry city in Zeitgeist this year, with all of its top five terms related to local news stations, while in New York City and Houston, TX, no news sources made the top ten lists.

The data also showed some interesting regional differences within single states. Orlando, FL was a leading city in terms of education, with related terms making up 40% of the list. But in Miami, less than 250 miles away, no education-related terms made the list. While local TV station WRAL came in at the top of the list in Raleigh, NC, people three hours away in Charlotte bumped the station to the #3 spot, behind the local Charlotte Observer, which came in at #1.

People in many U.S. cities were also using Google to connect with civic services, from the Maricopa County Assessor in Phoenix, AZ to the Cuyahoga County Auditor in Columbus, OH. We also saw many terms related to public transit and traffic, which was of particular interest to residents of Los Angeles, who put SigAlert, the California Highway Patrol’s traffic report site, as the #1 most popular local term. The city most concerned with getting out and about, though, was New York, where the top four search terms were all related to transportation, be it by subway, train or car.

With national unemployment in the United States above 9% for most of the year, there was a strong interest in unemployment and finding jobs locally. The term unemployment came up on lists in Kansas City, Philadelphia and Portland, OR, and in total more than ten of the 31 cities we looked at had a term related to unemployment or job-seeking on their list. In Detroit, MI and Houston, TX, the only two cities where two unemployment-related terms made the top ten, Google searchers were also taking initiative, looking at job posting sites like the Michigan Talent Bank or classifieds like Greensheets to try and find their next job online.

The local lists didn’t just reflect hard times. Nearly every city had a mall, fair, amusement park, sports team or other fun activity or destination in the top ten (the only holdout was Washington D.C.). Indianapolis, IN and Phoenix both had their respective lotteries in the number three spot—looks like lots of people there were feeling lucky! Across the country, Americans also looked for ways to relax, from restaurant week in Boston to the Lenox mall in Atlanta and Summerfest in Madison, WI.

The searches people make are a fantastic pulse on what is happening around the world and close to home. We pay attention to what you’re looking for so we can come up with new and better ways to help you find it, no matter where you’re searching from.

Open Street Maps vs Ordnance Survey

In this post I’m going to look at how OSM data compares in terms of quality, and what better benchmark to test it against than the Ordnance Survey (OS), the historic and much-revered national mapping agency of Great Britain.

A UK company called ITO has produced a very useful OSM Analysis tool, which displays details of roads that are listed in the Ordnance Survey’s OS Locator dataset but are not present in the OSM dataset. The tool is provided as a service for OSM mappers, the assumption being that the OS dataset is both complete and accurate, and that roads listed on the site are therefore “missing” from the OSM dataset and need to be found and added.

Looking at the analysis for Norwich, there are a total of 38 roads (out of 1,619) in the OS Locator dataset for Norwich that are not present in the OSM dataset. This works out at a 97.65% match between OSM and OS, and ranks Norwich 41st for OSM completeness out of the listed UK areas. Being both a competitive person and proud of Norwich as my (adopted) home, I thought I’d set about tracking down those missing 38 roads and making the OSM map of Norwich 100% complete with respect to the Ordnance Survey dataset.

But hang on, wasn’t this post supposed to be about quality, not completeness? And here’s the interesting thing: it turns out that the “missing” roads are not due to incomplete data, but rather to discrepancies in data quality. To explain, the list of roads currently shown in the OS Locator dataset but not in OSM is as follows:

  • ALKMAAR WAY
  • ATTHILL ROAD
  • BIRBECK ROAD
  • BLACK HORSE OPENING
  • BRIGHTY’S OPENING – minor error
  • BRUNEL WAY
  • BUMSTEDE COURT
  • CHARGE ROAD
  • CHURSTON CLOSE
  • CLOVELLY DRIVE
  • CLOVELLY DRIVE
  • CLYFFE COTTAGES
  • DRAYTON ROAD (A1067)
  • EBENEZER PLACE
  • EDEN CLOSE
  • EDWARD JORDELL PLAIN
  • GEORGE DANCE PLACE
  • GILDERS WAY
  • GUNTON LANE
  • HOOPER LANE
  • KETT’S HILL (B1140) – minor error
  • MIDDLETON’S LANE – minor error
  • NEWBRGIN ROAD
  • PLUMSTEAD ROAD
  • PRINCE ANDREW’S ROAD – minor error
  • QUEENS ROAD (A147)
  • RANDLE GREEN
  • ST FAITH’S ROAD – minor error
  • ST FAITH’S ROAD – minor error
  • ST FAITH’S ROAD – minor error
  • ST JAMES COURT
  • ST JOHN STREET
  • THE MONASTERY
  • THREE KING LANE
  • TRILITHORN CLOSE
  • UPPER ST GILES STREET
  • WALL LANE
  • WARRINGTON WAY

The first thing to note is that 7 of the errors are listed by the OSM Analysis tool itself as “minor” errors. In every case, the discrepancy is in the use of apostrophes in the road name. I’m not about to get into an argument about English usage as to which version is correct, but as far as I’m concerned, if the only difference between the two datasets is the use of an apostrophe, I’m happy to mark that down as not relevant in terms of spatial data completeness or accuracy.

I was surprised to see some of the roads listed as apparently missing: Queen’s Road, for example, is a major part of the Norwich inner ring road – how could that be missing from OSM? The answer can be found by examining the map that shows where the “absent” Ordnance Survey record is supposed to lie:

[image: map showing where the OS Locator record for Queen’s Road is supposed to lie]

Queen’s Road becomes Bracondale at a fairly nondescript location: it’s not a major intersection, just a bend in the road. The OS Locator dataset contains one section of road labelled “Queen’s Road” which, in the OSM dataset, has been labelled “Bracondale”; essentially it’s just a discrepancy over exactly where the road changes name. Again, I’m happy to ignore that one.

Another road on the list that lies not far from me is “Birkbeck Road”. Or, at least, that’s what I thought it was called, and that’s what it’s called in OSM. But the OS dataset seems to think it’s called “Birbeck Road”. Could it be that I’ve been wrong? I went to check Google Maps:

[image: Google Maps screenshot showing “Birbeck Road”]

Google Maps reckons it’s called “Birbeck Road”, so maybe the Ordnance Survey is right, and OSM is wrong. To double check, I went to Bing Maps:

[image: Bing Maps screenshot showing “Birkbeck Road”]

Huh? So that’s two votes for “Birbeck”, and two for “Birkbeck”. I’d better carry on looking…

www.yell.com goes with “Birkbeck”, so that’s 3-2 to the OSM spelling.

[image: www.yell.com screenshot showing “Birkbeck Road”]

But www.viamichelin.co.uk agrees with the Ordnance Survey. 3 all.

[image: www.viamichelin.co.uk screenshot showing “Birbeck Road”]

www.streetmap.co.uk agrees with OSM….

[image: www.streetmap.co.uk screenshot showing “Birkbeck Road”]

….until you zoom in, at which point it changes its mind and agrees with the Ordnance Survey!

[image: www.streetmap.co.uk zoomed in, now showing “Birbeck Road”]

And this is where Open Street Map really excels: it’s collaboratively created by people on the ground with local knowledge. With the aid of my bicycle and a camera, I could demonstrate that ethos and settle this dispute once and for all. So, I now proudly present…

[image: photograph of the street sign for Birkbeck Road]

Birkbeck Road. So it turns out that Open Street Maps is correct, and the Ordnance Survey is wrong (as are Google Maps, Via Michelin, and StreetMap.co.uk). Enthused by this, and still on my bike, I decided to check some of the other apparently missing roads. The following photograph explains the apparently missing EDWARD JORDELL PLAIN, which, once again, OSM correctly lists as Edward Jodrell Plain.

[image: photograph of the street sign for Edward Jodrell Plain]

Interestingly, once again Google Maps agrees with the Ordnance Survey, and gets it wrong.

[image: Google Maps screenshot showing “Edward Jordell Plain”]

Whereas Bing Maps agrees with Open Street Maps and gets the correct spelling:

[image: Bing Maps screenshot showing “Edward Jodrell Plain”]

Over the next week or so I’m hoping to work my way through the rest of the list, but I continue to be amazed by the data available through Open Street Maps. Not only does it contain more data than Bing or Google (points of interest such as postboxes, bus stops, property boundaries, and any number of other amenities), but in many cases it has higher-quality data than the Ordnance Survey, at least within the very limited sample I’ve looked at here. In fact, whereas I originally set out to update Open Street Maps to match the data from the Ordnance Survey, I’m beginning to think that the folks at the Ordnance Survey ought to be looking at the OSM data very closely and checking their data against it. And did I mention that OSM is free…?

Android 3.0 Hardware Acceleration


One of the biggest changes we made to Android in this release is the addition of a new rendering pipeline so that applications can benefit from hardware accelerated 2D graphics. Hardware accelerated graphics is nothing new to the Android platform; it has always been used for window composition and OpenGL games, for instance. With this new rendering pipeline, however, applications themselves can benefit from an extra boost in performance. On a Motorola Xoom device, all the standard applications like Browser and Calendar use hardware-accelerated 2D graphics.

In this article, I will show you how to enable the hardware accelerated 2D graphics pipeline in your application and give you a few tips on how to use it properly.

Go faster

To enable the hardware accelerated 2D graphics, open your AndroidManifest.xml file and add the following attribute to the <application> tag:

    android:hardwareAccelerated="true"

If your application uses only standard widgets and drawables, this should be all you need to do. Once hardware acceleration is enabled, all drawing operations performed on a View’s Canvas are performed using the GPU.

If you have custom drawing code you might need to do a bit more, which is in part why hardware acceleration is not enabled by default. And it’s why you might want to read the rest of this article, to understand some of the important details of acceleration.

Controlling hardware acceleration

Because of the characteristics of the new rendering pipeline, you might run into issues with your application. Problems usually manifest themselves as invisible elements, exceptions or different-looking pixels. To help you, Android gives you 4 different ways to control hardware acceleration. You can enable or disable it on the following elements:

  • Application
  • Activity
  • Window
  • View

To enable or disable hardware acceleration at the application or activity level, use the XML attribute mentioned earlier. The following snippet enables hardware acceleration for the entire application but disables it for one activity:

    <application android:hardwareAccelerated="true">
        <activity ... />
        <activity android:hardwareAccelerated="false" />
    </application>

If you need more fine-grained control, you can enable hardware acceleration for a given window at runtime:

    getWindow().setFlags(
        WindowManager.LayoutParams.FLAG_HARDWARE_ACCELERATED,
        WindowManager.LayoutParams.FLAG_HARDWARE_ACCELERATED);

Note that you currently cannot disable hardware acceleration at the window level. Finally, hardware acceleration can be disabled on individual views:

    view.setLayerType(View.LAYER_TYPE_SOFTWARE, null);

Layer types have many other usages that will be described later.

Am I hardware accelerated?

It is sometimes useful for an application, or more likely a custom view, to know whether it currently is hardware accelerated. This is particularly useful if your application does a lot of custom drawing and not all operations are properly supported by the new rendering pipeline.

There are two different ways to check whether the application is hardware accelerated:

  • View.isHardwareAccelerated() returns true if the View is attached to a hardware accelerated window.
  • Canvas.isHardwareAccelerated() returns true if the Canvas is hardware accelerated.

If you must do this check in your drawing code, it is highly recommended to use Canvas.isHardwareAccelerated() instead of View.isHardwareAccelerated(). Indeed, even when a View is attached to a hardware accelerated window, it can be drawn using a non-hardware accelerated Canvas. This happens for instance when drawing a View into a bitmap for caching purpose.
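
To make this concrete, here is a minimal sketch (the class name and drawing logic are hypothetical) of a custom View that checks the Canvas it receives and only uses drawTextOnPath(), an operation that, as described in the next section, the hardware pipeline does not support, when it is drawing in software:

    import android.content.Context;
    import android.graphics.Canvas;
    import android.graphics.Paint;
    import android.graphics.Path;
    import android.util.AttributeSet;
    import android.view.View;

    public class CurvedLabelView extends View {
        private final Paint mPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
        private final Path mPath = new Path();

        public CurvedLabelView(Context context, AttributeSet attrs) {
            super(context, attrs);
            mPaint.setTextSize(32.0f);
        }

        @Override
        protected void onSizeChanged(int w, int h, int oldw, int oldh) {
            super.onSizeChanged(w, h, oldw, oldh);
            mPath.reset();
            mPath.moveTo(0.0f, h);
            mPath.quadTo(w / 2.0f, 0.0f, w, h);
        }

        @Override
        protected void onDraw(Canvas canvas) {
            super.onDraw(canvas);
            // Check the Canvas, not the View: even in an accelerated window
            // this view may be handed a software Canvas (a bitmap cache,
            // for example).
            if (canvas.isHardwareAccelerated()) {
                canvas.drawText("Hello", 0.0f, getHeight() / 2.0f, mPaint);
            } else {
                canvas.drawTextOnPath("Hello", mPath, 0.0f, 0.0f, mPaint);
            }
        }
    }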

What drawing operations are supported?

The current hardware accelerated 2D pipeline supports the most commonly used Canvas operations, and then some. We implemented all the operations needed to render the built-in applications, all the default widgets and layouts, and common advanced visual effects (reflections, tiled textures, etc.) There are however a few operations that are currently not supported, but might be in a future version of Android:

  • Canvas
    • clipPath
    • clipRegion
    • drawPicture
    • drawPoints
    • drawPosText
    • drawTextOnPath
    • drawVertices
  • Paint
    • setLinearText
    • setMaskFilter
    • setRasterizer

In addition, some operations behave differently when hardware acceleration is enabled:

  • Canvas
    • clipRect: XOR, Difference and ReverseDifference clip modes are ignored; 3D transforms do not apply to the clip rectangle
    • drawBitmapMesh: colors array is ignored
    • drawLines: anti-aliasing is not supported
    • setDrawFilter: can be set, but ignored
  • Paint
    • setDither: ignored
    • setFilterBitmap: filtering is always on
    • setShadowLayer: works with text only
  • ComposeShader
    • A ComposeShader can only contain shaders of different types (a BitmapShader and a LinearGradientShader for instance, but not two instances of BitmapShader)
    • A ComposeShader cannot contain a ComposeShader

If drawing code in one of your views is affected by any of the missing features or limitations, you don’t have to miss out on the advantages of hardware acceleration for your overall application. Instead, consider rendering the problematic view into a bitmap or setting its layer type to LAYER_TYPE_SOFTWARE. In both cases, you will switch back to the software rendering pipeline.
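
For example, if one of your custom views relies on clipPath(), you can leave the rest of the window accelerated and push just that view back to software from your Activity (the view id below is hypothetical):

    // Assume R.id.chart is a custom view whose onDraw() calls
    // Canvas.clipPath(), which the hardware pipeline does not support yet.
    View chart = findViewById(R.id.chart);
    // Only this view falls back to the software pipeline; the rest of the
    // window keeps using hardware acceleration.
    chart.setLayerType(View.LAYER_TYPE_SOFTWARE, null);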

Dos and don’ts

Switching to hardware accelerated 2D graphics is a great way to get smoother animations and faster rendering in your application but it is by no means a magic bullet. Your application should be designed and implemented to be GPU friendly. It is easier than you might think if you follow these recommendations:

  • Reduce the number of Views in your application: the more Views the system has to draw, the slower it will be. This applies to the software pipeline as well; it is one of the easiest ways to optimize your UI.
  • Avoid overdraw: always make sure that you are not drawing too many layers on top of each other. In particular, make sure to remove any Views that are completely obscured by other opaque views on top of them. If you need to draw several layers blended on top of each other, consider merging them into a single one. A good rule of thumb with current hardware is to not draw more than 2.5 times the number of pixels on screen per frame (and transparent pixels in a bitmap count!)
  • Don’t create render objects in draw methods: a common mistake is to create a new Paint, or a new Path, every time a rendering method is invoked. This is not only wasteful, forcing the system to run the GC more often; it also bypasses caches and optimizations in the hardware pipeline (see the sketch after this list).
  • Don’t modify shapes too often: complex shapes (paths and circles, for instance) are rendered using texture masks. Every time you create or modify a Path, the hardware pipeline must create a new mask, which can be expensive.
  • Don’t modify bitmaps too often: every time you change the content of a bitmap, it needs to be uploaded again as a GPU texture the next time you draw it.
  • Use alpha with care: when a View is made translucent using View.setAlpha(), an AlphaAnimation or an ObjectAnimator animating the “alpha” property, it is rendered in an off-screen buffer which doubles the required fill-rate. When applying alpha on very large views, consider setting the View’s layer type to LAYER_TYPE_HARDWARE.
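
As a small illustration of the point about render objects above (the class and field names are hypothetical), Paint and Path objects should be created once and reused on every draw pass rather than allocated inside onDraw():

    import android.content.Context;
    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.graphics.Paint;
    import android.util.AttributeSet;
    import android.view.View;

    public class BadgeView extends View {
        // Created once and reused on every draw pass.
        private final Paint mBadgePaint = new Paint(Paint.ANTI_ALIAS_FLAG);

        public BadgeView(Context context, AttributeSet attrs) {
            super(context, attrs);
            mBadgePaint.setColor(Color.RED);
        }

        @Override
        protected void onDraw(Canvas canvas) {
            super.onDraw(canvas);
            // Calling "new Paint()" here instead would pressure the GC and
            // bypass the hardware pipeline's caches on every frame.
            canvas.drawCircle(getWidth() / 2.0f, getHeight() / 2.0f,
                    getWidth() / 4.0f, mBadgePaint);
        }
    }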

View layers

Since Android 1.0, Views have had the ability to render into off-screen buffers, either by using a View’s drawing cache, or by using Canvas.saveLayer(). Off-screen buffers, or layers, have several interesting usages. They can be used to get better performance when animating complex Views or to apply composition effects. For instance, fade effects are implemented by using Canvas.saveLayer() to temporarily render a View into a layer and then compositing it back on screen with an opacity factor.

Because layers are so useful, Android 3.0 gives you more control over how and when to use them. To do so, we have introduced a new API called View.setLayerType(int type, Paint p). This API takes two parameters: the type of layer you want to use and an optional Paint that describes how the layer should be composited. The paint parameter may be used to apply color filters, special blending modes or opacity to a layer. A View can use one of 3 layer types:

  • LAYER_TYPE_NONE: the View is rendered normally, and is not backed by an off-screen buffer.
  • LAYER_TYPE_HARDWARE: the View is rendered in hardware into a hardware texture if the application is hardware accelerated. If the application is not hardware accelerated, this layer type behaves the same as LAYER_TYPE_SOFTWARE.
  • LAYER_TYPE_SOFTWARE: the View is rendered in software into a bitmap.

The type of layer you will use depends on your goal:

  • Performance: use a hardware layer type to render a View into a hardware texture. Once a View is rendered into a layer, its drawing code does not have to be executed until the View calls invalidate(). Some animations, for instance alpha animations, can then be applied directly onto the layer, which is very efficient for the GPU to do.
  • Visual effects: use a hardware or software layer type and a Paint to apply special visual treatments to a View. For instance, you can draw a View in black and white using a ColorMatrixColorFilter (see the sketch after this list).
  • Compatibility: use a software layer type to force a View to be rendered in software. This is an easy way to work around limitations of the hardware rendering pipeline.
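
For instance, here is a minimal sketch of the black-and-white effect mentioned above, applied by compositing a hardware layer with a desaturating Paint (view is assumed to be any View in a hardware accelerated window):

    // Desaturate the view by compositing its layer with a color filter.
    ColorMatrix matrix = new ColorMatrix();
    matrix.setSaturation(0.0f);
    Paint grayscalePaint = new Paint();
    grayscalePaint.setColorFilter(new ColorMatrixColorFilter(matrix));
    view.setLayerType(View.LAYER_TYPE_HARDWARE, grayscalePaint);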

Layers and animations

Hardware-accelerated 2D graphics help deliver a faster and smoother user experience, especially when it comes to animations. Running an animation at 60 frames per second is not always possible when animating complex views that issue a lot of drawing operations. If you are running an animation in your application and do not obtain the smooth results you want, consider enabling hardware layers on your animated views.

When a View is backed by a hardware layer, some of its properties are handled by the way the layer is composited on screen. Setting these properties will be efficient because they do not require the view to be invalidated and redrawn. Here is the list of properties that will affect the way the layer is composited; calling the setter for any of these properties will result in optimal invalidation and no redraw of the targeted View:

  • alpha: to change the layer’s opacity
  • x, y, translationX, translationY: to change the layer’s position
  • scaleX, scaleY: to change the layer’s size
  • rotation, rotationX, rotationY: to change the layer’s orientation in 3D space
  • pivotX, pivotY: to change the layer’s transformations origin

These properties are the names used when animating a View with an ObjectAnimator. If you want to set/get these properties, call the appropriate setter or getter. For instance, to modify the alpha property, call setAlpha(). The following code snippet shows the most efficient way to rotate a View in 3D around the Y axis:

    view.setLayerType(View.LAYER_TYPE_HARDWARE, null);
    ObjectAnimator.ofFloat(view, "rotationY", 180).start();

Since hardware layers consume video memory, it is highly recommended you enable them only for the duration of the animation. This can be achieved with animation listeners:

    view.setLayerType(View.LAYER_TYPE_HARDWARE, null);
    ObjectAnimator animator = ObjectAnimator.ofFloat(
         view, "rotationY", 180);
    animator.addListener(new AnimatorListenerAdapter() {
        @Override
        public void onAnimationEnd(Animator animation) {
            // Release the hardware layer, and its video memory, now that
            // the animation is finished.
            view.setLayerType(View.LAYER_TYPE_NONE, null);
        }
    });
    animator.start();

New drawing model

Along with hardware-accelerated 2D graphics, Android 3.0 introduces another major change in the UI toolkit’s drawing model: display lists, which are only enabled when hardware acceleration is turned on. To fully understand display lists and how they may affect your application it is important to also understand how Views are drawn.

Whenever an application needs to update a part of its UI, it invokes invalidate() (or one of its variants) on any View whose content has changed. The invalidation messages are propagated all the way up the view hierarchy to compute the dirty region, that is, the region of the screen that needs to be redrawn. The system then draws any View in the hierarchy that intersects with the dirty region. The drawing model is therefore made of two stages:

  1. Invalidate the hierarchy
  2. Draw the hierarchy

There are unfortunately two drawbacks to this approach. First, this drawing model requires execution of a lot of code on every draw pass. Imagine for instance your application calls invalidate() on a button and that button sits on top of a more complex View like a MapView. When it comes time to draw, the drawing code of the MapView will be executed even though the MapView itself has not changed.

The second issue with that approach is that it can hide bugs in your application. Since views are redrawn anytime they intersect with the dirty region, a View whose content you changed might be redrawn even though invalidate() was not called on it. When this happens, you are relying on another View getting invalidated to obtain the proper behavior. Needless to say, this behavior can change every time you modify your application ever so slightly. Remember this rule: always call invalidate() on a View whenever you modify data or state that affects this View’s drawing code. This applies only to custom code since setting standard properties, like the background color or the text in a TextView, will cause invalidate() to be called properly internally.
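
A minimal sketch of that rule (the class and property are hypothetical): whenever you change state that the View's drawing code depends on, the setter should invalidate the View itself rather than rely on a neighbouring View being redrawn:

    import android.content.Context;
    import android.graphics.Canvas;
    import android.util.AttributeSet;
    import android.view.View;

    public class GaugeView extends View {
        private float mLevel; // value rendered by onDraw()

        public GaugeView(Context context, AttributeSet attrs) {
            super(context, attrs);
        }

        public void setLevel(float level) {
            if (mLevel != level) {
                mLevel = level;
                // Without this call the view may keep showing stale content,
                // which becomes especially visible with display lists.
                invalidate();
            }
        }

        @Override
        protected void onDraw(Canvas canvas) {
            super.onDraw(canvas);
            // Drawing code that depends on mLevel goes here.
        }
    }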

Android 3.0 still relies on invalidate() to request screen updates and draw() to render views. The difference is in how the drawing code is handled internally. Rather than executing the drawing commands immediately, the UI toolkit now records them inside display lists. This means that display lists do not contain any logic, but rather the output of the view hierarchy’s drawing code. Another interesting optimization is that the system only needs to record/update display lists for views marked dirty by an invalidate() call; views that have not been invalidated can be redrawn simply by re-issuing the previously recorded display list. The new drawing model now contains 3 stages:

  1. Invalidate the hierarchy
  2. Record/update display lists
  3. Draw the display lists

With this model, you cannot rely on a View intersecting the dirty region to have its draw() method executed anymore: to ensure that a View’s display list will be recorded, you must call invalidate(). This kind of bug becomes very obvious with hardware acceleration turned on and is easy to fix: you would see the previous content of a View after changing it.

Using display lists also benefits animation performance. In the previous section, we saw that setting specific properties (alpha, rotation, etc.) does not require invalidating the targeted View. This optimization also applies to views with display lists (any View when your application is hardware accelerated.) Let’s imagine you want to change the opacity of a ListView embedded inside a LinearLayout, above a Button. Here is what the (simplified) display list of the LinearLayout looks like before changing the list’s opacity:

    DrawDisplayList(ListView)
    DrawDisplayList(Button)

After invoking listView.setAlpha(0.5f) the display list now contains this:

    SaveLayerAlpha(0.5)
    DrawDisplayList(ListView)
    Restore
    DrawDisplayList(Button)

The complex drawing code of ListView was not executed. Instead the system only updated the display list of the much simpler LinearLayout. In previous versions of Android, or in an application without hardware acceleration enabled, the drawing code of both the list and its parent would have to be executed again.

It’s your turn

Enabling hardware accelerated 2D graphics in your application is easy, particularly if you rely solely on standard views and drawables. Just keep in mind the few limitations and potential issues described in this document and make sure to thoroughly test your application!