We consider the problem of labeling linear objects (such as streets)
in interactive maps where the user can pan, zoom, and rotate
continuously. Our labels contain text (such as street names). They
are embedded into the objects they label, i.e., they
follow the curvature of the objects,
they do not move with respect to the map background,
but they scale in order to maintain constant size on the screen.
To the best of our knowledge,
this is the first work that deals with curved labels
in interactive maps.
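The constant-on-screen-size behavior described above can be sketched in a few lines. This is an illustrative assumption about the camera model (not the authors' implementation): we assume the zoom factor maps world units to pixels linearly, so keeping a label at a fixed pixel height means dividing by the zoom.

```python
# Hedged sketch: keeping a map-embedded label at constant screen size.
# Assumption (not from the original work): the camera maps world units
# to pixels as pixels = world_units * zoom.

def label_world_height(screen_height_px: float, zoom: float) -> float:
    """Return the label height in world (map) units so that the label
    always occupies screen_height_px pixels on screen."""
    return screen_height_px / zoom

# As the user zooms in (zoom grows), the label shrinks in world units
# and therefore stays the same size on screen.
print(label_world_height(16.0, 2.0))  # 8.0 world units
print(label_world_height(16.0, 4.0))  # 4.0 world units
```

Because the label is embedded in the map, panning and rotation leave it fixed relative to the map background; only zooming requires this rescaling.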
Our objective is to label as many streets as possible and to select label positions of high quality while forbidding labels from overlapping at street crossings. We present a simple but effective algorithm that takes curvature and crossings into account and produces aesthetically pleasing labelings. Averaged over all interaction types, our implementation achieves interactive frame rates of more than 85 frames per second.
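The selection step described above can be sketched as a greedy pass over label candidates. Everything here is an illustrative assumption rather than the authors' implementation: the `Candidate` structure, the scalar `quality` score (standing in for the curvature-aware evaluation), and the rule that each crossing may be covered by at most one label.

```python
# Hedged sketch of a greedy street-labeling pass in the spirit of the
# algorithm described above. Names and the scoring rule are assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Candidate:
    street: str            # street this label candidate belongs to
    quality: float         # higher = straighter, better-placed label
    crossings: frozenset   # ids of street crossings the label covers

def greedy_labeling(candidates):
    """Pick at most one label per street, best quality first, skipping
    candidates that would overlap a crossing already covered by a label."""
    used_crossings = set()
    labeled_streets = set()
    chosen = []
    for cand in sorted(candidates, key=lambda c: -c.quality):
        if cand.street in labeled_streets:
            continue  # this street already has a label
        if cand.crossings & used_crossings:
            continue  # would overlap another label at a crossing
        chosen.append(cand)
        labeled_streets.add(cand.street)
        used_crossings |= cand.crossings
    return chosen

candidates = [
    Candidate("Main St", 0.9, frozenset({1})),
    Candidate("Elm St", 0.8, frozenset({1})),   # blocked: crossing 1 taken
    Candidate("Elm St", 0.5, frozenset({2})),   # fallback position is free
]
print([c.street for c in greedy_labeling(candidates)])  # ['Main St', 'Elm St']
```

A greedy pass like this is attractive for interactive maps because it runs in near-linear time after sorting, which helps sustain interactive frame rates.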
We have implemented our algorithm.
For our tests, we extracted a map of our hometown Würzburg from
an OpenStreetMap data set.
The extracted set contains 620 polylines.
Moreover, we have implemented several camera paths. In the following two videos, we show a camera path in which all types of interaction (panning, zooming in, zooming out, and rotation) are executed. We show only a rather small part of the map since we intend to simulate a navigation system.
The first video gives an impression of how our heuristic works.
It shows a larger part of the map than
a navigation system would; the extent of the virtual display is indicated by a brighter rectangle.
The video shows the interactions in slow motion.
These two aids enable the viewer to better follow the behavior of our algorithm.
If the video does not work, you can download it (39 MB, DivX-compressed).
The second video reflects a natural speed of interaction; it shows only
the virtual display (e.g., of a navigation system).
If the video does not work, you can download it (24 MB, DivX-compressed).
The labelings of the two videos may differ slightly: as the speed of the movement differs, the number of visible segments of a street differs and, in turn, so does the evaluation.
The third video shows the same camera path as before, but from a bird's-eye view. Here, our labels are perspectively distorted.
If the video does not work, you can download it (48 MB).
Acknowledgments

We thank Dennis Zwiebler for implementing our algorithm and Ben Morgan for very helpful advice that greatly improved the running time of the implementation.
When using a digital map in navigation mode, labels along the user's route
are particularly important. Consequently, these labels should
be clearly legible. On that account, in her
PhD thesis (download),
Nadine Schwartges also combined
the algorithm for embedding curved labels into their streets
with the algorithm for labeling streets along a route with billboards.
For the following video, we use the same map as before.
If the video does not work, you can download it (24 MB).