Plot maps with base mapping tools and ggmap in R

Plot maps with ‘base’ mapping tools in R

Understanding what kind of data you have (polygons or points?) and what you want to map is pivotal before you start mapping.

  1. First you need a shapefile of the area you want to plot, such as metropolitan France. There are various resources to get them from: DIVA-GIS and EUROSTAT are the ones I use the most. It’s always important to have a .prj file included, as your final map ‘should’ be projected. I say “should” because sometimes it is just not possible, especially if you work with historical maps.
  2. Load the libraries (a minimal list is sketched below).
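
All the steps below lean on a handful of packages. A minimal sketch of the libraries used throughout this post (assuming they are already installed):

library(rgdal)        # readOGR, to read shapefiles
library(maps)         # map, for quick plots of spatial objects
library(ggplot2)      # fortify and ggplot
library(plyr)         # join, to merge data onto the polygons
library(classInt)     # classIntervals, to compute class breaks
library(RColorBrewer) # brewer.pal, for color palettes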

Load and prepare data

setwd(mypath) # folder containing the FRA_adm2 shapefile
fr.prj <- readOGR(".", "FRA_adm2")
## OGR data source with driver: ESRI Shapefile
## Source: ".", layer: "FRA_adm2"
## with 96 features
## It has 18 fields
## NOTE: rgdal::checkCRSArgs: no proj_defs.dat in PROJ.4 shared files
map(fr.prj)
## Warning in SpatialPolygons2map(database, namefield = namefield): database
## does not (uniquely) contain the field 'name'.

head(fr.prj@data)
##   ID_0 ISO NAME_0 ID_1    NAME_1  ID_2         NAME_2   VARNAME_2
## 0   76 FRA France  989    Alsace 13755       Bas-Rhin  Unterelsaß
## 1   76 FRA France  989    Alsace 13756      Haut-Rhin   Oberelsaß
## 2   76 FRA France  990 Aquitaine 13757       Dordogne        <NA>
## 3   76 FRA France  990 Aquitaine 13758        Gironde Bec-D'Ambes
## 4   76 FRA France  990 Aquitaine 13759         Landes      Landas
## 5   76 FRA France  990 Aquitaine 13760 Lot-Et-Garonne        <NA>
##   NL_NAME_2 HASC_2 CC_2      TYPE_2  ENGTYPE_2 VALIDFR_2 VALIDTO_2
## 0      <NA>  FR.BR <NA> Département Department  17900226   Unknown
## 1      <NA>  FR.HR <NA> Département Department  17900226   Unknown
## 2      <NA>  FR.DD <NA> Département Department  17900226   Unknown
## 3      <NA>  FR.GI <NA> Département Department  17900226   Unknown
## 4      <NA>  FR.LD <NA> Département Department  17900226   Unknown
## 5      <NA>  FR.LG <NA> Département Department  17900226   Unknown
##   REMARKS_2 Shape_Leng Shape_Area
## 0      <NA>   4.538735  0.5840273
## 1      <NA>   3.214178  0.4198797
## 2      <NA>   5.012795  1.0389622
## 3      <NA>   9.200047  1.1489822
## 4      <NA>   5.531231  1.0372815
## 5      <NA>   4.489830  0.6062017
# load or create data
set.seed(100)
myvar <- rnorm(96)
# manipulate data for the plot
france.geodata  <- data.frame(id=rownames(fr.prj@data), mapvariable=myvar)
head(france.geodata)
##   id mapvariable
## 1  0  1.12200636
## 2  1  0.05912043
## 3  2 -1.05873510
## 4  3 -1.31513865
## 5  4  0.32392954
## 6  5  0.09152878

Use ggplot2

# fortify prepares the shape data for ggplot
france.dataframe <- fortify(fr.prj) # convert to data frame for ggplot
## Regions defined for each Polygons
head(france.dataframe)
##       long      lat order  hole piece id group
## 1 7.847912 49.04728     1 FALSE     1  0   0.1
## 2 7.844539 49.04495     2 FALSE     1  0   0.1
## 3 7.852439 49.04510     3 FALSE     1  0   0.1
## 4 7.854333 49.04419     4 FALSE     1  0   0.1
## 5 7.855955 49.04431     5 FALSE     1  0   0.1
## 6 7.856299 49.03776     6 FALSE     1  0   0.1
# now combine the values by the id variable present in both dataframes (join is from plyr)
france.dat <- join(france.geodata, france.dataframe, by="id")
head(france.dat)
##   id mapvariable     long      lat order  hole piece group
## 1  0    1.122006 7.847912 49.04728     1 FALSE     1   0.1
## 2  0    1.122006 7.844539 49.04495     2 FALSE     1   0.1
## 3  0    1.122006 7.852439 49.04510     3 FALSE     1   0.1
## 4  0    1.122006 7.854333 49.04419     4 FALSE     1   0.1
## 5  0    1.122006 7.855955 49.04431     5 FALSE     1   0.1
## 6  0    1.122006 7.856299 49.03776     6 FALSE     1   0.1
# Plot 3
p <- ggplot(data=france.dat, aes(x=long, y=lat, group=group))
p <- p + geom_polygon(aes(fill=mapvariable)) +
       geom_path(color="white",size=0.1) +
       coord_equal() +
       scale_fill_gradient(low = "#ffffcc", high = "#ff4444") +
       labs(title="Our map",fill="My variable")
# plot the map
p

(Figure: ggplot choropleth of the French départements, filled by the simulated variable.)

Use base plot

nclassint <- 5 # number of colors to be used in the palette
cat <- classIntervals(myvar, nclassint, style="jenks") # style refers to how the breaks are created
colpal <- brewer.pal(nclassint, "RdBu")
color <- findColours(cat, rev(colpal)) # palette reversed so that high values come out red
bins <- cat$brks
lb <- length(bins)
plot(fr.prj, col=color, border=T)
legend("bottomleft", fill=rev(colpal), legend=paste(round(bins[-length(bins)],1), ":", round(bins[-1],1)), cex=1, bg="white")

(Figure: base R choropleth of the same variable with Jenks breaks and a legend.)

Find color breaks for mapping (fast)

I’ve stumbled upon a little trick to compute Jenks breaks faster than with the classInt package; just be sure to use n+1 instead of n, as the breaks are computed a little differently. That is to say, if you want 5 classes, ask for n = 6 break points, no biggie there.

The function getJenksBreaks comes from the BAMMtools library; for more on the Bayesian Analysis of Macroevolutionary Mixtures, see the BAMMtools documentation.

install.packages("BAMMtools")
library(BAMMtools)
system.time(getJenksBreaks(mydata$myvar, 6))
##    user  system elapsed
##   0.970   0.001   0.971

On the other hand, this takes way more time with large datasets:

library(classInt)
system.time(classIntervals(mydata$myvar, n=5, style="jenks"))
## Timing stopped at: 1081.894 1.345 1083.511
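
As a quick sanity check of the n+1 convention, both calls below return six break points delimiting five classes; the two vectors should be very close, though not necessarily identical, since the breaks are computed a little differently (myvar is the simulated variable from above):

getJenksBreaks(myvar, 6)                       # 6 break points = 5 classes
classIntervals(myvar, n=5, style="jenks")$brks # also 6 break points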

1887 crude mortality rate in Spain using classInt package

(Figures: Crude Mortality Rate in Spain, 1887 Census, classified with Jenks breaks; companion maps show the quantile, bclust, and fisher classifications.)

nclassint <- 5 # number of colors to be used in the palette
cat <- classIntervals(dt$TBM, nclassint, style="jenks")
colpal <- brewer.pal(nclassint, "Reds")
color <- findColours(cat, colpal) # sequential palette
bins <- cat$brks
lb <- length(bins)
cat

## style: jenks
## [20.3,25.9] (25.9,30.5] (30.5,34.4] (34.4,38.4] (38.4,58.2]
##          68         114         130         115          35

Save the number of areas falling into each class, for every classification style, into a data.frame (dat):

type first second third fourth fifth
1 quantile    91     93    92     91    95
2       sd    10    202   244      5     0
3    equal   100    246   113      2     1
4   kmeans    68    115   142    118    19
5    jenks    68    114   130    115    35
6   hclust   100    174   153     34     1
7   bclust    53    120   275     13     1
8   fisher    68    114   130    115    35
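
For reference, a hedged sketch of how a table like this could be assembled, looping classIntervals over the styles and counting how many areas fall into each class (dt$TBM is the mortality variable from above; the sketch assumes every style returns exactly five classes, which is not guaranteed for styles such as sd):

styles <- c("quantile", "sd", "equal", "kmeans", "jenks", "hclust", "bclust", "fisher")
counts <- lapply(styles, function(s) {
  ci <- classIntervals(dt$TBM, n = nclassint, style = s)
  as.vector(table(cut(dt$TBM, breaks = ci$brks, include.lowest = TRUE)))
})
dat <- data.frame(type = styles, do.call(rbind, counts)) # one row per style
names(dat)[2:6] <- c("first", "second", "third", "fourth", "fifth")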

and melt it into a long format (required by ggplot):

dat1 <- melt(dat, id.vars=c("type"), value.name="n.breaks") # melt is from reshape2

ggplot(dat1, aes(x=variable, y=n.breaks, fill=type)) +
  geom_bar(stat="identity", position=position_dodge())

(Figure: grouped bar chart of the number of areas per class for each classification method.)

A match made in R: checking the order of geographical areas in shape files and in your data frames

Not every shape file is as nice as those provided in libraries. Sometimes we have to deal with historical maps, which have been hand-drawn, re-touched and what not. To work with geo-referenced data, it is essential that both the shape file and the data frame contain a uniquely coded variable with exactly the same number of areas and the same ordering in both files.

A quick way to check if shapefile and dataframe have the same number of areas:

nrow(df) == length(shape.file$Code)

In the shapefile, one can also select a couple of areas big enough so that they can easily be located, and plot them as “control” areas.
For instance, I want to select the area with code “15078” in the shapefile:
which(shape.file$Code=="15078")
## [1] 271

which is the area in the 271st position (in the same way, shape.file$Code[271] gives the code of area 271).
plot(shape.file)
plot(shape.file[c(271,898),],col="red",border="red",add=T)

This is an easy way to locate your “control” area(s).
(Figure: map with the two control areas highlighted in red.)
Ideally, you should have a variable in your data frame that is identical to one in the shapefile: a codification of some sort providing a unique code, the name of the area, or some factor that allows you to locate each area in space.

An easy way to check whether shape file and data frame have the same ordering of geographical areas is to test it:

code.sh <- cbind(c(1:length(shape.file$Code)), as.vector(shape.file$Code))
code.df <- cbind(c(1:nrow(df)), df$Code)
code.df == code.sh
##      [,1] [,2]
## [1,] TRUE TRUE
## [2,] TRUE TRUE
## [3,] TRUE TRUE

What if it’s not?

First option: the inelegant solution. Manually change the order of the areas in a csv file according to the exact order they have in the shape file. It’s easy, as you can create an ordinal index for the shapefile codes, paste it into Excel, and assign it with a vlookup function.

Second option: the smart R match. In R there is a function called match that returns a vector of the positions of first matches of its first argument in its second:

my.match <- match(df$Code, shape.file$Code)

NB: to use match, the two variables providing the code for the areas must share the very same set of unique codes, or else funny stuff happens. To check that everything is in its right place, you can plot the two “control” spatial polygons we chose in the beginning, using their position in the dataframe rather than in the shapefile:

plot(shape.file)
plot(shape.file[c(which(df$Code=="305"), which(df$Code=="15078")),], col="orange", add=T)
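
If the codes turn out not to be aligned, a minimal sketch of reordering the data frame to follow the shapefile (note that the arguments of match are reversed here, so the result indexes rows of df):

df.ordered <- df[match(shape.file$Code, df$Code), ] # rows of df in shapefile order
all(df.ordered$Code == shape.file$Code)             # sanity check: should be TRUE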

Game of Thrones maps in R…

The map of GOT world with rivers, roads, lakes, the Wall, and main cities:


Neighborhood relations according to the Sphere of Influence graph pretty much coincide with roads and rivers (package spdep):


Paste in some images to locate the (surviving) Stark family members, reading them with readPNG from the png library and drawing them with rasterImage:


Creating neighborhood matrices for Spatial Polygons

One of the first steps in spatial analysis is to create a neighborhood matrix, that is to say, to define a relationship/connection between each and (ideally!) every polygon. Why? Given that the premise of spatial analysis is that neighboring locations are more similar than far-away locations, we need to define what is “near”: a set of neighbors for each location capturing such dependence.

There are many ways to define neighbors, and usually, they are not interchangeable, meaning that one neighborhood definition will capture spatial autocorrelation differently from another.

In R the package spdep allows you to create a neighbor matrix according to a wide range of definitions: contiguity, radial distance, graph based, and triangulation (and more). The three main and most used definitions are: 1) contiguity based, of order 1 or higher, 2) distance based, and 3) graph based.

Install and load the maptools and spdep libraries, then read the shapefile of North Carolina counties that ships with maptools:

library(maptools)
library(spdep)
NC <- readShapePoly(system.file("shapes/sids.shp", package="maptools")[1], IDvar="FIPSNO", proj4string=CRS("+proj=longlat +ellps=clrk66"))

1. Contiguity based relations are the most used in the presence of irregular polygons with varying shape and surface, since contiguity ignores distance and focuses instead on the location of an area. The function poly2nb allows you to create two types of contiguity based relations:

  1. First Order Queen Contiguity defines a neighbor when at least one point on the boundary of one polygon is shared with at least one point of its neighbor (common border or corner);
    nb.FOQ <- poly2nb(NC, queen=TRUE, row.names=NC$FIPSNO) # row.names refers to the unique names of each polygon
    Calling nb.FOQ you get a summary of the neighbor matrix, including the total number of areas/counties, and average number of links:

    Neighbour list object:
    Number of regions: 100
    Number of nonzero links: 490
    Percentage nonzero weights: 4.9
    Average number of links: 4.9
  2. First Order Rook Contiguity does not include corners, only borders, thus comprising only polygons sharing more than one boundary point;
    nb.RK <- poly2nb(NC, queen=F, row.names=NC$FIPSNO)
    nb.RK
    Neighbour list object:
    Number of regions: 100
    Number of nonzero links: 462
    Percentage nonzero weights: 4.62
    Average number of links: 4.62
    NB: if there is a region without any link, there will be a message like this:
    Neighbour list object:
    Number of regions: 910
    Number of nonzero links: 4906
    Percentage nonzero weights: 0.5924405
    Average number of links: 5.391209
    10 regions with no links:
    1014 3507 3801 8245 9018 10037 22125 30005 390299 390399

    where you can identify the regions with no links (1014, 3507, …). In R it is possible to manually connect them, or to change the neighbor definition so that they are included in the matrix (graph based neighbors, for instance, guarantee links); a minimal sketch of adding a link by hand follows this list.
  3. Higher order neighbors are useful when looking at the effect of lags on spatial autocorrelation and in spatial autoregressive models like SAR with a more global spatial autocorrelation:

nb.FOQ <- poly2nb(NC, queen=TRUE, row.names=NC$FIPSNO) # first define first order queen to get to further lags
# Second Order Queen
nb.SOQ <- nblag(nb.FOQ, 2) # 2 is the lag; for 6th order neighbors you'd have nblag(nb, 6)
nb.RK <- poly2nb(NC, queen=F, row.names=NC$FIPSNO) # same here
# Second Order Rook
nb.SRC <- nblag(nb.RK, 2)

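As promised above, a minimal sketch of manually connecting a no-link region to a chosen neighbor; the indices i and j are hypothetical, and the link is added in both directions to keep the relation symmetric (spdep also provides edit.nb for doing this interactively):

nb <- nb.RK    # work on a copy of the neighbour list
i <- 5; j <- 9 # hypothetical positions of the two regions to connect
nb[[i]] <- sort(unique(setdiff(c(nb[[i]], as.integer(j)), 0L))) # spdep stores 'no neighbours' as 0, which we drop
nb[[j]] <- sort(unique(setdiff(c(nb[[j]], as.integer(i)), 0L)))
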
2. Distance based neighbors define a set of connections between polygons based either on (1) a given Euclidean distance between centroids, with dnearneigh, or on (2) a given number of nearest neighbors, with knn2nb (e.g. the 5 nearest neighbors);

coordNC <- coordinates(NC) # get the centroid coordinates
d05m <- dnearneigh(coordNC, 0, 0.5) # define the distance band (here 0 to 0.5, in the units of the coordinates)
nb.5NN <- knn2nb(knearneigh(coordNC, k=5), row.names=NC$FIPSNO) # set the number of neighbors (here 5)


A little trick: if you want information on neighbor distances, whatever the type of neighborhood may be:

distance <- unlist(nbdists(nb.5NN, coordNC))
summary(distance)
##   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
## 0.1197  0.3323  0.3956  0.4095  0.4716  0.9327

3. Graph based

  1. Delaunay triangulation tri2nb constructs neighbors through triangulation (the dual of the Voronoi diagram) such that each centroid is a triangle node. As a consequence, DT ensures that every polygon has a neighbor, even in the presence of islands. The “problem” with this specification is that it treats our area of study as if it were an island itself, without any neighbors (as if North Carolina had no Virginia or South Carolina around it)… Therefore, distant points that would not normally be neighbors (such as Cherokee and Brunswick counties, at opposite corners of the state) become such;
  2. Gabriel Graph gabrielneigh is a particular case of the DT, where a and b are two neighboring points/centroids if no other point/centroid lies within the circle passing through a and b with diameter ab;
  3. Sphere of Influence soi.graph: two points a and b are SOI neighbors if the circles centered on a and b, each with radius equal to its center’s nearest neighbour distance, intersect twice. It is a sort of Delaunay triangulation without the longest connections;
  4. Relative Neighbors relativeneigh is a particular case of GG: an edge ab belongs to the RN graph if the lens formed by the intersection of the two circles centered at a and b with radius ab does not contain any other point.

IDs <- row.names(as(NC, "data.frame")) # a vector with the names of each polygon (NC$FIPSNO)
delTrinb <- tri2nb(coordNC, row.names = IDs) # Delaunay triangulation
summary(unlist(nbdists(delTrinb, coordNC))) # recompute the distances for each neighbour object
##   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
## 0.1197  0.3473  0.4154  0.5673  0.5187  5.5830
SOInb <- graph2nb(soi.graph(delTrinb, coordNC), row.names = IDs) # Sphere of influence
summary(unlist(nbdists(SOInb, coordNC)))
##   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
## 0.1197  0.3257  0.3919  0.3958  0.4629  0.6460
GGnb <- graph2nb(gabrielneigh(coordNC), row.names = IDs) # Gabriel graph
summary(unlist(nbdists(GGnb, coordNC)))
##   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
## 0.1197  0.3191  0.3715  0.3813  0.4364  0.6777
RNnb <- graph2nb(relativeneigh(coordNC), row.names = IDs) # Relative neighbor graph
summary(unlist(nbdists(RNnb, coordNC)))
##   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
## 0.1197  0.2984  0.3464  0.3382  0.3815  0.5187
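
To visualise any of these neighbor definitions, a short sketch overlaying a neighbour object on the county map (spdep provides a plot method for nb objects); here the Delaunay triangulation:

plot(NC, border="grey")                       # county boundaries
plot(delTrinb, coordNC, add=TRUE, col="blue") # neighbour links between centroids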


Mean Age at Childbearing in Spain 2011

(Figure: map of the Mean Age at Childbearing in Spain, 2011.)