Plot maps with base mapping tools and ggmap in R

Plot maps with ‘base’ mapping tools in R

Understanding what kind of data you have (polygons or points?) and what you want to map is pivotal before you start mapping.

  1. First you need a shapefile of the area you want to plot, such as metropolitan France. There are various resources from which to get one: DIVA-GIS and EUROSTAT are the ones I use the most. It’s always important to have a .prj file included, as your final map ‘should’ be projected. I say “should” because sometimes it is just not possible, especially if you work with historical maps.
  2. Load the libraries (see the sketch below).
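
The posts below rely on a handful of packages; which ones you need depends on the parts you run:

library(rgdal)        # readOGR(), to read the shapefile
library(maps)         # map(), for a quick base plot
library(ggplot2)      # fortify() and the ggplot map
library(plyr)         # join(), to merge values onto polygons
library(classInt)     # classIntervals(), to compute class breaks
library(RColorBrewer) # brewer.pal(), for color palettes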

Load and prepare data

setwd(mypath) # path to the folder containing the shapefile
fr.prj <- readOGR(".", "FRA_adm2")
## OGR data source with driver: ESRI Shapefile
## Source: ".", layer: "FRA_adm2"
## with 96 features
## It has 18 fields
## NOTE: rgdal::checkCRSArgs: no proj_defs.dat in PROJ.4 shared files
map(fr.prj)
[Figure: quick base map of France drawn with map()]
## Warning in SpatialPolygons2map(database, namefield = namefield): database
## does not (uniquely) contain the field 'name'.

head(fr.prj@data)
##   ID_0 ISO NAME_0 ID_1    NAME_1  ID_2         NAME_2   VARNAME_2
## 0   76 FRA France  989    Alsace 13755       Bas-Rhin  Unterelsaß
## 1   76 FRA France  989    Alsace 13756      Haut-Rhin   Oberelsaß
## 2   76 FRA France  990 Aquitaine 13757       Dordogne        <NA>
## 3   76 FRA France  990 Aquitaine 13758        Gironde Bec-D'Ambes
## 4   76 FRA France  990 Aquitaine 13759         Landes      Landas
## 5   76 FRA France  990 Aquitaine 13760 Lot-Et-Garonne        <NA>
##   NL_NAME_2 HASC_2 CC_2      TYPE_2  ENGTYPE_2 VALIDFR_2 VALIDTO_2
## 0      <NA>  FR.BR <NA> Département Department  17900226   Unknown
## 1      <NA>  FR.HR <NA> Département Department  17900226   Unknown
## 2      <NA>  FR.DD <NA> Département Department  17900226   Unknown
## 3      <NA>  FR.GI <NA> Département Department  17900226   Unknown
## 4      <NA>  FR.LD <NA> Département Department  17900226   Unknown
## 5      <NA>  FR.LG <NA> Département Department  17900226   Unknown
##   REMARKS_2 Shape_Leng Shape_Area
## 0      <NA>   4.538735  0.5840273
## 1      <NA>   3.214178  0.4198797
## 2      <NA>   5.012795  1.0389622
## 3      <NA>   9.200047  1.1489822
## 4      <NA>   5.531231  1.0372815
## 5      <NA>   4.489830  0.6062017
# load or create data
set.seed(100)
myvar <- rnorm(96) # one simulated value for each of the 96 departments
# manipulate data for the plot
france.geodata  <- data.frame(id=rownames(fr.prj@data), mapvariable=myvar)
head(france.geodata)
##   id mapvariable
## 1  0  1.12200636
## 2  1  0.05912043
## 3  2 -1.05873510
## 4  3 -1.31513865
## 5  4  0.32392954
## 6  5  0.09152878

Use ggplot2

# fortify prepares the shape data for ggplot
france.dataframe <- fortify(fr.prj) # convert to data frame for ggplot
## Regions defined for each Polygons
head(france.dataframe)
##       long      lat order  hole piece id group
## 1 7.847912 49.04728     1 FALSE     1  0   0.1
## 2 7.844539 49.04495     2 FALSE     1  0   0.1
## 3 7.852439 49.04510     3 FALSE     1  0   0.1
## 4 7.854333 49.04419     4 FALSE     1  0   0.1
## 5 7.855955 49.04431     5 FALSE     1  0   0.1
## 6 7.856299 49.03776     6 FALSE     1  0   0.1
# now combine the values by id in both data frames
france.dat <- join(france.geodata, france.dataframe, by="id")
head(france.dat)
##   id mapvariable     long      lat order  hole piece group
## 1  0    1.122006 7.847912 49.04728     1 FALSE     1   0.1
## 2  0    1.122006 7.844539 49.04495     2 FALSE     1   0.1
## 3  0    1.122006 7.852439 49.04510     3 FALSE     1   0.1
## 4  0    1.122006 7.854333 49.04419     4 FALSE     1   0.1
## 5  0    1.122006 7.855955 49.04431     5 FALSE     1   0.1
## 6  0    1.122006 7.856299 49.03776     6 FALSE     1   0.1
# Plot 3
p <- ggplot(data=france.dat, aes(x=long, y=lat, group=group))
p <- p + geom_polygon(aes(fill=mapvariable)) +
       geom_path(color="white",size=0.1) +
       coord_equal() +
       scale_fill_gradient(low = "#ffffcc", high = "#ff4444") +
       labs(title="Our map",fill="My variable")
# plot the map
p
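
If you want to keep the result, the ggplot object can be saved to disk with ggsave (the file name is just an example):

ggsave("france_map.png", p, width = 7, height = 7) # write the map to a PNG file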

[Figure: ggplot choropleth map of France]

Use base plot

nclassint <- 5 # number of colors to be used in the palette
cat <- classIntervals(myvar, nclassint, style = "jenks") # style refers to how the breaks are created
colpal <- brewer.pal(nclassint, "RdBu")
color <- findColours(cat, rev(colpal)) # assign each polygon the color of its class
bins <- cat$brks
lb <- length(bins)
plot(fr.prj, col = color, border = TRUE)
legend("bottomleft", fill = rev(colpal),
       legend = paste(round(bins[-lb], 1), ":", round(bins[-1], 1)),
       cex = 1, bg = "white")

[Figure: base plot choropleth of France with Jenks class legend]


Find color breaks for mapping (fast)

I’ve stumbled upon a little trick to compute Jenks breaks faster than with the classInt package. Just be sure to ask for n+1 values instead of n, as the two functions count things differently: getJenksBreaks returns break points rather than classes, so if you want 5 classes, set k = 6. No biggie there.

The getJenksBreaks function comes from the BAMMtools library (BAMM stands for Bayesian Analysis of Macroevolutionary Mixtures, but the breaks function is useful on its own):

install.packages("BAMMtools")
library(BAMMtools)
system.time(getJenksBreaks(mydata$myvar, 6))
##    user  system elapsed
##   0.970   0.001   0.971

On the other hand, the same computation takes far longer with large datasets in classInt:

library(classInt)
system.time(classIntervals(mydata$myvar, n=5, style="jenks"))
## Timing stopped at: 1081.894 1.345 1083.511
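
If you want to plug the fast breaks back into the classInt workflow from the previous section, style = "fixed" accepts precomputed break points; a minimal sketch:

brks <- getJenksBreaks(myvar, 6) # 6 break points = 5 classes
cat.fast <- classIntervals(myvar, n = 5, style = "fixed", fixedBreaks = brks)
color <- findColours(cat.fast, brewer.pal(5, "Reds")) # same coloring step as before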

1887 crude mortality rate in Spain using the classInt package

[Figure: TBM_1887, Jenks breaks — Crude Mortality Rate in Spain, 1887 Census]

[Figures: TBM_1887 with quantile, bclust, and fisher breaks]

nclassint <- 5 # number of colors to be used in the palette
cat <- classIntervals(dt$TBM, nclassint, style = "jenks")
colpal <- brewer.pal(nclassint, "Reds")
color <- findColours(cat, colpal) # sequential
bins <- cat$brks
lb <- length(bins)
cat
## style: jenks
## [20.3,25.9] (25.9,30.5] (30.5,34.4] (34.4,38.4] (38.4,58.2]
##          68         114         130         115          35

Save the number of areas falling in each class, for each classification method, into a data.frame (dat):

type first second third fourth fifth
1 quantile    91     93    92     91    95
2       sd    10    202   244      5     0
3    equal   100    246   113      2     1
4   kmeans    68    115   142    118    19
5    jenks    68    114   130    115    35
6   hclust   100    174   153     34     1
7   bclust    53    120   275     13     1
8   fisher    68    114   130    115    35

and melt it into a long format (required by ggplot):

library(reshape2) # melt() lives here
dat1 <- melt(dat, id.vars = c("type"), value.name = "n.breaks")

ggplot(dat1, aes(x = variable, y = n.breaks, fill = type)) +
  geom_bar(stat = "identity", position = position_dodge())

[Figure: bar chart comparing the number of areas per class across classification methods]

A match made in R: checking the order of geographical areas in shape files and in your data frames

Not every shapefile is as nice as those provided in libraries. Sometimes we have to deal with historical maps, which have been hand-drawn, re-touched, and what not. To work with geo-referenced data, it is essential that both the shapefile and the data frame contain a uniquely coded variable with exactly the same number of areas and the same ordering in both files.

A quick way to check if shapefile and dataframe have the same number of areas:

nrow(df) == length(shape.file$Code)

In the shapefile, one can also select a couple of areas big enough that they can easily be located, and plot them as “control” areas.
For instance, I want to select the area with code “15078” in the shapefile:

which(shape.file$Code == "15078")
## [1] 271

which is the area in the 271st position (in the same way, shape.file$Code[271] returns the code of area 271).

plot(shape.file)
plot(shape.file[c(271, 898), ], col="red", border="red", add=TRUE)

This is an easy way to locate your “control” area(s).
[Figure: map with the two “control” areas highlighted in red]
Ideally, you should have a variable that is identical in the shapefile and in the data frame: a codification of some sort providing a unique code, the name of the area, or some factor that allows you to locate each area in space.

An easy way to check whether shapefile and data frame have the same ordering of geographical areas is to test it:

code.sh <- cbind(1:length(shape.file$Code), as.vector(shape.file$Code))
code.df <- cbind(1:nrow(df), df$Code)
code.df == code.sh
##       [,1] [,2]
## [1,]  TRUE TRUE
## [2,]  TRUE TRUE
## [3,]  TRUE TRUE

What if it’s not?
First option: the inelegant solution.
Manually change the order of the areas in a csv file to match the exact order they have in the shapefile. It’s easy, as you can create an ordinal index for the shapefile codes, paste it into Excel, and assign it with a vlookup function.
Second option: the smart R match.
In R there is a function called match that returns a vector of the positions of first matches of its first argument in its second:

my.match <- match(df$Code, shape.file$Code)

NB: to use match, the two variables providing the area codes must contain the very same unique and identical codes, or else funny stuff happens. To check that everything is in its right place, you can plot the two “control” spatial polygons we chose in the beginning, using their position in the data frame rather than in the shapefile:

plot(shape.file)
plot(shape.file[c(which(df$Code=="305"), which(df$Code=="15078")), ], col="orange", add=TRUE)
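
From there, match used the other way around lets you reorder the data frame so that it follows the shapefile; a minimal sketch, assuming both code variables are unique:

df.ordered <- df[match(shape.file$Code, df$Code), ] # rows now follow the polygon order
all(df.ordered$Code == shape.file$Code)             # should return TRUE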

Game of Thrones maps in R…

The map of the GOT world with rivers, roads, lakes, the Wall, and the main cities:

[Figure: map of the GOT world]

Neighborhood relations according to Sphere of Influence pretty much coincide with roads and rivers (package spdep):

[Figure: Sphere of Influence neighborhood links over the GOT map]

Paste in some images to locate the (surviving) Stark family members, reading the PNGs with the png library and drawing them with rasterImage:

[Figure: GOT map with images marking the Stark family members]

Creating neighborhood matrices for Spatial Polygons in R (updated)

One of the first steps in spatial analysis is to create a neighborhood matrix, that is to say, to create a relationship/connection between each and (ideally!) every polygon. Why? Because the premise of spatial analysis is that neighboring locations are more similar than far-away locations, so we need to define what is “near”: a set of neighbors for each location capturing such dependence.

There are many ways to define neighbors, and usually, they are not interchangeable, meaning that one neighborhood definition will capture spatial autocorrelation differently from another.

In R the package spdep allows one to create a neighbor matrix according to a wide range of definitions: contiguity, radial distance, graph based, and triangulation (and more). The three main, and most used, definitions are:

A) Contiguity based of order 1 or higher (most used in social sciences)

B) Distance based

C) Graph based

Install and load the maptools and spdep libraries, then read the shapefile of North Carolina counties that ships with maptools:

library(maptools)
library(spdep)
NC <- readShapePoly(system.file("shapes/sids.shp", package="maptools")[1],
                    IDvar="FIPSNO", proj4string=CRS("+proj=longlat +ellps=clrk66"))

A. Contiguity based relations

are the most used in the presence of irregular polygons with varying shape and surface, since contiguity ignores distance and focuses instead on the location of an area. The function poly2nb allows one to create two types of contiguity based relations:

1. First Order Queen Contiguity

FOQ contiguity defines a neighbor when at least one point on the boundary of one polygon is shared with at least one point of its neighbor (common border or corner);

nb.FOQ = poly2nb(NC, queen=TRUE, row.names=NC$FIPSNO)
#row.names refers to the unique names of each polygon
nb.FOQ
## Neighbour list object:
## Number of regions: 100
## Number of nonzero links: 490
## Percentage nonzero weights: 4.9
## Average number of links: 4.9

Calling nb.FOQ, you get a summary of the neighbor matrix, including the total number of areas/counties and the average number of links.

2. First Order Rook Contiguity

FOR contiguity does not include corners, only borders, thus comprising only polygons sharing more than one boundary point;

nb.RK = poly2nb(NC, queen=FALSE, row.names=NC$FIPSNO)
nb.RK
## Neighbour list object:
## Number of regions: 100
## Number of nonzero links: 462
## Percentage nonzero weights: 4.62
## Average number of links: 4.62

NB: if there is a region without any link, there will be a message like this:
Neighbour list object:
Number of regions: 910
Number of nonzero links: 4620
Percentage nonzero weights: 0.5924405
Average number of links: 5.391209
10 regions with no links:
1014 3507 3801 8245 9018 10037 22125 30005 390299 390399

where you can identify the regions with no links (1014, 3507, …) using which(…); in R it is then possible to “manually” connect them, or to change the neighbor definition so that they are included (e.g. by switching to graph or distance based neighbors).
Sometimes it also happens that polygons that have been retouched (sounds like blasphemy, but it happens a lot with historical maps) do not recognize shared borders. This is when manually setting up neighbors comes in handy (you can’t do that in GeoDa).
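
A hedged sketch of how such no-link regions can be found and patched by hand, assuming a generic neighbor list nb (the indices 5 and 12 are made up for illustration; nb objects are plain lists of integer vectors, and a region with no links stores the single value 0L):

no.links <- which(card(nb) == 0)          # card() counts the links of each region
nb[[5]] <- 12L                            # give the isolated region 5 a neighbor...
nb[[12]] <- sort(unique(c(nb[[12]], 5L))) # ...and add the reverse link to keep symmetry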

[Figure: queen and rook contiguity links for North Carolina]
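
Figures like the one above can be drawn by overlaying the neighbor list on the map with plot.nb; a minimal sketch:

coordNC <- coordinates(NC) # polygon centroids, used as link endpoints
plot(NC, border = "grey")  # county borders
plot(nb.FOQ, coordNC, add = TRUE, col = "blue") # one segment per link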

Higher order neighbors are useful when looking at the effect of lags on spatial autocorrelation, and in spatial autoregressive models like SAR, where spatial autocorrelation is more global in reach:

nb.SRC = nblag(nb.RK,2) #second order rook contiguity
nb.SRC
## [[1]]
## Neighbour list object:
## Number of regions: 100
## Number of nonzero links: 490
## Percentage nonzero weights: 4.9
## Average number of links: 4.9
##
## [[2]]
## Neighbour list object:
## Number of regions: 100
## Number of nonzero links: 868
## Percentage nonzero weights: 8.68
## Average number of links: 8.68
##
## attr(,"call")
## nblag(neighbours = nb.RK, maxlag = 2)
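
Note that nb.SRC[[1]] holds the first order links and nb.SRC[[2]] only the pure second order ones; if you need them merged into a single neighbor list, spdep provides nblag_cumul:

nb.2nd <- nb.SRC[[2]]         # pure second order rook neighbors
nb.cum <- nblag_cumul(nb.SRC) # first and second order combined in one nb object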

[Figure: second order rook contiguity links]

B. Distance based neighbors

DBN defines a set of connections between polygons based either on (1) a defined Euclidean distance between centroids, dnearneigh, or (2) a certain number of nearest neighbors, knn2nb (e.g. the 5 nearest neighbors):

coordNC = coordinates(NC) #get centroids coordinates
d05m = dnearneigh(coordNC, 0, 0.5, row.names=NC$FIPSNO)
nb.5NN = knn2nb(knearneigh(coordNC,k=5),row.names=NC$FIPSNO) #set the number of neighbors (here 5)
d05m
## Neighbour list object:
## Number of regions: 100
## Number of nonzero links: 430
## Percentage nonzero weights: 4.3
## Average number of links: 4.3
nb.5NN
## Neighbour list object:
## Number of regions: 100
## Number of nonzero links: 500
## Percentage nonzero weights: 5
## Average number of links: 5
## Non-symmetric neighbours list

A little trick: if you want information on neighbor distances, whatever the type of neighborhood may be:

distance = unlist(nbdists(nb.5NN, coordNC))
distance
##   [1] 0.3613728 0.3693554 0.3864847 0.2766561 0.5168459 0.3709748 0.2607982
##   [8] 0.3232974 0.4376632 0.2862144 0.5773310 0.3778483 0.4463538 0.2914539
## ...
## [498] 0.3407192 0.3995114 0.1838115

[Figure: distance based neighbor links]
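
One caveat with a fixed distance band is that remote regions can end up with no links; a common sketch uses the largest first-nearest-neighbor distance as the band, which guarantees every region at least one neighbor:

k1 <- knn2nb(knearneigh(coordNC, k = 1))      # each region's single nearest neighbor
max.dist <- max(unlist(nbdists(k1, coordNC))) # the largest of those distances
d.all <- dnearneigh(coordNC, 0, max.dist, row.names = NC$FIPSNO)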

C. Graph based (I’ve never used them, but it’s good to know that they exist)

Delaunay triangulation tri2nb constructs neighbors through Voronoi triangles such that each centroid is a triangle node. As a consequence, DT ensures that every polygon has a neighbor, even in the presence of islands. The “problem” with this specification is that it treats our area of study as if it were an island itself, without any neighbors (as if North Carolina were an island with no Virginia or South Carolina): distant points that should not be neighbors (such as Cherokee and Brunswick counties) become such;
Gabriel Graph gabrielneigh is a particular case of the DT, where a and b are neighbors if no other point/centroid lies inside the circle passing through a and b with diameter ab;
Sphere of Influence soi.graph: two points a and b are SOI neighbors if the circles centered on a and b, with radii equal to the nearest-neighbor distances of a and b respectively, intersect twice. It is a sort of Delaunay triangulation without the longest connections;
Relative Neighbors relativeneigh is a particular case of the GG: an edge ab belongs to the RN graph if the lens formed by the two circles centered at a and b with radius ab contains no other point.

delTrinb = tri2nb(coordNC, row.names=NC$FIPSNO) #Delaunay triangulation
summary(delTrinb)
## Neighbour list object:
## Number of regions: 100
## Number of nonzero links: 574
## Percentage nonzero weights: 5.74
## Average number of links: 5.74
## Link number distribution:
##
##  2  3  4  5  6  7  8  9 10
##  1  2 13 29 27 22  3  1  2
## 1 least connected region:
## 37039 with 2 links
## 2 most connected regions:
## 37005 37179 with 10 links
GGnb = graph2nb(gabrielneigh(coordNC), row.names=NC$FIPSNO) #Gabriel graph
summary(GGnb)
## Neighbour list object:
## Number of regions: 100
## Number of nonzero links: 204
## Percentage nonzero weights: 2.04
## Average number of links: 2.04
## 20 regions with no links:
## 37109 37131 37137 37141 37145 37147 37151 37159 37161 37165 37173 37175 37179 37183 37185 37187 37189 37195 37197 37199
## Non-symmetric neighbours list
## Link number distribution:
##
##  0  1  2  3  4  5  6  7
## 20 27 16 15 13  7  1  1
## 27 least connected regions:
## 37047 37053 37055 37075 37091 37105 37107 37113 37115 37117 37119 37121 37129 37133 37135 37139 37143 37149 37153 37155 37157 37163 37167 37177 37181 37191 37193 with 1 link
## 1 most connected region:
## 37057 with 7 links
SOInb = graph2nb(soi.graph(delTrinb, coordNC), row.names=NC$FIPSNO) #sphere of influence
summary(SOInb)
## Neighbour list object:
## Number of regions: 100
## Number of nonzero links: 470
## Percentage nonzero weights: 4.7
## Average number of links: 4.7
## Link number distribution:
##
##  1  2  3  4  5  6  7  9
##  1  5 12 26 30 15 10  1
## 1 least connected region:
## 37031 with 1 link
## 1 most connected region:
## 37097 with 9 links
RNnb = graph2nb(relativeneigh(coordNC), row.names=NC$FIPSNO) #relative graph
summary(RNnb)
## Neighbour list object:
## Number of regions: 100
## Number of nonzero links: 133
## Percentage nonzero weights: 1.33
## Average number of links: 1.33
## 31 regions with no links:
## 37047 37053 37097 37107 37109 37115 37131 37137 37141 37143 37145 37147 37151 37155 37159 37161 37163 37165 37167 37173 37175 37179 37183 37185 37187 37189 37191 37193 37195 37197 37199
## Non-symmetric neighbours list
## Link number distribution:
##
##  0  1  2  3  4
## 31 30 18 17  4
## 30 least connected regions:
## 37009 37027 37031 37035 37037 37039 37055 37073 37075 37083 37091 37095 37105 37113 37117 37119 37121 37125 37127 37129 37133 37135 37139 37149 37153 37157 37169 37171 37177 37181 with 1 link
## 4 most connected regions:
## 37001 37003 37059 37079 with 4 links

[Figure: graph based neighbor links]

What to do with all this stuff? …

compute and compare global Moran’s I
LISA maps
Variograms and correlograms
…?
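
As a first taste, here is a hedged sketch of a global Moran’s I test on the 1974 SIDS counts that ship with this shapefile, reusing one of the neighbor lists from above (any of the others can be plugged in, which is precisely the point of comparing them):

lw <- nb2listw(nb.FOQ, style = "W", zero.policy = TRUE) # row-standardized weights
moran.test(NC$SID74, lw, zero.policy = TRUE)            # global Moran's I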

Mean Age at Childbearing in Spain 2011

[Figure: Mean Age at Childbearing in Spain, 2011]