Commit f92f6c0

Sara Safavi committed
add mosaicing exercise & key for SciPy class
1 parent c8b5b21 commit f92f6c0

File tree

7 files changed: +710, -0 lines changed
Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
{"type":"FeatureCollection","features":[{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-119.16183471679688,37.82903964181452],[-119.14947509765626,37.83663205340172],[-119.13745880126953,37.846392577323286],[-119.13574218750001,37.856422880849514],[-119.13883209228514,37.86645181975611],[-119.12406921386719,37.86916210952103],[-119.12200927734376,37.875937397778614],[-119.1212688230194,37.90572368618133],[-119.13740499245301,37.930641295117404],[-119.16595458984376,37.92659678938742],[-119.18243408203126,37.9447389942697],[-119.2088161252655,37.95257263611974],[-119.25516469704283,37.92522514171301],[-119.2630611203827,37.88215253011582],[-119.25104482399598,37.84474832157969],[-119.18203695046083,37.82576791597315],[-119.16183471679688,37.82903964181452]]]}}]}
4 binary files changed (59 KB, 2.35 MB, 504 KB, 6.35 MB); previews not shown
Lines changed: 345 additions & 0 deletions
@@ -0,0 +1,345 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Creating a composite image from multiple PlanetScope scenes\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this exercise, you'll learn how to create a composite image (or mosaic) from multiple PlanetScope satellite images that cover an area of interest (AOI). We'll use `rasterio`, along with its vector-data counterpart `fiona`, to do this."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 1. Acquiring Imagery"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In order to visually search for imagery in our AOI, we'll use [Planet Explorer](https://www.planet.com/explorer/).\n",
    "\n",
    "For this exercise, we're going to visit Yosemite National Park. In the screenshot below you'll see an AOI drawn around [Mount Dana](https://en.wikipedia.org/wiki/Mount_Dana) on the eastern border of Yosemite. You can use [data/mt-dana-small.geojson](data/mt-dana-small.geojson) to search for this same AOI yourself.\n",
    "\n",
    "Here we want an image that depicts the mountain on a clear summer day, so for this data search in Planet Explorer we'll set the filters to show only scenes with less than 5% cloud cover, and narrow the date range to images captured between July 1 and July 31, 2017. Since we're only interested in PlanetScope data, and we're creating a visual - not analytic - product, we'll set the Source to `3-band PlanetScope Scene`. Finally, since we want a mosaic that covers our entire AOI, we'll set the Area coverage to full coverage."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![Mount Dana in Planet Explorer](images/explorer-mt-dana.gif)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As you can see in the animated gif above, this search yields multiple days within July 2017 that match our filters. After previewing a few days, I decided I like the look of July 21, 2017.\n",
    "\n",
    "After selecting a single day, you can roll over the individual images to preview their coverage. In the gif above, you'll notice that it takes three individual images to completely cover our AOI. In this instance, as I roll over each item in Planet Explorer I can see that the scenes' rectangular footprints extend far beyond Mount Dana. All three scenes overlap slightly, and one scene touches only a small section at the bottom of the AOI. Still, taken together the images provide 100% coverage, so we'll go ahead and place an order for the Visual imagery products for these three scenes."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![Download imagery Planet Explorer](images/explorer-data-order.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once the order is ready, download the images, extract them from the .zip, and move them into the `data/` directory adjacent to this Notebook."
   ]
  },
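  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "_Optional check (a minimal sketch, not part of the original exercise): the cells below assume the Visual GeoTIFFs live at paths like `data/<scene_id>/<scene_id>_3B_Visual.tif`. If you're unsure where your extracted files ended up, a quick `glob` can confirm the layout before moving on._"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# optional: list the extracted Visual GeoTIFFs to confirm they match the paths used below\n",
    "# (the directory layout here is an assumption based on how Planet orders typically unzip)\n",
    "import glob\n",
    "\n",
    "for path in sorted(glob.glob('data/*/*_3B_Visual.tif')):\n",
    "    print(path)"
   ]
  },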
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 2. Inspecting Imagery"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load our 3 images using rasterio\n",
    "\n",
    "import rasterio\n",
    "\n",
    "img1 = rasterio.open('data/20170721_175836_103c/20170721_175836_103c_3B_Visual.tif')\n",
    "img2 = rasterio.open('data/20170721_175837_103c/20170721_175837_103c_3B_Visual.tif')\n",
    "img3 = rasterio.open('data/20170721_175838_103c/20170721_175838_103c_3B_Visual.tif')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "At this point we can use `rasterio` to inspect the metadata of these three images. Specifically, in order to create a composite from these images, we want to verify that all three have the same data type, the same coordinate reference system, and the same band count:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(img1.meta['dtype'], img1.meta['crs'], img1.meta['count'])\n",
    "print(img2.meta['dtype'], img2.meta['crs'], img2.meta['count'])\n",
    "print(img3.meta['dtype'], img3.meta['crs'], img3.meta['count'])"
   ]
  },
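  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "_Optional (a small sketch added for this walkthrough): the same comparison can be done programmatically, which scales better than eyeballing printed values if you ever mosaic more than a handful of scenes._"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# optional: assert that the three images agree on the metadata fields we care about\n",
    "for key in ('dtype', 'crs', 'count'):\n",
    "    assert img1.meta[key] == img2.meta[key] == img3.meta[key], 'images disagree on ' + key\n",
    "print('dtype, crs, and count match across all three images')"
   ]
  },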
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Success - they do! But wait: we ordered a \"Visual\" product, so weren't we expecting only 3 bands of information (RGB)? Why does the metadata report a band count of 4?\n",
    "\n",
    "Let's take a closer look at what these bands contain:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Read in color interpretations of each band in img1 - here we'll assume img2 and img3 have the same values\n",
    "colors = [img1.colorinterp[band] for band in range(img1.count)]\n",
    "\n",
    "# take a look at img1's band types:\n",
    "for color in colors:\n",
    "    print(color.name)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The fourth channel is actually a binary alpha mask: this is common in satellite color models, and can be confirmed in Planet's [documentation on the PSScene3Band product](https://developers.planet.com/docs/api/psscene3band/).\n",
    "\n",
    "Now that we've verified all three satellite images have the same critical metadata, we can safely use `rasterio.merge` to stitch them together."
   ]
  },
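  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "_Optional (a sketch added for this walkthrough, not part of the original exercise): if you want to see the alpha mask for yourself, reading band 4 and listing its unique values should show just two values - 0 where there's no image data and 255 where there is._"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# optional: confirm band 4 really looks like a binary (0/255) alpha mask\n",
    "import numpy as np\n",
    "\n",
    "alpha = img1.read(4)\n",
    "print(np.unique(alpha))"
   ]
  },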
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 3. Creating the Mosaic"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from rasterio.merge import merge\n",
    "\n",
    "# merge returns the mosaic & coordinate transformation information\n",
    "(mosaic, transform) = merge([img1, img2, img3])"
   ]
  },
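  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "_Optional (a quick sketch added for this walkthrough): `merge` hands back the mosaic as a numpy array in (bands, rows, columns) order. Glancing at its shape now makes the `height`/`width` bookkeeping a couple of cells below easier to follow._"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# optional: shape is (bands, rows, columns); rows/columns become height/width when we write the file out\n",
    "print(mosaic.shape, mosaic.dtype)"
   ]
  },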
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once that process is complete, take a moment to congratulate yourself. At this stage you've successfully acquired adjacent imagery, inspected metadata, and performed a compositing process in order to generate a new mosaic. Well done!\n",
    "\n",
    "Before we go further, let's use `rasterio.plot` (a matplotlib interface) to preview the results of our mosaic. This will just give us a quick-and-dirty visual representation of the results, but it can be useful to verify the compositing did what we expected."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from rasterio.plot import show\n",
    "\n",
    "show(mosaic)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "At this point we're ready to write our mosaic out to a new GeoTIFF file. To do this, we'll want to grab the geospatial metadata from one of our original images (again, here we'll use img1 to represent the metadata of all 3 input images)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Grab a copy of our source metadata, using img1\n",
    "meta = img1.meta\n",
    "\n",
    "# Update the original metadata to reflect the specifics of our new mosaic\n",
    "meta.update({\"transform\": transform,\n",
    "             \"height\": mosaic.shape[1],\n",
    "             \"width\": mosaic.shape[2]})\n",
    "\n",
    "with rasterio.open('data/mosaic.tif', 'w', **meta) as dst:\n",
    "    dst.write(mosaic)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 4. Clip the Mosaic to AOI Boundaries"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now that we've successfully created a composite mosaic of three input images, the final step is to clip that mosaic to our area of interest. To do that, we'll create a mask for our mosaic based on the AOI boundaries, and crop the mosaic to the extents of that mask.\n",
    "\n",
    "You'll recall that we used Explorer to search for Mount Dana, in Yosemite National Park. The GeoJSON file we used for that search can also be used here, to provide a mask outline for our mosaic.\n",
    "\n",
    "For this step we're going to do a couple of things: first, we'll use rasterio's sister-library `fiona` to read in the GeoJSON file. Just as `rasterio` is used to manipulate raster data, `fiona` works similarly on vector data. Where `rasterio` represents raster imagery as numpy arrays, `fiona` represents vector data as GeoJSON-like Python dicts. You can learn [more about fiona here](http://toblerity.org/fiona/manual.html).\n",
    "\n",
    "After reading in the GeoJSON you'll want to extract the _geometry_ of the AOI (_hint:_ `geometry` will be the dict key).\n",
    "\n",
    "#### A note about Coordinate Reference Systems\n",
    "\n",
    "If you attempt to apply the AOI to the mosaic imagery now, rasterio will throw errors, telling you that the two datasets do not overlap. This is because the Coordinate Reference Systems (CRS) used by the two datasets do not match. You can verify this by reading the `crs` attribute of the Collection object generated by `fiona.open()`.\n",
    "\n",
    "_Hint: the CRS of mt-dana-small.geojson is:_ `'epsg:4326'`\n",
    "\n",
    "You'll recall that earlier we validated the metadata of the original input imagery, and learned it had a CRS of `'epsg:32611'`.\n",
    "\n",
    "Before the clip can be applied, you will need to transform the geometry of the AOI to match the CRS of the imagery. Luckily, fiona can apply the necessary mathematical transformation to a set of coordinates, converting them to values in a new CRS: apply `fiona.transform.transform_geom` to the AOI geometry to do this, specifying the GeoJSON's CRS as the source CRS and the imagery's CRS as the destination CRS."
   ]
  },
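  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "_Optional (a sketch added for this walkthrough): before transforming anything, you can see the CRS mismatch for yourself by printing the `crs` attribute of the fiona Collection alongside the imagery's CRS._"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# optional: show the two coordinate reference systems side by side\n",
    "import fiona\n",
    "\n",
    "with fiona.open('data/mt-dana-small.geojson') as mt:\n",
    "    print('AOI CRS:    ', mt.crs)\n",
    "\n",
    "print('imagery CRS:', img1.crs)"
   ]
  },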
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# use rasterio's sister-library for working with vector data\n",
    "import fiona\n",
    "\n",
    "# use fiona to open our original AOI GeoJSON\n",
    "with fiona.open('data/mt-dana-small.geojson') as mt:\n",
    "    aoi = [feature[\"geometry\"] for feature in mt]\n",
    "\n",
    "# transform AOI to match mosaic CRS\n",
    "from fiona.transform import transform_geom\n",
    "transformed_coords = transform_geom('EPSG:4326', 'EPSG:32611', aoi[0])\n",
    "\n",
    "aoi = [transformed_coords]"
   ]
  },
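  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "_Optional (a sketch added for this walkthrough, assuming `rasterio.features.bounds` is available in your rasterio version): comparing the reprojected AOI's bounding box against the mosaic's bounds is a quick way to confirm the two now overlap before masking._"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# optional: the AOI's bounding box should now fall within (or at least overlap) the mosaic's extent\n",
    "from rasterio.features import bounds as geometry_bounds\n",
    "\n",
    "print('AOI bounds:   ', geometry_bounds(aoi[0]))\n",
    "with rasterio.open('data/mosaic.tif') as src:\n",
    "    print('mosaic bounds:', src.bounds)"
   ]
  },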
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "At this stage you have read in the AOI geometry and transformed its coordinates to match the mosaic. We're now ready to use `rasterio.mask.mask` to create a mask over our mosaic, using the AOI geometry as the mask outline.\n",
    "\n",
    "Passing `crop=True` to the mask function will automatically crop the bits of our mosaic that fall outside the mask boundary: you can think of it as applying the AOI as a cookie cutter to the mosaic."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# import rasterio's mask tool\n",
    "from rasterio.mask import mask\n",
    "\n",
    "# apply mask with crop=True to cut to boundary\n",
    "with rasterio.open('data/mosaic.tif') as mosaic:\n",
    "    clipped, transform = mask(mosaic, aoi, crop=True)\n",
    "\n",
    "# See the results!\n",
    "show(clipped)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Congratulations! You've created a clipped mosaic, showing only the imagery that falls within our area of interest.\n",
    "\n",
    "From here, the only thing left to do is save our results to a final output GeoTIFF."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# save the output to a final GeoTIFF\n",
    "\n",
    "# use the metadata from our original mosaic\n",
    "# (re-open the file here, since the dataset opened above was closed when its `with` block ended)\n",
    "with rasterio.open('data/mosaic.tif') as src:\n",
    "    meta = src.meta.copy()\n",
    "\n",
    "# update metadata with the new, clipped mosaic's boundaries\n",
    "meta.update({\"transform\": transform,\n",
    "             \"height\": clipped.shape[1],\n",
    "             \"width\": clipped.shape[2]})\n",
    "\n",
    "# write the output to a GeoTIFF\n",
    "with rasterio.open('data/clipped_mosaic.tif', 'w', **meta) as dst:\n",
    "    dst.write(clipped)"
   ]
  },
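  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "_Optional (a sketch added for this walkthrough): re-opening `clipped_mosaic.tif` and plotting it is a cheap way to confirm the file on disk contains what we expect._"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# optional: re-open the finished GeoTIFF and preview it\n",
    "with rasterio.open('data/clipped_mosaic.tif') as final:\n",
    "    print(final.meta)\n",
    "    show(final)"
   ]
  },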
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Final results\n",
    "\n",
    "If you like, you might take a closer look at the final clipped mosaic using QGIS:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![Final clipped mosaic viewed in QGIS](images/final_in_qgis.png)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
