ARGANS Summer Intern Project
This post follows the work completed by ARGANS summer intern, Francesca Willcocks, exploring the use of earth observation data for monitoring deforestation in the Amazon Rainforest.
Change Detection in the Amazon Rainforest
Known as the world's most biodiverse ecosystem, the Amazon Rainforest comprises approximately 60% of the world's rainforests and is home to around 30% of the world's species. Whilst its importance for the Earth's climate is celebrated across the globe, the richness of the rainforest makes it hugely susceptible to exploitation, specifically through deforestation. This post explores how ARGANS, working alongside the Arrow Rainforest Foundation, has used change detection on Sentinel 2 imagery to detect deforestation within the Brazilian Amazon, and compares the results to Brazil's current deforestation monitoring systems: SAD and DETER.
Creating the land cover classifications:
To produce the change detection, a land cover classification was created for each image. First, a set of classes is specified, and pixels across the image are assigned to those classes, either individually or in clumps. Before carrying out the classifications, we had to decide between two approaches:
Pixel-based classifications classify each pixel in the image individually. Whilst this provides a lot of detail, the resulting images are very noisy, especially when dealing with large areas like the Amazon.
Object-oriented classifications clump spectrally similar pixels together, and it is these clumps that are classified. The results are less noisy and easier to interpret (a comparison between the two approaches can be seen below). Object-oriented classifications were used for this project.
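The difference between the two approaches can be sketched with a toy example. The code below is illustrative only: it uses a synthetic single-band scene and a simple threshold in place of a real classifier, then mimics the object-oriented approach by clumping contiguous pixels and discarding clumps too small to be meaningful, which is what suppresses the salt-and-pepper noise of the pixel-based result.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Synthetic single-band scene: a bright "clearing" inside darker "forest",
# plus pixel-level noise (a real workflow would use Sentinel 2 bands).
scene = rng.normal(0.2, 0.15, (40, 40))
scene[10:20, 10:20] += 0.5  # the clearing

# Pixel-based: threshold every pixel independently (noisy result).
pixel_classes = (scene > 0.45).astype(int)

# Object-based: clump contiguous pixels of the same thresholded value,
# then keep only clumps large enough to be meaningful.
clumps, n_clumps = ndimage.label(pixel_classes)
object_classes = np.zeros_like(pixel_classes)
for clump_id in range(1, n_clumps + 1):
    mask = clumps == clump_id
    if mask.sum() >= 4:           # drop tiny noise clumps
        object_classes[mask] = 1
```

In the pixel-based result, isolated forest pixels that happen to exceed the threshold appear as noise; the clumping step removes them while preserving the contiguous clearing.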
The Region of Interest:
When selecting a location, it was important to choose an area where deforestation has been prominent, so that the change would be visible through earth observation. Information provided by the Arrow Rainforest Foundation suggested the Cachoeira Seca basin, situated in the State of Para in the Brazilian Amazon, as a location of interest, with anecdotal reports from villagers in the Iriri village that loggers can be heard in the forest. This basin, located between Ruropolis and Placas and covered by tiles 21MYR, 21MZR and 21MYQ, was selected (see below).
How did we do it?
Collection of Data:
Our workflow commenced with downloading Level 2a (Bottom of Atmosphere) Sentinel 2 imagery from ESA's SciHub. As well as being freely available, Sentinel 2 provides a spatial resolution down to 10m, later allowing detection of forest loss down to 0.01 hectares. This, in combination with a spectral resolution of 13 bands (covering wavelengths from 443 to 2190nm), offers a balance between high-detail imagery and efficient processing time. Cloud masking was then applied to remove cloud and cloud shadow.
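Level 2a products include a Scene Classification Layer (SCL) that flags clouds and cloud shadow, which is one common way to perform this masking. The sketch below uses small synthetic arrays in place of real rasters; the class codes used (3 = cloud shadow, 8 and 9 = medium- and high-probability cloud, 10 = thin cirrus) follow ESA's Level 2a scene classification definitions.

```python
import numpy as np

# SCL class codes for cloud shadow (3), cloud (8, 9) and thin cirrus (10).
CLOUD_CLASSES = {3, 8, 9, 10}

# Synthetic stand-ins for one reflectance band and its SCL raster
# (a real workflow would read these from the Level 2a product).
band = np.full((4, 4), 0.3)
scl = np.array([[4, 4, 8, 9],
                [4, 4, 4, 3],
                [4, 4, 4, 4],
                [10, 4, 4, 4]])

# Mask cloudy / shadowed pixels out of the analysis (NaN = no data).
mask = np.isin(scl, list(CLOUD_CLASSES))
masked_band = np.where(mask, np.nan, band)
```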
An annual temporal resolution was chosen for this project, using images from the Amazon's dry season (April to November) to avoid large amounts of cloud cover. Two images were selected, one from July 2019 and another from July 2020, on which to carry out the analysis. After collecting these images, we began producing classifications following the workflow seen below.
Unsupervised classification helped us decide which classes would work best in the final classifications. This was done using the KMeans algorithm with 30 clusters, which were then grouped further into 4 classes. The final classes chosen were Forest, Bare Earth, Grass and Water.
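As a rough illustration of this unsupervised step, the sketch below clusters synthetic pixel spectra into 30 clusters with scikit-learn's KMeans and then groups them into 4 classes. The cluster-to-class mapping here is a placeholder: in practice the grouping is done by inspecting the clusters against the imagery.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic pixel spectra (rows = pixels, columns = bands); a real run
# would flatten the Sentinel 2 bands of a tile into this shape.
pixels = rng.random((500, 6))

# Unsupervised step: 30 spectral clusters, as used in the project.
kmeans = KMeans(n_clusters=30, n_init=10, random_state=0).fit(pixels)

# The 30 clusters are then grouped into the 4 final classes
# (Forest, Bare Earth, Grass, Water); this mapping is illustrative.
cluster_to_class = {c: c % 4 for c in range(30)}
classes = np.array([cluster_to_class[c] for c in kmeans.labels_])
```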
After the classes were decided, supervised classifications were run. This involved training the computer to associate certain spectral responses with specific classes in the image. To do this, equal amounts of training data for each class were selected and fed into a random forests machine learning algorithm, producing a land cover classification.
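A minimal sketch of this supervised step, using scikit-learn's random forest on synthetic spectra; the band values and class separability below are invented for illustration, whereas real training pixels would be selected from the imagery.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
class_names = ["Forest", "Bare Earth", "Grass", "Water"]

# Equal amounts of training data per class: 100 synthetic 6-band spectra
# each, with band means differing by class (illustrative values).
X_train = np.vstack([rng.normal(i, 0.3, (100, 6)) for i in range(4)])
y_train = np.repeat(class_names, 100)

# Train the random forest on the labelled spectra.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)

# Classify every pixel in a (flattened) scene; these synthetic pixels
# are drawn to resemble the "Forest" training spectra.
scene_pixels = rng.normal(0, 0.3, (50, 6))
predicted = rf.predict(scene_pixels)
```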
Once the classifications were produced, accuracy assessments were carried out to ensure they could be used in further analysis. To do this, 50 points per class were dropped randomly across the map. These points were compared to ground truth data (in this case Sentinel 2 imagery) to check whether the class assigned at each point matched the class on the ground. The results were compiled into a confusion matrix to produce an overall accuracy for the classification; an example can be seen below.
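The assessment itself reduces to comparing the map label with the reference label at each point and tabulating the agreement. In the sketch below the disagreement is simulated, so the numbers are illustrative rather than the project's actual results.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

class_names = ["Forest", "Bare Earth", "Grass", "Water"]

# 50 randomly dropped points per class: the class shown on the map at
# each point, versus the class seen in the reference imagery.
map_labels = np.repeat(class_names, 50)

# Simulate the reference labels by flipping ~30% of points to a random
# class, mimicking misclassification (illustrative only).
rng = np.random.default_rng(0)
truth = map_labels.copy()
flip = rng.random(truth.size) < 0.3
truth[flip] = rng.choice(class_names, flip.sum())

# Confusion matrix and overall accuracy for the classification.
cm = confusion_matrix(truth, map_labels, labels=class_names)
overall_accuracy = accuracy_score(truth, map_labels)
```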
After the classifications were created and accuracy assessments produced, a change detection was run to create a map of forest loss and forest gain across the area. This was done using the 'Map to Map' change detection approach with classifications from 26th July 2019 and 30th July 2020 for each tile.
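A 'Map to Map' change detection reduces to a per-pixel comparison of the two classifications, as in this toy example (class coding is illustrative: 1 = Forest, 0 = any other class).

```python
import numpy as np

# Two tiny classification maps standing in for the 2019 and 2020 results.
map_2019 = np.array([[1, 1, 1],
                     [1, 1, 0],
                     [0, 0, 0]])
map_2020 = np.array([[1, 0, 1],
                     [1, 0, 0],
                     [1, 0, 0]])

# Forest loss: pixels classified Forest in 2019 but not in 2020.
forest_loss = (map_2019 == 1) & (map_2020 != 1)
# Forest gain: pixels not Forest in 2019 but Forest in 2020.
forest_gain = (map_2019 != 1) & (map_2020 == 1)
```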
What did we find?
Both classifications showed very little noise, with all classifications achieving an overall accuracy of 70-80%. The 'Water' and 'Forest' classes were classified best, whilst 'Bare Earth' and 'Grass' fluctuated in accuracy across both tiles. Varying spectral responses caused by shadows and reflectance in the forest canopy meant some 'Forest' pixels appeared spectrally similar to 'Water' or 'Grass', producing some misclassification.
The change detection produced for these images showed a lot of forest loss occurring near roads. Additionally, small groups of pixels representing forest loss are scattered across the forest. This could be due to misclassifications in the earlier classifications; however, it could also reflect the 10m spatial resolution of Sentinel 2 genuinely detecting small areas of forest loss.
An interesting find in the classifications for tile 21MZR was the development of a road over just under one month. We detected this road in classifications from 10th August 2019, 20th August 2019 and 9th September 2019 (see below). The road itself was identified as 'Bare Earth' within the classification, with the surrounding area correctly classified as 'Forest'. The classifications for this time series were very accurate, with little noise occurring in the forest.
Further change was visible in a change detection between 10th August 2019 and 9th September 2019 (see below). The algorithm picked up the expansion of the road accurately, with the detected forest loss closely following the pattern of the road visible in the satellite imagery.
An Introduction to SAD and DETER:
Since 2007, the Brazilian Government and the National Institute for Space Research (INPE) have been consistently using satellites to monitor deforestation, pinpointing hot spots of forest clearance. These systems are the Deforestation Alert System (SAD) and the Real-Time System for Detection of Deforestation (DETER). Both SAD and DETER use the MODIS sensor on NASA's Terra and Aqua satellites, providing a spatial resolution of 250m.
DETER is used as an early warning system for the Brazilian Amazon, sending alerts to IBAMA (Brazil's environmental protection agency), which travels to the area to prevent continued deforestation by enforcing public and governmental policies. SAD, meanwhile, is used to create monthly deforestation reports published by Imazon.
Whilst SAD and DETER have been effective in monitoring deforestation (DETER helped reduce illegal deforestation from 27,423km2 to 7,000km2 by 2010 by detecting where large areas of forest were being cleared), imagery with a higher spatial resolution may be more effective still in reducing illegal deforestation, and this is what this project tests.
How do our results compare to SAD and DETER?
Comparing our results to those of SAD and DETER gave us the opportunity to see how effective Sentinel 2 is at detecting forest loss by comparison. SAD and DETER data for 2019-2020 were provided by the Arrow Rainforest Foundation specifically for the Cachoeira Seca basin. These data were combined with our change detection from 2019-2020 for this basin (see below).
In some cases, SAD and DETER provided validation that Sentinel 2 is effective in detecting deforestation, shown where forest loss from our change detection overlaps with that of SAD and DETER (above, right image). There were also areas where our change detection found more forest loss than SAD and DETER, overlapping with their data but also detecting extra forest loss around the overlap (above, left image). There were, however, areas where SAD and DETER had detected forest loss that our change detection had not (above, centre image). This could be due to the 70-80% overall accuracy of the classifications leaving room for misclassification in the final images, resulting in some areas of 'Grass' or 'Bare Earth' being classified as 'Forest' and masking forest loss that SAD and DETER had detected. In future, repeating the training data selection before processing could help reduce this misclassification.
In contrast to the annual temporal resolution used in our analysis, the SAD and DETER data only include forest loss detected between 1st August 2019 and 25th March 2020, meaning the SAD and DETER datasets cover only 8 of the 12 months in our analysis. This difference explains why some larger areas appear in our change detection but not in SAD and DETER, as these areas may have expanded in the months after the SAD and DETER records end. Additionally, those extra months could allow some regrowth before our final classification and change detection, which could account for areas missing from our change detection but present in SAD and DETER.
Overall, Sentinel 2 detected more forest loss (0.095km2) than SAD and DETER, which detected 0.050km2 and 0.056km2 respectively (see above). This could be due to the difference in temporal resolution giving Sentinel 2 more time in which to detect loss, but could also be due to the higher spatial resolution of Sentinel 2 allowing the detection of smaller areas of loss. For a more comprehensive comparison in future, a larger SAD and DETER dataset matching the temporal resolution of our analysis would be used.
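For reference, the link between pixel counts and these areas is simple arithmetic: at 10m resolution each Sentinel 2 pixel covers 100 m2, i.e. 0.0001 km2, which is the 0.01 hectares of minimum detectable loss mentioned earlier. The pixel count below is derived from that arithmetic for illustration, not taken from the project's change maps.

```python
# Each Sentinel 2 pixel at 10 m resolution covers 10 m x 10 m = 100 m^2.
PIXEL_AREA_KM2 = (10 * 10) / 1_000_000   # 0.0001 km^2 (= 0.01 ha) per pixel

def pixels_to_km2(n_pixels: int) -> float:
    """Convert a count of change-detection pixels to an area in km^2."""
    return n_pixels * PIXEL_AREA_KM2

# 0.095 km^2 of detected forest loss corresponds to 950 such pixels.
loss_pixels = 950
loss_area = pixels_to_km2(loss_pixels)
```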
What have we learnt?
Throughout this project, we have seen that Sentinel 2 is effective in detecting forest loss, whether this is through the detection of road development in our classifications, or larger areas of forest loss detected in our change detection. The comparison with SAD and DETER has validated that Sentinel 2 can detect areas of forest loss that have also been detected by other monitoring systems, as well as detecting forest loss at a smaller spatial scale. This helps the Arrow Rainforest Foundation facilitate enhanced and targeted management strategies to pre-emptively combat the rates of deforestation. But why is this important?
With the Brazilian Amazon seeing an increase in deforestation since changes in environmental legislation in 2012, it is clear there is more to be done to reduce the rate of forest clearance in the Amazon. Using Sentinel 2 imagery for deforestation detection would increase the spatial resolution of these monitoring systems to 10m, allowing forest loss to be detected before it reaches the 6.25 hectare minimum currently needed for SAD and DETER detection. Whilst there is a long way to go, with environmental policy and climate change playing huge roles in deforestation rates alongside mining, logging and agriculture, detecting forest loss earlier could help prevent further clearance, and hopefully be a small step towards a long-term reduction in deforestation.