Object Analyst – Batch Classification - Banff

In Geomatica Banff, a batch classification feature was added to the Object Analyst (OA) workflow. This feature allows users to easily run an object-based classification on many datasets. In this workflow you will first run OA classification on an individual image. Once the classification results from that image are acceptable, you can run a batch classification to apply the same classification to several additional images. The batch classification applies a segmentation step, an attribute-calculation step, an SVM-classification step (based on the training-model file created from the first image you classified), and any number of rule-based classification steps.

Run OA Classification on Initial Image

Segmentation

Attribute Calculation

Training Site Editing

Supervised Classification

Edit Class Representation (Optional)

Save Representation RST file

Rule-Based Classification (Optional)

Batch Classification

Apply Representation to Additional Vector Layers

Classification Editing

 

For this tutorial we will use Landsat-8 data acquired in June over four different years. The datasets are listed below.

Tutorial Data: https://pcigeomatics.sharefile.com/d-s34d5345eb8441d6a 

June 3, 2015 (Initial Image): LC08_L1TP_018030_20150603_20170226_01_T1

June 21, 2016: LC08_L1TP_018030_20160621_20180130_01_T1

June 11, 2018: LC08_L1TP_018030_20180611_20180615_01_T1

June 14, 2019: LC08_L1TP_018030_20190614_20190620_01_T1

Images should have similar land-cover types in order for the batch classification to produce good results for each image. You will notice from the images above that two of the batch images include clouds while the initial image does not. As such, the clouds in those images will be incorrectly classified because there was no cloud class in the initial image classification. After running batch classification we will run rule-based classification on the two images with clouds to create a new cloud class.

Run OA Classification on Initial Image

The first step in the process is to run the Object Analyst (OA) classification once on the initial image. In this case we will use LC08_L1TP_018030_20150603_20170226_01_T1 (June 3, 2015) as the initial image. We will run Segmentation, Attribute Calculation, and Supervised Classification. Note that you can also run rule-based classification to edit the classification. The workflow for performing OA classification on a single image is outlined in the Object Analyst tutorial.

Make sure that each of the following steps is completed for the initial image and the Process Canvas includes each of the steps that you ran. You will need to reference each step when running the batch classification. 

Save your Focus Project - Make sure that you always save your Focus project file when working with Object Analyst. This will ensure that all of your OA steps are saved in the OA Process Canvas. 

Segmentation

The first step in object-based image analysis (OBIA) is segmentation.

  1. In Focus open the Object Analyst window from the Analysis drop-down menu.
  2. Change the Operation to Segmentation
  3. Click Select beside the Source Channels box.
  4. In this window open the initial image - June2015.pix and select channels 2-6.
  5. Click OK


  6. In the main OA window change the scale to 35 – this will create larger segments than the default of 25. A conceptual sketch of how the scale parameter affects segment size follows at the end of this section.
  7. Set an output file and layer name
  8. Click Add and Run


  9. The new segmentation vector layer will be loaded into Focus.

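Object Analyst's segmentation is configured entirely through the GUI, but the effect of the scale parameter can be pictured with a short sketch. The example below uses scikit-image's Felzenszwalb segmenter as an open-source stand-in (it is not the Geomatica segmenter), and the input array is a random placeholder rather than the Landsat-8 .pix file:

    # Conceptual sketch only: scikit-image's Felzenszwalb segmenter stands in for
    # the Geomatica OA segmenter to show how a scale parameter controls segment
    # size. The input array is a random placeholder, not the Landsat-8 .pix file.
    import numpy as np
    from skimage.segmentation import felzenszwalb

    band = np.random.rand(512, 512).astype(np.float32)  # placeholder single band

    # A larger scale merges more pixels together, giving fewer, larger segments
    # (analogous to raising the OA scale from the default 25 to 35).
    segments_25 = felzenszwalb(band, scale=25)
    segments_35 = felzenszwalb(band, scale=35)

    print("segments at scale 25:", segments_25.max() + 1)
    print("segments at scale 35:", segments_35.max() + 1)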

Attribute Calculation

The next step is to calculate attributes for each of the segments.

  1. Switch the Operation to Attribute Calculation
  2. Keep the same channels checked off in the Source Channels list
  3. In the Attributes to Calculate section select Statistical: Mean and Vegetation Indices: NDVI; a conceptual sketch of these two attributes follows below.

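The two attributes selected above are straightforward per-segment statistics: the mean of each selected channel and the NDVI, (NIR − Red) / (NIR + Red). The sketch below shows how such values could be computed with NumPy; it only illustrates what the attributes represent, it is not the Geomatica implementation, and the red/NIR band assignment is an assumption:

    # Conceptual sketch: per-segment Mean and NDVI attributes computed with NumPy.
    import numpy as np

    def segment_attributes(red, nir, segments):
        """Return {segment_id: (mean_red, mean_nir, ndvi)} for each segment."""
        attrs = {}
        for seg_id in np.unique(segments):
            mask = segments == seg_id
            mean_red = float(red[mask].mean())
            mean_nir = float(nir[mask].mean())
            denom = mean_nir + mean_red
            ndvi = (mean_nir - mean_red) / denom if denom else 0.0  # NDVI = (NIR-Red)/(NIR+Red)
            attrs[int(seg_id)] = (mean_red, mean_nir, ndvi)
        return attrs

    # Tiny placeholder rasters with two segments (IDs 0 and 1).
    red = np.array([[0.10, 0.10], [0.30, 0.30]])
    nir = np.array([[0.50, 0.50], [0.40, 0.40]])
    segments = np.array([[0, 0], [1, 1]])
    print(segment_attributes(red, nir, segments))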

Training Site Editing

We can now collect the training sites used by the learning algorithm to generate the supervised classification model. More detailed training site collection steps are outlined in the Object Analyst – Manually Collect Training Sites tutorial.

  1. Change the Operation to Training Site Editing
  2. Make sure that the Segmentation layer is checked off under Training Vector Layer


  3. Click on the Edit… button to open the Training Sites Editing window
  4. Ensure that Training Field is set to Training
  5. Click on the Add Class button
  6. A new row will be added that reads Class 1 under the Class Name column
  7. Change the name of the class to a meaningful value such as Urban_Bright.
  8. Click on the Individual Select tool
  9. In the Focus viewer, hold the CTRL key down and select the segmentation polygons that match the current class – Urban_Bright.
  10. Continue to add new classes and collect training sites until all of the land-cover types in the image are represented. A conceptual sketch of the resulting training field follows at the end of this section.

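Conceptually, training-site editing fills a Training field in the segment attribute table with a class name for each selected polygon. The sketch below is purely illustrative; the segment IDs and the extra class names are made up for the example:

    # Purely illustrative sketch: training-site editing amounts to writing a class
    # name into a "Training" field for each selected segment.
    training_field = {
        101: "Urban_Bright",
        102: "Urban_Bright",
        205: "Vegetation",
        310: "Water",
    }

    # Segments that were not collected as training sites keep an empty value.
    all_segment_ids = range(100, 400)
    table = {seg_id: training_field.get(seg_id, "") for seg_id in all_segment_ids}
    print(sum(1 for v in table.values() if v), "segments labelled as training sites")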

Supervised Classification

Now that the training sites are collected we can run the supervised classification.

  1. Change the Operation to Supervised Classification
  2. Click on the Select… button in the Vector Layer and Fields section
  3. In the Vector Layer and Field Selector window, make sure that the Layer field is populated with the pix layer. Make sure the mean fields and NDVI are selected.


  4. Click OK
  5. In the Training Field dropdown list, select Training
  6. In the Output Class Field, change the name to SupClass
  7. Check off Save training model and set an output location and name for the training model. This step uses the training samples and object attributes, stored in a segmentation attribute table, to create a series of hyperplanes, and then writes them to an output file containing a Support Vector Machine (SVM) training model. The SVM training model that is created is then used in the batch classification step later in the tutorial.


Once the classification is complete you can view the result in Focus. The classification is loaded using the colours that you specified in the training site editor.
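The saved training model is what makes the later batch step possible: the SVM fits separating hyperplanes to the per-segment attributes of your training sites and stores them for reuse. The sketch below illustrates the idea using scikit-learn as an analogue; it is not Geomatica's SVM implementation or training-model file format, and the attribute values and class names are illustrative:

    # Conceptual analogue of the "Save training model" step.
    import joblib
    import numpy as np
    from sklearn.svm import SVC

    # Rows are training segments; columns are per-segment attributes (band means, NDVI).
    X_train = np.array([
        [0.31, 0.28, 0.05],   # e.g. an Urban_Bright training segment
        [0.12, 0.45, 0.62],   # e.g. a Vegetation training segment
        [0.04, 0.02, -0.10],  # e.g. a Water training segment
    ])
    y_train = ["Urban_Bright", "Vegetation", "Water"]

    model = SVC(kernel="rbf")   # the SVM fits separating hyperplanes in kernel space
    model.fit(X_train, y_train)

    # Persist the fitted model so a later (batch) step can reuse it on other images.
    joblib.dump(model, "svm_training_model.joblib")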

Edit Class Representation (Optional)

If required you can edit the classification representation for the segmentation layer.

  1. Open Object Analyst > Post Classification (Operation) > Class Edit (Type)
  2. Under Vector Layer, click Select, and then choose your segmentation file and layer
  3. Set the Class field as SupClass


  4. Click Edit
  5. In the Class Editing window you can adjust the colours, opacity and check whether to Show Border.


Save Representation RST file

In order to apply the same representation to the other files, you will first need to save the representation.

  1. In Focus > Maps tab > right-click the new vector layer > Representation Editor
  2. To save the representation, click the save icon in the Representation Editor.


*This is a good time to save your Focus project file*

Rule-Based Classification (Optional)

If you need to further refine the classification on the initial image you can run rule-based classification. In this tutorial we will not run rule-based classification at this time, but will instead run it on specific images after batch classification. More information on Rule-Based Classification is available in the Object Analyst – Rule-Based Classification tutorial.

Batch Classification

The initial image has been successfully classified and we can now run the classification on the additional images.

  1. Change the Operation to Batch Classification.
  2. Set the Input Images Folder to the folder containing your additional images. If required you can change the search pattern but in our case we will search for .pix files.
  3. Set the Output Folder to the folder where you wish to save the output segmentation files containing the classification information.
  4. Select the Training Model that you saved in the Supervised Classification step.
  5. To add the Batch Classification to the Process Canvas box, click Add.


  6. The Batch Classification process is added to the canvas. Under the new Batch Classification process, the Segmentation, Attribute Calculation, and SVM Classification items are each displayed in red. This is to indicate that you must complete the definition of each before you can run the batch classification.


  7. In the Process Canvas box, right-click the segmentation process you want to use. In this case we will use the segmentation process at the top of the process canvas. Click Add To > Batch Classification.


  8. The Batch Classification process is updated with the segmentation process you added.


  9. Follow these same steps to add the Attribute Calculation and SVM Classification processes to the Batch Classification process.


  10. Now the Batch Classification process is ready to be run. Right-click the Batch Classification process in the Process Canvas and select Run.

A window is displayed, showing the progress of the segmentation, attribute-calculation, and classification steps for each image in the Input Images folder. Each of the new segmentation files is saved to the output folder.


  11. Once the batch processing is complete you can then open the various segmentation files in Focus to view the segments and classification information. The following section outlines how to apply the saved RST file to each of the new segmentation layers.
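For reference, the chain that batch classification repeats for every image matched by the search pattern can be sketched as a simple loop: segment, calculate the same attributes, then classify with the saved training model. In the sketch below the two helper functions are stand-in stubs, not Geomatica API calls, and the folder names are placeholders:

    # Conceptual sketch of what batch classification repeats for every image found
    # by the ".pix" search pattern: segment, calculate attributes, classify with the
    # saved model. segment_image() and calculate_attributes() are stand-in stubs.
    import glob
    import os
    import numpy as np
    from sklearn.svm import SVC

    def segment_image(pix_path, scale=35):
        """Stub: pretend to segment the image; returns fake segment IDs."""
        return np.arange(5)

    def calculate_attributes(pix_path, segments):
        """Stub: pretend to compute per-segment attributes (band means, NDVI)."""
        return np.random.rand(len(segments), 3)

    # Stand-in for the training model saved from the initial (June 2015) image.
    model = SVC().fit(np.random.rand(12, 3), [0, 1, 2] * 4)

    input_folder = "additional_images"   # placeholder for the Input Images Folder
    output_folder = "batch_output"       # placeholder for the Output Folder

    for pix_path in glob.glob(os.path.join(input_folder, "*.pix")):
        segments = segment_image(pix_path, scale=35)           # same scale as the initial image
        attributes = calculate_attributes(pix_path, segments)  # same attributes as the initial image
        labels = model.predict(attributes)                      # SVM classification per segment
        print(pix_path, "->", os.path.join(output_folder, os.path.basename(pix_path)), list(labels))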

Apply Representation to Additional Vector Layers

When you load the additional classifications into Focus, you want to ensure that the RST file you created is linked to the map in your project.

  1. In Focus > Maps tab > right-click on the Unnamed Map > Representation > Load.
  2. Select the RST file that you just created and click Open.
  3. The RST file will now be listed in the Maps tab


  4. Open the additional segmentation files in Focus.
  5. In the Maps tab, right-click on one of the new vector layers > Representation Editor
  6. In the Representation Editor, change the Attribute drop-down menu to the SupClass field and click OK


  7. The representation from the RST file will be applied.
  8. You can follow steps 5 & 6 for each of the additional segmentation vector layers.

Classified results with the shared representation applied: 2015, 2016, 2018, 2019.

Classification Editing

As previously mentioned, after completing the batch classification you can run rule-based classification to further edit the classification. In this case we will run rule-based classification on the 2016 image to reclassify the clouds that were incorrectly classified as bright urban. You can perform similar steps to reclassify the clouds in the 2019 segmentation file.

Note: In order to better differentiate clouds from other features, Attribute Calculation was re-run on the 2016 image to calculate MAX_CIR. This attribute is the maximum value of the Cirrus band provided by Landsat-8.

  1. Change the operation to Rule-based Classification
  2. Make sure that the Segmentation layer from the pix file is checked off under Training Vector Layer


  3. Keep the Class Edit as Assign
  4. Change the Class Field to SupClass
  5. Change the Class filter to Urban_Bright
  6. Change the New Class to Clouds – you can type this into that box
  7. Check off Specify Condition and click Attribute Visualization
  8. In the Attribute Visualization window change the Class field to SupClass, the Class filter to Urban_Bright and the Range field to MAX_CIR
  9. Adjust the sliders to select segments that contain clouds.


  10. Segments that contain MAX_CIR values within the specified range will be highlighted in yellow


  11. Once you have the cloud segments selected you can click OK
  12. Run the Rule-based Classification to reclassify the selected segments to the Clouds class; a conceptual sketch of this rule follows below.
  13. Once that is complete you can adjust the representation to include the new class.

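The rule configured above is effectively a conditional update of the attribute table: any segment whose SupClass is Urban_Bright and whose MAX_CIR falls in the chosen range is reassigned to Clouds. A minimal sketch, with made-up segment records and threshold values, is shown below:

    # Minimal sketch of the rule configured above: Urban_Bright segments whose
    # MAX_CIR value falls inside the chosen range are reassigned to Clouds.
    segments = [
        {"id": 1, "SupClass": "Urban_Bright", "MAX_CIR": 0.9},
        {"id": 2, "SupClass": "Urban_Bright", "MAX_CIR": 7.5},
        {"id": 3, "SupClass": "Vegetation",   "MAX_CIR": 8.1},
    ]

    cir_min, cir_max = 6.0, 10.0  # range chosen with the Attribute Visualization sliders

    for seg in segments:
        if seg["SupClass"] == "Urban_Bright" and cir_min <= seg["MAX_CIR"] <= cir_max:
            seg["SupClass"] = "Clouds"

    # Segment 2 is reclassified; segment 3 keeps its class because the filter is Urban_Bright.
    print(segments)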
