
Object Analyst - Batch Classification in Python - Banff

PCI Geomatics

In Geomatica Banff, standalone Object Analyst algorithms were added to both Python and EASI. These new algorithms allow users to easily run an object-based classification on many datasets. The first step in the workflow is to segment the initial image, calculate attributes and collect training sites. Once that is complete, the attribute field names are exported to a text file, the training model is generated and, finally, the SVM classification is run on that initial image. Each of the additional images is then processed in a batch Python script, using the attribute field name text file and training model that were generated from the initial image. The diagram below shows the basic workflow.

Additional resources for creating Python scripts using Geomatica algorithms are available from the Developer Zone. Note that the script in this tutorial uses Python 3.5, the new default version in Geomatica Banff.

Process Initial Image - Focus

Segmentation

Attribute Calculation

Collect Training Sites

Run OA Algorithms - Python

Import necessary modules & setup inputs/outputs

Run OAFLDNMEXP, OASVMTRAIN and OASVMCLASS on initial image

OAFLDNMEXP

OASVMTRAIN

OASVMCLASS

Process additional images in batch

OASEG

OACALCATT

OASVMCLASS

Edit, Save and Apply Representation

Edit Classification Representation

Save Representation to RST file

Apply Representation to Additional Vector Layers

Full Script


For this tutorial we will use Landsat-8 data acquired in June over four different years. The chart below shows the datasets.

Download Tutorial Data: https://pcigeomatics.sharefile.com/d-s34d5345eb8441d6a 

June 3, 2015 (Initial Image) – LC08_L1TP_018030_20150603_20170226_01_T1

June 21, 2016 – LC08_L1TP_018030_20160621_20180130_01_T1

June 11, 2018 – LC08_L1TP_018030_20180611_20180615_01_T1

June 14, 2019 – LC08_L1TP_018030_20190614_20190620_01_T1

Images should have similar land cover types in order for the batch classification to produce good results for each image.

Process Initial Image - Focus

The first step is to process the initial image. In this case we will use LC08_L1TP_018030_20150603_20170226_01_T1 (June 3, 2015) as the initial image. The full workflow for processing an image in the OA Focus GUI is outlined in the Object Analyst tutorial. There are a couple of options available for processing the initial image before performing the batch processing in Python:

  1. Run Segmentation, Attribute Calculation and Training Site Collection in Focus GUI – This is the option being used in this tutorial. We will run each of these steps in the Focus GUI before performing the initial classification and batch processing in Python. This will ensure that we only have to run a single Python script after the initial Focus Object Analyst (OA) processing.
  2. Fully process the initial image in Focus – For this option you would run Segmentation, Attribute Calculation, Training Site Collection and Supervised Classification in the Focus GUI. You would need to ensure that you save the training model from the Supervised Classification step in Focus, so it can be used in the Python script to process the additional images. You would also still need to run OAFLDNMEXP on the initial image in the Python script to create the attribute field text file. The benefit of fully processing the initial image in Focus is that you can quality-check the classification before running the batch processing in Python on the additional images. If you run the classification in the Focus GUI and the results are not ideal, you can adjust the segmentation, recalculate attributes and/or refine the training sites to improve the classification. After producing the best possible classification of the initial image in Focus, you could then complete the batch classification in Python. In this case you would skip the calls to OASVMTRAIN and OASVMCLASS for the initial image in the provided script.
  3. Only collect training sites in Focus GUI - As noted in the workflow diagram above you could run OASEG and OACALCATT on the initial image before collecting training sites in Focus.

Segmentation

The first step in the Object Analyst workflow is segmentation.

  1. In Focus open the Object Analyst window from the Analysis drop-down menu.
  2. Change the Operation to Segmentation
  3. Click Select beside the Source Channels box.
  4. In this window open the initial image - June2015.pix and select channels 2-6.
  5. Click OK


  1. In the main OA window change the scale to 35 – this will create larger segments than the default of 25
  2. Set an output file and layer name
  3. Click Add and Run


Attribute Calculation

The next step in the workflow is to calculate attributes for each of the segments created in the previous step.

  1. Switch the Operation to Attribute Calculation
  2. Keep the same channels checked off in the Source Channels list
  3. In the Attributes to Calculate section select Statistical: Max & Mean, Geometrical: Rectangularity and Vegetation Indices: NDVI


Collect Training Sites

For this tutorial ground truth points are supplied which will be imported and used as training sites. More information on collecting training sites is available from the Object Analyst Tutorial.

  1. Change the Operation to Training Site Editing
  2. Make sure that the Segmentation layer is checked off under Training Vector Layer


  1. Click on the Import… button
  2. Select the ground truth file by clicking Browse
  3. In the Layer section select the vector segment GT:Training_Ground_Truth
  4. Select the Training field and Sample type: Training
  5. Click Import


  1. Click on the Edit… button to open up the Training Sites Editing window
  2. Ensure that Training Field is set to Training
  3. You can now view the imported training sites


Run OA Algorithms - Python

Now that the initial image includes training sites we can continue to export the names of attribute fields, create the training model and run the SVM classification on the initial image in Python. The attribute text file and training model will then be used to process all additional images in batch. As previously mentioned, if you are fully classifying the initial image in Focus you only need to run OAFLDNMEXP on the initial image in this section of the Python script.

Import necessary modules & setup inputs/outputs

The first step in the Python script is to import the required modules and set up the variables that point to our input and output directories and files. Import the pci algo module, which is used to access the Geomatica algorithms. Also import the os module to work with the operating system's file structure and the glob module to find files in a directory.

Create input and output variables as outlined below.

from pci import algo
import os
import glob

# Input Variables
# Initial Image - This image will be used to generate the training model
init_image = r"D:\Data\Tutorial\OBIA\Automated\Initial_Image\June2015.pix"
# Initial segmentation containing the training sites - *Created in Focus
init_seg = r"D:\Data\Tutorial\OBIA\Automated\Python_Classifications\June2015_seg.pix"
# Additional Images - The batch classification will be run on these images
add_images = r"D:\Data\Tutorial\OBIA\Automated\Additional_Images"

# Output Variables
# Output file location
output_folder = r"D:\Data\Tutorial\OBIA\Automated\Python_Classifications"
# Text file containing exported attribute names
fld = r"D:\Data\Tutorial\OBIA\Automated\Python_Classifications\att_fld.txt"
# Training model
training_model = os.path.join(output_folder, "training_model.txt")
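The paths above reflect the tutorial's folder layout and are assumed to already exist. As an optional, defensive refinement (not part of the original script), the output folder can be created up front so none of the algorithms fail when writing results:

```python
import os

# Output folder from the variables above; os.makedirs with exist_ok=True
# creates any missing parent folders and is a no-op when the folder
# already exists.
output_folder = r"D:\Data\Tutorial\OBIA\Automated\Python_Classifications"
os.makedirs(output_folder, exist_ok=True)
```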

Run OAFLDNMEXP, OASVMTRAIN and OASVMCLASS on initial image

In the first part of the script we will continue to process the initial image. The following section of code will export the names of the attribute fields (OAFLDNMEXP), create the training model (OASVMTRAIN) and run the supervised classification (OASVMCLASS). You can check the Geomatica Help for additional information on the required parameters for each algorithm.

OAFLDNMEXP

OAFLDNMEXP exports the names of attribute fields from an OA segmentation file to a text file. The input file (filv) is the initial segmentation file that we created in Focus. dbvs will be set to 2 as this is the segment number of the segmentation vector layer. In this tutorial we will export all attribute-field names from Object Analyst (“ALL_OA”) but fldnmflt can also be set to specific fields. The output text file (tfile) will be set to the fld variable that we created earlier in the script.

# Export fields, save training model and classify initial image
print("Processing initial image:", os.path.basename(init_image))
# OAFLDNMEXP - Export names of attribute fields from Focus Object Analyst to a text file
algo.oafldnmexp(filv=init_seg, dbvs=[2], fldnmflt="ALL_OA", tfile=fld)

OASVMTRAIN

OASVMTRAIN uses a set of training samples and object attributes, stored in a segmentation attribute table, to create a series of hyperplanes, and then writes them to an output file containing a Support Vector Machine (SVM) training model. The same segmentation file that we created in Focus will be used as the input (filv). The field name text file (fld) will be used as another input (tfile). The kernel function for the SVM classification (kernel) will be the default radial-basis function (RBF). Finally the training model (trnmodel) parameter will be set to the training_model variable we created earlier.

# OASVMTRAIN - Object-based SVM training
algo.oasvmtrain(filv=init_seg, dbvs=[2], tfile=fld, kernel="RBF", trnmodel=training_model)

OASVMCLASS

OASVMCLASS uses Support Vector Machine (SVM) technology to run a supervised classification based on an SVM training model you specify. This is the final algorithm that needs to be run on the initial image in order to complete the classification. When this algorithm is run, new fields containing the classification information are added to the output file (filo). The initial segmentation (filv), field name file (tfile) and training model (trnmodel) from the earlier algorithms will be used as inputs. The output vector file (filo) will be set to the init_seg variable and the output vector segment number (dbov) will be 2, as we want the classification fields added to the initial segmentation vector segment. This ensures that all Object Analyst fields (attributes, training, and classification) are contained in a single vector layer.

# OASVMCLASS - Object-based SVM classifier
algo.oasvmclass(filv=init_seg, dbvs=[2], tfile=fld, trnmodel=training_model, filo=init_seg, dbov=[2])

Process additional images in batch

Now that the initial image is classified, we can apply that same training model to the additional images.

The first step is to create a list of all additional images to process. The glob module can be used to find all .pix files in the add_images folder.

# Apply training model and classify additional images in batch
print("Processing additional images in batch...")
file_list = glob.glob(os.path.join(add_images, "*.pix"))
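Note that glob does not guarantee any particular ordering, so different machines may process the images in a different sequence. If a stable order is wanted, sorting the list is a small optional refinement (alphabetical here, which is not chronological for names like June2016.pix):

```python
import glob
import os

add_images = r"D:\Data\Tutorial\OBIA\Automated\Additional_Images"

# sorted() returns the matches in stable alphabetical order; an empty
# list simply means no .pix files were found in the folder.
file_list = sorted(glob.glob(os.path.join(add_images, "*.pix")))
```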

You can iterate through the file_list using a for loop. The algorithms within the loop will be run on each individual image.

A new segmentation pix file (add_seg) will need to be created for each input image. The os module is used to establish the naming for the new segmentation file:

os.path.join(output_folder, os.path.basename(os.path.splitext(image)[0]) + '_seg.pix')

  • os.path.splitext() splits the file pathname (…\Additional_Images\June2016.pix) at the period of the extension. The first section [0] of the split is kept - …\Additional_Images\June2016
  • os.path.basename() gets only the basename (not the path) of the file - June2016
  • + '_seg.pix' appends this suffix to the basename - June2016_seg.pix
  • os.path.join() joins the two path components - the output_folder directory and the basename June2016_seg.pix - to create a full file pathname - …\Python_Classifications\June2016_seg.pix
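The same expression can be traced step by step with a hypothetical file name (forward slashes are used here so the sketch runs on any platform; the tutorial's Windows paths behave the same way):

```python
import os

output_folder = "Python_Classifications"
image = "Additional_Images/June2016.pix"  # hypothetical loop variable

stem = os.path.splitext(image)[0]   # 'Additional_Images/June2016'
name = os.path.basename(stem)       # 'June2016'
add_seg = os.path.join(output_folder, name + "_seg.pix")
# add_seg is now 'Python_Classifications/June2016_seg.pix'
# (joined with the platform's separator)
```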
for image in file_list:
    print("Currently processing:", os.path.basename(image))
    add_seg = os.path.join(output_folder, os.path.basename(os.path.splitext(image)[0]) + '_seg.pix')

OASEG

OASEG applies a hierarchical region-growing segmentation to image data and writes the resulting objects to a vector layer.

The input image (fili) will be the current image (image) being processed in the loop. The output file will be the new segmentation file (add_seg). The scale parameter (segscale) will be set to 35 – this will create larger segments than the default of 25.

    # OASEG (OASEGSAR) - Segment an image
    print("OASEG: Segmenting Image")
    algo.oaseg(fili=image, filo=add_seg, segscale=[35], segshape=[0.5], segcomp=[0.5])

OACALCATT

OACALCATT calculates attributes of objects (polygons) in a vector layer, and then writes the output values to the same or a new segmentation vector layer.

The input image (fili) will be set to the current image (image) being processed in the loop. Only channels 2-6 (B, G, R, NIR, SWIR) will be used in the attribute calculation (dbic). The attributes are calculated for each of the polygons in the segmentation layer that we created in OASEG (filv, dbvs). The calculated attributes are then saved back to the same segmentation layer (filo, dbov). The Maximum and Mean statistical attributes (statatt) and the Rectangularity geometrical attribute (geoatt) will be calculated. Additionally, the NDVI vegetation index (index) will be calculated, which will help to classify vegetation.

    # OACALCATT (OACALCATTSAR) - Calculate object attributes
    print("OACALCATT: Calculating Attributes")
    algo.oacalcatt(fili=image, dbic=[2,3,4,5,6], filv=add_seg, dbvs=[2],
                   filo=add_seg, dbov=[2], statatt="MAX, MEAN", geoatt="REC", index="NDVI")

OASVMCLASS

We will now run OASVMCLASS in much the same way as we ran it on the initial image. The same training model from OASVMTRAIN and attribute field text file from OAFLDNMEXP will be used. The current segmentation file (add_seg) will be used as both the input and the output (filv/dbvs, filo/dbov).

Once the batch processing is complete each segmentation file will include the classification fields – Label (integer class label), Class (string class label), Prob (class voting probability). You can then open the vector files in Focus and apply a representation to view the classification.

    # OASVMCLASS - Object-based SVM classifier
    print("OASVMCLASS: Run Supervised Classification")
    algo.oasvmclass(filv=add_seg, dbvs=[2], tfile=fld, trnmodel=training_model, filo=add_seg, dbov=[2])
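Once the loop has finished, a quick sanity check can confirm that every additional image actually produced a segmentation file, so failed runs are easy to spot. This helper is not part of the original script, just an optional sketch built on the same naming convention:

```python
import glob
import os

def missing_outputs(add_images, output_folder):
    """Return names of .pix images whose *_seg.pix output is absent."""
    missing = []
    for image in glob.glob(os.path.join(add_images, "*.pix")):
        seg = os.path.join(output_folder,
                          os.path.basename(os.path.splitext(image)[0]) + "_seg.pix")
        if not os.path.isfile(seg):
            missing.append(os.path.basename(image))
    return missing

# Example: print any image that has no matching segmentation output.
for name in missing_outputs(r"D:\Data\Tutorial\OBIA\Automated\Additional_Images",
                            r"D:\Data\Tutorial\OBIA\Automated\Python_Classifications"):
    print("Missing output for", name)
```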

Edit, Save and Apply Representation

When a segmentation vector layer is loaded into Focus the classification will not be automatically shown. You will need to adjust the representation of the vector layer to show the classification. This representation can then be saved to an RST file and that RST file can be used to load the same representation for the additional vector layers.

Edit Classification Representation

The following steps outline how to adjust the representation of the first segmentation vector file in Focus to display the classification results.

  1. Open the first segmentation pix file in Focus
  2. Right-click the segmentation vector layer in the Files tab and choose View
  3. In the Maps tab, right-click the new vector layer > Representation Editor
  4. In the representation editor, change the Attribute option to the supervised classification field
  5. Click More >>
  6. Make sure the Unique Values method is selected
  7. Make sure the Generate new styles option is selected and choose the style that you want to use.
  8. Click Update Styles
  9. You can then change the colour of each class in the top section of the panel.


Save Representation to RST file

In order to apply the same representation to various files you will need to first save the representation.

  1. In Focus > Maps tab > right-click the new vector layer > Representation Editor
  2. To save a representation, click the save icon in the Representation Editor.


Apply Representation to Additional Vector Layers

When you load the additional classifications to Focus you want to ensure that the RST file you created is linked to the map in your project. This representation can then be easily applied to the additional images.

  1. In Focus > Maps tab > right-click on the Unnamed Map > Representation > Load.
  2. Select the RST file that you just created and click Open.
  3. The RST file will now be listed in the Maps tab


  1. Open the additional segmentation files in Focus.
  2. In the Maps tab, right-click on one of the new vector layers > Representation Editor
  3. In the Representation Editor, change the Attribute drop-down menu to the SupClass field and click OK


  1. The representation from the RST file will be applied.
  2. You can follow steps 5 & 6 for each of the additional segmentation vector layers.

Full Script

from pci import algo
import os
import glob

# Input Variables
# Initial Image - This image will be used to generate the training model
init_image = r"D:\Data\Tutorial\OBIA\Automated\Initial_Image\June2015.pix"
# Initial segmentation containing the training sites - *Created in Focus
init_seg = r"D:\Data\Tutorial\OBIA\Automated\Python_Classifications\June2015_seg.pix"
# Additional Images - The batch classification will be run on these images
add_images = r"D:\Data\Tutorial\OBIA\Automated\Additional_Images"

# Output Variables
# Output file location
output_folder = r"D:\Data\Tutorial\OBIA\Automated\Python_Classifications"
# Text file containing exported attribute names
fld = r"D:\Data\Tutorial\OBIA\Automated\Python_Classifications\att_fld.txt"
# Training model
training_model = os.path.join(output_folder, "training_model.txt")

# Export fields, save training model and classify initial image
print("Processing initial image:", os.path.basename(init_image))
# OAFLDNMEXP - Export names of attribute fields from Focus Object Analyst to a text file
algo.oafldnmexp(filv=init_seg, dbvs=[2], fldnmflt="ALL_OA", tfile=fld)

# OASVMTRAIN - Object-based SVM training
algo.oasvmtrain(filv=init_seg, dbvs=[2], tfile=fld, kernel="RBF", trnmodel=training_model)

# OASVMCLASS - Object-based SVM classifier
algo.oasvmclass(filv=init_seg, dbvs=[2], tfile=fld, trnmodel=training_model, filo=init_seg, dbov=[2])

# Apply training model and classify additional images in batch
print("Processing additional images in batch...")
file_list = glob.glob(os.path.join(add_images, "*.pix"))

for image in file_list:
    print("Currently processing:", os.path.basename(image))
    add_seg = os.path.join(output_folder, os.path.basename(os.path.splitext(image)[0]) + '_seg.pix')

    # OASEG (OASEGSAR) - Segment an image
    print("OASEG: Segmenting Image")
    algo.oaseg(fili=image, filo=add_seg, segscale=[35], segshape=[0.5], segcomp=[0.5])

    # OACALCATT (OACALCATTSAR) - Calculate object attributes
    print("OACALCATT: Calculating Attributes")
    algo.oacalcatt(fili=image, dbic=[2,3,4,5,6], filv=add_seg, dbvs=[2],
                   filo=add_seg, dbov=[2], statatt="MAX, MEAN", geoatt="REC", index="NDVI")

    # OASVMCLASS - Object-based SVM classifier
    print("OASVMCLASS: Run Supervised Classification")
    algo.oasvmclass(filv=add_seg, dbvs=[2], tfile=fld, trnmodel=training_model, filo=add_seg, dbov=[2])
