Event Segmentation
The performance of the model on event images is evaluated
using the labels from the associated color-based predictions.
This test set comprises 8324 images from two different
sections of the mine.
                            Ground-truth Label
                            Rock        Not Rock
Predicted      Rock         0.962       0.038
Label          Not Rock     0.210       0.790

Accuracy                    0.876
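As a concrete illustration of this evaluation, the following is a minimal sketch (not the authors' code), assuming the event-based predictions and the color-based labels are available as per-image boolean rock masks; the function and variable names are placeholders.

    import numpy as np

    def confusion_matrix(event_preds, color_labels):
        """event_preds / color_labels: iterables of boolean masks (True = rock).

        Returns a 2x2 count matrix with rows = predicted label and
        columns = ground-truth label, ordered (rock, not rock).
        """
        counts = np.zeros((2, 2), dtype=np.int64)
        for pred, label in zip(event_preds, color_labels):
            p = np.where(pred.ravel(), 0, 1)   # 0 = rock, 1 = not rock
            g = np.where(label.ravel(), 0, 1)
            for pi in (0, 1):
                for gi in (0, 1):
                    counts[pi, gi] += int(np.sum((p == pi) & (g == gi)))
        return counts

    def normalized_rates(counts):
        """Row-normalize the counts (each predicted-label row sums to one),
        as in the table above, and report the mean of the diagonal as an
        aggregate accuracy (one common convention, assumed here)."""
        rates = counts / counts.sum(axis=1, keepdims=True)
        return rates, rates.diagonal().mean()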
DISCUSSION
This study demonstrates the efficacy of using transfer learn-
ing techniques to execute the semantic segmentation task
in an active mining environment using event-based images.
Notably, the false positive rate, denoted as P(RP|N), is
approximately 3%. This finding is particularly encouraging
for the intended application, given that the primary fail-
ure mode involves drilling into areas other than rock. The
network exhibits a tendency to predict false negatives, i.e.,
P(NP|R), about three times more frequently when event
images are used as input compared to traditional color
images. This characteristic contributes to a more conser-
vative approach in the selection of suitable roof areas for
drilling, emphasizing precision and minimizing the risk of
drilling in inappropriate locations, thereby enhancing the
overall reliability and safety of the drilling process in active
mining scenarios.
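For concreteness, the two conditional rates named here can be read directly from the raw count matrix produced by the sketch above (rows indexed by predicted label, columns by ground-truth label, rock first); this helper is illustrative only.

    def conditional_rates(counts):
        """Return P(RP|N) and P(NP|R) from a 2x2 count matrix.

        counts[i, j]: pixels with predicted class i and ground-truth class j,
        where index 0 = rock and index 1 = not rock.
        """
        p_rp_given_n = counts[0, 1] / counts[:, 1].sum()  # rock predicted | not rock
        p_np_given_r = counts[1, 0] / counts[:, 0].sum()  # not-rock predicted | rock
        return p_rp_given_n, p_np_given_r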
The higher incidence of false negatives can be explained
by the lower-quality ground truth, as seen in Figure 11.
Statistically, the strap detected in the event image is clas-
sified as a false negative prediction. However, upon inspec-
tion, it can be seen that the lighting conditions adversely
affected the color image and the color-based prediction
missed the strap altogether. The superior optical properties
of the event camera allowed the prediction to label the strap
accurately and outperform the traditional camera. There
were several instances in the dataset where the color of the
support strap had become nearly indistinguishable from
that of the rock behind it. In each of these cases, the
event-based prediction captured more detail than the
traditional color image prediction.
This demonstrates the value of this work for the applica-
tion of robotics in mining: even in severe environmental
and lighting conditions, computer vision can be applied to
automate otherwise dangerous processes.
The training set includes labeled images of holes in the
support strap, and Figure 12 shows the continued accu-
rate prediction of these features even after transferring the
learning to event images. This observation holds significant
promise for several reasons. Firstly, it indicates the robust-
ness and adaptability of the model to recognize and predict
specific features, such as holes in the support strap, across
different data modalities. This adaptability is crucial for
practical applications where variations in imaging technol-
ogies might be encountered.
Moreover, the sustained accurate prediction of sup-
port strap holes in event images underscores the effective-
ness of the transfer learning approach. It suggests that the
knowledge gained from the initial training on traditional
color images successfully transfers and generalizes to event-
based images, showcasing the model’s capacity to extract
meaningful information and maintain predictive accuracy
across diverse datasets. This adaptability and transferability
are vital attributes for real-world scenarios where a model
trained on one set of conditions needs to perform reliably
in different, potentially challenging, environments.
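One plausible way to realize this color-to-event transfer is to initialize a segmentation network from the weights learned on color images and fine-tune it on event images. The sketch below is an assumption-laden illustration: the DeepLabV3 architecture, the checkpoint path, and the frozen-backbone strategy are placeholders, not the configuration used in this work.

    import torch
    import torch.nn as nn
    from torchvision.models.segmentation import deeplabv3_resnet50

    # Two classes: rock / not-rock.
    model = deeplabv3_resnet50(weights=None, num_classes=2)

    # Initialize from the color-image training run (hypothetical checkpoint path).
    state = torch.load("color_segmentation_checkpoint.pth", map_location="cpu")
    model.load_state_dict(state, strict=False)  # tolerate minor head differences

    # Optionally freeze the backbone so only the segmentation head adapts
    # to the event-image domain.
    for p in model.backbone.parameters():
        p.requires_grad = False

    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4
    )
    criterion = nn.CrossEntropyLoss()

    def fine_tune_step(event_images, masks):
        """One optimization step on a batch of event images and rock/not-rock
        masks. masks: (B, H, W) integer class indices in {0, 1}."""
        model.train()
        optimizer.zero_grad()
        logits = model(event_images)["out"]  # (B, 2, H, W)
        loss = criterion(logits, masks)
        loss.backward()
        optimizer.step()
        return loss.item()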
Figure 13. Result of predicting a semantic mask using the
network trained and validated on the color data set.