The general process for image stitching with a borehole
video includes image acquisition, feature detection and
extraction, feature matching and filtering, transformation
estimation, image warping, and image blending.
1. Image acquisition is the process of capturing a set
of overlapping frames from a video for stitching.
Two images are captured from a borehole video
(see Figure 7) for the following demonstration.
2. The feature detection and extraction step uses a feature detection algorithm, e.g., Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), or Oriented FAST and Rotated BRIEF (ORB), to identify distinctive key points in each image and to extract the descriptors associated with these key points, which capture the local image information. The types of key points that can be detected depend on the algorithm and method used for key point detection; common key points include corners, blob-like structures, edges, and local maxima or minima. In this demonstration, the key points were detected with the ORB algorithm (see Figure 8); a minimal detection sketch follows this list.
3. In the feature matching step, the key points and descriptors between pairs of images are matched to establish correspondences. The purpose is to find matching key points between the images based on their descriptors. However, a key point in one image may have multiple potential matches in the other image based on the descriptors alone (see Figure 9). The filtering step removes potentially incorrect or ambiguous matches. Figure 10 shows the matched features after filtering; the overall accuracy of the feature matching process is significantly improved (a matching and filtering sketch also follows this list).
4. In the transformation estimation step, the trans-
formation (scaling, rotation, and shift) between
each pair of images is estimated to find the geo-
metric relationship between the images so that the
images can be aligned properly.
5. Based on the estimated transformation, the images can be warped. This step applies the estimated transformation matrix to each image to align the images and create the stitched image (see the estimation and warping sketch after this list).
6. The warped images can then be blended to cre-
ate a seamless transition between the overlapping
regions. The purpose is to ensure the visual con-
sistency of the stitched images without noticeable
seams. The image blending step is still under devel-
opment and is not used in this paper.
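As a concrete illustration of step 2, the following is a minimal sketch of ORB key point detection and descriptor extraction using OpenCV's Python API. The file names and the feature-count parameter are illustrative placeholders, not values from this paper.

import cv2

# Two overlapping frames captured from the borehole video
# (placeholder file names).
img1 = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

# Detect ORB key points and compute their binary descriptors.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Visualize the detected key points, comparable to Figure 8.
vis = cv2.drawKeypoints(img1, kp1, None, color=(0, 255, 0))
cv2.imwrite("keypoints_a.png", vis)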
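Continuing the sketch for step 3, the descriptors can be matched with a brute-force matcher and then filtered with Lowe's ratio test; the 0.75 ratio threshold is a commonly used value, not one reported in the paper. The variables img1, img2, kp1, des1, kp2, and des2 carry over from the detection sketch above.

import cv2

# Hamming distance is appropriate for ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

# For each descriptor in the first frame, find its two nearest
# neighbours in the second frame.
raw_matches = matcher.knnMatch(des1, des2, k=2)

# Ratio test: keep a match only if it is clearly better than the
# second-best candidate, discarding ambiguous correspondences.
good_matches = []
for pair in raw_matches:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good_matches.append(pair[0])

# Visualize the filtered matches, comparable to Figure 10.
match_vis = cv2.drawMatches(img1, kp1, img2, kp2, good_matches, None)
cv2.imwrite("matches_filtered.png", match_vis)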
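For steps 4 and 5, a homography can be estimated from the filtered matches with RANSAC and then used to warp one frame into the coordinate system of the other. This is a simplified sketch; the canvas size and the RANSAC reprojection threshold are assumptions rather than values from the paper, and no blending is applied, consistent with step 6.

import numpy as np
import cv2

# Pixel coordinates of the filtered matches in each frame.
src_pts = np.float32([kp1[m.queryIdx].pt for m in good_matches]).reshape(-1, 1, 2)
dst_pts = np.float32([kp2[m.trainIdx].pt for m in good_matches]).reshape(-1, 1, 2)

# Estimate the transformation robustly; RANSAC rejects remaining outliers.
H, inlier_mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)

# Warp the first frame into the second frame's coordinate system and
# paste the second frame on top to form a simple stitched pair.
h, w = img2.shape[:2]
canvas = cv2.warpPerspective(img1, H, (2 * w, 2 * h))
canvas[0:h, 0:w] = img2
cv2.imwrite("stitched_pair.png", canvas)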
A series of frames can be captured from a borehole video. For each pair of adjacent frames, the key points can be detected, matched, and filtered, and the transformation between the frames can be estimated so that the frames can be warped and stitched together, as sketched below.
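A minimal sketch of this frame-by-frame pipeline is shown below, assuming the frames are sampled directly from the video. The pairwise_homography helper is hypothetical shorthand for the detection, matching, filtering, and estimation steps sketched above, and the video file name and sampling interval are illustrative assumptions.

import numpy as np
import cv2

def pairwise_homography(img_a, img_b, orb, matcher):
    """Hypothetical helper: estimate the homography mapping img_a onto img_b."""
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    good = []
    for pair in matcher.knnMatch(des_a, des_b, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

# Capture every 30th frame from the borehole video (placeholder file name).
cap = cv2.VideoCapture("borehole.mp4")
frames, index = [], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % 30 == 0:
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    index += 1
cap.release()

# Chain the pairwise transformations so that every frame is expressed in
# the coordinate system of the first frame before warping and stitching.
orb = cv2.ORB_create(nfeatures=2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
to_first = [np.eye(3)]
for prev, curr in zip(frames[:-1], frames[1:]):
    H = pairwise_homography(curr, prev, orb, matcher)  # maps frame i to frame i-1
    to_first.append(to_first[-1] @ H)                  # maps frame i to frame 0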
Figure 6. A geological model highlighting the Pittsburgh Sandstone thickness. The red areas indicate thickness over 40 feet, which could cause longwall caving issues for the mine.