Our previous work introduced a hybrid system of thermal imaging, LiDAR, and AR, which aims to penetrate smoke by leveraging thermal imaging while performing real-time world reconstruction with LiDAR and visualizing it on an AR device. With this hybrid system, we aimed to provide an AR image of the surrounding environment when it is pitch-black or visually occluded by smoke, dust, or other small particulates. The system was successfully built around an AR device, namely the Microsoft HoloLens, which displays a real-time image to the user that updates at 5 Hz. However, the proposed hardware was never tested in an underground mine environment against scenario-based use cases such as underground tunnel evacuation, wayfinding and recognition, or search and rescue missions.
As a continuation of our study, which aims to address the need for vision in pitch-black conditions, we took the prototype into the Edgar Mine in Idaho Springs, Colorado. Having found the optimum hardware and software combination and built a prototype, the remaining step is to determine how to visualize the data stream, since the raw stream comprises only numbers and letters. To address this problem, we ask, “What are the best methods and visual variables for first responders to visualize reconstructed data in augmented reality?” Our hypothesis is that an AR interface with a multi-colored reconstruction incorporating depth data can decrease response and assessment time, and hence cognitive load, resulting in faster search and rescue operations.
Since emergency response is a time-critical mission, appropriate visualization needs to be employed. Our overarching goal is to investigate and benchmark visual variables for pitch-black augmented reality assistance.
RELATED RESEARCH
Visual variables are fundamental components of data visualization. In 1967, Bertin proposed the first categorization of visual parameters, including size, color, orientation, texture, shape, value (brightness), and position (dimensions on the plane) [10][11]. Later, MacEachren suggested three more visual parameters: crispness, resolution, and transparency [12]. In 1994, Wolfe discussed the effects of complexity (referred to as clutter) in visual search, which increases cognitive load [13]. In 2014, Zhang et al. examined the compatibility and limitations of 2D visual variables in 3D visuals [14][15].
Similar to studies on the 2D and 3D compatibility of visual parameters, compatibility between virtual and augmented reality also needs to be investigated. The Computer-Human Interaction (CHI) conference (one of the most prestigious venues in human-computer interaction [16]) has identified the most significant gaps and challenges in the field [17]. Designing guidelines for visualization and understanding human senses and cognition in situated contexts are two challenges related to spatially situated data visualization. Improper use of visual elements can lead users to misinterpret a visualization [18], which might cost human lives in underground mining emergencies.
Laha et al. studied the effects of head tracking, field of regard, and stereoscopic rendering on volumetric data interpretation. They found that field of regard has a positive impact on task performance, while the other parameters have mixed effects [19]. Adams et al. studied shadows (shadow vs. shadowless) and position (floating vs. grounded) as variables affecting users’ depth perception in augmented reality. They found that current devices cause users to underestimate distance regardless of the visual variable, but that the position of the object influences users’ judgments [20]. Arjun et al. evaluated the effects of color, size, orientation, opacity, and shape on chart data understanding. They found that accuracy is affected most by size and color, while cognitive load is affected most by color, size, opacity, and brightness [21].
APPROACH
The resulting system integrates a LiDAR, a thermal camera, and a Microsoft HoloLens onto a wearable platform (e.g., hardhat, belt, or backpack). Processing and power storage are integrated into a waist-belt-mounted package, with a cap-lamp-style cable connecting the two. This enables the user to look around with minimal impact on their motion.
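As an illustration, a minimal sketch of the belt pack’s update loop is shown below. The function names and interfaces (read_scan, read_temps, render) are hypothetical placeholders for the actual LiDAR driver, thermal camera, and HoloLens streaming API, which are not detailed here; only the roughly 5 Hz refresh rate comes from the prototype.

    import time

    UPDATE_HZ = 5  # the prototype refreshes the HoloLens view at roughly 5 Hz

    def run_pipeline(read_scan, read_temps, render):
        """Hypothetical belt-pack main loop.

        read_scan, read_temps, and render stand in for the real LiDAR,
        thermal camera, and HoloLens streaming interfaces.
        """
        period = 1.0 / UPDATE_HZ
        while True:
            t0 = time.monotonic()
            points = read_scan()        # (N, 3) LiDAR points in the headset frame
            temps = read_temps(points)  # per-point temperature estimates
            render(points, temps)       # push the colored reconstruction to the AR view
            # sleep out the remainder of the 0.2 s frame budget
            time.sleep(max(0.0, period - (time.monotonic() - t0)))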
However, which visual variable to use for visualizing the sensor data needs to be tested. Therefore, the research team internally tested an initial demo version of the visualization varieties. Color is selected as the visual variable due to its pre-attentive attribute. However, failure to choose the right color scheme can result in increased cognitive load, distraction, and overlooked critical data [22][23][24][25].
The visual variable color is tested in three different settings: i) single color mapping, ii) thermal sensor mapping, and iii) depth mapping. These settings are tested for navigation in a tunnel and for detecting a person, both together and separately.
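As a rough illustration, the sketch below shows one way these three settings can be realized on a reconstructed point cloud. The colorize() helper, the 20–40 °C temperature window, the 10 m range cap, and the specific hues are illustrative assumptions rather than the deployed implementation.

    import numpy as np

    def colorize(points, mode, thermal=None, max_range=10.0):
        """Assign an RGB color in [0, 1] to each reconstructed point.

        points  : (N, 3) LiDAR points in the headset frame (meters)
        mode    : 'single', 'thermal', or 'depth'
        thermal : (N,) per-point temperatures in deg C (for 'thermal' mode)
        """
        colors = np.zeros((len(points), 3))
        if mode == "single":
            colors[:] = (0.0, 1.0, 0.0)          # one uniform hue for all geometry
        elif mode == "thermal":
            t = np.clip((thermal - 20.0) / 20.0, 0.0, 1.0)  # map 20-40 deg C to 0..1
            colors[:, 0] = t                     # warmer points shift toward red
            colors[:, 2] = 1.0 - t               # cooler points shift toward blue
        elif mode == "depth":
            d = np.clip(np.linalg.norm(points, axis=1) / max_range, 0.0, 1.0)
            colors[:, 0] = 1.0 - d               # red: closer objects
            colors[:, 1] = d                     # green: farther objects
        return colors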
RESULTS AND DISCUSSION
Initial results yield several possible visualization varieties. These visualization options are shown in Figure 1 as a) single color mapping, b) thermal mapping, and c) depth mapping.
Among these visualization options, depth mapping combined with the thermal camera yields the most robust results. Color schemes that provide depth information are added as a user option, with red schemes indicating closer objects and green schemes indicating farther objects.
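A minimal sketch of this combined mapping follows, under the assumption that thermally hot points (such as a person) receive a distinct highlight color while the rest of the scene carries the red-to-green depth gradient; the 30 °C threshold and the yellow highlight hue are hypothetical choices, not the deployed settings.

    import numpy as np

    def depth_with_thermal(points, temps, hot_threshold=30.0, max_range=10.0):
        """Blend the red-to-green depth gradient with a thermal highlight."""
        d = np.clip(np.linalg.norm(points, axis=1) / max_range, 0.0, 1.0)
        colors = np.stack([1.0 - d, d, np.zeros_like(d)], axis=1)  # red = near, green = far
        hot = temps > hot_threshold       # e.g., a person against a cool tunnel wall
        colors[hot] = (1.0, 1.0, 0.0)     # hot points stand out from the gradient
        return colors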