REFERENCES
[1] Chen J, Li S, Liu D, Li X. Airobsim: Simulating a multisensor aerial robot for urban search and rescue operation and training. Sensors (Switzerland). 2020;20(18):1–20.
[2] Beerbower D, Biggerstaff R, Blackwell WK, et al. Mine Rescue Handbook.
[3] Bertrand JWM, Fransoo JC. Modelling and simulation. Research Methods for Operations Management. 2016. p. 290–330.
[4] Zlot R, Bosse M. Efficient Large-Scale 3D Mobile Mapping and Surface Reconstruction of an Underground Mine. In: Yoshida K, Tadokoro S, editors. Field and Service Robotics: Results of the 8th International Conference [Internet]. Berlin, Heidelberg: Springer Berlin Heidelberg; 2014. p. 479–93. Available from: doi.org/10.1007/978-3-642-40686-7_32.
[5] Conti RS, Chasko LL, Wiehagen WJ, Lazzara CP. Fire Response Preparedness for Underground Mines. Inf Circ 9481 [Internet]. 2005;25. Available from: www.cdc.gov/niosh/mining/UserFiles/works/pdfs/2006-105.pdf.
[6] Zimroz P, Trybała P, Wróblewski A, Góralczyk M, Szrek J, Wójcik A, et al. Application of UAV in search and rescue actions in underground mine—A specific sound detection in noisy acoustic signal. Energies. 2021;14(13):1–21.
[7] Willette K. First Responder. NFPA J [Internet]. 2018. Available from: www.nfpa.org/News-and-Research/Publications-and-media/NFPA-Journal/2018/January-February-2018/Columns/First-Responder.
[8] Demirkan DC, Duzgun S. An Evaluation of AR-Assisted Navigation for Search and Rescue in Underground Spaces. In: 2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). 2020. p. 1–2.
[9] Demirkan DC, Segal A, Malik A, Duzgun HS, Petruska AJ. Real-time perception enhancement in obscured environments for underground mine search and rescue teams. Manuscript submitted for publication. 2023.
[10] Bertin J. Sémiologie graphique. Paris; 1967.
[11] Bertin J. Semiology of Graphics: Diagrams, Networks, Maps. UMI Research Press; 1983. 429 p.
[12] MacEachren AM. Some Truth With Maps: A Primer on Symbolization and Design. Association of American Geographers; 1994. 129 p.
[13] Wolfe JM. Guided Search 2.0: A revised model of visual search. Psychon Bull Rev. 1994;1(2):202–38.
[14] Zhang ZP, Liu J. Research on the symbol vision variable of the three-dimension virtual battle environment. Geomatics Spat Inf Technol. 2014;37(9):7–9.
[15] Hong S, Mao B, Li B. Preliminary Exploration of Three-Dimensional Visual Variables in Virtual Reality. In: 2018 International Conference on Virtual Reality and Visualization (ICVRV). IEEE; 2018. p. 28–34.
[16] Conference Ranks [Internet]. Available from: www.conferenceranks.com/index.html?searchall=Human+Factors+in+Computing+Systems#data.
[17] Ens B, Bach B, Cordeil M, Engelke U, Serrano M, Willett W, et al. Grand Challenges in Immersive Analytics. 2021;17:1–17. Available from: hdl.handle.net/1880/112984.
[18] Dasgupta A, Poco J, Wei Y, Cook R, Bertini E, Silva CT. Bridging Theory with Practice: An Exploratory Study of Visualization Use and Design for Climate Model Comparison. IEEE Trans Vis Comput Graph. 2015;21(9):996–1014.
[19] Laha B, Sensharma K, Schiffbauer JD, Bowman DA. Effects of immersion on visual analysis of volume data. IEEE Trans Vis Comput Graph. 2012;18(4):597–606.
[20] Adams H, Stefanucci J, Creem-Regehr S, Bodenheimer B. Depth Perception in Augmented Reality: The Effects of Display, Shadow, and Position. In: Proc 2022 IEEE Conf Virtual Real 3D User Interfaces (VR). 2022. p. 792–801.
[21] Arjun S, Reddy GSR, Mukhopadhyay A, Vinod S, Biswas P. Evaluating Visual Variables in a Virtual Reality Environment. In: 34th Br Hum Comput Interact Conf (BCS HCI 2021). 2021. p. 133–8.
[22] Hosseinkhani J, Joslin C. Significance of Bottom-Up Attributes in Video Saliency Detection without Cognitive Bias. In: 2018 IEEE 17th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC). 2018. p. 606–13.
[23] Wang Y, Su H, Zhang B, Hu X. Learning Reliable Visual Saliency For Model Explanations. IEEE Trans Multimed. 2020;22(7):1796–807.
[24] Bruce NDB, Tsotsos JK. Saliency based on information maximization. Adv Neural Inf Process Syst. 2005;155–62.
[25] Duzgun S, Isleyen E, Demirkan DC, Orsvuran R, Bozdag E, Pugmire D. Virtual and Augmented Reality for Visualization of Big Data: Examples from Deep Earth to Subsurface. In: AGU Fall Meeting Abstracts. 2019. p. IN21B-05.
[26] Rusu RB, Cousins S. 3D is here: Point Cloud Library (PCL). In: IEEE International Conference on Robotics and Automation (ICRA). Shanghai, China; 2011.