examine the effects of sharing specific details, such as the
severity level and health implications of the gases, on both
safety perceptions and decision-making.
REFERENCES
[1] Merritt, S. M. (2011). Affective processes in human–automation interactions. Human Factors, 53(4), 356–370.
[2] Khavas, Z. R. (2021). A Review on Trust in Human-
Robot Interaction. arXiv preprint arXiv:2105.10045.
[3] Parasuraman, R., and Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230–253. https://doi.org/10.1518/001872097778543886.
[4] Lewis, M., Walker, P., and Sycara, K. (2018). The Role of Trust in Human-Robot Interaction. In H. A. Abbass et al. (Eds.), Foundations of Trusted Autonomy, Studies in Systems, Decision and Control, 117. https://doi.org/10.1007/978-3-319-64816-3_8.
[5] Ur Rehman, A., Lyche, T., Awuah-Offei, K., and Nadendla, V. S. S. (2020). Effect of text message alerts on miners’ evacuation decisions. Safety Science, 130, 104875. https://doi.org/10.1016/j.ssci.2020.104875.
[6] Merritt, S. M., and Ilgen, D. R. (2008). Not all trust is created equal: Dispositional and history-based trust in human-automation interactions. Human Factors: The Journal of the Human Factors and Ergonomics Society, 50(2), 194–210.
[7] He, C., Hu, Z., Shen, Y., and Wu, C. (2023). Effects of Demographic Characteristics on Safety Climate and Construction Worker Safety Behavior. Sustainability, 15(14), 10985. https://doi.org/10.3390/su151410985.
[8] Fishbein, M., and Ajzen, I. (1975). Belief, attitude, intention and behaviour: An introduction to theory and research. Reading, MA: Addison-Wesley.
[9] Mayer, R. C., Davis, J. H., and Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709–734.
[10] Lee, J. D., and See, K. A. (2004). Trust in Automation: Designing for Appropriate Reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50.30392.
[11] Lerch, F. J., Prietula, M. J., and Kulik, C. T. (1997). The Turing effect: The nature of trust in expert systems advice. In Expertise in Context (pp. 417–448). MIT Press.
[12] Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., and Yang, G.-Z. (2019). XAI—Explainable artificial intelligence. Science Robotics, 4(37). https://doi.org/10.1126/scirobotics.aay7120.
[13] Krafft, P. M., Young, M., Katell, M., Huang, K., and Bugingo, G. (2020). Defining AI in policy versus practice. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 72–78).
[14] Shin, D. (2020). User perceptions of algorithmic decisions in the personalized AI system: Perceptual evaluation of fairness, accountability, transparency, and explainability. Journal of Broadcasting & Electronic Media, 64(4), 541–565.
[15] Glikson, E., and Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660.
[16] Venkatesh, V., Morris, M. G., Davis, G. B., and Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478.
[17] McKnight, D. H., Choudhury, V., and Kacmar, C. (2002). The impact of initial consumer trust on intentions to transact with a web site: A trust building model. The Journal of Strategic Information Systems, 11(3–4), 297–323.
[18] Lupton, D. (2016). Digital risk society. In Routledge Handbook of Risk Studies (pp. 301–309). Routledge.
[19] Slovic, P. (1987). Perception of risk. Science, 236(4799), 280–285.
[20] Adadi, A., and Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138–52160.
[21] Grosz, B. J., and Kraus, S. (1996). Collaborative plans for complex group action. Artificial Intelligence, 86(2), 269–357.
[22] Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human–Computer Interaction, 36(6), 495–504.
[23] Hoff, K. A., and Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407–434.
[24] Lewicki, R. J., McAllister, D. J., and Bies, R. J. (1998). Trust and distrust: New relationships and realities. Academy of Management Review, 23(3), 438–458.
[25] Johnson, D., and Grayson, K. (2005). Cognitive and affective trust in service relationships. Journal of Business Research, 58(4), 500–507.
[26] Tyler, T. R. (2003). Trust within organisations. Personnel Review, 32(5), 556–568.
[27] Gefen, D., Karahanna, E., and Straub, D. W. (2003). Trust and TAM in online shopping: An integrated model. MIS Quarterly, 27(1), 51–90.