Promoting AI adoption in the workplace through explainability and trust: An empirical study
DOI: https://doi.org/10.1522/radm.no8.1840
Keywords: artificial intelligence, trust, explainability, work, intention to use
Abstract
Artificial intelligence (AI) is associated with several benefits for workers and organizations. However, its unprecedented capabilities are prone to provoke fear among humans about the durability of their jobs, along with reluctance to use AI. In this study, we explore the role of trust in workers' use of AI, as well as the capacity of algorithm explainability to promote trust. To this end, a randomized experimental design was used. The results reveal that trust fosters the intention to use AI, but that explainability does not contribute to the development of trust. Moreover, explainability had an unexpected detrimental effect on the intention to use AI.
License
© Viviane Masciotra, Jean-Sébastien Boudrias 2024

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International license.