Special Issue | Call for Papers | Is Our Future Colleague Even Human? Advancing Human-AI Teamwork from An Organizational Perspective
05.07.2023
Guest Editors: Gudela Grote (ETH Zurich, Switzerland); Anna-Sophie Ulfert (Eindhoven University of Technology, The Netherlands); Eleni Georganta (University of Amsterdam, The Netherlands)
Background and Rationale for the Special Issue
Artificial intelligence (AI) is developing into one of the most impactful trends in the workplace, as AI systems can optimize work processes and extend human capabilities (Kaplan & Haenlein, 2020; Parker & Grote, 2022). With increasingly complex tasks being delegated to AI systems, the role of AI in work collaborations is changing rapidly. AI is evolving from a technology used as a tool into a teammate (Larson & DeChurch, 2020). For example, in healthcare, AI systems provide treatment alongside human physicians, assisting with clinical decisions and making medical judgments. These organizational and technological developments are expected to promote a new workforce, namely human-AI teams (O'Neill et al., 2022; Zhang et al., 2021). Following definitions of human teamwork, human-AI teaming could be defined as (a) a collection of human individuals and one or more agents as autonomous team members “who (b) socially interact (face-to-face or, increasingly, virtually); (c) possess one or more common goals; (d) are brought together to perform (organizationally) relevant tasks; (e) exhibit interdependencies with respect to workflow, goals, and outcomes; (f) have different roles and responsibilities; and (g) are together embedded in an encompassing organizational system, with boundaries and linkages to the broader system context and task environment” (Kozlowski & Ilgen, 2006, p. 79). But can conceptualizations and operationalizations of human team research be readily applied to the context of human-AI teams, or do we require new theories and methodological approaches?
AI technologies are moving away from being used simply as tools and are becoming autonomous team members operating alongside humans (Seeber et al., 2020). Nevertheless, it remains unclear how future AI teammates should be defined to reflect an active part of the team and whether they should engage in problem-solving processes, propose and evaluate suggestions, plan, act, and learn like human teammates. Additionally, evidence is lacking on how human and AI teammates will understand each other and interact in routine and non-routine situations. Further, it is unclear what characteristics an AI teammate requires for human-AI teams to work effectively. Although research has clearly demonstrated the impact of human team member characteristics (Mathieu et al., 2019) or AI technologies (Araujo, 2018), little is known about how AI teammates should appear, interact, and behave to be perceived as members of a team.
At the same time, although some researchers have proposed that human-AI team interaction should be governed by the same principles that underlie human collaboration, others have argued that simply transferring what we already know about human teams is insufficient (Ulfert & Georganta, 2020). It remains unclear which aspects found in high-performing human teams are also required for successful collaboration and outcomes in human-AI teams (O'Neill et al., 2022). A better understanding of team member activities (e.g., team learning) and team properties (e.g., team trust) is urgently needed to support human-AI collaboration at work. Thus, we need to explore team processes (transition, action, interpersonal) and emergent states (affective, cognitive, motivational; Ilgen et al., 2005) in this new team context and investigate whether and how these might differ from human-only teams.
Finally, many questions remain unanswered regarding the development and implementation of AI as teammates and how this new role of AI will shape the organizational structures, culture, and decision-making (e.g., personnel selection).
Aims and Scope of the Special Issue
To contribute to a deeper, more comprehensive, and more nuanced understanding of AI as teammates, human-AI team processes and emergent states, and human-AI team effectiveness, this special issue seeks to publish original empirical research, theory development, and conceptual papers that advance existing evidence and theory in organizational behavior. It also serves as a catalyst to promote innovative, multidisciplinary, and cutting-edge research on human-AI teams. For example, the special issue highlights central and unanswered questions of human-AI teams, such as: How should AI teammates be designed and developed to be perceived as members of a team? What are the most essential components that shape human-AI compared to human-only collaboration? What methodological and interdisciplinary approaches allow an investigation of human-AI teamwork? And to what extent should organizational researchers be involved in developing better AI team members? Example topics and research questions that address the aims and scope of this Special Issue include the following:
AI as teammates
- How should an AI teammate appear to be perceived as a teammate?
- How should an AI teammate behave (act and react) to be understood and accepted as part of the team?
- What information should an AI teammate understand and communicate to work interdependently with human teammates?
Understanding team processes and emergent states in human-AI teams
- How do emergence and development of emergent states, such as trust, differ between human-AI and human-only teams?
- What is the role of interpersonal, action, and transition processes for outcomes when collaborating with human compared to AI teammates?
- What are the unique challenges for human-AI team interaction, and how are they shaping team effectiveness?
Introduction of human-AI teaming in organizations
- What are possible strategies to shape realistic expectations for human-AI collaboration and prepare human teammates for this organizational change?
- Which skills and attitudes of human teammates are required (or need to be trained) for effective human-AI teamwork?
- How can AI teammates impact how organizations operate at the micro-, meso-, and macro-level, and how might this change over time?
Submission Instructions
This call is open and competitive. We are interested in original, cutting-edge submissions. Submissions must not be under consideration by another journal or outlet. Papers to be considered for this Special Issue should be submitted electronically via JOB’s online submission system. Manuscripts will be handled by the Special Issue guest editors and reviewed by at least two anonymous reviewers, who will be blind to the identity of the author(s). The Special Issue Editors are happy to discuss initial ideas for papers and can be contacted directly: e.georganta@uva.nl, a.s.ulfert.blank@tue.nl.
Full manuscript submissions should be made electronically through the Submission System: https://submission.wiley.com/journal/job. Please refer to the Author Guidelines at https://onlinelibrary.wiley.com/page/journal/10991379/homepage/forauthors.html before submission. Please select ‘Research Article’ as the article type on submission. On the Additional Information page during submission, select ‘Yes, this is for a Special Issue’ and the relevant Special Issue title from the dropdown list.
For questions about the submission system please contact the Editorial Office at JOBedoffice@wiley.com.
For full references, please consult this link: https://onlinelibrary.wiley.com/pb-assets/assets/10991379/JOB_SI_CfP_Human_AI_Teaming.pdf