Stephen,
Thanks so much for your participation! I'm not getting sick of it at all. I wanted to try to generate some collegial conversation on topics of shared interest (and learn stuff along the way!), so this has been fabulous.
I liked the ideas you shared because they:
1. Focus attention on differently enabled teammates/co-actors/partners and what it means to co-operate with them (it makes me think of Nancy's comments on human-canine teams), and
2. Suggest that in addition to improving the operation of agents, we can make progress by improving the "team mental models" of human participants through training. Perhaps a broader collection of team skills might be useful, allowing individuals to function at high levels across a variety of joint activity scenarios (e.g., all-human teams, co-operation with highly capable agents, joint activity with less capable agents, and so on).
Thanks again!
Jim
-----------------------------------------------------------
James E. McCarthy, Ph.D.
Vice President, Instructional Systems
Sonalysts, Inc.
2900 Presidential Drive; Suite #190
Fairborn, OH 45324
937-429-9711, Ext 100
mccarthy@sonalysts.com
Original Message:
Sent: 7/13/2023 7:57:00 AM
From: Stephen Gilbert
Subject: RE: Tools or Teammates?
Jim,
You're probably feeling like you're about done with this topic by now, but I'll throw out one other idea based on my work with Michael Dorneich.
I suggest that tool vs. teammate is not really a helpful labeling system, and that the issue is really about the mental models or schemas that we have of the agent, tool, or other person.
E.g., you have a typical mental model of how adults behave in social settings. If I introduce you to my uncle and say, "He had a stroke, so he can understand you fine, but sometimes he can't find the exact words he wants to use to talk," then you'll adjust your mental model of him appropriately and be more patient when he speaks.
If you use Alexa or Siri, you develop a mental model of what those agents can do and act accordingly. Then, if they get a software update, there's a question of how to update your own mental model.
If you point to a guitar and say, "What's that?" I'll say "That's an electric guitar" or "That's an acoustic guitar." A non-musician might say, "That's a guitar." My son the musician would say "That's a Gibson ES-335. It's built for blues. BB King used one." The mental model differs based on what usage the person can envision doing with it.
This draws on the Alva Noë _Action in Perception_ concept that people's perception is not about perceiving truth per se, but about perceiving the world as related to the actions they can take on the world.
So if you want to build agents that support teams, you need to make sure all the teammates know what the agents can do for them and to them (accurate mental models). And per Jessie Chen, the humans should then have user interfaces that communicate enough info about:
- The agent's current actions and plans (goals)
- The agent's assumptions
- The agent's reasoning process and the constraints being used for the current plan
so that the human can reasonably predict outcomes and levels of uncertainty (a rough sketch follows below).
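To make that concrete, here is a minimal sketch in Python of the kind of "transparency payload" an agent might surface through its interface. The structure and field names are just my illustration, loosely inspired by Chen's transparency levels, not any actual model or API:

```python
from dataclasses import dataclass

# Hypothetical illustration only: the information an agent might expose
# so human teammates can form accurate mental models of it. Field names
# are my own sketch, not terms from Chen's model.
@dataclass
class AgentTransparencyReport:
    current_actions: list[str]   # what the agent is doing right now
    goals: list[str]             # the plan it is working toward
    assumptions: list[str]       # beliefs the current plan depends on
    reasoning: str               # why this plan was chosen
    constraints: list[str]       # limits shaping the current plan
    predicted_outcome: str       # what the agent expects to happen
    uncertainty: float           # rough 0-1 doubt about that outcome

report = AgentTransparencyReport(
    current_actions=["searching grid cell B-4"],
    goals=["locate missing hiker before nightfall"],
    assumptions=["hiker is stationary", "thermal camera is calibrated"],
    reasoning="thermal anomaly detected near last known GPS fix",
    constraints=["battery below 30%", "must stay under 400 ft altitude"],
    predicted_outcome="visual confirmation within 10 minutes",
    uncertainty=0.35,
)
print(report.reasoning, f"(confidence: {1 - report.uncertainty:.0%})")
```

The point is just that each bullet above becomes something the interface can actually display, so a human teammate can calibrate predictions about the agent the same way they would about the human colleagues in the next paragraph.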
That sounds like a lot, but we do that with our human teammates all the time. "If I email Sarah a yes/no question, she'll reply immediately." "If you ask Gonzalo for help, he'll be more willing after he's gotten to know you." And we sometimes treat human teammates like tools, esp. under time pressure.
I'm interested in what happens when the agents get more complex and sources like CNET or Consumer Reports start offering agent reviews for comparison shopping. What features will they use? Best use of your personal information without being creepy... Best understanding of what you really mean when you speak... Most buddy-like...
Stephen
==
Stephen B. Gilbert, Ph.D.
Director, Human Computer Interaction
Associate Director, VRAC (Visualize • Reason • Analyze • Collaborate)
Associate Professor, Industrial and Manufacturing Systems Engineering
Iowa State University
515-294-6782
Original Message:
Sent: 7/12/2023 4:03:00 PM
From: James McCarthy
Subject: RE: Tools or Teammates?
Güliz,
Thank you so much! It has been a treat to read the thoughtful comments of so many researchers. Thanks for the references – I've downloaded them and I am looking forward to reading them.
Jim
-----------------------------------------------------------
James E. McCarthy, Ph.D.
Vice President, Instructional Systems
Sonalysts, Inc.
2900 Presidential Drive; Suite #190
Fairborn, OH 45324
937-429-9711, Ext 100
mccarthy@sonalysts.com
Original Message:
Sent: 7/12/2023 2:56:00 PM
From: Güliz Tokadli
Subject: RE: Tools or Teammates?
Thanks for throwing this discussion question out here, James! We are always getting peer-reviewer comments about the distinction between tool and teammate. I have really enjoyed reading this thread and gaining different perspectives.
I think we usually tend to focus on "human-likeness," but I recently shifted my focus to "teammate-likeness" rather than the question of what makes a system human-like, because humans look for social and team-like skills in order to perceive a non-human system as a teammate. In the tool vs. teammate distinction, I think the social interaction between human and non-human agents and each teammate's level of autonomy are very important factors for human teammates.
In this sense, early versions of human-autonomy teams might be modeled on human-animal teaming. And over time, I think, the capabilities and characteristics of autonomous systems can be improved to approach all-human teaming.
Some of the papers I really like about human-animal teaming vs human-autonomy teaming:
- Phillips, E., Schaefer, K. E., Billings, D. R., Jentsch, F., & Hancock, P. A. (2016). Human-animal teams as an analog for future human-robot teams: Influencing design and fostering trust. Journal of Human-Robot Interaction, 5(1), 100-125.
- Kapalo, K. A., Phillips, E., & Fiore, S. M. (2016, September). The application and extension of the human-animal team model to better understand human-robot interaction: Recommendations for further research. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 60, No. 1, pp. 1225-1229). Los Angeles, CA: SAGE Publications.
- Zhao, F., Henrichs, C., & Mutlu, B. (2020, August). Task interdependence in human-robot teaming. In 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) (pp. 1143-1149). IEEE.
Güliz Tokadlı
Original Message:
Sent: 7/12/2023 2:19:00 PM
From: James McCarthy
Subject: RE: Tools or Teammates?
Thanks, Chris.
I think that your perspective is similar to my own, and your message provides a good opportunity to integrate some of what we have heard on this topic so far. Humans have roughly 200,000 years of experience working in teams. Given this long history, patterns of human teamwork tend to be well established. As you and Stephanie point out, humans tend to anthropomorphize agents (in fact, a broad collection of technologies) and imbue them with human characteristics. Therefore, it may be wise to develop autonomous systems with the same types of "team skills" as their human counterparts. However, the reframing that Mike suggested implies that, more generally, what is really needed is a set of theoretical models that explicitly postulate the characteristics of autonomous agents (and conceivably their human teammates) that will maximize the effectiveness of human-AI teams. As Nancy pointed out, these models can use lessons learned from research on all-human teams as their foundation and/or they can be based on other research traditions.
Are there other perspectives out there?
Jim
P.S. Chris – I think that our paths crossed briefly when we were both interns at Honeywell's Systems Research Center outside the Twin Cities. If I recall correctly, you were working with the Pilot's Associate team while I was working with Tom Ploucher and Robert North on workload modeling and automation on a project for the Army.
-----------------------------------------------------------
James E. McCarthy, Ph.D.
Vice President, Instructional Systems
Sonalysts, Inc.
2900 Presidential Drive; Suite #190
Fairborn, OH 45324
937-429-9711, Ext 100
mccarthy@sonalysts.com
Original Message:
Sent: 7/12/2023 1:51:00 PM
From: Christopher Miller
Subject: RE: Tools or Teammates?
Nancy makes a fine point about why machines CAN be teammates, and I completely agree. But machines can ALSO be tools. As I learned in my Intro to Physics class, a lever is a "machine"-- but I don't tend to think of it as a teammate, even though it is independent (of me) and we can work together in relationships with a common goal and different roles and responsibilities.
This probably hinges on WHEN it is useful to think of machines in one way vs. another and that, in turn, rests on complexity (both of mechanism and of behavior), independence of action and, thus, autonomy. A hammer (or a lever) doesn't have a lot of behavioral complexity or autonomy-- so when I hit my thumb with it, I blame myself ("I hit...") but if a computer or a robot does something adverse for our team interactions, I'm more likely to blame it.
I am convinced this is all related to the phenomenon of anthropomorphism-- people's propensity, in certain circumstances, to regard complex, autonomously-acting agents/systems/things (e.g., weather, "the gods") using the same set of expectations and interpretations that they have learned for human-human interactions. This is behind Dennett's Intentional Stance (1989), Nass's "Computers As Social Actors" paradigm (Reeves and Nass, 1996), and my own work on human-machine "etiquette" (Hayes and Miller, 2011; Miller, 2004).
But that ends up being a final, strong reason (IMHO) for HF researchers to at least permit the idea of treating machines as teammates-- because human users do! Pretty regularly. And users' interactions are functionally similar (even identical) to how they behave with human teammates. It's worth looking at the Nass work (and later work in that tradition) for factors and conditions that do and don't encourage anthropomorphic responses-- and maybe designing accordingly.
In the end, I suspect this is a spectrum of perceptions and mental models that maps onto a range of behavioral interactions. The agents themselves (humans vs. machines vs. animals) may be different and provide different affordances (e.g., a greater range of behaviors and understanding for humans, but also intentionality that can be explicitly at odds with that of the user), but the interactions are similar across a wide range of what we might call "teaming" behaviors.
--Chris
------------------------------
Christopher Miller
Chief Scientist -- Smart Information Flow Technologies
Minneapolis MN
------------------------------
Original Message:
Sent: 07-11-2023 12:51
From: Nancy Cooke
Subject: Tools or Teammates?
I have done some thinking on this question. I say teammate, as defined in the literature: a member of a heterogeneous and interdependent team in which members work toward a common goal and have different roles and responsibilities. I can add the following from a recent article published in Human Factors:

"As indicated in Question 1 of Table 1, intelligent machines can be considered teammates just as animals can be considered teammates (Phillips et al., 2016). In many ways, we may do better to consider machine teammates as members of another species. Considering a machine as a teammate does not mean that the machines are in control, that machines are human or human-like, or that the machine is not human-centered. In fact, designing a machine to work well with humans as a teammate can increase human-centeredness. In addition, this design can draw on what we know from the team literature (e.g., team composition, team process, team development, and team measurement) to do so."

Cooke, N. J., Cohen, M. C., Fazio, W. C., Inderberg, L. H., Johnson, C. J., Lematta, G. J., Peel, M., & Teo, A. (2023). From teams to teamness: Future directions in the science of team cognition. Human Factors, 0(0). https://doi.org/10.1177/00187208231162449
------------------------------
Nancy Cooke
Professor, Arizona State University
Mesa AZ
Original Message:
Sent: 07-10-2023 09:38
From: James McCarthy
Subject: Tools or Teammates?
My colleague, Dr. Lillian Asiala, and I have recently begun to explore methods to integrate AI-based agents into human teams. One point of contention in the literature is whether we should consider autonomous agents tools or teammates. Some researchers argue strongly that these systems lack the capacity to serve as teammates, largely because they lack robust mental models and have limited self-awareness. Others hold that an entity is not automatically disqualified as a potential teammate just because it lacks human cognitive abilities; as an example, they often point to search and rescue teams that pair humans and canines.
What do you think?
For our work, Lillian and I make a distinction between automation and autonomy. We define automation as technology that requires human supervision/control. That is, automation provides a tool that humans can use to improve their productivity, speed, accuracy, etc. On the other hand, autonomous systems use AI or similar technologies to make decisions and/or take actions within a specified domain. These systems have the ability to respond to novel performance challenges. If the autonomous system is also capable of coordination and cooperation, we can begin to view the system as an autonomous teammate that is capable of not only independent but also interdependent action.
Part of our work will explore what capabilities an autonomous agent must have to function as a teammate and how best to structure and train these hybrid teams.
I would welcome your thoughts on the tool vs. teammate distinction and which capabilities agents must have if they are to be considered teammates.
------------------------------
James E. McCarthy, Ph.D.
Vice President, Instructional Systems
Sonalysts, Inc.
Fairborn, OH 45324
937-429-9711 Ext. 100
------------------------------