Human-AI-Robot Teaming Technical Group


Tools or Teammates?

  • 1.  Tools or Teammates?

    Posted 07-10-2023 09:39

    My colleague, Dr. Lillian Asiala, and I have recently begun to explore methods to integrate AI-based agents into human teams.  One point of contention in the literature is whether we should consider autonomous agents as tools or as teammates.  Some researchers argue strongly that these systems lack the capacity to serve as teammates, largely because they lack robust mental models and have limited self-awareness.  Others hold that just because an entity lacks human cognitive abilities, it is not automatically disqualified as a potential teammate.  As an example, they often point to search and rescue teams that pair humans and canines.

    What do you think?

    For our work, Lillian and I make a distinction between automation and autonomy.  We define automation as technology that requires human supervision/control.  That is, automation provides a tool that humans can use to improve their productivity, speed, accuracy, etc.  On the other hand, autonomous systems use AI or similar technologies to make decisions and/or take actions within a specified domain.  These systems have the ability to respond to novel performance challenges.  If the autonomous system is also capable of coordination and cooperation, we can begin to view the system as an autonomous teammate that is capable of not only independent, but also interdependent action.
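
    To make the distinction concrete, here is a toy sketch (Python; the class and method names are my own shorthand for this post, not anything from our project) of the progression from tool to teammate:

        class AutomatedTool:
            """Automation: acts only under direct human supervision/control."""
            def execute(self, human_command):
                # A tool does exactly what it is told, nothing more.
                return f"performing '{human_command}' as instructed"

        class AutonomousSystem(AutomatedTool):
            """Autonomy: selects its own actions within a specified domain."""
            def decide(self, situation):
                # Responds to novel performance challenges without waiting
                # for a human command.
                return f"choosing my own response to '{situation}'"

        class AutonomousTeammate(AutonomousSystem):
            """Adds coordination/cooperation: capable of interdependent,
            not just independent, action."""
            def coordinate(self, teammates, shared_goal):
                # Shares intent so teammates can anticipate its actions.
                return f"aligning with {', '.join(teammates)} on '{shared_goal}'"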

    Part of our work will explore what capabilities an autonomous agent must have to function as a teammate and how best to structure and train these hybrid teams.

    I would welcome your thoughts on the tool vs. teammate distinction and which capabilities agents must have if they are to be considered teammates.



    ------------------------------
    James E. McCarthy, Ph.D.
    Vice President, Instructional Systems
    Sonalysts, Inc.
    Fairborn, OH 45324
    937-429-9711 Ext. 100
    ------------------------------


  • 2.  RE: Tools or Teammates?

    Posted 07-11-2023 12:51

    I have done some thinking on this question.  I say teammate, as defined in the literature: a member of a heterogeneous and interdependent team in which members work toward a common goal and have different roles and responsibilities.

    I can add the following from a recent article published in Human Factors: "As indicated in Question 1 of Table 1, intelligent machines can be considered teammates just as animals can be considered teammates (Phillips et al., 2016). In many ways, we may do better to consider machine teammates as members of another species. Considering a machine as a teammate does not mean that the machines are in control, that machines are human or human-like, or that the machine is not human-centered. In fact, designing a machine to work well with humans as a teammate can increase human-centeredness. In addition, this design can draw on what we know from the team literature (e.g., team composition, team process, team development, and team measurement) to do so."

    Cooke, N. J., Cohen, M. C., Fazio, W. C., Inderberg, L. H., Johnson, C. J., Lematta, G. J., Peel, M., & Teo, A. (2023). From teams to teamness: Future directions in the science of team cognition. Human Factors, 0(0). https://doi.org/10.1177/00187208231162449



    ------------------------------
    Nancy Cooke
    Professor, Arizona State University
    Mesa AZ
    ------------------------------



  • 3.  RE: Tools or Teammates?

    Posted 07-12-2023 08:55

    Nancy,

    Thanks for your reply.  I especially like your idea that, "we may do better to consider machine teammates as members of another species."  It provides an interesting way to frame the problem.

    Jim



    ------------------------------
    James E. McCarthy, Ph.D.
    Vice President, Instructional Systems
    Sonalysts, Inc.
    Fairborn, OH 45324
    937-429-9711 Ext. 100
    ------------------------------



  • 4.  RE: Tools or Teammates?

    Posted 07-12-2023 10:58

    Hi there!

    To add to your initial interest in "tool vs. teammate" discussion, I wanted to suggest that you look into literature on mind perception/humanlikeness/anthropomorphism (terms used somewhat interchangeably), and social robotics. Nancy and Mike have brought up the concept of humanlikeness in terms of the design/capabilities of the agent, but I wanted to add that the extent to which an agent is perceived as humanlike is highly subjective and has a lot of consequences for human-AI interactions.

    Humans have a natural tendency to endow objects with humanlike "minds of their own," capable of having their own beliefs, desires, intentions, etc., and this is (somewhat) independent of the objects' actual capabilities. Merely giving a self-driving car a human name and voice can increase trust and people's perception of the underlying competence (check out Waytz, Heafner, & Epley, 2014). Anthropomorphism/mind perception is complex (see Epley, Waytz, & Cacioppo, 2007), and the extent to which it is applied varies a lot and deeply affects social perceptions (even at the level of processing sensory information) and treatment of agents (e.g., humans are punished for "unfairness", but the same actions from computers are not seen as unfair; see Sanfey et al., 2003). Some folks in the social robotics field believe that designing agents which can activate the same social brain regions used in human-human interactions is the key to effective HRI / teamwork with agents (see Wiese, Metta, & Wykowska, 2017). To date, there are no AI agents capable of maintaining a high degree of perceived humanlikeness over longitudinal interactions in the real world without deception (disagreements welcomed!). Everyone loves to reference LLMs like ChatGPT, but it doesn't take long before it starts to "hallucinate" or for people to jailbreak it. As soon as the agent does something "mindless," the illusion is shattered and it's just another dumb bot.

    Something I (and others) worry about in this context is the potential to harm human-human interactions once an AI agent is introduced in the real world. "Humanization" mechanisms share the same spectrum as "dehumanization" of other humans (see Waytz, Gray, Epley, & Wegner, 2010). It is possible that adding anthropomorphic agents (even during one-off interactions) will create unintended opportunities to act unfairly or rudely toward other human teammates.

    Hope that adds something meaningful. Happy to chat more if desired 😊

    -Stephanie



    ------------------------------
    Stephanie Tulk Jesso, PhD
    Assistant Professor, Systems Science and Industrial Engineering
    SUNY Binghamton

    ------------------------------



  • 5.  RE: Tools or Teammates?

    Posted 07-12-2023 11:17

    Stephanie,

    Oh, man; that's an entirely different thing to think about.  Thanks for mentioning it.  I'm particularly fascinated by the idea that "humanizing" agents may lead to "dehumanizing" people.  What a rich and important area of research!

    Thanks for your contribution.

    Jim

    -----------------------------------------------------------
    James E. McCarthy, Ph.D.
    Vice President, Instructional Systems
    2900 Presidential Drive; Suite #190
    Fairborn, OH 45324
    937-429-9711, Ext 100
    mccarthy@sonalysts.com






  • 6.  RE: Tools or Teammates?

    Posted 07-12-2023 13:51

    Nancy makes a fine point about why machines CAN be teammates, and I completely agree.  But machines can ALSO be tools.  As I learned in my Intro to Physics class, a lever is a "machine"-- but I don't tend to think of it as a teammate, even though it is independent (of me) and we can work together in relationships with a common goal and different roles and responsibilities.

    This probably hinges on WHEN it is useful to think of machines in one way vs. another and that, in turn, rests on complexity (both of mechanism and of behavior), independence of action and, thus, autonomy.  A hammer (or a lever) doesn't have a lot of behavioral complexity or autonomy-- so when I hit my thumb with it, I blame myself ("I hit...") but if a computer or a robot does something adverse for our team interactions, I'm more likely to blame it.  

    I am convinced this is all related to the phenomenon of anthropomorphism-- people's propensity, in certain circumstances, to regard complex, autonomously acting agents/systems/things (e.g., weather, "the gods") using the same set of expectations and interpretations that they have learned for human-human interactions.  This is behind Dennett's Intentional Stance (1989), Nass's "Computers As Social Actors" paradigm (Reeves and Nass, 1996), and my own work on human-machine "etiquette" (Hayes and Miller, 2011; Miller, 2004).

    But that ends up being a final, strong reason (IMHO) for HF researchers to at least permit the idea of treating machines as teammates-- because human users do!  Pretty regularly.  And users' interactions are functionally similar (even identical) to how they behave with human teammates.  It's worth looking at the Nass work (and later work in that tradition) for factors and conditions that do and don't encourage anthropomorphic responses-- and maybe designing accordingly.

    In the end, I suspect this is a spectrum of perceptions and mental models that maps onto a range of behavioral interactions.  The agents themselves (humans vs. machines vs. animals) may be different and provide different affordances (e.g., greater range of behaviors and understanding for humans, but also intentionality that can be explicitly at odds and opposed to that of the user), but the interactions are similar across a wide range of what we might call "teaming" behaviors.  

    --Chris



    ------------------------------
    Christopher Miller
    Chief Scientist -- Smart Information Flow Technologies
    Minneapolis MN
    ------------------------------



  • 7.  RE: Tools or Teammates?

    Posted 07-12-2023 14:19

    Thanks, Chris.

    I think that your perspective is similar to my own, and your message provides a good opportunity to integrate some of what we have heard on this topic so far.  Humans have roughly 200,000 years of experience working in teams.  Given this long history, patterns of human teamwork tend to be well established.  As you and Stephanie point out, humans tend to anthropomorphize agents (in fact, a broad collection of technologies) and imbue them with human characteristics.  Therefore, it may be wise to develop autonomous systems with the same types of "team skills" as their human counterparts.  However, the reframing that Mike suggested implies that, more generally, what is really needed is a set of theoretical models that explicitly postulate the characteristics of autonomous agents (and conceivably their human teammates) that will maximize the effectiveness of human-AI teams.  As Nancy pointed out, these models can use the lessons learned from research on all-human teams as their foundation and/or they can be based on other research traditions.

    Are there other perspectives out there?

    Jim

    P.S. Chris – I think that our paths crossed briefly when we were both interns at Honeywell's Systems Research Center outside the Twin Cities.  If I recall correctly, you were working with the Pilot's Associate team while I was working with Tom Ploucher and Robert North on workload modeling and automation on a project for the Army.

    -----------------------------------------------------------
    James E. McCarthy, Ph.D.
    Vice President, Instructional Systems
    2900 Presidential Drive; Suite #190
    Fairborn, OH 45324
    937-429-9711, Ext 100
    mccarthy@sonalysts.com






  • 8.  RE: Tools or Teammates?

    Posted 07-12-2023 14:56
    Thanks for throwing this discussion question out here, James! We are always getting peer-reviewer comments about the tool vs. teammate distinction. I really like reading this thread and gaining different perspectives.

    I think that we usually tend to focus on "human-likeness," but I recently shifted my focus to "teammate-likeness" rather than the question of "what makes a system human-like," because humans look for social and team-like skills in order to perceive a non-human system as a teammate. In the tool vs. teammate distinction, I think the social interaction between human and non-human agents, and the level of autonomy per teammate, are very important factors for human teammates.

    In this sense, the early versions of human-autonomy teams could be modeled on human-animal teaming. And in time, I think, the autonomous system's capabilities and characteristics can be improved to approach all-human teaming.
    Some of the papers I really like about human-animal teaming vs. human-autonomy teaming:
    - Phillips, E., Schaefer, K. E., Billings, D. R., Jentsch, F., & Hancock, P. A. (2016). Human-animal teams as an analog for future human-robot teams: Influencing design and fostering trust. Journal of Human-Robot Interaction, 5(1), 100-125.
    - Kapalo, K. A., Phillips, E., & Fiore, S. M. (2016, September). The application and extension of the human-animal team model to better understand human-robot interaction: Recommendations for further research. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 60, No. 1, pp. 1225-1229). Sage CA: Los Angeles, CA: SAGE Publications.
    - Zhao, F., Henrichs, C., & Mutlu, B. (2020, August). Task interdependence in human-robot teaming. In 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) (pp. 1143-1149). IEEE.

    Güliz Tokadlı






  • 9.  RE: Tools or Teammates?

    Posted 07-12-2023 16:03

    Güliz,

    Thank you so much!  It has been a treat to read the thoughtful comments of so many researchers.  Thanks for the references – I've downloaded them and I am looking forward to reading them.

    Jim

    -----------------------------------------------------------
    James E. McCarthy, Ph.D.
    Vice President, Instructional Systems
    2900 Presidential Drive; Suite #190
    Fairborn, OH 45324
    937-429-9711, Ext 100
    mccarthy@sonalysts.com






  • 10.  RE: Tools or Teammates?

    Posted 07-13-2023 07:57
    Jim, 
    You're probably feeling like you're about done with this topic by now, but I'll throw out one other idea based on my work with Michael Dorneich.

    I suggest that tool vs teammate is not really a helpful labeling system, and that the issue is really about the mental models or schemas that we have of the agent or tool or other person. 

    E.g., you have a typical mental model of how adults behave in social settings. If I introduce you to my uncle and say, "He had a stroke, so he can understand you fine, but sometimes he can't find the exact words he wants to use to talk," then you'll adjust your mental model of him appropriately and be more patient when he speaks. 

    If you use Alexa or Siri, you develop a mental model of what those agents can do and act accordingly. Then, if they get a software update, there's a question of how to update your own mental model. 

    If you point to a guitar and say, "What's that?" I'll say "That's an electric guitar" or "That's an acoustic guitar."  A non-musician might say, "That's a guitar." My son the musician would say "That's a Gibson ES-335. It's built for blues. BB King used one." The mental model differs based on the uses the person can envision for it.

    This draws on the Alva Noë _Action in Perception_ concept that people's perception is not about perceiving truth per se, but about perceiving the world as related to the actions they can take on the world. 

    So if you want to build agents that support teams, you need to make sure all the teammates know what the agents can do for them and to them (accurate mental models). And per Jessie Chen, the humans should then have user interfaces that communicate enough info about:
    - The agent's current actions and plans (goals)
    - The agent's assumptions 
    - The agent's reasoning process and constraints being used for current plan 
    that the human can reasonably predict outcomes and levels of uncertainty. (A toy sketch of one such record follows.)
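
    Here is what such a record might look like (Python; the field names are mine, just one way to package the three kinds of info above, loosely in the spirit of Chen's agent-transparency work, not a published API):

        from dataclasses import dataclass

        @dataclass
        class AgentTransparencyReport:
            current_actions: list[str]        # what the agent is doing now
            goals_and_plans: list[str]        # where those actions are headed
            assumptions: list[str]            # what the agent takes as given
            reasoning: str                    # constraints behind the current plan
            predicted_outcome: str = ""       # what the agent expects to happen
            outcome_uncertainty: float = 0.0  # 0 = certain, 1 = no confidence

        # Hypothetical teammate-facing status update:
        report = AgentTransparencyReport(
            current_actions=["searching sector B"],
            goals_and_plans=["clear sectors A-D before nightfall"],
            assumptions=["no one has already evacuated sector B"],
            reasoning="prioritizing sectors by last-detected activity",
            predicted_outcome="sector B cleared in ~12 minutes",
            outcome_uncertainty=0.3,
        )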

    That sounds like a lot, but we do that with our human teammates all the time. "If I email Sarah a yes/no question, she'll reply immediately." "If you ask Gonzalo for help, he'll be more willing after he's gotten to know you." And we sometimes treat human teammates like tools, esp. under time pressure. 

    I'm interested in when the agents get more complex and sources like CNET or Consumer Reports start offering agent reviews for comparison shopping. What features will they use?  Best use of your personal information without being creepy...Best understanding of what you really mean when you speak...Most buddy-like...


    Stephen

    ==
    Stephen B. Gilbert, Ph.D.
    Director, Human Computer Interaction
    Associate Director, VRAC Visualize • Reason • Analyze • Collaborate
    Associate Professor, Industrial and Manufacturing Systems Engineering
    Iowa State University
    515-294-6782






  • 11.  RE: Tools or Teammates?

    Posted 07-13-2023 10:10

    Stephen,

    Thanks so much for your participation!  I'm not getting sick of it at all.  I wanted to try to generate some collegial conversation on topics of shared interest (and learn stuff along the way!), so this has been fabulous.

    I liked the ideas you shared because they:

    1.  Focus attention on differently enabled teammates/co-actors/partners and what it means to co-operate with them (it makes me think of Nancy's comments on human-canine teams), and

    2.  Suggest that in addition to improving the operation of agents, we can make progress by improving the "team mental models" of human participants through training.  Perhaps a broader collection of team skills might be useful to allow individuals to function at high levels across a variety of joint activity scenarios (e.g., all-human teams, co-operation with highly capable agents, joint activity with less capable agents, and so on).

    Thanks again!

    Jim

    -----------------------------------------------------------
    James E. McCarthy, Ph.D.
    Vice President, Instructional Systems
    2900 Presidential Drive; Suite #190
    Fairborn, OH 45324
    937-429-9711, Ext 100
    mccarthy@sonalysts.com






  • 12.  RE: Tools or Teammates?

    Posted 07-11-2023 17:19

    Hi James,

    Those who advise against conceptualizing technologies as teammates seem to be most focused on reminding us that machines are not humans, and should not be conceptualized as HUMAN teammates. You also make a number of good points about what machines are (at least currently) unable to do. Some make a distinction that these machines can be team players, but not teammates, and some eschew the team moniker entirely.

    It is also dangerous to conceptualize these as tools, as this designation masks the effect that these technologies have (again, at the very least) on the cognitive processes of their human counterparts. This is true of what you termed automation AND autonomous technologies. Once a machine processes input data and presents an output (sometimes referred to as making an inference), it is in a different league than technologies that do not, and design considerations should change accordingly. 

    I have found that the most reliable perspective in directing my design is that of joint human-machine activity. Designing for joint activity focuses on how all of the machine and human agents come together to jointly perform the activities necessary to perform a task, or a mission. It also implies that there are numerous joint activity architectures that can be enabled in the design directions that are chosen. I have found that most activities map onto the standard set of described macrocognitive functions, or can be decomposed into them (e.g., detecting events, sensemaking, responding, coordination). When thinking about how machines contribute to joint activity, we can more readily bring everything we know about how these machines influence joint cognition, including how they can induce framing effects, how they enable or hobble detection, or how they induce increased coordination or stymie it. 
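
    To make that concrete, here is one illustrative decomposition (Python; the enum and the example task mapping are my own for this post, not a JAD artifact) of a joint activity by which agent covers each macrocognitive function:

        from enum import Enum, auto

        class MacrocognitiveFunction(Enum):
            DETECTING_EVENTS = auto()
            SENSEMAKING = auto()
            RESPONDING = auto()
            COORDINATING = auto()

        # Assigning each function to a human or machine agent makes gaps
        # (and overloads) in the joint activity architecture easy to spot.
        search_and_rescue = {
            MacrocognitiveFunction.DETECTING_EVENTS: "drone thermal imaging",
            MacrocognitiveFunction.SENSEMAKING: "human incident commander",
            MacrocognitiveFunction.RESPONDING: "human rescue team",
            MacrocognitiveFunction.COORDINATING: "shared: agent proposes, human approves",
        }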

    To answer what capabilities a technology that supports joint activity must have: it somewhat depends on the activity. At the most basic level, though, I'll borrow from one of the more widely read "team player" papers: they must observe the Basic Compact, facilitate (or at least not obscure) shared common ground, and must be observable, predictable, and directable. We are developing 20 more detailed Joint Activity Design (JAD) heuristics right now, which I would be happy to share when we're a little further along.

    Is that helpful?
    Mike



    ------------------------------
    Michael Rayo
    Associate Professor, Cognitive Systems Engineering
    The Ohio State University
    Columbus OH
    ------------------------------



  • 13.  RE: Tools or Teammates?

    Posted 07-12-2023 09:00

    Mike,

    Thanks for your input.  Like Nancy's comments, yours provide a useful way to frame the problem.  I'd like to get smarter on the concept of joint activity design.  Can you share some references that might help me learn more?

    Jim

    -----------------------------------------------------------
    James E. McCarthy, Ph.D.
    Vice President, Instructional Systems
    2900 Presidential Drive; Suite #190
    Fairborn, OH 45324
    937-429-9711, Ext 100
    mccarthy@sonalysts.com






  • 14.  RE: Tools or Teammates?

    Posted 07-14-2023 09:07

    Hi Jim, 

    It's been fun to listen in on the perspectives from the HART TG on this very interesting topic. I wanted to jump in and offer two things: 1) Given the amount of interest in this topic, I would recommend that this could be a highly attended discussion panel event or symposium at the next HFES conference (as the HART TG Program Chair, there's a shameless plug). 2) My colleague Kevin Wynne and I wrote a brief conceptual paper on this topic in 2018; I've shared the citation below. I believe machines "can be" teammates, particularly when you endorse the idea that Nancy has spoken about: that they need not be the same kind of teammate as your human counterparts. I also believe that many of our machines are tools but may, at times, be "teammate-like" (pulling on a thread that Güliz mentioned earlier). Very interesting discussion thread, thank you for initiating it. Reference below:

    Wynne, K. T., & Lyons, J. B. (2018). An integrative model of autonomous agent teammate-likeness. Theoretical Issues in Ergonomics Science, 19, 353-374.



    ------------------------------
    Joseph Lyons
    Centerville OH
    ------------------------------



  • 15.  RE: Tools or Teammates?

    Posted 07-14-2023 10:44

    Joseph,

    Thanks for the reference!  I've downloaded it and I'll check it out.

    I like the idea of a panel; I'll see if I can put something together for 2024.

    Jim

    -----------------------------------------------------------
    James E. McCarthy, Ph.D.
    Vice President, Instructional Systems
    2900 Presidential Drive; Suite #190
    Fairborn, OH 45324
    937-429-9711, Ext 100
    mccarthy@sonalysts.com






  • 16.  RE: Tools or Teammates?

    Posted 07-14-2023 14:48

    Hi Jim,

    We debated this issue on the recent NAS study on human-AI teaming.  We decided that the framework of a "team" was appropriate for several reasons. First, as Nancy said, because it meets the definition of a team. Secondly, because the concept of a team was important for providing improved adaptability and the capacity for mutual support and backup. And thirdly, because the need for coordination with people increases as the capability of the technology increases. More details can be found in that report.

    National Academies of Sciences, Engineering, and Medicine. (2021). Human-AI teaming: State-of-the-art and research needs. Washington, DC: National Academies Press.

    In addition, you may be interested in a discussion I had with Ben Shneiderman on this topic (attached) that explores the ins and outs of considering them as a team. As you know, he is very much of the "AI cannot be a teammate, only a tool" opinion. And in fairness, there is very little in the way of AI that acts like a teammate at present. He published this discussion in his Human-Centered AI email posting that goes out once a week to the community. 

    best wishes,

    Mica



    ------------------------------
    Mica Endsley CPE
    HFES Government Relations Committee Chair
    ------------------------------




  • 17.  RE: Tools or Teammates?

    Posted 07-17-2023 09:27

    Mica,

    Thank you very much!  The National Academies report is great and was very influential when Lillian and I were putting together our initial literature review and research plan.  It has served as a touchstone of sorts as our work has continued.  (I would be happy to share that research plan and the subsequent technical reports with you, if you are interested.)  I'm really looking forward to reading the design metaphor discussion that you attached.

    All the best,

    Jim

    -----------------------------------------------------------
    James E. McCarthy, Ph.D.
    Vice President, Instructional Systems
    2900 Presidential Drive; Suite #190
    Fairborn, OH 45324
    937-429-9711, Ext 100
    mccarthy@sonalysts.com






  • 18.  RE: Tools or Teammates?

    Posted 07-15-2023 02:40

    Hi Jim,

    This is a great topic. I recently submitted a small article to the journal "ACM Interactions" titled "Applying human-centered AI in developing effective human-AI teaming: A perspective of human-AI joint cognitive systems". Please note that the audience of this journal is CS/HCI professionals. With a limit of 2800 words, I could not go too deep; it is not an academic paper for the HFES community. The key points I tried to make:

    •    AI has led to the emergence of a new form of human-machine relationship: human-AI teaming (HAT), a paradigmatic shift in human-AI systems
    •    We must follow a human-centered AI (HCAI) approach when applying HAT as a new design paradigm 
    •    We propose a conceptual framework of human-AI joint cognitive systems (HAIJCS) to represent and implement HAT for developing effective human-AI teaming  

    The pre-print can be accessed via https://www.researchgate.net/publication/372246925_Applying_human-centered_AI_in_developing_effective_human-AI_teaming_A_perspective_of_human-AI_joint_cognitive_systems

    Because I am heavily involved in the HCI area, I regularly write papers there. I have talked to Shneiderman a couple of times. As context, I wrote one of the first papers on Human-Centered AI (HCAI) in that journal back in 2019 (widely downloaded/cited), and Prof. Shneiderman and I have done some collaborative work to promote HCAI. As Mica mentioned, he is strongly against the teaming metaphor (instead, he prefers "super tools"...), out of concern that the teaming metaphor works against HCAI. In my small article, I argue that AI as a teammate does not mean AI can be a leader that controls humans (there are different roles on a team), and that we should apply HCAI when developing human-AI team systems. It is an attempt, but I am not sure if it works :-)  I reference the NAS report, which Mica mentioned and whose effort she led, to the CS/HCI community.

    I feel the CS/HCI community needs to hear the voice of the human factors community, which is my motivation to write papers there. 

    Also, it is good to know that the National AI Research and Development Strategic Plan 2023 Update specifically calls out research needs for human-AI teaming in a dedicated section. The report can be found on the Internet.

    Thanks

    Wei



    ------------------------------
    Wei Xu, Ph.D.
    Sr. Researcher, Professor

    ------------------------------



  • 19.  RE: Tools or Teammates?

    Posted 07-17-2023 09:37

    Wei,

    That is fabulous input.  Thank you so much, and thanks for the pre-print!  (As a personal aside, I see that you do a lot of work with Marv Dainoff.  Marv was one of my advisors when he chaired the Psych Department and ran the ergonomics lab at Miami University and I was a grad student.  Please send my regards.)

    I'm very interested in learning more about the human-centered AI approach and your joint cognitive systems work!  I'll reach out if I have any questions.

    Jim

    -----------------------------------------------------------
    James E. McCarthy, Ph.D.
    Vice President, Instructional Systems
    2900 Presidential Drive; Suite #190
    Fairborn, OH 45324
    937-429-9711, Ext 100
    mccarthy@sonalysts.com






  • 20.  RE: Tools or Teammates?

    Posted 07-24-2023 21:21

    I've been following this thread and find it very stimulating. Joseph's suggestion of a panel discussion prompts me to make a request along those lines.  I am interested in having a few of you comprise a panel and discuss "AI-based Agents: Tools or Teammates" during a session of the upcoming meeting held by the User Experience Design Consortium.  This is a group of HF/UX managers from various companies that I founded many years ago and continue to lead.  On the Consortium website you can see specifically who the members are and what we are about.  We meet twice a year and the next meeting is a virtual one in October about 10 days before the HFES Annual Meeting.

    If you might be interested in participating in the panel, please respond to me and I will arrange a Zoom meeting where we can discuss this possibility further.

    Thanks for giving this opportunity your consideration.

    Stan

    Stan Caplan
    Usability Associates, LLC
    585-478-7757 (M)

    You don't stop laughing because you grow old,
    You grow old because you stop laughing!






  • 21.  RE: Tools or Teammates?

    Posted 07-25-2023 10:45

    I have been following this discussion since the beginning and find it incredibly fascinating. I have been spending my summer at Wright-Patterson Air Force Base doing research on how, and in what contexts, it might be more appropriate to model human-AI-robot teams on human-animal teams. One particular question throughout my discussions with those on base is this idea of when a non-human entity (like an animal or a robot) transitions from being a tool to being a teammate. I am in the process of writing up my thoughts on this and related matters, but I am excited by this continued discussion here! If you would like to talk to me offline, you can always email me.



    ------------------------------
    Heather C. Lum, Ph.D.
    Associate Professor
    Embry-Riddle Aeronautical University
    Prescott AZ
    lumh@erau.edu
    ------------------------------