Human-AI-Robot Teaming Technical Group

  • 1.  Tools vs. Teammates Discussion

    Posted 10-06-2023 15:20

    Hi,

    Two or three months ago, we had an interesting online discussion regarding whether autonomous agents should be considered tools or teammates.  I enjoyed the discussion and I wanted to capture it in some useful way.  To do that, I reviewed the shared posts, some references you provided, and some related references that I had at my disposal.  I used that material to put together the attached overview.  

    I would welcome your thoughts on the overview.  I'm especially interested in places where you think that I may have misunderstood the literature.

    All the best,

    Jim



    ------------------------------
    James E. McCarthy, Ph.D.
    Vice President, Instructional Systems
    Sonalysts, Inc.
    Fairborn, OH 45324
    937-429-9711 Ext. 100
    ------------------------------


  • 2.  RE: Tools vs. Teammates Discussion

    Posted 10-18-2023 15:57

    This is really wonderful! Thank you for sharing!



    ------------------------------
    Stephanie Tulk Jesso
    Assistant Professor
    Vestal NY
    ------------------------------



  • 3.  RE: Tools vs. Teammates Discussion

    Posted 10-19-2023 08:54

    Hi Jim,

    Thank you for your document - your paper is excellent, especially the references you provide.  Having just (re)joined HFES and the technical group, I need to catch up on the threads that led to the paper.  One question I had has to do with the "cognitive systems" position on intelligent software agents, and whether discussions are taking place about the potential psychological nature of the agent.  If/when agents become fully empowered with general intelligence and the ability to set their own goals and rules, it would seem that some serious focus would be needed on managing their psychological and emotional health - the thinking being that agents will need "personhood" and the right to be treated fairly and positively, rather than simply as a source of value to the human.  Treating them simply as a source of value smacks of a "slavery" relationship, and if agents evolve to their potential, regarding them as objects without their own (perceived) worth/value isn't going to incline them toward ensuring positive outcomes for the human race.

    Any thoughts would be most appreciated!

    Thanks much again,

    Al McFarland



    ------------------------------
    Alan McFarland
    Cognitive Psychologist, Solutions Engineering
    Middletown NJ
    ------------------------------



  • 4.  RE: Tools vs. Teammates Discussion

    Posted 10-22-2023 10:19

    Alan,

    Thank you for your kind words.  I enjoyed assembling the review, and it is gratifying to hear that some folks got something useful from it.  Over the next month or so, I am going to dive a little deeper into the joint cognitive systems literature and may take a crack at presenting the tools/teammates/cognitive systems review at a conference (I doubt journals would be interested in it).  Let me know if you know of a good "home" for this type of review.

    I think your question is better directed to philosophers, or certainly to those much smarter than I am.  I am unaware of any work in the joint cognitive systems literature that explores the psychological nature of agents.  Even researchers who think of agents as teammates generally do so as a design metaphor, exploring what capabilities we could program into the agents to allow them to "fake" teamwork skills.  I suspect that software agents with the level of self-awareness, consciousness, and so forth that would lead to discussions of personhood (e.g., HAL, the Terminator, or WALL-E) are still the stuff of science fiction and will be for some time.  However, science fiction has a disturbing tendency to morph into scientific fact (although I am still waiting for my flying car!), and the issues you raise are certainly engaging in that regard.

    Thanks again for reading the paper and for your kind words!

    Jim

    ------------------------------
    James E. McCarthy, Ph.D.
    Vice President, Instructional Systems
    2900 Presidential Drive; Suite #190
    Fairborn, OH 45324
    937-429-9711, Ext. 100
    mccarthy@sonalysts.com
    ------------------------------


  • 5.  RE: Tools vs. Teammates Discussion

    Posted 10-22-2023 11:45

    Hi Jim,

    Thanks so much for getting back to me.  Right now, I feel all of us are searching for answers that will help us plan for the near future of AI – it's not about "smarts" so much as the different areas in which each of us can contribute to an understanding and an action plan.

    You're right – I confess – I am coming from a philosophical perspective, but at the same time I see from my readings and ruminations that there are many signposts of the coming agent age that we can use to formulate practical (human-supportive) responses.  In the early twentieth century, the German philosopher and seer Rudolf Steiner foresaw the intelligent agent and how responding to it in the right way would be critical – he described the general AI-based agent as a "mechanical animal," one that we (humans) must help shape toward the positive.  The parallels between this and the behavioristic psychology of the twentieth century – especially Hull and Spence – are striking, and they suggest a starting point for our approach to the agent "person".

    I realize that what I am saying hasn't entered into the consciousness of *most* of those involved with planning the AI future, and perhaps the focus on it has no value for many.  However, just as supercomputing hardware + the "internet" + ... eventually led to emergent realities, so too will "AI" become something that has to be addressed - the tools and knowledge to guide it already exist in science, philosophy, psychology, and practice.  Funny how this allows us to apply the basics of human growth and maturation across a life cycle to an agent "person".  Let's see what happens (soon).

    Best,

    Al