
Human-AI collaboration in Hybrid Multi-Agent Systems

Abstract

This paper examines Hybrid Multi-Agent Systems, which integrate both human and non-human intelligent agents, as a new subject of management research. It presents original definitions of key concepts: intelligent agents, artificial intelligent agents, and Hybrid Multi-Agent Systems. These definitions are grounded in Distributed Artificial Intelligence and provide a foundation for exploring collaboration between human and artificial intelligent agents. The study addresses fundamental research questions regarding the nature of intelligent agents and their role within Multi-Agent Systems, proposing Hybrid Multi-Agent Systems as a novel framework that enables seamless cooperation between human and non-human entities. Through a narrative literature review, the paper highlights the potential implications of Hybrid Multi-Agent Systems for scientific research in management, offering a conceptual basis for future work in this evolving field.
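The abstract describes Hybrid Multi-Agent Systems only conceptually, and the paper itself provides no code. As a loose, hypothetical illustration of the core idea that human and artificial intelligent agents can cooperate through a single shared agent interface, the minimal Python sketch below uses invented names (IntelligentAgent, HumanAgent, ArtificialAgent, HybridMultiAgentSystem) that are not drawn from the paper and do not represent the authors' formalization.

```python
# Illustrative sketch only: the class names and message flow are hypothetical,
# not taken from the paper. It shows one possible way to model a Hybrid
# Multi-Agent System in which human and artificial agents share an interface.
from abc import ABC, abstractmethod


class IntelligentAgent(ABC):
    """Common interface for any agent, human or artificial."""

    def __init__(self, name: str) -> None:
        self.name = name

    @abstractmethod
    def act(self, observation: str) -> str:
        """Return the agent's response to an observation."""


class HumanAgent(IntelligentAgent):
    """Wraps a human participant; here, input is read from the console."""

    def act(self, observation: str) -> str:
        return input(f"[{self.name}] {observation} > ")


class ArtificialAgent(IntelligentAgent):
    """Stand-in for an AI agent; a trivial rule replaces a real model."""

    def act(self, observation: str) -> str:
        return f"{self.name} acknowledges: {observation}"


class HybridMultiAgentSystem:
    """Routes each observation to every member agent, human or not."""

    def __init__(self, agents: list[IntelligentAgent]) -> None:
        self.agents = agents

    def step(self, observation: str) -> dict[str, str]:
        return {agent.name: agent.act(observation) for agent in self.agents}


if __name__ == "__main__":
    system = HybridMultiAgentSystem(
        [HumanAgent("analyst"), ArtificialAgent("assistant")]
    )
    print(system.step("Review the quarterly staffing plan."))
```

In any real setting the ArtificialAgent would wrap a model or planner and the HumanAgent a richer interaction channel; the only point of the sketch is that both kinds of agent satisfy the same interface and can therefore be composed into one system.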

Author: Rafal Labedzki
