The Machine as an Autonomous Explanatory Agent

Year 2024, Issue 79, pp. 265–279, 15 July 2024

https://doi.org/10.58634/felsefedunyasi.1487376

Abstract

The holy grail of Artificial Intelligence (AI) is to transform the machine into an agent that can decide, make inferences, cluster content, predict, recommend, and exhibit other higher cognitive faculties. The prowess of Large Language Models (LLMs) serves as evidence that this goal has been achieved, or nearly so: by swiftly processing unstructured data and handling diverse datasets with agility, they enable seamless natural language communication between human and machine and widespread use across various fields. However, to be competent in science and industry, an agent with such capabilities must be reliable, i.e., accountable for its decisions and actions; accountability is a defining attribute of an autonomous agent. In this respect, this paper aims to determine whether state-of-the-art technologies have already created an autonomous explanatory agent or are merely paving the way for the machine to become one. The paper is structured as follows: the first part investigates the types and levels of explanations in explanation models, providing a foundation for understanding the nature of explanations in everyday life. The second part explores explanations in the context of artificial intelligence, focusing on the types of explanatory systems in the research field of eXplainable AI (XAI). The third part examines whether, and to what extent, state-of-the-art machine learning models function as autonomous explanatory agents, drawing on the analysis in the second part and on the field of Human-Computer Interaction.

Keywords

machine explanation, autonomous agent, XAI, ontology, HAI

References

  • Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., … & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82-115.
  • Baclawski, K., Bennett, M., Berg-Cross, G., Fritzsche, D., Sharma, R., Singer, J., … & Whitten, D. (2020). Ontology Summit 2019 communiqué: Explanations. Applied Ontology, 15(1), 91-107.
  • Bender, E. M., & Koller, A. (2020). Climbing towards NLU: On meaning, form, and understanding in the age of data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 5185-5198).
  • Biran, O., & Cotton, C. (2017). Explanation and justification in machine learning: A survey. In IJCAI-17 Workshop on Explainable AI (XAI), 8(1), 8-13.
  • Doran, D., Schulz, S., & Besold, T. R. (2017). What does explainable AI really mean? A new conceptualization of perspectives. arXiv preprint arXiv:1710.00794.
  • Gunning, D., & Aha, D. (2019). DARPA’s explainable artificial intelligence (XAI) program. AI Magazine, 40(2), 44-58.
  • GTAI. (n.d.). Industrie 4.0. Retrieved February 28, 2024, from https://www.gtai.de/en/invest/industries/industrial-production/industrie-4-0.
  • Hall, M., Harborne, D., Tomsett, R., Galetic, V., Quintana-Amate, S., Nottle, A., & Preece, A. (2019). A systematic method to understand requirements for explainable AI (XAI) systems. In Proceedings of the IJCAI Workshop on eXplainable Artificial Intelligence (XAI 2019), Macau, China (Vol. 11).
  • Hilton, D. J. (1990). Conversational processes and causal explanation. Psychological Bulletin, 107(1), 65-81.
  • Hou, Y., Tamoto, H., & Miyashita, H. (2024). "My agent understands me better": Integrating dynamic human-like memory recall and consolidation in LLM-based agents. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (pp. 1-7).
  • Kommineni, V. K., König-Ries, B., & Samuel, S. (2024). From human experts to machines: An LLM supported approach to ontology and knowledge graph construction. arXiv preprint arXiv:2403.08345.
  • Licklider, J. C. R. (1960). Man-computer symbiosis. IRE Transactions on Human Factors in Electronics, HFE-1(1), 4-11.
  • Liu, L., Yang, X., Shen, Y., Hu, B., Zhang, Z., Gu, J., & Zhang, G. (2023). Think-in-Memory: Recalling and post-thinking enable LLMs with long-term memory. arXiv preprint arXiv:2311.08719.
  • Maes, P. (1993). Modeling adaptive autonomous agents. Artificial Life, 1(1-2), 135-162.
  • Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1-38.


Details

Primary Language: English
Subjects: Systematic Philosophy (Other)
Section: Research Article
Authors:

Dilek Yargan, Universität Rostock, ORCID: 0000-0001-9618-6740, Türkiye

Publication Date: 15 July 2024
Submission Date: 21 May 2024
Acceptance Date: 12 July 2024
Published Issue: Year 2024, Issue 79

Cite

APA: Yargan, D. (2024). The Machine as an Autonomous Explanatory Agent. Felsefe Dünyası, (79), 265-279. https://doi.org/10.58634/felsefedunyasi.1487376
