Analogies in AI

In the labyrinth of modern technology, artificial intelligence remains an enigmatic concept for many, often shrouded in technical complexity and misunderstanding. Through carefully chosen analogies and accessible explanations, we can demystify AI, transforming complex concepts into understandable frameworks that bridge the knowledge gap between technical experts and the general public.

The importance of understanding AI in today’s world

The growing prevalence of artificial intelligence in daily life, from household devices to self-driving cars, necessitates a basic understanding of AI concepts for effective decision-making and meaningful engagement with technological developments (López & Casado, 2023). The intersection of AI with diverse fields, including healthcare and creative technologies, further underscores the necessity for clear communication methods that enable both professionals and the public to engage with these transformative innovations (Del Sette et al., 2023).

This emerging need for AI literacy has sparked a growing movement to develop innovative educational approaches that make complex technological concepts more approachable for diverse audiences. Users need to understand how AI systems arrive at their conclusions to trust them. This is especially important in fields such as medicine, agriculture, and social sciences, where decisions can have significant consequences. The lack of transparency in AI can lead to scepticism and reluctance to adopt new technologies.

Artificial Intelligence literacy

Using Analogies to Demystify AI Concepts

Analogies serve as cognitive bridges that enable complex technical concepts to be mapped onto familiar, everyday experiences, making them particularly effective for explaining AI systems and their underlying mechanisms (Chirimuuta, 2022). The comparison between artificial neural networks and biological systems, for instance, provides an intuitive framework for understanding how AI processes information, though careful consideration must be given to the limitations of such comparisons when explaining neural computation. 

Research suggests that incorporating common-sense knowledge through analogy-based explanations enables users to better grasp the fundamental principles of AI models and their outputs (He et al., 2022). Analogies function as effective tools for improving understanding and communication in explainable AI. They connect complex AI concepts to familiar ideas, making them more accessible to users.

Concept-Level Explanations

Analogical reasoning, which compares different situations or ideas to identify patterns and draw conclusions, helps non-experts understand complex ideas by linking unfamiliar concepts to familiar ones (He et al., 2024). This approach reduces cognitive load and simplifies understanding, and studies demonstrate that well-crafted analogical explanations positively affect how users engage with AI systems. Although quantitative evidence of analogy effectiveness remains limited, direct user feedback indicates that analogy-based explanations help in AI-related decision-making.

Evidence shows that combining concept-level explanations with analogical inference improves understanding. This method proves particularly useful when users encounter technical terminology or complex data. Through analogical reasoning, developers can create more accessible interfaces that enhance understanding and engagement with AI technologies.

Examples of AI Concepts Explained Through Analogies

Machine Learning: A child learning to walk analogy

Just as a child learns to walk through repeated attempts, adjustments, and feedback from their environment, machine learning algorithms improve their performance through iterative training and error correction mechanisms. This foundational analogy illustrates how AI systems process and learn from data, gradually refining their responses through mathematical optimisation techniques that mirror the natural learning process (Zhou et al., 2023).
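
To make the analogy concrete, here is a minimal sketch of iterative error correction: a single weight is adjusted by plain gradient descent until it fits the data. The dataset, learning rate, and step count are illustrative choices, not drawn from any cited work.

```python
# A toy model "learning to walk": it tries, measures its error,
# and makes a small correction, over and over.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs and targets (y = 2x)
w = 0.0           # the model starts out unable to predict anything
lr = 0.05         # learning rate: how big each corrective step is

for step in range(200):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # adjust in the direction that reduces the error

print(round(w, 3))  # converges close to 2.0, the true rule
```

Each pass through the loop mirrors one "attempt" in the child-learning analogy: the feedback (the gradient) tells the learner which way to adjust.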

Deep Learning: The Russian doll analogy for layered learning

The Russian doll analogy effectively illustrates how deep learning networks process information through multiple nested layers, where each layer extracts increasingly abstract features from the input data. Similar to how each Russian doll contains progressively smaller versions within it, each layer of a deep neural network transforms and refines the data representation, enabling the system to learn complex patterns and hierarchical relationships in the data (Jesus et al., 2024).
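
A hypothetical sketch of this nesting, with hand-picked (untrained) weights purely for illustration: each "doll" is a layer that transforms the representation produced by the one before it.

```python
# Each layer wraps the previous representation, like nested dolls.

def relu(v):
    # a common non-linearity: negative values are zeroed out
    return [max(0.0, x) for x in v]

def layer(matrix, vec):
    # one linear transformation followed by the non-linearity
    return relu([sum(w * x for w, x in zip(row, vec)) for row in matrix])

x = [1.0, 2.0]                              # raw input: the outermost doll
h1 = layer([[1.0, 0.5], [0.0, 1.0]], x)     # first, concrete features
h2 = layer([[1.0, 1.0]], h1)                # a smaller, more abstract summary
print(h2)
```

Stacking more `layer` calls deepens the nesting: each level works with the refined output of the level before it, just as each doll sits inside the previous one.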

AI Bias: Tinted Glasses analogy

In this analogy, AI bias is likened to wearing tinted glasses: the AI’s “view” of the world is coloured by the data it is trained on, potentially leading to skewed perceptions and decisions. Just as tinted glasses affect how we perceive colours, this bias can distort AI systems’ interpretations and decisions, particularly when training data lacks diversity or contains historical prejudices.
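
A toy illustration of the tint: a trivial predictor that simply memorises label frequencies inherits whatever slant its training data has. The dataset and labels here are fabricated for illustration.

```python
# A biased "model": whatever tint the data has, the prediction has too.
from collections import Counter

def majority_label(training_data):
    # predicts the most common label seen during training
    return Counter(label for _, label in training_data).most_common(1)[0][0]

# skewed data: three approvals, one rejection
skewed = [("a", "approve"), ("b", "approve"), ("c", "approve"), ("d", "reject")]

print(majority_label(skewed))  # "approve" — the tint of its training data
```

No matter what new case arrives, this predictor answers through its training data's colouring, which is precisely the failure mode the glasses analogy warns about.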

Reinforcement Learning: Training a pet analogy

Reinforcement learning (RL) resembles training a pet (Hundt et al., 2020), where the principles of reward and behaviour modification are central to the learning process. A learning entity receives positive or negative feedback to guide its decision-making, much as in pet training, where the animal is the learning entity and receives treats (positive feedback) or corrections (negative feedback) based on its actions.
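
A minimal sketch of reward-driven learning in the spirit of the pet-training analogy: actions that earn “treats” (reward 1) come to be valued more than actions that earn corrections (reward 0). The actions, reward scheme, and learning rate are all illustrative assumptions.

```python
# The "pet" tries actions at random and updates its estimate of each
# action's value from the trainer's feedback.
import random

random.seed(0)
values = {"sit": 0.0, "jump": 0.0}   # the learner's value estimate per action
rewards = {"sit": 1.0, "jump": 0.0}  # the trainer's (hidden) preferences
lr = 0.1                             # how strongly each feedback counts

for _ in range(100):
    action = random.choice(list(values))                 # try something
    feedback = rewards[action]                           # treat or correction
    values[action] += lr * (feedback - values[action])   # adjust the estimate

print(max(values, key=values.get))  # "sit" ends up preferred
```

Real RL algorithms add states, exploration strategies, and delayed rewards, but the core loop of act, receive feedback, adjust, is the same.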

Overfitting: Memorizing Exam Answers analogy

Machine learning overfitting resembles memorising test answers without grasping the subject matter (Öter et al., 2024). When AI encounters data it hasn’t seen before, its performance often drops significantly compared to its results with familiar examples. This becomes clear when AI systems train on limited datasets, which creates narrow pattern recognition abilities that don’t translate well to real-world situations. The AI becomes too specialised in recognising specific patterns from its training data, making it less capable of handling the diverse scenarios it meets in practice.
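
A toy contrast echoing the exam-answers analogy: one “model” memorises the training pairs, the other has learned the underlying rule. Both are illustrative stand-ins, not real learners.

```python
# Training pairs that all follow the rule y = 2x
train = {1: 2, 2: 4, 3: 6}

def memoriser(x):
    # perfect on seen questions, clueless on new ones
    return train.get(x)

def generaliser(x):
    # learned the underlying rule instead of the answers
    return 2 * x

print(memoriser(2), generaliser(2))  # 4 4  — both ace the training set
print(memoriser(5), generaliser(5))  # None 10 — only the rule transfers
```

The two look identical when examined only on training data, which is why overfitting is detected by holding out unseen test examples.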

Neural Networks: The “Team of Detectives” analogy

The “Team of Detectives” analogy illustrates neural networks as specialised investigators working collaboratively, where each detective (neuron) examines specific aspects of evidence (input data) and communicates findings to colleagues, ultimately contributing to a comprehensive answer (Schmidgall et al., 2023). This collaborative process is mirrored in neural networks, where information processing occurs through interconnected layers of specialised units that adapt and refine their responses based on incoming data and pass the information onto other specialised units (Kufel et al., 2023).
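
A hypothetical sketch of the detective team as a single perceptron-style unit: each “detective” weighs one piece of evidence, and the verdict aggregates their reports. The evidence values, weights, and threshold are invented for illustration.

```python
# Each detective (neuron) examines one clue and reports a weighted finding;
# the team's verdict is the combined, thresholded report.

evidence = {"fingerprints": 1.0, "alibi": 0.0, "motive": 1.0}  # the inputs
weights  = {"fingerprints": 0.6, "alibi": -0.5, "motive": 0.4}  # each detective's say

report = sum(weights[clue] * value for clue, value in evidence.items())
verdict = "suspect" if report > 0.5 else "clear"
print(verdict)  # 0.6 + 0.0 + 0.4 = 1.0 → "suspect"
```

In a full network, such units are stacked in layers, so each group of detectives passes its findings on to the next, as the paragraph above describes.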

AI Bias: The “Tainted Recipe Book” analogy

The “Tainted Recipe Book” analogy likens AI bias to a chef restricted by an incomplete cookbook. When the cookbook contains only European recipes, the chef cannot properly create Asian or African dishes. In the same way, AI systems trained on limited or biased data produce inaccurate results – similar to how a chef following flawed recipes would create poor-quality meals. Just as a chef requires a diverse, accurate cookbook, AI needs comprehensive, representative training data to deliver balanced, equitable outcomes.

Benefits of Using Analogies to Demystify AI

What are the key advantages of investing resources in identifying and explaining AI analogies, given their expanding scope and diversity?

  • Enhanced Understanding through Cognitive Scaffolding: Analogies simplify complex AI concepts by connecting them to familiar, everyday experiences. This approach helps users grasp how AI systems function and process information. When unfamiliar AI principles are linked to well-known experiences, these analogies create mental frameworks that support progressive learning and knowledge retention.
  • Reduction of Cognitive Load: Through the use of familiar comparisons, these explanations link complex AI concepts to everyday knowledge, making them more accessible to non-experts. This approach reduces mental effort when interpreting AI outputs, especially for individuals who find technical language and theoretical concepts difficult to grasp.
  • Improved Decision-Making Support: Through analogies, non-experts can better comprehend AI-assisted decision-making processes, enabling more informed and confident choices. This includes support for Complex Decision Modelling, Real-Time Decision Assistance and Standardised Incident Resolution protocols.
  • Better Explainability: Analogies serve as powerful tools for explaining AI decision-making processes, making them more transparent and interpretable to users. This is crucial for building trust and understanding in AI systems (Veitch & Alsos, 2021).

Challenges in Using Analogies for AI

Using analogies to explain complex topics like artificial intelligence can lead to several misconceptions. While analogies can facilitate understanding by relating unfamiliar concepts to familiar ones, they can also introduce inaccuracies or oversimplifications. A few challenges include:

Oversimplification of Complexity

Analogies often reduce intricate ideas to simpler forms, which can misrepresent the underlying complexity of AI. For instance, while some may compare AI systems to human intelligence, this analogy can mislead users into thinking that AI operates in the same way as human cognition, ignoring the fundamental differences in processes and capabilities.

Confirmation Bias

The use of analogies can lead to confirmation bias, where users favour information that aligns with their pre-existing beliefs about AI. If an analogy supports a user’s preconceived notion, they may disregard contradictory evidence or alternative explanations, thus reinforcing misconceptions about AI’s capabilities and limitations (Bauer et al., 2021).

Lack of Critical Understanding

Analogies can create a false sense of understanding. Users may feel they grasp the concept of AI because they can relate it to something they already know. However, this superficial understanding can prevent deeper inquiry into how AI systems actually work, which is crucial for informed engagement with technology (Yuan et al., 2023).

Misleading Expectations

Analogies might set unrealistic expectations about AI’s performance. For instance, if an AI system is likened to a human expert, users may expect it to perform with similar reliability and intuition, which is often not the case due to the limitations inherent in current AI technologies.

Subjectivity of Analogies and Personalisation Needs

The effectiveness of common-sense explanations can vary based on the user’s experience and background. The qualitative dimensions that characterise analogies are subjective and are impacted by varying experiences and interpretations of common-sense facts.

Conclusion

The evolution of AI technologies requires corresponding updates to the analogies used to explain them, particularly for complex aspects like automated decision-making and predictive analytics. Medical practice demonstrates this shift: as AI-assisted analysis is combined with conventional diagnostic methods, new explanatory frameworks are needed for these integrated systems (Andrade et al., 2024). The next phase of research should focus on developing analogies that resonate with specific audiences, communicate technical concepts clearly, and serve diverse groups whilst maintaining accuracy and neutrality.

Through the thoughtful use of analogies to demystify AI, we can transform AI from an abstract concept into something relatable and comprehensible for everyone. As artificial intelligence becomes more deeply woven into the fabric of society, clear communication and understanding of these technologies are essential for informed public engagement. Using familiar comparisons and everyday knowledge to explain AI concepts helps diverse audiences understand this complex technology. These relatable explanations enable better-informed decisions about AI without requiring extensive technical expertise.

By continuing to develop and refine our use of analogies whilst remaining mindful of their limitations, we can build a future where AI literacy is accessible to all, enabling individuals to navigate and participate meaningfully in our increasingly AI-driven world. This understanding will not only enhance public trust in AI systems but also encourage more inclusive and ethical development of artificial intelligence technologies.


References

1. Analogies in AI

López, M. B., & Casado, C. Á. (2023). A Cyberpunk 2077 perspective on the prediction and understanding of future technology. arXiv, abs/2309.13970. https://doi.org/10.48550/arXiv.2309.13970

Del Sette, B. M., Carnes, D., & Saitis, C. (2023). Sound of Care: Towards a Co-Operative AI Digital Pain Companion to Support People with Chronic Primary Pain. In Computer Supported Cooperative Work and Social Computing (pp. 283–288). ACM. https://doi.org/10.1145/3584931.3606971

Chirimuuta, M. (2022). Artifacts and levels of abstraction. In Frontiers in Ecology and Evolution (Vol. 10). Frontiers Media SA. https://doi.org/10.3389/fevo.2022.952992

He, G., Balayn, A., Buijsman, S., Yang, J., & Gadiraju, U. (2022). It Is like Finding a Polar Bear in the Savannah! Concept-Level AI Explanations with Analogical Inference from Commonsense Knowledge. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing (Vol. 10, Issue 1, pp. 89–101). Association for the Advancement of Artificial Intelligence (AAAI). https://doi.org/10.1609/hcomp.v10i1.21990

He, G., Balayn, A., Buijsman, S., Yang, J., & Gadiraju, U. (2024). Opening the Analogical Portal to Explainability: Can Analogies Help Laypeople in AI-assisted Decision Making? In Journal of Artificial Intelligence Research (Vol. 81, pp. 117–162). AI Access Foundation. https://doi.org/10.1613/jair.1.15118

2. Examples of AI Concepts Explained Through Analogies

Zhou, Z., Ning, M., Wang, Q., Yao, J., Wang, W., Huang, X., & Huang, K. (2023). Learning by Analogy: Diverse Questions Generation in Math Word Problem. Annual Meeting of the Association for Computational Linguistics, 11091–11104. https://doi.org/10.48550/arXiv.2306.09064

Jesus, F. S. D., Ibarra, L. M., Villanueva, W., & Leyesa, M. (2024). ChatGPT as an Artificial Intelligence Learning Tool for Business Administration Students in Nueva Ecija, Philippines. In International Journal of Learning, Teaching and Educational Research (Vol. 23, Issue 6, pp. 348–372). Society for Research and Knowledge Management. https://doi.org/10.26803/ijlter.23.6.16

Hundt, A., Killeen, B., Greene, N., Wu, H., Kwon, H., Paxton, C., & Hager, G. D. (2020). “Good Robot!”: Efficient Reinforcement Learning for Multi-Step Visual Tasks with Sim to Real Transfer. In IEEE Robotics and Automation Letters (Vol. 5, Issue 4, pp. 6724–6731). Institute of Electrical and Electronics Engineers (IEEE). https://doi.org/10.1109/lra.2020.3015448

Öter, A., Ersöz, B., Bülbül, H. İ., & Sağıroğlu, Ş. (2024). Using Generative Artificial Intelligence in Exams: A Research on KPSS with ChatGPT. In International Journal of Educational Research Review (Vol. 9, Issue 4, pp. 269–274). International Journal of Educational Research Review. https://doi.org/10.24331/ijere.1414256

Schmidgall, S., Achterberg, J., Miconi, T., Kirsch, L., Ziaei, R., Hajiseyedrazi, S. P., & Eshraghian, J. (2023). Brain-inspired learning in artificial neural networks: a review. APL Machine Learning, abs/2305.11252. https://doi.org/10.48550/arXiv.2305.11252

Kufel, J., Bargieł-Łączek, K., Kocot, S., Koźlik, M., Bartnikowska, W., Janik, M., Czogalik, Ł., Dudek, P., Magiera, M., Lis, A., Paszkiewicz, I., Nawrat, Z., Cebula, M., & Gruszczyńska, K. (2023). What Is Machine Learning, Artificial Neural Networks and Deep Learning?—Examples of Practical Applications in Medicine. In Diagnostics (Vol. 13, Issue 15, p. 2582). MDPI AG. https://doi.org/10.3390/diagnostics13152582

3. Benefits of Using Analogies to Demystify AI

Veitch, E., & Alsos, O. A. (2021). Human-Centered Explainable Artificial Intelligence for Marine Autonomous Surface Vehicles. In Journal of Marine Science and Engineering (Vol. 9, Issue 11, p. 1227). MDPI AG. https://doi.org/10.3390/jmse9111227

4. Challenges in Using Analogies for AI

Bauer, K., von Zahn, M., & Hinz, O. (2021). Expl(Ai)Ned: The Impact of Explainable Artificial Intelligence on Cognitive Processes. In SSRN Electronic Journal. Elsevier BV. https://doi.org/10.2139/ssrn.3872711

Yuan, C. W. (Tina), Bi, N., Lin, Y.-F., & Tseng, Y.-H. (2023). Contextualizing User Perceptions about Biases for Human-Centered Explainable Artificial Intelligence. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1–15). ACM. https://doi.org/10.1145/3544548.3580945

5. Conclusion

Andrade, J. B., Mendes, G. N., & Silva, G. S. (2024). Miller Fisher’s Rules and Digital Health: The Best of Both Worlds. In Cerebrovascular Diseases (pp. 1–8). S. Karger AG. https://doi.org/10.1159/000539323
