About this blog.
This final installment of the AI blog series summarizes a few of the key limitations and concerns regarding the use and application of artificial intelligence. We have seen concrete examples of how to prepare data to include in a PS Query, how to get help building a requirement guide, and how a "cheat sheet" can support smart prompting. Now, we look at what some AI experts, researchers, and skeptics have to say, along with a few common examples.
AI makes things up.
According to a 2023 post on the Duke University Libraries blog, artificial intelligence has the potential to “fabricate or hallucinate citations” (Rozear & Park, 2023). The authors emphasize that “the existence of these new AI-based tools requires their users to think about how to carefully and ethically incorporate them into their own research and writing.”
Content Generation:
Example: An AI writing assistant might create fictional references or citations when asked to generate a research paper or article.
Reason: If the AI lacks access to real-time data or specific knowledge, it might fabricate details to fill in gaps, resulting in inaccurate or entirely made-up content.
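One practical safeguard, sketched below, is to verify a citation before you trust it. This snippet is not from the Duke post; it is a minimal illustration (the helper name doi_exists is ours) of asking the public Crossref API whether a DOI an AI tool handed you actually resolves to a real record.

```python
import json
import urllib.error
import urllib.parse
import urllib.request

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            record = json.load(resp)
            # Print the registered title so you can compare it to the citation.
            print(record["message"]["title"][0])
            return True
    except urllib.error.HTTPError:
        # Crossref returns 404 for DOIs it has never seen: a red flag.
        return False

# The Saqr & López-Pernas article cited at the end of this post resolves;
# a fabricated DOI typically will not.
print(doi_exists("10.1186/s40561-024-00343-4"))
```

A matching title does not prove the source supports the claim, but a check like this catches the outright fabrications Rozear and Park describe.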
Image Recognition and Generation:
Example: An AI image recognition system might incorrectly identify objects in a photo, or an AI image generator might create images with unrealistic or non-existent elements.
Reason: AI models can misinterpret visual data or combine elements in ways that don’t reflect reality, leading to errors or fabricated details.
AI still struggles to make obvious decisions.
According to Ben Jones (2024), AI systems often struggle with tasks that require common sense because they lack the intuitive understanding of everyday situations that humans naturally possess. This includes understanding context, making inferences, and applying knowledge flexibly across different scenarios.
Common Sense Reasoning:
Example: If an AI is asked whether it should put a glass of water in a microwave to boil it, it might not recognize the danger of superheating, where water heated past its boiling point can erupt without warning.
Reason: AI lacks common sense reasoning and might not understand the practical implications of certain actions.
Ethical and Moral Judgments:
Example: Deciding whether to prioritize one person’s safety over another’s in a self-driving car scenario.
Reason: AI does not have the ability to make ethical or moral judgments, which are often required in complex, real-world situations.
AI may not be great at making detailed predictions from your data.
Think about it: you give AI a set of historical data and ask it what average enrollment will be in 2030. What does it do? It can guide you, but it does not hand you the answer. Saqr and López-Pernas (2024) explore how AI explanations are often based on aggregate data, which may not capture individual differences among students. This can lead to inaccurate projections and underscores the need for human involvement to ensure personalized, accurate decision-making.
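To make that concrete, here is a minimal sketch using made-up enrollment numbers (nothing below comes from real data or from the Saqr and López-Pernas article). A simple trend line yields a point estimate for 2030 plus the scatter around the fit; the model guides, it does not answer.

```python
import numpy as np

# Hypothetical fall headcounts; purely illustrative numbers.
years = np.array([2019, 2020, 2021, 2022, 2023, 2024])
enrollment = np.array([10200, 9800, 10050, 9600, 9900, 9500])

# Fit a straight line (degree-1 polynomial) through the history.
slope, intercept = np.polyfit(years, enrollment, 1)
point_estimate = slope * 2030 + intercept

# Residual spread shows how loosely the line fits the past;
# it understates the real uncertainty of a six-year extrapolation.
residuals = enrollment - (slope * years + intercept)
spread = residuals.std(ddof=2)

print(f"2030 point estimate: {point_estimate:.0f} students")
print(f"Scatter around the trend: +/- {spread:.0f} students")
```

Everything the factors below describe, such as personal circumstances and motivation, is invisible to a fit like this, which is exactly why the point estimate should be treated as a starting point for human judgment.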
Student Performance:
Example: Predicting the exact grades a student will achieve in future exams.
Reason: Student performance can be influenced by a wide range of factors, including personal circumstances, changes in motivation, teaching quality, and unexpected life events. AI models may not be able to account for all these variables accurately.
Learning Outcomes:
Example: Predicting the specific skills and knowledge a student will acquire by the end of a course.
Reason: Learning outcomes depend on various factors including the student’s prior knowledge, engagement level, teaching methods, and classroom environment. AI may struggle to predict these outcomes with high accuracy due to the complexity and variability of the learning process.
Other takeaways.
Generative AI can do a lot, but you will need to approach the output it provides with a skeptical eye. Despite the volume of research on AI, discussion of its ethical and social dimensions remains limited. There seems to be more “focus on technology and its growing functionalities, without asking what we, as a social and political community, want to do with it” (Cebral-Loureda et al., 2023, p. 32). What should we be asking ourselves? How can we ensure that technological advancements align with the ethical, social, and political values of our community? How should we be reflecting on the broader implications of technology?
Sources cited
- Cebral-Loureda, M., Rincón-Flores, E. G., & Sanchez-Ante, G. (Eds.). (2023). What AI Can Do: Strengths and Limitations of Artificial Intelligence. CRC Press.
- Jones, B. (2024). AI literacy fundamentals: Helping you join the AI conversation. Data Literacy Press.
- LinkedIn Learning. (2024, March 21). Excel for business analysts: Limitations on AI and data. Retrieved from https://www.linkedin.com/learning/excel-for-business-analysts/limitations-on-ai-and-data
- Rozear, H., & Park, S. (2023, March 9). ChatGPT and fake citations. Duke University Libraries Blogs. Retrieved from https://blogs.library.duke.edu/blog/2023/03/09/chatgpt-and-fake-citations/
- Saqr, M., & López-Pernas, S. (2024). Why explainable AI may not be enough: Predictions and mispredictions in decision making in education. Smart Learning Environments, 11, 52. https://doi.org/10.1186/s40561-024-00343-4