Weekly Reflection Blog Post #3- Jan. 30th

Generative AI has been one of the most popular and controversial tools introduced to humanity in a long time. Personally, I can see it as both a potentially helpful resource and a detrimental scar on education.

From a more optimistic view, AI can be a good support alongside one's own thinking, supplementing ideas rather than replacing them. In the past, I have used it, and seen it used, to make assignment instructions and expectations clearer, to brainstorm, mind map, and structure ideas, to rephrase and refine wording, to expand plans into more elaborate versions, and to catch mistakes while editing. Used this way, it acts more like a partner than a tool that does everything for you. Thinking more deeply, seeing alternative perspectives, and building organizational skills are all things AI can help a person work on.

In education, I have also seen AI used to help teachers by reducing their workloads. It can help design lesson plans, create alternative plans for different students, support unbiased feedback, and generate formative assignments and practice questions. This is beneficial because teachers then have more time and energy to focus on the aspects of the class AI cannot help with, such as community, classroom management, relationships, and one-to-one support. True human care and judgment are always needed in schools and will never be replicated by a bot.

While I have mentioned many benefits of AI, there are also numerous limitations that turn the topic grey. One of the most obvious gaps is in its actual reliability. To many students, and teachers too, responses may sound authoritative, but in reality the answers can be fabricated from false information. Accuracy is a major flaw when information is pulled from so many popular sources, and it is most apparent with quotations and citations. AI does not quite have the capacity to handle these correctly and will pull responses from thin air, as we saw in the lawyer case: https://www.cbc.ca/news/canada/british-columbia/lawyer. Students and adults often do not even realize they are being misled, and so false narratives spread all the time. This is dangerous for students because it erodes their critical thinking skills. Another seemingly basic flaw of AI is the fact that it is not real. There is no person on the other end standing behind the words. The AI does not actually understand student learning, specific contexts, emotions, or how to support growth; only students themselves and their teachers can provide these things.

Additionally, students recognize that using AI can be seen as cheating on the assignment, but they do not see that it is also cheating themselves. Dependency takes hold because their problem-solving and deeper thinking skills are replaced by this immediate generation at their fingertips. They no longer have to think for themselves, so why would they? This is especially dangerous for middle-school-aged students who have not yet learned the foundational skills and are setting themselves up with AI right away. This will most likely put them on a rigid path of dependency through the rest of high school and even university, all to finish assignments faster and spend more time on things they would rather be doing outside of school. This is obviously not the children's fault either. They live in a world where they are exposed to AI 24/7 whether they want to be or not, and they are in a system where completion and results are overstressed rather than a meaningful process of creativity, inquiry, and deep critical thinking.

With all of this said about the educational setting, there are even more complications and issues with AI at a larger level. A major one is its environmental impact. I won't exhaust this portion of the debate after our lengthy discussion in class, but it is worth mentioning. Everything from text to images, videos, and various graphics adds up, consuming a significant amount of energy and resources to produce. Carbon emissions and water usage are two areas of major concern for most people. No matter one's stance, I believe the consequences and greater costs of AI should be made more public and normalized. I have met many people who had no idea it drew on environmental resources at all, and they have looked at it in a different light ever since. Alongside this, other issues arise around privacy and ownership. People need to know their rights and risks before using AI, given the mass capture of personal information and data. The content AI is trained on is collected without obvious consent, which makes integrity blurrier than ever before. Therefore, even if AI can be used for good, innocent, and helpful tasks, ethical issues still sit behind it all. We have to weigh this morally and be conscious of the responsibilities we carry when using such powerful tools. https://youtu.be/H_c6MWk7PQc?si=F_FxXiTpGoVAoHO2

From John Green's YouTube channel