
The Role of AI Technology and Ethical Decision-Making: Perspectives From An Undergraduate Student and Faculty Member
Student Success Technology
August 8, 2025
The Rise of AI in Academic Life
Most college students keep a detailed checklist of priority “to-do” items. As students head back to campus at the start of a new semester, two items stand out: 1) ensuring that they have downloaded their course syllabi, and 2) confirming that their ChatGPT subscription has not expired. In a few short years, AI (including ChatGPT, or “Chat”) has become many students’ favorite tutor and tool, integrating itself into higher education and transforming the way students approach their academic work (Walsh, 2025). The ethical quagmire around the use of AI raises the question, “Is this reliance on AI technology really a good idea?” The issue poses an ongoing quandary around student integrity and ethical decision-making, as educators and students alike grapple with the pros and cons of AI use in academic settings.
The potential of generative AI appears promising enough to help revolutionize learning while simultaneously reshaping students’ creativity and academic integrity. The purpose of this scholarly blog is to share perspectives on AI use from both a student and a faculty member. Aysa Tarana is a recent pre-med graduate in Genetics, Cell Biology, and Development with minors in Art History and Philosophy from the University of Minnesota Twin Cities. Starting in fall 2025, she will be a master’s student in the Integrative Biology and Physiology MS program. Michael Stebleton is Professor of Higher Education at the University of Minnesota Twin Cities. He teaches both undergraduate and graduate courses in the College of Education and Human Development.
Higher education, as a system intended to nurture both the skills and souls of its students, serves as a support system designed to develop students’ abilities and propel them toward their career and life goals (Stebleton & Tarana, 2024). While college exists to challenge students to think critically and to enhance their authenticity and uniqueness, extensive and unlimited access to AI could counter that narrative. On a personal level, while some students may see AI tools as valuable for enhancing their creativity and aiding their learning, others may see them as contributing to a steady erosion of critical thinking (Terry, 2023).
To explore these issues in higher education and artificial intelligence, I (Aysa) conducted informal interviews with 10 college students who identify as women and people of color (BIPOC), two identities that resonate with my own student identity. Although this was not an empirical study, these individuals were informally selected through purposive sampling to understand their perspectives on AI’s nuanced role in education. These participants were chosen not only for their range of academic disciplines, including technology, business, and healthcare, but also for the intersectional perspectives they brought on the way technology has impacted their learning, access, and student identity. Although this inquiry does not represent a formal research study, the responses gathered from this informal exercise capture some of the sentiments held by undergraduate students about AI usage in their academic work.
How AI Benefits Students’ Work
Based on an informal synthesis of the interviews conducted, many students enthusiastically praised AI for helping them understand complex academic assignments and ensuring higher-quality submissions. One student, a senior at the University of Minnesota Twin Cities, noted how AI tools, specifically Grammarly, helped them redefine (and rewrite) their ideas, leading to a greater sense of confidence in their submissions for creative writing courses. AI has been this student’s assistant, helping them break down their thoughts into tangible sentences, giving them personal feedback, and allowing them to learn from their mistakes (Luckin et al., 2016). Why carve out one hour of your day to go to a professor’s office hours when the tools for success are only a few clicks away? A moment of confusion about a writing assignment or a question about how to break down the citric acid cycle could be answered in the same amount of time it takes to spell the word “citric.”
Furthermore, AI’s swift ability to provide information further enhances its advantages for students. Countering claims that it diminishes creativity, one student from the interviews proclaimed that AI “improves creativity by giving alternate options that you can build on.” This statement reflects how AI offers alternative perspectives and solutions that can spark original ideas. In other words, when used intentionally, AI can be the key to unlocking new avenues for creative problem-solving.
Many students believe that AI enables them to explore ideas and approaches they have never considered previously. Harvard University’s Nagelhout (2024) contended that “one of the most positive things about generative AI is its potential for creativity and exploration” (para. 7). In this sense, AI is not just an assistant, but a valuable partner in the creative process, clearly helping students enhance their creativity and think divergently. This form of creative collaboration is praised by students who believe that AI “makes it easier to do better,” such as generating ideas, structuring essay prompts into outlines, and providing specific suggestions as examples. When working on complex problems or trying to understand difficult concepts, the traditional educational approach may involve a narrow set of possible solutions based on rote information the student already knows. AI, however, can generate diverse results, encouraging students to reconsider their approaches to the problems that stump them.
The Risk of Over-Reliance on AI Tools
The double-edged sword of AI’s capabilities, though, lies in the fact that students must actively engage with AI to truly unlock new paths toward enhancing their creative abilities and problem-solving skills. Otherwise, by generalizing students’ learning experiences and reducing academics to a race for efficiency and productivity rather than individuality, AI may supplant a student’s critical thinking skills. Over the past few years of my undergraduate career, I (Aysa) have heard the word “ChatGPT” almost every day in and out of my classes. ChatGPT represents the colossal capabilities of AI: if AI can help students write essays, come up with resume templates, generate conversation prompts, and even assist with math homework, why wouldn’t students develop an over-reliance on it? AI’s polished, generic outputs are incredibly helpful for synthesizing ideas into direct forms, but the acts of creativity and thought get eclipsed and lost in the process. If we can turn to AI to do our thinking for us, the value of our critical thinking, writing, decision-making, and creativity seems to pale in comparison to the limitless potential of AI.
Perhaps more disconcerting is that an over-reliance on AI usage, especially with writing assignments, outsources the critical thinking and innovative processes that writing requires (Warner, 2025). In a sense, the student loses their agency and forgoes their voice when they rely on Chat to do the critical, more challenging work for them. As faculty, we (Michael and his faculty colleagues) are often left wondering as we grade papers, “Are these really the student’s own words?” Again, both students and faculty are questioning the best ways to embrace new technologies while also supporting authentic student engagement and learning.
To combat the risk of losing our own thoughts to AI, students like myself (Aysa) need to actively choose not to simply take an answer that AI provides without inserting our own critical thinking. Becoming a passive receiver of AI impairs the critical thinking skills needed to function effectively, especially in post-secondary education. Rather than absorbing the information that AI provides at a surface level, students need to know how to identify patterns, formulate their own thoughts and connect them to different ideas, and push the boundaries of rote thinking to actively engage with the content they learn. The authoritative nature of AI misleads students into believing that the answers they receive are trustworthy, leaving them unaware of the biases present within an AI’s training data. This bias results in the consolidation and dissemination of information of questionable quality, at times even demonstrably and blatantly incorrect (Larson et al., 2024).
With AI’s succinct and informed tone of presenting content, students may be less inclined to question or critically assess AI’s claims, accepting them instead as fact. As a result, AI-generated responses are often taken at face value, despite the necessity of applying the same level of fact-checking one would apply to traditional literary sources. This shift has significant implications for key academic skills. If AI is constantly relied upon for basic knowledge, how will students learn the skills they need for higher-level learning? There is irreplaceable value in the ability to conceptualize, analyze, and evaluate concepts. As a cornerstone of learning, these skills cannot be computerized. Nevertheless, ethical choices can start to seem small compared to the ease of getting what you want.
Thoughtful AI Use in Academia
At the University of Minnesota Twin Cities, I (Aysa) have completed classes where the professors condemn AI to the highest degree, citing it as the ultimate demise of creativity. Yet, in other courses, my professors have expressed a completely different message, encouraging me and my peers to use AI to our advantage. As a result of these conflicting directives, students are often confused about when and how to use Chat. In their own work, faculty members are either experimenting with generative AI to enhance learning and teaching, or plotting to put all AI programs in a permanent time-out corner.
With these polarizing viewpoints, we (Aysa and Michael) both have had to ask challenging questions: Is intentionally creating spaces to explore AI’s potential in a responsible manner the best route to understanding its benefits? Or is it better to restrict AI in order to promote students’ own capabilities without reliance on an external tool that is so easily accessible? While I (Aysa) personally refrain from using AI, I also ponder the following question: Would restricting AI usage harm students’ educational experiences by removing a potentially valuable learning resource? (Note: for the record, we did not use AI in collaborating on this blog post, and we did not discuss or consider using it.)
I (Aysa) wondered about these questions, twirling my pen in my campus’s health sciences library, my computer lying inches away from me. I realized I could have easily answered my questions within seconds by opening a new browser tab to ChatGPT. What is my ethical responsibility in this situation? Had I opted to cheat, my answers would not have been as authentic, and my values would have been compromised in the process. Outsourcing my thinking—even to a digital version of myself—troubled me. At what point does AI reach its limits? To find out, I began asking AI to help me format my own thoughts in personal and academic contexts, to understand its capabilities.
While we (Aysa and Michael) both continue to grapple with the ethical implications of AI in our own work, some universities have begun actively embracing AI’s potential. Beyond individual concerns, educational institutions have taken it up a notch by launching AI-driven initiatives to explore the roles of AI in student learning. For example, the California State University System announced a partnership with OpenAI to explore the capabilities of AI across all 23 of its campuses and to develop approaches for using AI ethically in the classroom. This partnership highlights the important role that institutions themselves play in shaping the future of AI in education (Palmer, 2025).
From students’ perspectives, AI, while a valuable tool, is tied to concerns about plagiarism and academic misconduct. As one business student shared, “I am not sure whether I like or do not like AI because it is not easy to detect what is AI-generated and what is not, especially as technology continues to develop rapidly.” This uncertainty highlights a growing dilemma within academia: AI blurs the line between machine-generated content and authentic student work. On one side of the debate, some university leaders and faculty are attempting to eliminate AI and related tools; the University of Notre Dame, for example, decided to ban the use of Grammarly, despite much debate and rancor from students (Palmer, 2024). At best, the debates around appropriate ethical AI usage remain murky for students and teachers alike (Goldstein, 2025; Hill, 2025).
Negotiating AI Use: Student-Faculty Interaction
Many students find themselves in a gray area, wondering whether using AI for assistance would constitute a breach of academic integrity, not to mention of one’s own personal code of values and ethics. In today’s academic setting, it often feels, as a student (Aysa), that the value of the time needed to learn, understand, and teach others is overlooked. Brainstorming ideas, frantically penning down thoughts, discussing a paper’s outline, and revising every sentence five times over until you finally hit the submission button seconds before the 11:59 PM deadline is increasingly becoming a dead sport. This deeply intellectual, formative process can be artificially sped up with AI, and taking that shortcut can become habitual.
Students have begun to attach themselves to the idea of “checking off the boxes” (i.e., getting the work done quickly) rather than learning for the joy of learning itself. AI has the capacity to eradicate an effortful process of learning, replacing it with performative and transactional shortcuts. Academic labor is at risk of devolving into a fruitless cycle of disengagement and instantaneous gratification (Sarofian-Butin, 2025), raising questions about the true purpose of learning in universities. While some universities have heavily implemented AI-detection tools, those tools remain effective only as long as the current version of AI stays the same—and even then, they are sometimes woefully inaccurate. To the dismay of faculty members nationwide, the ambitious promise of AI-detection tools turns to disappointment as AI evolves rapidly.
Similarly, faculty members find themselves with questions about how to negotiate the use of AI in classrooms, leading to uncertainty and confusion for instructors and their students (Starn, 2025). Professors have returned to old yet reliable methods, re-introducing in-person multiple-choice quizzes and blue book exams. Some have attempted to design assignments that aspire to be “AI-proof”—without much documented success (Sarofian-Butin, 2025). Inevitably, some faculty (including Michael and his fearless teaching assistants) reluctantly take on the role of ChatGPT policy enforcement officials, a role that is never welcomed. Despite faculty members’ best intentions, I (Michael) often question whether students really want feedback on their work, and whether they actually value regular, in-person student-faculty interactions (Latham, 2025). This question about the desire for more human connection between students and faculty remains valid, especially as chatbots and AI teaching assistants now take on many tasks once performed by instructors (Walter, 2024).
Unfortunately, both faculty members and students can fall into an implicit agreement of complacency around AI use. Some students clearly do not put much effort into assignments, and faculty respond accordingly, investing little in evaluating students’ papers and often offering perfunctory comments (some of which are likely AI-produced). When this scenario occurs, a vital opportunity is missed for both the student and the professor to contemplate the ideas and value embedded in the work itself. Professors miss their chance to better understand how their students are thinking and evolving, and students are deprived of the intellectual growth that comes from reflection and feedback. As a faculty member who regularly teaches undergraduates, I (Michael) seek a more nurturing and collaborative relationship with my students based on mutual respect and trust. With mutual accountability and trust, faculty and students can reshape their collective educational experience in a way that acknowledges the persistent reality of AI and reaffirms the understated value of human effort, intellect, and growth.
The Future of AI in Academia
Beyond academia, students are navigating how AI can shape their future careers after higher education. They acknowledge both AI’s merits and its associated risks. In my (Aysa) informal study, numerous students expressed that AI is useful for streamlining tasks, with one noting, “I think that AI helps with everyday work tasks such as crafting emails.” Nevertheless, a few students voiced caution, with one stating, “I think there are a lot of ways AI can be mismanaged or mishandled, but there are also many ways it can enhance our lives.” These perspectives suggest that while AI has potential, ethical considerations are essential to ensuring its responsible implementation.
Student affairs educators occupy critical roles in supporting students as they navigate the evolving landscape of AI beyond academia. As AI becomes increasingly integrated into multiple professional sectors, both students and faculty members will need to find ways to co-exist with AI systems. It is almost certain that AI is here to stay—and advocates often suggest that AI will take over many of the functions currently held by higher education professionals, including those tasks completed by academic advisors, career counselors, teaching assistants, and faculty members (Latham, 2025). Despite the uncertainty that lies ahead, student affairs educators and higher education professionals can take proactive steps to support students around ethical AI use in the future.
Three Recommendations for Student Affairs Educators
Based on the informal analysis of the interviews conducted, we offer several recommendations for student affairs professionals. The suggestions are also grounded in the literature on the emerging topic of AI in higher education.
Strategy #1: Create Environments to Openly and Authentically Discuss Use of AI
Instead of prohibiting AI completely or promoting unchecked reliance, student affairs educators can foster discussions about using AI responsibly. By hosting workshops, panels, and student-led discussions, educators can create spaces for students to critically evaluate when and how AI can be a valuable tool for learning without replacing their own creative thinking. These spaces may offer opportunities to converse about ethics and student integrity (Bertram Gallant & Rettinger, 2025).
Strategy #2: Be Clear and Consistent about the Use of AI and Related Technologies
Student affairs educators and faculty members should create clear policies on AI usage that differentiate between ethical assistance and unethical academic dishonesty. Student affairs educators should work to clearly communicate these policies and ensure that students know how to integrate AI into their work while maintaining academic integrity. Similarly, faculty members can initiate supportive and honest discussions about potential academic dishonesty scenarios (e.g., suspected cheating on a paper)—and treat such cases as developmental learning opportunities. Additionally, students must realize that each course and faculty member may have different policies and expectations around AI use in the classroom.
Strategy #3: Strategically Integrate AI into Student Support Services and Tools
As AI and technology increasingly shape professional fields, educators can provide career development resources to prepare students for workplaces that integrate AI. This training may include AI literacy programs, AI-enhanced resume-building sessions, and ethical discussions about AI in professional settings. Most educators agree that AI literacy is critical, yet there is little consensus on what this means or includes (DeVaney, 2025). We recommend that students, student affairs educators, and faculty members create opportunities to collaborate around issues and expectations of AI use.
In conclusion, AI is not going away. Hardly a day goes by without an article published in the Chronicle of Higher Education or Inside Higher Education about AI and its impact on higher education and student learning. Clearly, AI (and platforms such as ChatGPT) can be viewed as a useful tool or as a curse. Students and educators can take active steps to engage in intentional conversation about ethical AI use—including how students might use technology to enhance their work without overtaking their creativity, critical thinking, and voice.
Discussion Questions:
- Examine your own use of AI and technology. How have you used ChatGPT or other large language models? Do they make you a stronger writer?
- If you are a student affairs educator, how might you approach a conversation with a student about ethical AI use?
- If you are a student using AI, how do you decide how and what to use in your own work?
- Identify one concept from this blog that you might integrate into your own work.
References
Bertram Gallant, T., & Rettinger, D. A. (2025). The opposite of cheating: Teaching for integrity in the age of AI. The University of Oklahoma Press.
DeVaney, J. (2025, March 4). AI and education: Shaping the future before it shapes us. Inside Higher Education. https://www.insidehighered.com/opinion/blogs/learning-innovation/2025/03/04/ai-and-education-shaping-future-it-shapes-us
Goldstein, D. (2025, April 14). Teachers worry about students using A.I. but they love it for themselves. The New York Times. https://www.nytimes.com/2025/04/14/us/schools-ai-teachers-writing.html?smid=nytcore-ios-share&referringSource=articleShare
Hill, K. (2025, May 14). The professors are using ChatGPT, and some students aren’t happy about it. The New York Times. https://www.nytimes.com/2025/05/14/technology/chatgpt-college-professors.html?unlocked_article_code=1.HU8.hu4V.ny2YlhrBeMGA&smid=url-share
Larson, B. Z., Moser, C., Caza, A., Muehlfeld, K., & Colombo, L. A. (2024). Critical thinking in the age of generative AI. Academy of Management Learning & Education, 23(3). https://doi.org/10.5465/amle.2024.0338
Latham, S. (2025, April 8). Are you ready for the AI university? The Chronicle of Higher Education. https://www.chronicle.com/article/are-you-ready-for-the-ai-university?utm_source=Iterable&utm_medium=email&utm_campaign=campaign_13160393_nl_Afternoon-Update_date_20250408&sra=true
Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed. An argument for AI in education. Pearson.
Nagelhout, R. (2024, September 10). Students are using AI already: Here’s what they think adults should know. Harvard Graduate School of Education. www.gse.harvard.edu/ideas/usable-knowledge/24/09/students-are-using-ai-already-heres-what-they-think-adults-should-know
Palmer, K. (2024, November 26). Is Grammarly AI? Notre Dame says yes. Inside Higher Education. https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2024/11/26/grammarly-ai-notre-dame-says-yes
Palmer, K. (2025, February 5). Tech giants partner with Cal State system to advance ‘equitable’ AI training. Inside Higher Education. https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2025/02/05/cal-state-system-tech-giants-partner
Sarofian-Butin, D. (2025, March 19). In the age of AI, is education just an illusion? The Chronicle of Higher Education. https://www.chronicle.com/article/in-the-age-of-ai-is-education-just-an-illusion
Starn, O. (2025, February 27). My losing battle against AI cheating. Duke Chronicle. https://www.dukechronicle.com/article/2025/02/losing-battle-ai-cheating
Stebleton, M. J., & Tarana, A. (2024, February 26). Building students’ souls or skills? Student affairs educators can foster both goals. Journal of College and Character Connexions, 10(1). https://www.naspa.org/blog/building-students-skills-or-souls-student-affairs-educators-can-foster-both-goals
Terry, O. K. (2023). I’m a student. You have no idea how much we’re using ChatGPT. The Chronicle of Higher Education. https://www.chronicle.com/article/im-a-student-you-have-no-idea-how-much-were-using-chatgpt?sra=true
Walsh, J. (2025, May 5). Everyone is cheating their way through college. New York Magazine. https://nymag.com/intelligencer/article/openai-chatgpt-ai-cheating-education-college-students-school.html
Walter, A. (2024, July 10). A professor’s digital mini-me: Could Morehouse College’s AI teaching assistants make a difference? The Chronicle of Higher Education. https://www.chronicle.com/article/a-professors-digital-mini-me?sra=true
Warner, J. (2025). More than words: How to think about writing in the age of AI. Basic Books.
Author Note: The authors would like to thank Vic Massaglia for his helpful comments on this post.