I have seen a lot of things in my short time at this University, from the Newcomb steps becoming a waterfall to Trin lines backed up to Insomnia Cookies. But most strikingly, I've seen lots of artificial intelligence. There is now an AI-generated overview for every Google search, and ChatGPT is capable of creating notes for the readings and assignments that students fail to complete themselves. It seems AI has become unavoidable in our daily lives. And during this past finals season, this software was even more appealing to use for assignments and studying, though it remains academically compromising. As AI software grows, it is crucial that we reevaluate our personal reliance on it and begin to understand how it acts as a problematic shortcut.
Ever since apps like ChatGPT took off, student usage of AI software has grown immensely. And as our reliance has grown, so has our willingness to bypass academic integrity through such tools. Amid fears of eroding academic integrity, there have been attempts to address these concerns: Gov. Glenn Youngkin has issued an executive order setting guidelines for AI in classrooms, and the Honor Committee has outlined its own stance on AI. But while these policies target formal academic dishonesty, they do nothing to address the other dimension of this problem: students' personal dependency on AI.
Professors around the country have already started raising concerns about our generation's capacity for critical thinking and original work. Specifically, they have suggested that we are losing both of these abilities and that this loss is only further enabled by AI shortcuts. And they are not just referring to the use of AI to write essays, or cheating outright. Since most generative AI is not banned from assignments like weekly readings, students can easily rely on it to synthesize conclusions without reading a single page, a habit that reinforces a personal dependency on AI. I get it. We college students do not always have time to read endless pages for humanities classes or to sit and write original code for computer science. But if we rely too heavily on this software, we risk losing crucial capabilities that help us discern and evaluate information.
It is concerning how easily we are willing to undermine the skill sets we are meant to acquire in college, chief among them critical thinking and original thought. Indeed, part of being a student at any university, especially ours, is honing our critical thinking and analytical skills. We are here to think through dense concepts and develop our own ideas in the process. These are foundational principles of academia, ones which prepare us to engage with and contribute substantively to society at large after our years here. By neglecting these principles and allowing AI to do our thinking for us, we do ourselves a great disservice, both while attending this University and in the future.
It is not just our thinking skills that are vulnerable when we take these shortcuts but also the very facts we rely upon, because the information AI produces is itself potentially false. Such software is known to generate unreliable information since, by design, it simply synthesizes human-written text. And because many of the sites from which this software draws information are either biased or not completely fact-checked, the conclusions AI reaches can be faulty. As students especially, we cannot blindly rely on fallible systems to shortcut our own analysis and critical thinking. In this way, a toxic feedback loop is created: we use AI instead of thinking critically, our critical thinking abilities diminish and we come to rely even more heavily on AI.
Yet it would be a misrepresentation of AI to overlook its benefits completely. It is a useful tool for working through difficult tasks. AI can help organize ideas and offer something close to objective feedback on work submitted to software like ChatGPT. By simplifying complex concepts, these features may even help us develop rather than inhibit our growth. But they can only be helpful, rather than harmful, if we have already built the internal abilities to evaluate those simplifications and syntheses. And the place we develop those abilities is in college, in the mundane acts of reading and taking notes. So while this software may have benefits outside the university, during our four years as students here, we must be cautious of any activity that effectively replaces the skills we are supposed to be developing.
Universities have spent too much time focusing on academic dishonesty in relation to AI, failing to realize that it is not the large acts of AI plagiarism that are most dangerous. Rather, it is the mundane, daily uses that create a slippery slope. To be sure, addressing AI at this level is outside the purview of the University, so the responsibility falls on students to be more conscious of and accountable for their own AI use. Relying too heavily on AI to perform critical analysis hinders our ability to engage deeply and to develop critical thinking ourselves. We must be wary of how AI might undermine the skills that make us strong contributors to society and democracy.
Ryan Williams is a viewpoint writer who writes about politics for The Cavalier Daily. He can be reached at opinion@cavalierdaily.com.
The opinions expressed in this column are not necessarily those of The Cavalier Daily. Columns represent the views of the authors alone.