College & Research Libraries News vol. 84, no. 6 (June 2023)

ChatGPT conundrums: Probing plagiarism and parroting problems in higher education practices

Zoë (Abbie) Teel, Ting Wang, and Brady Lund

Zoë (Abbie) Teel is a Master of Library Science candidate (May 2024) at the University of North Texas, email: abbieteel@my.unt.edu. Ting Wang is a lecturer at the School of Library and Information Management at Emporia State University, email: twang2@emporia.edu. Brady Lund is an assistant professor in the University of North Texas's Department of Information Science, email: Brady.Lund@unt.edu. © 2023 Zoë (Abbie) Teel, Ting Wang, and Brady Lund.

The field of Natural Language Processing (NLP) has seen significant advancements in recent years, thanks in large part to the development of powerful language models such as ChatGPT. ChatGPT, short for Chat Generative Pre-trained Transformer, is a large-scale neural language model developed by OpenAI that is capable of generating human-like responses to natural language input. With its impressive performance on a range of language tasks, ChatGPT has quickly become one of the most widely used language models in NLP research and application.1

The preceding paragraph showcases ChatGPT's capabilities: it was composed and formatted entirely using artificial intelligence (AI). Clearly, ChatGPT holds tremendous power when it comes to writing-based tasks. As noted in a recent C&RL News article by Christopher Cox and Elias Tzoc, ChatGPT and similar large language model technologies have the potential to be disruptive technologies, significantly affecting not just academic libraries but higher education as a whole.2 In this article, we explore some of these potential issues and propose a few ways that we, as information professionals, may be able to help address them as they emerge.

Challenges of ChatGPT in higher education

The emergence of ChatGPT has generated substantial concern and debate among academics about its potential impact on plagiarism, academic integrity, and the reliability of scientific research.3 While ChatGPT offers opportunities to improve communication and collaboration, it also presents several challenges, including the proliferation of plagiarism cases, the creation of fictitious references, and the propagation of hidden biases related to gender, race, ethnicity, and disability status.4

Plagiarism and academic integrity

A central question raised by the introduction of ChatGPT is where to draw the line for plagiarism when using this AI tool.5 For instance, if an author uses ChatGPT to improve the readability of an existing article or to gather information to support a point of view, should this be considered plagiarism? Furthermore, when students rely on ChatGPT's responses without proper citation, are they committing academic dishonesty? ChatGPT itself suggests that using its responses without attribution constitutes plagiarism, while employing it to generate ideas does not, provided those ideas are developed into original work.

However, this distinction may not be clear-cut for many users, leading to confusion and potential misuse of the technology. Some academics argue that using ChatGPT is as unethical as ordinary plagiarism and could have serious consequences, including failing grades and academic penalties.6 To address this issue, universities have adopted detectors like GPTZero and implemented measures to check for possible plagiarism in assignments.7 At the same time, academia must consider the potential benefits of the technology and establish ethical boundaries that strike a balance between human expertise and advanced tools.8

One approach to mitigating plagiarism concerns is to incorporate AI tools like ChatGPT into academic curricula as educational aids, teaching students how to use them responsibly and ethically.9 This would allow students to harness the potential of AI for research and idea generation while maintaining a strong foundation in academic integrity.

Fictitious references

Another challenge posed by ChatGPT is the creation of fictitious references, which can potentially undermine the credibility of academic research.10 When ChatGPT generates a research paper, it may cite or invent references to articles that do not exist. For example, the cited author may be real and may have published research on the subject, while the article in the citation is a fabrication.11 Authors are responsible for verifying the accuracy of references and citations provided by ChatGPT, as accurate citations are essential to maintaining the integrity of academic research.12

Citation practices are a crucial element of academic writing, as they contribute to the credibility of the author's work. They demonstrate the author's knowledge and expertise in their field and the breadth of research conducted. Citing sources is also a way of paying homage to the work of other researchers, thereby showing respect for their contributions.

To address the issue of fictitious references, universities and research institutions should implement measures to ensure that all citations in academic work are accurate and verifiable. These measures could include using citation-management software, requiring authors to provide full-text copies of cited sources, and implementing regular citation audits as part of the peer-review process.

Bias in ChatGPT-generated content

A further concern with ChatGPT is the potential for the AI to unintentionally perpetuate hidden biases related to gender, race, ethnicity, and disability status when used in academic research.13 The increased use of AI-generated research papers poses a risk to the reliability of scientific research, because the biases and errors such papers contain may be difficult to detect and correct.14 While ChatGPT responds without personal opinions or beliefs, it is unclear whether or how the AI can mitigate prejudice if it is trained on material shaped by individuals with strong opinions.15

To address this issue, researchers and developers should prioritize transparency in AI training processes and datasets to minimize potential biases. Efforts should also be made to include diverse perspectives in the development and training of AI models, so that the models represent different experiences and viewpoints.

Moreover, users of AI tools like ChatGPT must exercise critical thinking and diligence when using AI-generated content in their research. They should be aware of potential biases and verify the accuracy and reliability of the information provided by the AI.
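Part of the verification work described above can be automated. The following is a minimal sketch, not a tool named in this article, assuming the public Crossref REST API (api.crossref.org); the function names `extract_dois` and `doi_resolves` are illustrative. It pulls DOI strings out of a free-text reference list and asks Crossref whether each one corresponds to a real bibliographic record:

```python
import json
import re
import urllib.error
import urllib.parse
import urllib.request

# Matches DOI strings such as 10.1007/s12195-022-00754-8
DOI_PATTERN = re.compile(r"10\.\d{4,9}/[^\s\"<>]+")

def extract_dois(reference_text):
    """Pull DOI strings out of a free-text reference list."""
    return [doi.rstrip(".,;") for doi in DOI_PATTERN.findall(reference_text)]

def doi_resolves(doi, timeout=10):
    """Return True if Crossref holds a metadata record for this DOI.

    A resolving DOI shows only that the cited work exists; a human
    reviewer must still confirm that the author, title, and venue
    in the citation match the actual record.
    """
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.load(resp).get("status") == "ok"
    except urllib.error.HTTPError:
        return False  # HTTP 404: Crossref has no such record
```

A screen like this can only flag fabricated identifiers; many fabricated citations carry no DOI at all, and a plausible-looking DOI that resolves to a different article will pass, so manual checking against a bibliographic database remains essential.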
Such diligence will help maintain the integrity and credibility of academic research while mitigating the risks associated with AI-generated content.

Conclusion

The introduction of ChatGPT has brought academia to a critical juncture at which plagiarism and ethical boundaries must be redefined and articulated. As at other points in educational history, the emergence of new inventions and tools often leads to a period of disorientation and debate.16 Rather than viewing ChatGPT and similar technologies as threats, academics should embrace the challenges they present and use them as opportunities to broaden and deepen their understanding of ethical and responsible boundaries. Engaging in debates about the role of AI in higher education can heighten awareness of necessary boundaries and promote ethical use of these technologies. Furthermore, collaboration among AI developers, researchers, educators, and students is crucial in navigating the ethical challenges presented by ChatGPT and other AI tools.

Endless possibilities: that is the reality of where artificial intelligence of this nature is leading the academic community. It cannot be stopped. In time, a tool like ChatGPT may come to look like a minor early creation, one that laid the foundation for better, more extensive, and more efficient systems as commonplace in academic settings as Canvas, Google Workspace, Microsoft Office, and smartboards are today. The academic community must make a choice: to embrace or to fear. Embracing AI could unlock significant potential, increasing the production of publications, helping students with their work, and reinforcing critical thinking. Fearing this innovation could mean a missed opportunity. Academic communities and individual institutions must determine their approach and get ahead of this new wave of technology.

Notes

1. ChatGPT, response to "What can you tell me about ChatGPT and its significance in the field of Natural Language Processing (NLP)?," OpenAI, March 2, 2023.

2. C. Cox and E. Tzoc, "ChatGPT: Implications for Academic Libraries," College & Research Libraries News 84 (2023): 99–102.

3. B. D. Lund, T. Wang, N. R. Mannuru, B. Nie, S. Shimray, and Z. Wang, "ChatGPT and a New Academic Reality: Artificial Intelligence–Written Papers and the Ethics of the Large Language Models in Scholarly Publishing," Journal of the Association for Information Science and Technology 74 (2023): 570–81.

4. S. Yanisky-Ravid, "Generating Rembrandt: Artificial Intelligence, Copyright, and Accountability in the 3A Era: The Human-Like Authors Are Already Here," Michigan State Law Review 659 (2017).

5. M. R. King, "A Conversation on Artificial Intelligence, Chatbots, and Plagiarism in Higher Education," Cellular and Molecular Bioengineering 16 (2023): 1–2, https://doi.org/10.1007/s12195-022-00754-8.

6. D. Cotton, P. Cotton, and J. R. Shipway, "Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT," EdArXiv Preprints, January 10, 2023, https://doi.org/10.35542/osf.io/mrz8h.

7. Lund et al., "ChatGPT and a New Academic Reality."

8. B. D. Lund and T. Wang, "Chatting about ChatGPT: How May AI and GPT Impact Academia and Libraries?," Library Hi Tech News, February 14, 2023, https://doi.org/10.1108/LHTN-01-2023-0009.

9. S. Pertile, V. P. Moreira, and P. Rosso, "Comparing and Combining Content- and Citation-Based Approaches for Plagiarism Detection," Journal of the Association for Information Science and Technology 67 (2015): 2511–26.

10. R. Dale, "GPT-3: What's It Good For?," Natural Language Engineering 27, no. 1 (2021): 113–18.

11. M. P. C. Lin and D. Chang, "Enhancing Post-secondary Writers' Writing Skills with a Chatbot," Journal of Educational Technology & Society 23, no. 1 (2020): 78–92.

12. K. Hyland, "Academic Attribution: Citation and the Construction of Disciplinary Knowledge," Applied Linguistics 20, no. 3 (1999): 341–67.

13. King, "A Conversation on Artificial Intelligence."

14. C. Basta, M. R. Costa-jussà, and N. Casas, "Evaluating the Underlying Gender Bias in Contextualized Word Embeddings," Proceedings of the Workshop on Gender Bias in Natural Language Processing 1 (2019): 33–39, https://doi.org/10.18653/v1/W19-3805; A.-M. Founta, C. Djouvas, D. Chatzakou, I. Leontiadis, J. Blackburn, G. Stringhini, A. Vakali, M. Sirivianos, and N. Kourtellis, "Large Scale Crowdsourcing and Characterization of Twitter Abusive Behavior," arXiv:1802.00393 (2018), http://arxiv.org/abs/1802.00393.

15. B. Gordijn and H. ten Have, "ChatGPT: Evolution or Revolution?," Medicine, Health Care, and Philosophy 26 (2023): 1–2.

16. J. Brennan, S. Ryan, M. Ranga, S. Broek, N. Durazzi, and B. Kamphuis, "Study on Innovation in Higher Education," Publications Office of the European Union (2014), https://www.lse.ac.uk/business/consulting/assets/documents/study-on-innovation-in-higher-education.pdf.