ChatGPT: Navigating the Legal Loopholes of AI

Artificial Intelligence (AI) has attracted much attention from the scientific, industry, and tech communities. Even the general population has been intrigued by the development of AI, an interest fueled by series, documentaries, and novels showing how our future coexisting with AI may look. Beyond these futuristic scenarios, the rapid development of AI technologies represents a quickly evolving challenge for current laws.

On November 30th, 2022, one of the most significant breakthroughs in AI was released to the public: the ChatGPT chatbot. In general terms, ChatGPT is a program designed to assist users by answering questions and providing information on a wide range of topics. In contrast to search engines such as Google or Bing, the chatbot gives a fluent, conversational answer in a human-like form. Natural language processing is becoming more sophisticated, and its output may eventually become indistinguishable from language produced by humans. ChatGPT is capable of writing complex essays and poems, and has even been listed as an author in a scientific journal.

ChatGPT has generated a lot of excitement but has also raised several serious concerns. The advent of AI chatbots may redefine the educational, healthcare, scientific, and intellectual property fields.

The ability of ChatGPT to write clear essays and poetry, and to answer questions concisely, has made it popular but has also alarmed universities. Because ChatGPT phrases its answers in an almost unique form, it can evade state-of-the-art plagiarism-detection software and is forcing professors to re-evaluate the way they teach. Adapting how student work is evaluated and graded, and addressing the ethical implications, will require reforming existing university guidelines.

In science, ChatGPT may be a valuable tool to assist researchers in their studies and to help non-native English speakers improve their manuscripts. There is now an open debate on whether it is appropriate to simply cite the AI tool or whether the chatbot should be listed as an author. Indeed, ChatGPT was recently listed as one of the authors of a medical study. This has raised red flags, as many editors and scientists do not believe that an AI can take responsibility for the content and integrity of scientific papers and therefore cannot meet the criteria for authorship. Publishers of scientific articles, such as Nature and Science, are taking action to regulate the use of AI in accepted articles.

ChatGPT has also been used for medical purposes. According to a recent medical research study, ChatGPT passed the United States Medical Licensing Exam (USMLE), demonstrating a high level of concordance and insight in its explanations of the answers. While this sounds highly impressive, doctors and specialists have warned about the risks of using the chatbot for diagnoses and referrals because, once again, the AI cannot take any real responsibility.

Although the advantages of AI are immense and undeniable, the legal loopholes and gaps are becoming more evident. The impact of AI is now immediate in many aspects of our society, including education, health care, and science, and new laws are urgently needed to regulate it.


About the Author

Jazmin Labra Montes is an Academic Advisor for the National Autonomous University of Mexico and is currently working at the University of Cambridge. Jazmin’s research interests are data protection law and health data.