AI and ChatGPT in teaching
Artificial Intelligence (AI) poses new challenges to higher education institutions. This page collects information and (later on) recommendations for action on how to deal with AI in teaching. Since November 2022, the currently best-known AI application has been on the market: ChatGPT. Due to its availability and widespread use, this website currently focuses on information related to the use of ChatGPT.
The website will be updated continuously, current status: 20.04.2023 (update on data protection).
What is ChatGPT?
ChatGPT is a text-based, dialogue-oriented chatbot that can be accessed and used via a browser. ChatGPT was developed by the US-based company OpenAI, which specialises in AI research and application, and was released in November 2022. ChatGPT is based on the GPT-3.5 language model. Language models are machine learning models that calculate word-order probabilities, i.e. they try to predict the most likely next word given the preceding words and thereby generate human-like text.
ChatGPT:
communicates in multiple languages and is not limited to any specific topic.
uses AI to interpret user queries and respond to them as naturally as possible (text generation). Users converse with ChatGPT in natural language.
"understands" the context of a conversation: the software remembers the course of the conversation and can refer back to earlier points in it. Users can, for example, ask follow-up questions on a topic, have a subject explained again or in simplified form, and much more. A dialogue is created.
can create anything with a text structure, i.e. it also generates programming code in various programming languages and solves mathematical problems.
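The next-word prediction described above can be illustrated with a deliberately simplified sketch. The following toy bigram model (an illustration only; ChatGPT actually uses a large transformer network trained on vastly more data) counts which word most often follows another and predicts accordingly:

```python
from collections import Counter, defaultdict

# Toy corpus; real language models are trained on billions of words.
corpus = "students ask the chatbot and the chatbot answers the questions".split()

# Count how often each word follows each preceding word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word after `word`, with its probability."""
    counts = following[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

# In this corpus, "the" is followed twice by "chatbot" and once by "questions",
# so the model predicts "chatbot" with probability 2/3.
print(predict_next("the"))
```

Scaling this idea up — much longer contexts, neural networks instead of raw counts, billions of training words — is, in essence, what turns such a model into a system like GPT-3.5.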
How can ChatGPT be used?
ChatGPT is a research prototype under development (research preview). User queries and input are used to improve the software. Use of ChatGPT is currently free of charge and no licence fees apply; a one-time registration is required.
Please note that personal data is processed when using ChatGPT. The data is processed on servers in the USA. According to the European Court of Justice, the USA does not ensure an adequate level of data protection. In particular, there is a risk that your data may be accessed by US authorities for control and monitoring purposes and that no effective legal remedies are available against this.
In any case, ChatGPT processes the following personal data:
In the course of registration: first and last name, email address and telephone number
During the use of the chatbot: IP address of the computer through which you access ChatGPT
Data that you enter into the chatbot is stored and further used, in particular for training the chatbot and generating its responses.
With ChatGPT Plus, OpenAI offers a paid subscription ($20/month). Its advantages are:
Faster response times
Priority access to new features, e.g. use of the language model GPT-4
What are the weaknesses of ChatGPT?
While ChatGPT produces impressive results, it also has a number of weaknesses, some of which have been mitigated by the current GPT-4 language model:
Timeliness: The chatbot does not access the internet. Its knowledge base ends in 2021.
Correctness and hallucination: ChatGPT cannot check information/data for correctness or confirm it. The software sometimes produces fictitious output (hallucination).
Quality of the output: The software does not ask clarifying questions when a request is imprecise.
Lack of transparency: ChatGPT does not specify the sources of information used, making it difficult to verify the accuracy of the output.
Tricking the chatbot: Safeguards in the software that are supposed to prevent undesired (unethical) dialogues can be circumvented (e.g. by rephrasing questions or changing the context).
Risk of bias in AI-based software: Bias can be introduced by the training data, by the people who train the software, etc.
Graphics: The free-to-use chatbot is a text generator and cannot interpret or create graphics. For queries that include images or graphics, the chatbot simply ignores them. ChatGPT based on the current GPT-4 language model, however, is able to describe images and recognise their context.
Limited availability: The free version of the chatbot is often unavailable due to very high demand.
Is AI-generated output detectable?
The provider of the WU plagiarism software (turnitin) has integrated an AI check for English texts of up to 15,000 characters into the plagiarism check report on a trial basis. This means that texts submitted via LEARN as well as via Canvas can also be screened for AI-generated content. Since the update, the plagiarism check report shows, under "AI", a percentage between 0 and 100 for English works under 15,000 characters. This value indicates the proportion of the total text that the model flagged as AI-generated. For works that cannot be checked by the model, the entry "- -" is shown.
More information can be found in turnitin's FAQs.
In addition, a number of easy-to-use online tools are currently available that estimate how likely it is that a given text is AI-generated rather than human-written.
Please note: these tools are often optimised for different language models.
Tests have shown that:
Tools that are optimised for older large language models are less likely to recognise AI-generated texts, especially texts generated with more up-to-date models.
Detectors can be deceived.
The information on the method provided with the respective detector can also be helpful in assessing the detection result (see, for example, https://www.turnitin.com/solutions/ai-writing for turnitin's solution).
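As a rough illustration of how such detectors work: many combine statistical signals in a trained classifier. One commonly cited signal is "burstiness", the variation in sentence length, which tends to be higher in human writing than in AI-generated text. The following Python sketch (a toy heuristic for illustration, not the method used by turnitin or any specific detector) computes that single signal:

```python
import re
from statistics import pstdev

def burstiness(text):
    """Population standard deviation of sentence lengths (in words).

    Human writing tends to vary sentence length more than AI output;
    real detectors combine many such signals in a trained classifier."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return pstdev(lengths)

uniform = "This is a sentence. Here is a sentence. That was a sentence."
varied = "Short. This one, by contrast, runs on for quite a few more words. Done."

# The uniform text scores 0; the varied text scores clearly higher.
print(burstiness(uniform) < burstiness(varied))  # → True
```

This also hints at why detectors can be deceived, as noted above: a single measurable signal can be deliberately manipulated, e.g. by asking the model to vary its sentence lengths.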
Unauthorized use of ChatGPT/AI-based software by students
In examinations and performance components of courses, the use of unauthorized aids is considered cheating. For such tools to count as unauthorized aids, teachers must announce the (un)authorized aids in the syllabus for examinations and performance components of courses before the beginning of each semester. It is recommended to define the authorized aids and, in addition, to list examples of unauthorized aids. If unauthorized aids were listed exhaustively, the conclusion could be drawn that everything else is allowed.
If unauthorized aids are used in academic theses, this is considered academic fraud. In addition, the requirement of independent achievement will usually not be met, so the work will be assessed as insufficient.
If the use of ChatGPT or AI-based software is defined as an unauthorized aid, the usual study regulations and processes apply.
Outlook: WU actions on ChatGPT
ChatGPT is just one of many AI-based tools. There are other text generators, chatbots, text-to-image and text-to-video tools, and many more.
Alphabet Inc. is developing Bard, a comparable conversational chatbot that draws on current information/data from the internet and will be integrated into Google.
Microsoft has already integrated ChatGPT into its search engine Bing and will also integrate the software into its Edge browser.
Microsoft will also integrate the software into the MS Office package.
The technology of ChatGPT is already being adapted for subject-specific application areas; large tech corporations are investing very large sums in AI companies.
What does this mean for WU?
Phase 1: Awareness (March 2023): in-house mailing, website for lecturers with the most important information on the status quo.
Phase 2: Prevention & Detection (until early summer 2023): Communication and training on prevention and detection strategies.
Phase 3: Effective Use (from summer 2023): Awareness-raising and training of students, inclusion in Fit4Research, introduction to scientific work, individual support for lecturers.