Law and Compliance
General legal questions on dealing with generative AI services
Legal guidance on the use of generative AI at universities
Related areas of law
When using generative AI services, both the user's input and the further use of the generated results can raise legal questions. This applies in particular to the following areas of law:
- Copyright and exploitation rights
e.g. questions about terms of use, rights to AI output and the use of protected content
- Personal rights
e.g. the right to one's own image and the right to informational self-determination
- Data protection law
e.g. the processing of personal data of third parties by AI services; see also the note on the institutional use of external AI services
- Labour and employment law
e.g. the use of AI in an employment context or in assessments
The following FAQs address key questions from these areas and explain them in the respective legal context. Questions relating to examination law are excluded and are dealt with separately.
To what extent do the texts used for the training of AI services affect the copyrights of third parties and what does this mean for my work with such services?
tl;dr
Background
Many AI services have been trained with very large collections of texts that may also contain copyright-protected works, such as books, press articles or website content. Whether and under what conditions such use is permitted in the context of training is the subject of legal discussions and in some cases has not yet been conclusively clarified.
However, these questions do not concern the use of an AI service by employees or students, but the legal responsibility of the respective providers.
As a user of an AI service, you are not responsible for the data with which the underlying model was trained.
Copyright only becomes relevant for your daily work when AI-generated content is used further, for example by:
- Inclusion in texts, publications or digital presentations,
- Publication or dissemination,
- Use in teaching, administration or research.
AI outputs may be similar to or heavily paraphrase existing works. In such cases, using them unchecked may infringe the copyrights or rights of use of third parties. The responsibility for this lies with the person using the AI output.
Note: The copyright in the training concerns the provider - the copyright in the use concerns you.
Note on rules and regulations
- Copyright Act (UrhG), in particular §§ 15 ff. UrhG (exploitation rights) and § 44b UrhG (text and data mining)
- Directive (EU) 2019/790 (DSM Directive), in particular Art. 3 and 4 (text and data mining)
- Terms of use of the respective AI services
Traffic light rule: To what extent may I use the results of the AI?
🛑 Not permitted
- Unauthorised adoption or publication of AI-generated content with recognisable reference to a work
- Example: Publication of an AI text that is closely modelled on a well-known newspaper article
⚠️ Conditionally permitted
- Use of AI outputs after review and own editing
- Example: Use of an AI text as a draft, which is revised in terms of content and language
✅ Unproblematic
- Use of AI services regardless of the training basis
- Example: Research, collection of ideas or structuring without adopting external texts
May I enter copyrighted works or parts of works in prompts of AI services?
tl;dr
Background
When using AI services, users often enter text as a prompt, for example for:
- summarising,
- analysis,
- translation,
- linguistic revision.
These texts may be protected by copyright, such as excerpts from books, articles or teaching materials.
The mere input of a protected work already constitutes a reproduction within the meaning of copyright law (Section 16 UrhG) and can, depending on the circumstances, be an act of use, i.e. an act that is unauthorised without the consent of the author, unless a statutory exception permits it.
Whether this use is permitted depends in particular on:
- whether you hold your own rights of use in the texts (e.g. texts authored by you),
- whether a legal restriction applies (e.g. right to quote, teaching and learning),
- whether the AI service stores, processes or passes on the content,
- and whether the use is purely internal or public.
The university therefore recommends using copyrighted works in prompts only to the extent necessary for the respective purpose and, in particular, not to enter complete third-party works into external AI services.
Reference to regulations
- Copyright Act (UrhG), in particular
- § 44a UrhG (temporary reproductions),
- § 51 UrhG (right to quote),
- § 60a ff. UrhG (teaching and learning),
- Terms of use of the respective AI services,
- Licence conditions of the works used.
Rule of thumb: When can I use copyrighted works in AI prompts?
🛑 Not permitted
- Enter complete third-party works without usage rights
- Example: Upload an entire book chapter or the complete lecture notes into an external AI service
⚠️ Conditionally permitted
- Use of extracts for analysis or editing purposes
- Example: Enter short text passages for summarising or linguistic revision
✅ Unproblematic
- Use of own or freely licensed content
- Example: Own texts, open access content or texts under a suitable Creative Commons licence
Who owns the copyrights and exploitation rights to media generated by AI services?
tl;dr
Background
According to German copyright law in conjunction with EU law, only natural persons can be the authors of a work. Content that is generated completely automatically by an AI service is therefore generally not protected by copyright.
However, this does not mean that AI-generated content can always be used freely and without restrictions. For practical use, a distinction must be made between three constellations in particular:
Copyrightability of the output
If the AI-generated content contains sufficient intellectual creation of its own that is significantly based on human decisions (e.g. through creative selection, control or editing), copyright protection may arise in individual cases. The copyright and exploitation rights belong to the person who performed the creative work that is decisive for the character of the work.
Touching third-party rights
AI outputs may be similar to existing copyright-protected works or partially reproduce them. In such cases, third-party copyrights or rights of use may be infringed.
Contractual restrictions
Independent of copyright law, restrictions on further use may arise from the terms of use of the respective AI service.
The following principle therefore applies to the university: AI outputs are not a legal vacuum. Their use always requires independent examination.
When AI outputs contain copyrighted material
In individual cases, the output of an AI service may contain passages that are very similar to existing works or partially reproduce them. In such cases, no new copyright is created in the AI-generated content.
A transfer or publication may then violate the copyrights of third parties, even if the content was generated automatically. AI services are therefore no substitute for a copyright check before further use.
When copyrighted material is used in the prompt
If copyrighted material is entered as part of a prompt, the existing rights to this material may also affect the AI output.
In particular, further use of the output may be prohibited if it is based in substantial parts on copyrighted content and no corresponding licence or legal restriction applies.
The use of an AI service does not lead to an automatic "rights clearance" of the content entered.
Reference to regulations
- Copyright Act (UrhG), in particular
- § 2 UrhG (definition of work),
- § 7 UrhG (authorship),
- § 15 ff. UrhG (exploitation rights),
- Directive (EU) 2019/790 (DSM Directive),
- Terms of use of the respective AI services.
Traffic light rule: When may AI outputs be used?
🛑 Prohibited
- Use of AI outputs that recognisably reproduce third-party works
- Example: Publishing an AI text that essentially consists of a known article.
⚠️ Conditionally permitted
- Use after checking content and own editing
- Example: Use of an AI text as a draft that is revised, supplemented and published under one's own responsibility
✅ Unproblematic
- Use of AI output without reference to a work or third-party rights
- Example: Collections of ideas, structural proposals or purely technical texts without external impact
Can I publish, share or commercially utilise AI-generated content?
tl;dr
Background
Since AI services themselves cannot hold copyrights, purely automatically generated content is generally not protected by copyright because it lacks the necessary level of creativity. This means that its use is generally possible, even beyond purely private use.
However, there are significant restrictions that must be observed before publication or distribution:
- Third-party rights
AI outputs may be similar to or partially reproduce existing copyrighted works. In such cases, publication or further use may infringe third-party copyrights or rights of use, even if the content was generated automatically. In addition, AI-generated content may infringe the trademark and personal rights of third parties if it contains well-known trademarks or images of real people.
- Influence of the prompt
If copyrighted material was used as part of a prompt, this may affect the permissibility of further use of the output (cf. FAQ 3.2 and 3.3).
- Contractual restrictions on use
Independent of copyright law, the terms of use of the respective AI service may stipulate the extent to which AI-generated content may be used, shared or commercially exploited.
- Special contexts
In the case of publications in the name of the university, teaching materials, research outputs or administrative processes, increased requirements for diligence, transparency and quality assurance apply.
Reference to regulations
- Copyright Act (UrhG), in particular Sections 15 ff. UrhG (exploitation rights),
- Directive (EU) 2019/790 (DSM Directive),
- Terms of use/terms and conditions of the AI services used,
- Internal university regulations on publications and public relations.
Traffic light rule
🛑 Inadmissible
- Publication of AI content with recognisable reference to a work without rights clearance
- Example: Publication of an AI text that corresponds in structure and wording to a well-known specialist article
⚠️ Conditionally permitted
- Use after review, editing and compliance with the terms of use
- Example: Publication of an AI draft after revision of the content and under one's own responsibility
✅ Unproblematic
- Use without third-party rights and without contractual restrictions
- Example: Use of AI-generated ideas, graphics or text without work similarity
Can an AI service be used to generate texts, images or other media in the style of a specific person or work?
tl;dr
Background
The mere "style" of a person or a work is not protected by copyright. Copyright law does not protect abstract styles, genres or manners of writing, but rather specific works and their individual form of expression.
However, several legal boundaries may be affected when creating AI content "in the style of XY":
Copyright
If an existing work is recognisably imitated or reproduced in substantial parts, this may constitute unauthorised adaptation or reproduction.
Personal rights
In the case of living or clearly identifiable persons, a stylistic imitation, particularly of voice, image or characteristic statements, may infringe personal rights, in particular if the generated content creates a likelihood of confusion with the real person or their work.
Risks of competition and deception
Publications may give the impression that the content originates from the named person or is associated with them. This can lead to claims for removal and injunctive relief.
Scientific and professional integrity
In teaching, research and administration, stylistic imitation without transparent categorisation can be perceived as misleading or dishonest.
An abstract stylistic orientation (e.g. "factual", "journalistic", "academic") is therefore generally permissible; a recognisable imitation of specific works or persons is not.
Reference to regulations
- Copyright Act (UrhG)
- § 2 UrhG (definition of work)
- §§ 23, 24 UrhG (adaptations and alterations)
- General right of personality (Art. 1 para. 1, Art. 2 para. 1 GG)
- Act against Unfair Competition (UWG) (in case of misleading statements)
- Honesty and integrity provisions of the NRW Higher Education Strengthening Act
- Internal university regulations on good scientific practice and public relations
Traffic light rule: When may I imitate a style?
🛑 Not permitted
- Creation of content with recognisable proximity to a work or person
- Example: "Write a text in the style of [specific author], with a similar structure, choice of words and argumentation."
⚠️ Conditionally permitted
- Abstract stylistic orientation without concrete imitation
- Example: "Write a factual text in the style of a scientific article"
✅ Unproblematic
- Use of general stylistic features without reference to a specific person
- Example: "Phrase the text in an understandable and neutral way"
Do I have to disclose that content was created with the help of AI?
tl;dr
Background
The use of AI services is now a regular work tool. Not every use therefore automatically requires labelling. Rather, the context in which AI-generated or AI-supported content is used and the expectations of the addressees are decisive.
Disclosure may be necessary in particular if the impression would otherwise arise
- that the content was created entirely independently,
- that it is an original professional judgement,
- or that it is a personal statement, decision or performance.
This becomes legally relevant in the following contexts in particular:
- Teaching and examinations (personal performance, equal opportunities),
- Research (scientific honesty, reproducibility),
- Administration (allocation of responsibility, due diligence),
- Public relations (avoidance of misleading information).
Independent of any formal disclosure obligation, the following applies: responsibility for AI-supported content always remains with the person using it.
Reference to rules and regulations
- Principles of good scientific practice
- Duties of care under labour and civil service law
- Art. 5 GDPR (transparency, fairness)
- Personal rights (avoidance of misleading attributions)
- internal university regulations on teaching, examinations and publications, where applicable
- examination regulations
Traffic light rule: When is disclosure of the use of AI necessary?
🛑 Required
- Disclosure is necessary to avoid misleading the audience
- Example: Use of AI in examination results, expert opinions, official statements or scientific publications
⚠️ Recommended
- Disclosure increases transparency and traceability
- Example: Use of AI to support teaching materials, presentations or concept papers
✅ Not required
- No legitimate expectation of full personal contribution
- Example: Internal collection of ideas, structuring of texts, linguistic revision without external impact
Can images, videos, audio material or sensitive data from third parties be entered into an AI service?
tl;dr
Background
AI services often process input ("prompts") in cloud environments, on servers outside the university or even outside the EU.
As soon as these inputs contain personal data, processing within the meaning of the GDPR takes place.
There must be a legal basis for this (Art. 6 GDPR) - such as consent or a legitimate university purpose in accordance with Section 3 DSG NRW.
Since the use of external AI services usually does not fulfil a clear university purpose and involves data transfers to third countries, it is not permitted for personal or sensitive data.
Particular caution applies to so-called special categories of personal data (Art. 9 para. 1 GDPR) - such as health data, religious or political beliefs or biometric characteristics.
The processing of such data is generally prohibited unless the data subject has expressly consented.
Independent of data protection law, there are also personal rights such as the right to one's own image (Section 22 KunstUrhG) and the right to one's own word, which can be violated by uploading such files.
Reference to regulations
- Art. 4 para. 1 GDPR - Definition of personal data
- Art. 9 para. 1 GDPR - Special categories of personal data
- Art. 6 para. 1 GDPR, § 3 DSG NRW - Legal basis for data processing
- § 22 KunstUrhG - Right to one's own image
- Art. 8 ECHR, Art. 1 and 2 GG - Right to respect for private and family life / right to informational self-determination
Traffic light rule: When may data be used?
🛑 Never
- When a person is recognisable in images, videos or sound recordings or sensitive data is included.
- Example: Photos of colleagues, interviews, seminar recordings, examination results, student files
⚠️ Only with consent
- If the person concerned has expressly consented to the use or has made it public themselves and the purpose is comparable.
- Example: Public lecture video of a professor, use with written consent
✅ Admissible
- If there is no longer any personal reference (anonymised, synthetic, fictitious).
- Example: AI exercise with self-generated sample data or stock photos without personal reference
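The "no personal reference" criterion above can be sketched in code. The following Python snippet is purely illustrative (the `redact` function and its patterns are assumptions, not part of any university service): it strips only obvious identifiers such as e-mail addresses and phone numbers from a prompt, and deliberately shows that a name like "Jane Doe" slips through, so simple pattern matching is no substitute for genuine anonymisation or a data protection review.

```python
import re

# Illustrative sketch only: a minimal pre-processing step that strips
# obvious personal identifiers (e-mail addresses, phone numbers) from a
# prompt before it is sent to an external AI service. Real anonymisation
# requires far more than pattern matching (names, context, metadata).

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d /-]{6,}\d")

def redact(text: str) -> str:
    """Replace e-mail addresses and phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

prompt = "Please summarise: contact Jane Doe, jane.doe@example.org, +49 228 1234567."
print(redact(prompt))
# → Please summarise: contact Jane Doe, [EMAIL], [PHONE].
```

Note that "Jane Doe" survives the redaction: whether a text still contains a personal reference is a legal question, not one a regular expression can answer.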
Can I imitate personal recordings, audio material or other sensitive data from third parties with an AI service?
tl;dr
Background
AI systems can clone voices, realistically recreate faces or generate entire videos with seemingly real people ("deepfakes").
This involves the processing of biometric data, which is classified as special categories of personal data in accordance with Article 9(1) GDPR.
Their use is generally prohibited unless there is express consent or a special legal basis.
In addition, several property rights apply:
- the right to one's own image (Section 22 KunstUrhG),
- the right to one's own word,
- as well as the general right of personality pursuant to Article 1 (1) and Article 2 (1) of the Basic Law.
AI-based impersonations can therefore lead to various legal offences - from forgery of documents or identity (Section 267 StGB) to false suspicion (Section 164 StGB) and violation of the most personal sphere of life through image recordings (Section 201a StGB).
The freedom of art (Art. 5 para. 3 GG) protects artistic expression, but not deception or the unauthorised reproduction of real persons.
As an example, the publication of an AI interview with former racing driver Michael Schumacher (2023) was judged by the courts to be a violation of personal rights.
Reference to regulations
- Art. 6, Art. 9 GDPR - Legal bases and special categories of data
- § 3 DSG NRW - Processing in the public interest
- § 22 KunstUrhG - Right to one's own image
- Art. 1 and 2 GG - Right to informational self-determination
- §§ 164, 267, 268, 201a StGB - Criminal consequences of deception or misuse of identity
- Art. 5 para. 3 GG - artistic freedom (only in the case of clearly recognisable, permissible artistic use)
Traffic light rule: When may data be used?
🛑 Never
- when real people are imitated or deceptively authentically reproduced without their consent.
- Example: voice or face cloning, deepfake videos, fake interviews or signatures
⚠️ Only with consent and labelling
- If the data subject has consented in writing and the use is clearly labelled as AI-generated.
- Example: Research or teaching projects with documented consent
✅ Admissible
- If only fictitious or synthetic persons, voices or images are generated that do not imitate real people.
- Example: Generation of neutral avatars or artificial characters for digital presentations
How can inequalities or discrimination in access to AI services on the part of students be avoided?
tl;dr
Background
The principle of equal treatment obliges universities to create fair learning and examination conditions (Art. 3 GG, § 3 Law Governing the Universities in North Rhine-Westphalia).
This requirement also applies to new digital tools such as AI services.
Two central aspects are at the forefront of this:
- Competence equalisation:
Students have very different levels of experience in using AI tools.
Teachers should not assume use without first providing the basics or accompanying information.
The university provides support through training and information programmes to enable everyone to get started.
- Equal access:
If an AI service is used in a course, it must be ensured that all students can use it under the same conditions.
Fee-based, regionally restricted or quota-limited services must not be made mandatory.
Therefore, central AI services provided by the university should be used in preference.
External tools may only be used if equal access, data protection and licence conditions are guaranteed for all.
Inequalities arise not only from different prior knowledge, but also from unequal access to infrastructure (hardware, internet connection, accounts, language settings).
The aim is therefore equality of opportunity, not identity of all usage options.
Reference to regulations
- Art. 3 para. 1 and 3 GG - Principle of equality and prohibition of discrimination
- § 3, § 59 Law Governing the Universities in North Rhine-Westphalia - Equal opportunities and fair study conditions
- § 64 Law Governing the Universities in North Rhine-Westphalia - Principles of teaching, study and examination
- General Equal Treatment Act (AGG) - Prohibition of discrimination (supplementary)
- EU Charter of Fundamental Rights Art. 21 - Equality and non-discrimination
Traffic light rule: When may an AI service be used in a course?
🛑 Never
- When use of a fee-based or regionally restricted AI service that not all students can access is made mandatory.
- Example: Obligation to use a ChatGPT Plus account or cloud tool with restricted access
⚠️ Only with compensatory measure
- If an AI service is used optionally; support services or alternatives must then be provided.
- Example: Voluntary use of an external tool with accompanying introduction or training
✅ Admissible
- If the university or department itself provides AI services and all students can access them under the same conditions.
- Example: Use of the central university AI service (e.g. via KI:connect@H-BRS)
Can I use AI services to evaluate or analyse statements, texts or actions of real people?
tl;dr
Background
AI services can interpret, evaluate or classify texts, statements or behaviour of real people.
As soon as this analysis relates to an identifiable person, it is a processing of personal data (Art. 4 No. 1 GDPR).
Results of such analyses are considered profiling (Art. 4 No. 4 GDPR) and may only take place under strict conditions.
The use of such analyses can violate personal rights, for example through
- disparaging or defamatory representations,
- conclusions about character, political stance or performance,
- or publication without consent.
Even seemingly neutral requests such as "Analyse the writing style of Prof. X" or "How empathetic is person Y?" can be legally problematic because they draw conclusions about a real person.
Only in clearly defined exceptional cases is use possible:
- Didactic: If the public actions or rhetoric of well-known people are analysed in a course.
- Scientific: If there is a research project and suitable protective measures (e.g. pseudonymisation, purpose limitation) are in place.
In all other cases, the analysis of real persons by AI services should be avoided or anonymised.
Reference to regulations
- Art. 4 No. 1, No. 4, Art. 6, Art. 22 GDPR - Personal reference, profiling and legal bases
- Art. 89 GDPR, Section 3 DSG NRW - Processing for scientific or educational purposes
- Art. 1 para. 1, Art. 2 para. 1 GG - General right of personality
- §§ 823, 1004 BGB (applied by analogy) - Civil law protection against violations of personality
- §§ 186, 187 StGB - Defamation and libel
- Art. 5 GG - Freedom of expression and academic freedom (balancing required)
Traffic light rule: When may data be used?
🛑 Never
- When statements, actions or characteristics of real people are analysed or evaluated without consent or a legal basis.
- Example: "Evaluate the personality of my lecturer", "Analyse the political stance of XY"
⚠️ Only with consent or a clear purpose
- When a didactic or scientific analysis is carried out that relates to publicly accessible, factual content and respects data protection.
- Example: Analysis of public speeches or election programmes of well-known people in the seminar "Rhetoric and AI"
✅ Admissible
- If only fictitious, anonymised or synthetic persons are analysed.
- Example: AI exercises with fictional characters or anonymised text examples
What can I do if AI-generated content affects or falsifies my person?
tl;dr
Background
AI services today can create deceptively real-looking content - such as fake photos ("deepfakes"), manipulated voices, invented quotes or texts with your name.
Such content often violates the general right of personality (Art. 1 Para. 1 and Art. 2 Para. 1 GG), in particular
- the right to one's own image (Section 22 KunstUrhG),
- the right to one's own word,
- and the right to informational self-determination.
Unauthorised deepfakes or misrepresentations can lead to damage to reputation, discrimination or deception of third parties.
The perpetrators are usually not the AI service itself, but people who create and disseminate the content with the help of an AI tool.
What you can do:
- Save evidence:
Document screenshots, URLs, publication times and metadata if applicable.
These documents form the basis for later claims.
- Contact the publishing platform:
Request deletion or blocking, if necessary with reference to the GDPR or a violation of personal rights.
- Assert rights under the GDPR:
- Request information (Art. 15): Who processes or disseminates data about me?
- Deletion (Art. 17): Request removal of unauthorised content.
- Complaint (Art. 77): Contact the competent data protection supervisory authority.
- Proceed under civil or criminal law:
- Claim for injunctive relief and removal (Sections 823, 1004 BGB analogously),
- possible criminal charges in the event of insult, defamation or unauthorised use of images (Sections 185 ff., 201a StGB).
- Internal university support:
Contact the Data Protection Commissioner, the Legal Department or the IT Security Officer for help with the assessment and formulation of requests for deletion or injunctive relief.
Reference to regulations
- Art. 15-21, 77, 82 GDPR - Rights of data subjects, complaints and damages
- Art. 1 para. 1, Art. 2 para. 1 German Basic Law - General right of personality
- §§ 823, 1004 German Civil Code (applied by analogy) - Injunctive relief and removal claims
- §§ 185 ff., 201a StGB - Criminal law protection against libel and unauthorised recordings
- § 22 KunstUrhG - Publication of images only with consent
Traffic light rule: How should I react?
‼️ React immediately
- If an AI image, video or text misrepresents or defames you: document it, inform the platform and consider legal action.
- Example: Deepfake video, fake quote on social media
⚠️ Check and react
- If AI-generated content uses your image or voice without a recognisable defamatory purpose: ask, clarify the source and request deletion if necessary.
- Example: AI portrait published without consent
🛡️ Preventive protection
- Share personal content online sparingly, inform about privacy settings and reporting options.
- Example: Post your own photos only in protected areas
How is the protection of my own data guaranteed when using AI services?
tl;dr
Background
When dealing with AI services, a distinction must be made between
- university-owned AI services and
- external AI services that are operated outside the responsibility of the university (see FAQ 5.2).
In the case of university-owned AI services, the university is the controller within the meaning of the GDPR.
The protection of your data is guaranteed on several levels:
- Data minimisation and purpose limitation
Only the personal data that is required for the use of the service (e.g. university ID, role) is processed during registration and login.
Any use beyond this for other purposes does not take place.
- Separation of data types
A distinction is made between:
- authentication data (e.g. login, role information),
- usage data (e.g. session information) and
- content data (e.g. prompts, text input).
Different protection measures apply to each of these categories.
- Technical protective measures
- Access to the AI service is via secure connections.
- Use from outside the university networks is only recommended via an active VPN connection to provide additional security for communication.
- The IT infrastructure is subject to the university's general security and access concepts.
- Organisational protective measures
- Binding terms of use exist for the service, in particular regarding the handling of personal data in prompts.
- Employees and students are made aware of data protection-compliant handling via training and information materials.
- The processing is documented in the list of processing activities and has been reviewed as part of a data protection impact assessment.
Reference to regulations
- Art. 5, Art. 6 para. 1 lit. e GDPR - Principles of processing and legal basis
- § 3 DSG NRW - Processing in the public interest
- Art. 30 GDPR - Record of processing activities
- Art. 35 GDPR - Data protection impact assessment
- Internal terms of use and data protection policy of the respective AI service
Traffic light rule: When may data be used?
🛑 Not permitted
- Use of external AI services to process personal, sensitive or official university data without authorisation
- Example: Entering names, contact details, examination results, internal documents or administrative data into external AI tools
⚠️ On own responsibility
- Using external AI services for private or non-personal purposes
- Example: Use of Google Gemini for idea collection or text structure, use of Perplexity for general research, always without entering personal, sensitive or confidential university data
✅ Secured
- Use of university-owned AI services in accordance with applicable guidelines
- Example: Use of KI:connect or Chat-AI without entering personal, sensitive or confidential university data
What do I need to consider in terms of data protection when using external AI services?
tl;dr
Background
External AI services (e.g. text, image or code generators) are operated outside the IT and data protection responsibility of the university.
A distinction must therefore be made between two fundamentally different scenarios when using them:
1. Institutional use or procurement by the university
If an external AI service is to be officially introduced, provided or used on a mandatory basis, a comprehensive data protection assessment is required in advance. This includes in particular:
- examining the legal basis (e.g. Art. 6 GDPR in conjunction with Section 3 DSG NRW),
- compliance with the principles of Art. 5 GDPR (purpose limitation, data minimisation, transparency),
- the clarification of data protection roles
(commissioned processing, joint responsibility or independent responsibility),
- the assessment of storage and erasure periods as well as the rights of data subjects,
- where applicable, the performance of a data protection impact assessment (Art. 35 GDPR),
- the assessment of possible third country transfers (e.g. processing outside the EU),
- and, in the case of use by employees, the involvement of staff co-determination.
Such use may only take place in coordination with the responsible departments (data protection, IT, legal affairs, staff council if applicable).
2. Private or individual use of external AI services
If employees or students use external AI services voluntarily and on their own responsibility without the university providing or specifying them, the university is not the controller within the meaning of the GDPR.
The responsibility under data protection law lies with the provider of the AI service and with the users themselves.
The following should be noted:
- Personal data is often processed during registration (e.g. email address, log and usage data, payment information if applicable).
- In addition, the content of the entries (prompts, texts, images) is regularly processed.
- Depending on the provider, tariff model and type of use (web interface or API), this data can be used for security or training purposes.
- The type and scope of data processing can differ considerably between free and paid versions.
The privacy information and terms of use of the respective provider are therefore always authoritative.
Reference to regulations
- Art. 5, Art. 6 GDPR - Principles and legal bases of data processing
- Art. 30, Art. 35 GDPR - Record of processing activities, data protection impact assessment
- Art. 44 ff. GDPR - Data transfer to third countries
- § 3 DSG NRW - Processing in the public interest
- Landespersonalvertretungsrecht NRW - Co-determination in the introduction of technical systems
- Data protection policies of the respective AI providers
Traffic light rule: When may data be used?
🛑 Not permitted
- Processing of personal, sensitive or official university data in external AI services without authorisation
- Example: Entering names, examination data, internal documents, research or administrative data
⚠️ Requires review (prior to deployment/procurement)
- Planned institutional deployment or integration into university processes
- Example: Introduction of an AI tool for teaching or administration; mandatory use in a degree programme
✅ Admissible
- Private or individual use without personal reference
- Example: Collection of ideas, text structure, general research without business or personal data
Can I share other people's personal data with AI services?
tl;dr
Background
Personal data is all information that relates to an identified or identifiable natural person (Art. 4 No. 1 GDPR; see also FAQ 4.1).
This includes, among other things, names, matriculation numbers, examination results, texts with a personal reference, but also indirect references that enable identification.
The transfer of such data to an AI service is a transfer to a third party under data protection law. It is only permitted if
- there is an appropriate legal basis (Art. 6 GDPR) and
- the principles of Art. 5 GDPR (purpose limitation, data minimisation, lawfulness) are complied with.
Mere pseudonymisation (e.g. replacing names with abbreviations) is insufficient for this purpose, as the data remains personal.
Only genuine anonymisation, which excludes any possibility of tracing the data back to individual persons, would be permissible.
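To illustrate the distinction, here is a minimal Python sketch (purely illustrative helper names, not a university tool): pseudonymised records keep a mapping table, so re-identification remains trivial and the data stays personal; genuinely anonymised output retains only aggregates with no way back to individuals.

```python
# Illustration only: why pseudonymisation is not anonymisation.
# The mapping table makes pseudonymised records re-identifiable.

def pseudonymise(records):
    """Replace names with codes, but keep a lookup table (reversible)."""
    mapping = {}
    out = []
    for rec in records:
        code = mapping.setdefault(rec["name"], f"P{len(mapping) + 1}")
        out.append({"id": code, "grade": rec["grade"]})
    return out, mapping  # the mapping preserves the personal reference

def anonymise(records):
    """Keep only aggregate statistics; no way back to individuals."""
    grades = [rec["grade"] for rec in records]
    return {"n": len(grades), "mean": sum(grades) / len(grades)}

records = [{"name": "Maxi Mustermensch", "grade": 2.0},
           {"name": "Erika Beispiel", "grade": 1.0}]

pseudo, mapping = pseudonymise(records)
# Re-identification is trivial with the mapping -> still personal data
reverse = {code: name for name, code in mapping.items()}
assert reverse[pseudo[0]["id"]] == "Maxi Mustermensch"

stats = anonymise(records)  # only {"n": 2, "mean": 1.5} remains
```

The legal point carries over directly: as long as anyone holds the mapping, the "P1"-style records are still personal data under Art. 4 No. 1 GDPR.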
Two typical use cases in the university context
1. Transfer of student data to AI services by teachers
The transfer of personal data of students to AI services by teachers, examiners or administrative staff is not permitted.
Theoretically, the consent of students could be considered (Art. 6(1)(a) GDPR), but it is usually not effective in the teaching and examination context, as
- there is a relationship of dependency,
- the voluntary nature is doubtful,
- students enjoy special protection.
In particular, the following are therefore not permitted:
- AI-supported analyses of quiz or test results with a personal reference,
- support for the correction of examinations by AI if written examinations are processed with names, student numbers or other personal data (e.g. upload of a PDF scan of an examination paper, upload of an examination script, etc.).
2. Input of personal data of third parties by students
Students are also not allowed to enter personal data of other persons in AI services.
Example:
"Maxi Mustermensch is sitting next to me and has just received the solution X for task 2. Explain that."
In such cases, the personal data of a third party is transmitted to an AI service.
There is usually no suitable legal basis for this - in particular effective consent from the person concerned.
The university therefore informs all users of its own AI services about their rights and obligations.
This includes the clear requirement not to enter any personal data of third parties in AI services.
Teachers are recommended to explicitly point out these rules to students when using AI in teaching or examinations.
The university provides a checklist for the legally compliant use of AI services for this purpose.
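As a purely illustrative aid (a hypothetical sketch, not the university checklist and no substitute for it), a naive pre-check could scan a prompt for obvious personal-data patterns before it is sent to an AI service:

```python
import re

# Naive illustration: flag obvious personal-data patterns in a prompt
# before it is sent to an AI service. Pattern matching alone cannot
# catch every personal reference; the checklist remains authoritative.

PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "matriculation number": re.compile(r"\b\d{6,8}\b"),
}

def flag_personal_data(prompt):
    """Return the labels of all personal-data patterns found in the prompt."""
    return [label for label, rx in PATTERNS.items() if rx.search(prompt)]

assert flag_personal_data("Explain purpose limitation under Art. 5 GDPR") == []
assert "email address" in flag_personal_data("Grade report for max@uni.example")
```

Such a filter can only catch formal identifiers; indirect references ("the student sitting next to me") require human judgement, which is why the checklist and user training remain necessary.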
Reference to rules and regulations
- Art. 4 No. 1, Art. 5, Art. 6 GDPR - Personal reference, principles and legal bases
- Art. 9 GDPR - Special categories of personal data
- § 3 DSG NRW - Processing in the public interest
- Art. 89 GDPR - Protective measures for processing for scientific purposes
- Internal guidelines and checklists of the university on the use of AI
Traffic light rule: When may data be used?
🛑 Never
- Enter other people's personal data into AI services without a legal basis
- Example: Names of students, examination results, written examinations with personal reference
⚠️ Only with compensatory measure
- Processing of personal data with a clear legal basis, authorised purpose and appropriate protective measures
- Example: Analysis of personal research data with a university-approved AI system as part of an approved research project, including data protection impact assessment and contractual safeguards.
✅ Admissible
- Use of AI services without personal reference
- Example: Analysis of completely anonymised data sets, fictitious data or general facts.
Can student data protection issues arise when using AI services in courses?
tl;dr
Background
The use of AI services in courses directly affects the data protection of students. The decisive factor here is whether use is voluntary or mandatory.
- Voluntary use (e.g. optional learning aid) allows students to take alternative options.
- Mandatory use (e.g. required in a course or examination) means that students cannot avoid data processing.
In these cases, data protection consent is not effective as the use is not voluntary.
For this reason, students may not be obliged to use external AI services or AI services requiring registration that are not offered by the university.
The university therefore provides its own generative AI services, which are operated in compliance with
- data minimisation,
- purpose limitation,
- transparency.
An overview of the specific data processed can be found in the respective privacy policy of the service.
When using AI services, data is nevertheless necessarily processed, in particular:
- prompts and text input by students,
- usage and activity data (e.g. session information).
Since this processing can be binding in the teaching context, higher protection requirements apply.
These relate in particular to:
- the selection of suitable (university-owned) AI services,
- refraining from processing personal data,
- transparent information for students about the type and scope of data processing,
- as well as alternatives if students cannot or should not use AI services.
Further details can be found in the regulations on data protection and the handling of personal data of third parties.
Reference to regulations
- Art. 5, Art. 6 GDPR - Principles and legal bases of data processing
- Art. 7 GDPR - Requirements for effective consent
- Art. 8 EU Charter of Fundamental Rights - Protection of personal data
- § 3, § 64 Law Governing the Universities in North Rhine-Westphalia - Equal opportunities and fair study conditions
- Internal data protection and AI guidelines of the university
Traffic light rule: When may data be used?
🛑 Not permitted
- Compulsory use of external AI services or AI services that require registration
- Example: Obligation to use an external AI tool with a personal account in a course or examination
⚠️ Only permitted under certain conditions
- Use of AI services in the teaching context with mandatory use
- Example: Use of university-owned AI services with transparent information, data minimisation and clear alternatives
✅ Admissible
- Voluntary use without disadvantages for non-participation
- Example: Optional use of an AI service as a learning aid without influence on assessment or examination results
Who is liable for data protection violations in connection with AI?
tl;dr
Background
The General Data Protection Regulation links responsibility and liability to the role of the respective acting body.
1. University-owned AI services
If an AI service is provided and operated by the university, the university is generally the controller within the meaning of Art. 4 No. 7 GDPR.
This means:
- The university is responsible for
- the lawfulness of the processing,
- the selection of the service,
- technical and organisational protective measures,
- as well as the protection of data subjects' rights.
- If a data protection breach occurs during proper use, the primary liability lies with the university.
Employees regularly act here as part of their official duties.
2. External or private AI services
If employees or students use external AI services on their own responsibility, in particular via private accounts or private devices, the university is not responsible.
In these cases:
- the responsibility for data processing generally lies with the provider of the AI service and with the user,
- especially if
- personal or work-related data is entered,
- in contravention of clear instructions or notices.
3. Breaches of duty and individual misconduct
If AI services are used contrary to applicable guidelines - for example by:
- entering personal data of third parties,
- using external AI services for official purposes without authorisation,
- or the mandatory use of external tools in teaching -
this may give rise to individual liability.
Depending on the case, the following may come into consideration:
- measures under labour or employment law,
- civil liability (e.g. compensation for damages under Art. 82 GDPR),
- in serious cases also consequences under misdemeanour or criminal law.
4. Protection of users
The university pursues a preventive approach rather than a sanctioning approach.
Anyone who
- uses the AI services provided,
- complies with the applicable requirements,
- and seeks advice early in the event of uncertainty,
acts in good faith and operates within a legally secure framework.
Reference to regulations
- Art. 4 No. 7, Art. 5, Art. 6 GDPR - Responsibility, principles, legal bases
- Art. 24, Art. 32 GDPR - Responsibility and protective measures
- Art. 82 GDPR - Liability and compensation
- § 839 BGB in conjunction with Art. 34 GG - Official liability
- Service and labour law regulations of the university
- Internal AI and data protection guidelines
Traffic light rule: When may data be used?
🛑 Individual responsibility
- Use of AI services contrary to clear guidelines
- Example: Entering personal student data into an external AI service
⚠️ Shared responsibility
- Using AI services in the grey area or with deviations
- Example: Use of an AI tool without prior agreement in an official capacity
✅ Institutional responsibility
- Proper use of the university's own AI services
- Example: Use of the AI service provided in accordance with specifications and training
Note on the institutional use of external AI services
If external AI services are to be procured or institutionally deployed by the university, a comprehensive data protection review is required in advance. In particular, the following must be taken into account:
- the principles of Art. 5 GDPR (transparency, purpose limitation, data minimisation),
- the examination of the legal basis and, if necessary, the performance of a data protection impact assessment (Art. 35 GDPR),
- the clarification of data protection roles (commissioned processing, joint responsibility or independent responsibility),
- the transparency of data subject information,
- as well as the regulations on retention and erasure periods.
If AI services are used for university employees, their use is also regularly subject to co-determination.
Be sure to contact the relevant departments (data protection, IT, legal affairs, staff council if applicable) at an early stage before procuring or introducing external AI services.
What regulations exist at the university regarding the use of AI services in the employment relationship?
tl;dr
Background
The following also applies to employment relationships: AI services are not a legal vacuum. When using them, employees must comply with the same rules that apply to other digital systems at the university:
- Labour law/TV-L: Duty to perform tasks with due care, compliance with instructions, prohibition of breaches of duty due to unchecked AI content;
- Public service law: conscientious performance of duties, duty of confidentiality, neutrality, duty to make independent decisions;
- Internal university regulations;
- Data protection law: compliance with GDPR, no entry of personal data into external AI services;
- Confidentiality/secrecy: no entry of confidential documents.
Reference to regulations
- Art. 5, Art. 6 GDPR - Lawfulness and principles of data processing,
- § 34 BeamtStG - Duties of civil servants,
- § 3 TV-L
Traffic light rule: When and how may AI systems be used at the university?
🛑 Never permitted: Prohibited types of use
- AI may not be used under any circumstances because mandatory legal or organisational obligations would be violated.
- Examples: Creation of notifications, expert opinions, examination assessments or legal decisions by AI (lack of personal responsibility), uploading confidential documents (e.g. personnel files, examination documents) to external AI services, use in the sense of automated instructions to employees
⚠️ Conditionally permitted: AI as an assistance system
- AI may be used provided certain conditions are met (e.g. control, approval, data protection, no sovereign decision).
- Examples: Drafting texts or summaries, tables, support with research or brainstorming, suggestions for work processes, emails or presentations (with downstream review)
✅ Admissible: Safe types of use
- AI may be used if neither data, decisions nor third-party rights are affected.
- Example: Generation of internal tools (e.g. macros, code examples without reference to data), structuring or preparation of publicly accessible information, simulation or playful learning applications without reference to persons
May I use AI services to fulfil my work tasks?
tl;dr
Background
AI services can facilitate official activities (e.g. drafts or translations), but:
- Decisions must always be made by a natural person;
- AI may not make legal or personal assessments autonomously;
- Automated decisions with legal effect are prohibited (Art. 22 GDPR).
Reference to regulations
- Art. 22 GDPR
- Employment contract / TV-L
- BeamtStG
Traffic light rule: When may AI services be used to perform official duties?
🛑 Not permitted
- AI takes over decisions with legal effect or personal effect
- Examples: AI creates job references, exam evaluations or personnel decisions, AI automatically determines performance grades or notifications, AI gives instructions to employees (automated control)
⚠️ Conditionally permissible
- AI supports but does not replace human decision-making
- Examples: Drafting of texts, templates, translations, suggestions for arguments, formulations, structuring, pre-analysis of information (with downstream review)
✅ Admissible
- AI provides purely technical support without decision-making relevance
- Examples: Spell checking, formatting, style analysis, generation of non-binding ideas, summarisation of non-confidential content, text structuring, general research without official or personal data
Is the use of AI services voluntary?
tl;dr
Background
Supervisors can instruct employees to use certain internal, verified AI tools such as KI:connect. However, employees cannot be obliged to use external platforms with an uncertain legal situation. Any such obligation requires training, familiarisation and data protection-compliant systems.
Reference to regulations
- Supervisor's right to issue instructions
- Organisational law of the university
- GDPR
Traffic light rule: When may data be used?
🛑 May not be mandated
- External, untested, data protection-unsafe or privately operated AI services
- Examples: Use of a private ChatGPT account, use of AI platforms with mandatory registration and unknown data processing, obligation to use tools that transfer personal data to external providers
⚠️ Can only be ordered under certain conditions
- Internal AI tools whose use is technically justifiable but requires training/familiarisation
- Examples: Introduction of an internal AI assistance system as a work tool, use after prior training/instruction, tools without personal or decision-making reference
✅ May be mandatorily ordered
- Tested internal AI systems within the scope of service and organisational law
- Examples: "KI:connect" or comparable internal university platforms, tools that have been checked under data protection law, systems that are part of the workflow/process
How can damage caused by inappropriate use of the output of AI services be prevented?
tl;dr
Background
Risks arise in particular from:
- False and invented content ("hallucinations")
- Distorted results
- Data protection violations
- Copyright infringements
- Faulty automated analyses.
The responsibility always remains with the employee!
Reference to regulations
- Art. 5 GDPR
- Duties of care under labour law
- § 36 paragraph 1 BeamtStG
Traffic light rule: How may I use the AI output?
🛑 Not permitted
- Unverified transfer of AI output to official contexts
- Examples: AI generates text that is transferred 1:1 into an email, a letter, an expert opinion or an internal memo, transfer of AI analyses with personal references, use of AI content that triggers legal consequences (without review)
⚠️ Conditionally permitted
- Use after active review and own decision
- Examples: AI summaries are checked for content and corrected if necessary, AI translations or draft texts are edited, AI analyses are compared with sources and checked for plausibility
✅ Admissible
- Use without external official or legal effect
- Examples: Collection of ideas, brainstorming, templates, formatting, checklists, general information, purely technical support without professional significance