Responsible Use of Generative AI
Generative artificial intelligence (GenAI) has become part of everyday academic practice at ETHZ, UZH, and UNIBAS. Tools such as ChatGPT, Copilot, and similar Large Language Models (LLMs) can support doctoral researchers in drafting texts, structuring ideas, writing code, analysing data, and exploring research questions. When used thoughtfully, such tools may enhance productivity and facilitate learning. At the same time, they require careful, responsible, and secure handling.
Doctoral researchers must ensure that no confidential, strictly confidential, or personal data are entered into external AI systems. This includes unpublished research data, sensitive project information, contracts, strategic documents, protected intellectual property, or personal data relating to identifiable individuals. Institutional data protection regulations and information security policies apply equally when using AI tools. Before using AI services, researchers should verify the classification of the data they intend to process and ensure that no restricted information is disclosed.
AI-generated outputs must always be critically reviewed. Generative systems may produce inaccuracies (“hallucinations”), semantic errors, fabricated references, or misleading conclusions that appear convincing. Researchers remain fully responsible for the accuracy, integrity, and originality of any content they submit for publication, teaching, or assessment. AI tools must not replace scientific judgment.
Copyright and authorship considerations must also be respected. The use of AI-generated text, images, audio, or code may raise questions concerning intellectual property, attribution, and originality. Researchers should follow institutional guidelines regarding transparency and appropriate disclosure when AI tools contribute to academic work. As part of the registration for the doctoral examination, doctoral candidates are required to declare that they have written their doctoral thesis independently. The use of generative AI tools does not replace this requirement. Doctoral candidates therefore remain fully responsible for the intellectual content of their thesis and must ensure that any use of AI tools is transparent and compliant with applicable institutional regulations. Where required, candidates may need to disclose whether and to what extent generative AI tools were used in the preparation of their thesis.
Doctoral candidates are encouraged to experiment responsibly with AI tools for routine or exploratory tasks, while adhering strictly to institutional regulations and ethical standards. Detailed university-specific guidance is available in the ETHZ, UZH, and UNIBAS guidelines listed below.
Institutional Guidelines and Further Reading on AI

- Guidelines for the safe use of AI at ETH Zurich
- Recommendations on the Use of GenAI at UZH
- AI in learning and teaching at UNIBAS
The PhD Program in Plant Sciences supports doctoral candidates in this area through dedicated training sessions such as “Explore the responsible use of AI in generating scientific texts, images, audio and code”, which address both the opportunities and the risks associated with generative AI in research.