Be transparent when using AI
Inform others about AI involvement and ensure AI outputs are appropriate and accurate before you use them.
Do
- Label AI-generated content: Always disclose when content is entirely created by AI. Include an "AI-generated" label to clearly indicate no human input was involved in the final output.
- Draw attention to AI-assisted systems: Always disclose when AI systems significantly assist with, or make, decisions that affect individuals. Provide clear information to help people understand the extent of AI involvement.
- Consider labelling AI-supported content: You are strongly encouraged to disclose when AI has significantly contributed to the creation process. Use an "AI-supported content" label to clearly show this level of AI involvement.
- Follow official government guidance: Refer to the UK Algorithmic Transparency Recording Standard (ATRS) when recording information about algorithmic tools you use.
Don't
- Pass the buck: If you choose to use AI tools, remember that you cannot "blame" the tool if the output you use turns out to be problematic or falls below expectations; responsibility for that output remains with you.
- Assume AI content is suitable: Generative AI often produces content that does not align with our university's standards and values. It is your responsibility to check that the output is appropriate for your work.
- Sidestep meaningful human control: Systems that use AI should be reviewed regularly and checked for ongoing suitability.