
Artificial intelligence is everywhere; it powers much of modern life. It can make life simpler and safer in many ways, but it also has the power to cause harm and produce biased information. From personalised recommendations on streaming platforms to algorithms diagnosing diseases and predicting patient outcomes, AI is designed to streamline our lives.
However, the same technology that offers these benefits also poses significant risks. AI systems can perpetuate and amplify existing biases present in their training data, which leads to unfair and discriminatory outcomes. For example, biased algorithms in hiring processes can disadvantage whole groups of people, while biased facial recognition technology can result in wrongful identifications.
The potential for harm affects everyone - even if you don't use AI yourself. Recent years have seen a rapid rise in misinformation spread through AI-generated content that goes viral on social media. Deepfakes, AI-generated news articles, and doctored images can be used to deceive and manipulate public opinion, undermining trust in media and institutions. Additionally, relying on AI for critical decision-making without proper oversight and accountability can lead to errors with severe consequences.
In our academic context, ethical AI use boils down to three main principles: transparency, integrity, and accuracy.
Clearly indicate when you have used AI tools to assist in completing assignments or undertaking research. This includes specifying which tools you used and how they contributed to the assignment or research project. Being transparent in your academic work and research both protects you from accusations of academic misconduct and promotes research best practice by allowing others to see where your information came from and how you used it.
For example, if you used an AI tool to generate ideas, summarise articles, or check grammar, you should mention the tool's name, describe its function, and explain how it was used in your work, complete with an in-text citation. Additionally, you should include a reference to the output in your reference list, with a link to the transcript or other output where possible. This allows the people marking and reading your work to understand the extent of AI assistance and see the "behind-the-scenes" workings of your AI use.
Although you are not asked to provide written documentation about or references for tools you use in your personal study, transparency is still relevant there. It is a combination of being honest with yourself about how you use AI (are you treating it like a tool or like a shortcut?) and talking openly to your course leader, research supervisor, or personal tutor about your AI use. That way, they can provide support and guidance that ensures these tools are productive and help you reach your learning outcomes, rather than replacing your development of critical, reflective, and analytical skills.
For example, using AI to create flashcards from lecture slides or asking ChatGPT to act as a study coach can be a really effective way to develop your academic language skills, break complex content down into manageable pieces of information, and retain information better. But there are risks: if the AI-generated flashcards contain errors, you might be reinforcing incorrect information, and if you enter personal or sensitive information into ChatGPT to help you manage your time, do you know who has access to that information, how it will be used in the future, and how long it is retained for? Your personal tutor might be able to suggest tools with rigorous privacy policies, or templates that allow you to retain control of your information.
Think of AI as another tool in your arsenal rather than a shortcut - it should enhance and complement your learning rather than replace or bypass it. Do not pass off AI-generated content as your own, original work. Additionally, make sure you clearly signpost any tasks or parts of the assignment you used AI for, and cite the tools in your reference list. You can use AI-generated explanations to build your understanding, but you must ensure that any work you submit is your own interpretation and analysis. Always evaluate, verify, and validate AI-generated information as you would any other source - you are still accountable for the accuracy of the work you submit.
It is important that AI tools are used to support genuine research efforts. Academic misconduct doesn't just mean plagiarism - you could also be accused of falsification. To use AI with integrity, you shouldn't use AI to fabricate new data, distort existing data, or manipulate AI outputs to make your data fit preconceived hypotheses or desired outcomes. Clearly document AI-assisted decision-making and the steps or methodologies you used AI for so that you can maintain ethical standards in data handling and reporting. Think also about the fair and equitable treatment of both groups and individuals - following rigorous research practice allows you to mitigate unwanted bias and ensure your research doesn't cause harm to anyone.
Use AI tools to support personal effort and learning in ways that encourage critical thinking and problem-solving rather than relying solely on AI-generated solutions. You could combine traditional learning methods with using AI for repetitive learning, use AI to sanity-check your reflections and insights about learning content, or simply use AI as a starting point for tasks like breaking down complex academic texts. But, just as you shouldn't use AI to skip steps in assignments or research, your AI use for personal learning should keep you in the driver's seat of developing your academic skills rather than letting AI do all the work while you passively watch learning opportunities go by.
We often use the term "garbage in, garbage out" when we talk about search terms and finding information. AI operates in a very similar way - if you put in vague and generic prompts, you will get vague and generic outputs. To improve the accuracy and reliability of your AI outputs, make sure you use tools that are fit-for-purpose and have been trained on high-quality, relevant data.
That also means using AI for its intended purpose - most AI models have been built to be helpful, which means that if a tool can't determine the answer, it is more likely to "hallucinate", i.e. make up an answer. To minimise this, you can use prompting techniques such as echo prompting: first establish what the tool knows about your subject, check it understands the task at hand, and only then ask it to respond to your query.
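For example, an echo-prompting exchange might look something like this (an illustrative sketch only - the exact wording, tool, and responses will vary):
1. "Before we start, summarise what you know about Harvard-style referencing."
2. "I'm going to paste a paragraph from my draft. Before you do anything, restate in your own words what I'm asking you to do: check that the in-text citations follow Harvard conventions."
3. "Here is the paragraph: [paste text]. Now point out any citations that don't follow the conventions you described, and explain why."
Working in stages like this lets you spot gaps or misunderstandings in the tool's knowledge before you rely on its answer.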
First of all, choose your tools carefully. Keeping on top of developments in AI can be exhausting and tricky, but when it comes to personal learning and development, using up-to-date models and tools that provide more reliable answers is more likely to lead to better results. You will also want to ensure the information you're spending a lot of time revising is correct and useful - you can always talk to your course leader, personal tutor, or one of our academic skills advisors about useful tools and their applications, or ask them to double-check the revision aids you've created with AI.
To make the most of AI while mitigating the risks associated with it, it is crucial to implement ethical guidelines and robust regulatory frameworks supported by legislation. Ensuring transparency in AI systems, promoting fairness and inclusivity in AI development, and establishing mechanisms for accountability are essential steps towards a society that uses AI as a force for good.
Read below about the five key pillars of ethical AI development from IBM, a leading company in AI and automation.
Fair AI systems must be designed to mitigate unwanted bias and ensure that they do not perpetuate or amplify existing inequalities. In AI development, bias is a systematic error that, intentionally or not, might generate unfair decisions or inequitable treatment of groups or individuals. Bias can emerge in an AI system because of cultural expectations, technical limitations, or unanticipated deployment contexts, and it can be present in the AI system, in the data used to train and test the system, or in both. Ethically designed AI systems use diverse and representative datasets during the training process and are continuously monitored for biased outcomes.
As AI is increasingly integrated into critical decision-making, it is important that these systems are both secure and robust. Robust AI systems are designed to effectively manage unexpected conditions, such as anomalies in data input or malicious attacks, without causing unintended consequences. Ensuring robustness in AI not only enhances its reliability but also builds trust among users, making it a crucial aspect of AI deployment in sensitive and high-stakes environments.
Explainability refers to how well a person without any special training in AI models or systems can understand how the system works. By making AI processes and outcomes clear, users can verify and trust the decisions made by these systems. Explainability is like showing your working on a maths problem so that everyone can see the steps you took to get your answer.
An AI system should be not only understandable but also transparent. AI transparency involves providing detailed information and documentation about how AI models are created, trained, and deployed, and how they make decisions. Transparent AI systems make clear the origins of the data they have been trained on, as well as how they process and store the data you input.
AI systems process vast amounts of data to learn, identify patterns, and make predictions or recommendations. Therefore, an ethical AI tool prioritises and safeguards the privacy of all the data it is trained on and all the data users input into it. Ensuring data privacy not only protects sensitive information but also fosters trust and compliance with legal and ethical standards.

The guidance below has been put together by Mark Edwards from the BA (Hons) Photography course team. It is aimed predominantly at students on the Photography course, but it represents best practice for any course area in which students may use AI as a creative tool or to create imagery or art.
The use of Artificial Intelligence (AI) for image generation presents complex legal and ethical considerations for students in the UK.
Legally, students must recognise that many AI tools are trained on vast datasets containing copyrighted material, meaning outputs may inadvertently reproduce or reference protected works. Under UK copyright law, authorship and ownership of AI-generated imagery are ambiguous, as current legislation (Copyright, Designs and Patents Act 1988) attributes authorship to a “human creator.”
Ethically, students should consider the authenticity, transparency, and integrity of their creative process - acknowledging when and how AI tools are used, and avoiding misrepresentation of machine-generated work as wholly original or documentary truth. Additionally, issues of bias, consent, and data privacy arise from how AI models source and process imagery, particularly when generating likenesses of real people or culturally sensitive subjects. Responsible use requires critical reflection on both creative intent and social impact, aligning practice with institutional ethics policies, copyright law, and emerging frameworks for digital responsibility in visual culture.
1. Legal Considerations
Students must ensure that any AI-generated imagery or material complies with UK copyright law (Copyright, Designs and Patents Act 1988) and UK GDPR / Data Protection Act 2018.
AI-generated content cannot automatically be considered original work, as authorship and ownership of such outputs remain legally ambiguous.
If an AI system reproduces or closely imitates existing copyrighted works or identifiable individuals, it may infringe on the rights of the original creator or subject.
Students are responsible for verifying the terms of service and data use policies of any AI platform they engage with.
2. Ethical Considerations
The use of AI should be transparent and clearly acknowledged in project documentation, reflective writing, and presentations.
Students should not misrepresent AI-generated imagery as purely documentary or as direct photographic capture.
Care must be taken to avoid generating or sharing images that perpetuate bias, discrimination, or harmful stereotypes.
AI tools must not be used to create misleading, defamatory, or exploitative representations of real individuals or groups without consent.
Students are encouraged to reflect on how AI influences authorship, authenticity, and their own creative identity within photographic practice.
3. Academic Integrity
All AI-generated elements must be explicitly credited in written submissions (e.g., “Image generated using [tool name], guided by original prompt authored by the student”). For more guidance on how to reference AI tools and AI-generated images, click here.
Failure to acknowledge AI contributions may be considered a breach of academic integrity or misrepresentation of authorship.
Supervisors and tutors may request process documentation or prompt logs as evidence of authorship and critical engagement.
4. Responsible Practice
Students are encouraged to engage with AI tools as research instruments and creative aids that complement the image-making process and demonstrate understanding of the creative and critical context they are working in.
When exhibiting or submitting work containing AI-generated imagery, students must state which tools they used, either in their bibliography or in the written component of their submission.
Projects involving generative AI should be reviewed under the University’s Research Ethics Framework to ensure alignment with institutional and sector standards.
It can be tricky to know what you are and aren't allowed to do with artificial intelligence without explicit and tailored advice. Your module leader or research supervisor should always be your first port of call when checking whether the task you're using AI for is acceptable, but there are two quick checks you can do yourself as a starting point:
The "learning outcomes" check
Think about the learning outcomes of your assignment or research - what are you being asked to do, and what skills do you need to demonstrate through your academic work? Does what the AI would do for you overlap with either the thing you're being asked to produce or the skills you would need to develop over the course of your academic work?
For example, if your learning outcomes specify that you must understand academic referencing conventions and be able to format references for different types of sources, then using AI to create references for you is most likely not okay.
The "ask a friend" check
Think about plagiarism and academic integrity more widely. Asking a friend to proofread your essay or check it for grammar is generally acceptable, but asking them to entirely rewrite sections of your essay or carry out your research data collection for you would be considered academic misconduct.
For the most part, this also applies to AI. A good rule of thumb is to ask yourself "would it be considered cheating if another person did this for me?" - if the answer is "yes", then it's probably still cheating if AI does it for you instead.
As best practice, you should always save your prompts and AI chat transcripts, and document your non-generative AI use, so that if you are questioned about your AI use you can show exactly what you did, when, and how you applied AI in your academic studies. To learn more about referencing AI tools and to see ethical AI use in practice, explore the next page on this hub, which is all about acknowledging AI use so that it complies with our academic policies and promotes transparency, integrity, and accuracy.