Artificial intelligence is everywhere; it powers almost everything in our modern lives - it can make life simpler and safer in many ways, but it also has the power to do harm and produce biased information. From personalised recommendations on streaming platforms to smart home devices and voice assistants that automate household tasks, AI is designed to streamline our lives. AI algorithms in healthcare settings already assist with diagnosing diseases and predicting patient outcomes, and AI-powered vehicles aim to improve road safety and travel efficiency.
However, the same technology that offers these benefits also poses significant risks. AI systems can perpetuate and amplify existing biases present in their training data, leading to unfair and discriminatory outcomes. For example, biased algorithms in hiring processes can disadvantage whole groups of people, while biased facial recognition technology can result in wrongful identifications.
The potential for harm affects everyone - even if you don't use AI yourself. Recent years have seen an exponential rise in the spread of misinformation through AI-generated content that goes viral on social media - can you think of a recent example? Deepfakes, AI-generated news articles and doctored images can be used to deceive and manipulate public opinion, undermining trust in media and institutions. Additionally, the reliance on AI for critical decision-making processes, without proper oversight and accountability, can lead to errors with severe consequences.
Ethical use of AI means different things in different contexts - ethical AI use in your personal life looks entirely different from ethical use of AI in an academic context, whether that's for an assignment, research, or just developing your overall academic skills. AI ethics is also a field of study within artificial intelligence and machine learning as an industry - ethical use and principles will also look different for you as a student than they do for an AI developer. But there is some overlap too.
In our academic context, ethical AI use boils down to three main principles: transparency, integrity, and accuracy. Explore the tabs below to learn more about each principle.
What does transparency look like for assignments or research?
Clearly indicate when you have used AI tools to assist in completing assignments or undertaking research. This includes specifying which tools you used and how they contributed to the assignment or research project. Being transparent in your academic work both protects you from accusations of academic misconduct and promotes research best practice by allowing others to see where and how you obtained your information.
For example, if you used an AI tool to generate ideas, summarise articles, or check grammar, you should mention the tool's name, describe its function, and explain how it was used in your work, complete with an in-text citation. Additionally, you should include a reference to the output in your reference list, with a link to the transcript or other output where possible. This allows the people marking and reading your work to understand the extent of AI assistance and see the "behind-the-scenes" workings of your AI use.
What about for personal study?
Although you are not asked to provide written documentation about or references for the tools you use in your personal study, transparency is still relevant here. It is a combination of being honest with yourself about how you use AI (are you treating it like a tool or like a shortcut?) and talking honestly to your course leader or personal tutor about AI. That way, they can provide support and guidance to ensure you are using these tools in ways that are productive and help you reach your learning outcomes, rather than replacing your learning.
For example, using AI to create flashcards from lecture slides and asking ChatGPT to act as a study coach can be really effective interventions for developing your academic language skills, breaking complex content down into manageable pieces of information, and retaining information better. But there are risks: if the AI-generated flashcards contain errors, you might be reinforcing the retention of incorrect information. And if you enter personal or sensitive information into ChatGPT to help you manage your time, do you know who has access to that information, how it will be used in the future, and how long it is retained for? Your personal tutor might be able to suggest tools with rigorous privacy policies, or templates that allow you to retain control of your information.
What does integrity look like for assignments?
Think of AI as another tool in your arsenal rather than a shortcut - it should enhance and complement your learning rather than replace or bypass it. Do not pass off AI-generated content as your own, original work. Additionally, make sure you clearly signpost any tasks or parts of the assignment you used AI for, and cite the tools in your reference list. You can use AI-generated explanations to build your understanding, but you must ensure that any work you submit is your own interpretation and analysis. Always evaluate, verify, and validate AI-generated information as you would any other source - you are still accountable for the accuracy of the work you submit.
What does it look like for research?
It is important that AI tools are used to support genuine research efforts. Academic misconduct doesn't just mean plagiarism - you could also be accused of falsification. To use AI with integrity, you shouldn't use AI to generate new data or distort existing data, or manipulate AI outputs to make your data fit preconceived hypotheses or desired outcomes. Clearly document AI-assisted decision-making and the steps or methodologies you used AI for so that you can maintain ethical standards in data handling and reporting. Think also about the fair and equitable treatment of both groups and individuals - following rigorous research practice allows you to mitigate unwanted bias and ensure your research doesn't cause harm to anyone.
What about for personal study?
Use AI tools to support personal effort and learning in ways that encourage critical thinking and problem-solving rather than relying solely on AI-generated solutions. You could combine traditional learning methods with using AI for repetitive learning, use AI to sanity check your reflections and insight about learning content, or simply use AI as a starting point for things like breaking down complex academic text. But, just like you shouldn't use AI to skip steps in assignments or research, your AI use for personal learning should keep you in the driver's seat of developing your academic skills instead of making AI do all the work while you passively watch learning opportunities go by.
What does accuracy look like for assignments and research?
We often use the term "garbage in, garbage out" when we talk about search terms and finding information. AI operates in a very similar way - if you put in vague and generic prompts, you will get vague and generic outputs. To improve the accuracy and reliability of your AI outputs, make sure you use tools that are fit-for-purpose and have been trained on high-quality, relevant data.
That also means using AI for its intended purpose - most AI models have been built to be helpful, which means that when a tool can't determine the answer, it is likely to "hallucinate", i.e. make up an answer. To minimise AI coming up with its own reality, you can use prompting strategies such as echo prompting: first establish what the tool knows about your subject, then make your request and ask the tool to repeat it back to you, so you can confirm it has correctly understood your meaning before it generates its response.
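The echo-prompting steps above can be sketched as an ordered sequence of messages sent to a chat-based tool. The sketch below is illustrative only: the `build_echo_prompt` helper, the topic, and the wording are hypothetical examples and are not tied to any specific tool's interface.

```python
def build_echo_prompt(topic: str, request: str) -> list[dict]:
    """Build a three-step echo-prompting conversation:
    1. establish what the tool knows about the topic,
    2. make the actual request,
    3. ask the tool to restate the request before answering."""
    return [
        {"role": "user",
         "content": f"Before I ask anything: what do you know about {topic}?"},
        {"role": "user",
         "content": request},
        {"role": "user",
         "content": "Before answering, repeat my request back in your own "
                    "words so I can confirm you've understood it correctly."},
    ]

# Example: preparing an echo-prompted request about referencing
messages = build_echo_prompt(
    topic="the Harvard referencing style",
    request="Explain how to cite a generative AI tool in Harvard style.",
)
for m in messages:
    print(f"{m['role']}: {m['content']}")
```

The point of the final message is that the tool commits to its understanding of your request before answering, giving you a chance to correct any misreading before it generates a response.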
What does it look like for personal study?
First of all, choose your tools carefully. Keeping on top of developments in AI can be exhausting and tricky, but using up-to-date models and tools that provide more reliable answers is likely to yield better results. You also want to ensure the information you're spending a lot of time revising is correct and useful - you can always talk to your course leader, personal tutor, or one of our academic skills advisors about useful tools and their applications, or ask them to double-check the revision aids you've created with AI.
To make the most of AI while still mitigating the risks associated with it, it is crucial to implement ethical guidelines and robust regulatory frameworks supported by legislation. Ensuring transparency in AI systems, promoting fairness and inclusivity in AI development, and establishing mechanisms for accountability are essential steps to a society that uses AI as a force for good.
Read below about the five key pillars of ethical AI development from IBM, a leading company in advances in AI and automation.
Fair AI systems must be designed to mitigate unwanted bias and ensure that they do not perpetuate or amplify existing inequalities. In AI development, bias is a systematic error that, intentionally or not, might generate unfair decisions or inequitable treatment of groups or individuals. Bias can emerge in an AI system because of cultural expectations, technical limitations, or unanticipated deployment contexts, and it can be present in the AI system, in the data used to train and test the system, or in both. Ethically designed AI systems use diverse and representative datasets during the training process and are continuously monitored for biased outcomes.
As AI is increasingly integrated into critical decision-making, it is important that these systems are both secure and robust. Robust AI systems are designed to effectively manage unexpected conditions, such as anomalies in data input or malicious attacks, without causing unintended consequences. Ensuring robustness in AI not only enhances its reliability but also builds trust among users, making it a crucial aspect of AI deployment in sensitive and high-stakes environments.
Explainability refers to how well a person without any special training in AI models or systems can understand how the system works. By making AI processes and outcomes clear, users can verify and trust the decisions made by these systems. Explainability is like showing your work on a math problem so that everyone can see the steps you took to get your answer.
An AI system should be not only understandable but also transparent. AI transparency involves providing detailed information and documentation about how AI models are created, trained, and deployed, and how they make decisions. Transparent AI systems make clear the origins of the data they have been trained on, as well as how they process and store the data you input.
AI systems process vast amounts of data to learn, identify patterns, and make predictions or recommendations. Therefore, an ethical AI tool prioritises and safeguards the privacy of all the data it is trained on and that users input into it. Ensuring data privacy not only protects sensitive information but also fosters trust and compliance with legal and ethical standards.
Why do people use AI to generate art?
AI has revolutionised many fields of work and study, and art is no exception. At surface-level, AI-generated art has the capability to democratise access to art that traditionally has only been the domain of either people with the artistic skills to create their own art, or people with the financial means to buy art. This opens up the world of art to individuals who may not possess traditional artistic skills but have a creative vision or eye for design and composition that they wish to express.
AI can analyse vast datasets of existing artworks, learning from different styles, techniques, and historical contexts to blend elements from various genres. Additionally, AI can significantly speed up the creative process, enabling artists to experiment with ideas that can help them break through creative blocks.
Issues with AI-generated art
Despite its advantages, AI-generated art also raises several concerns, particularly in the areas of copyright and intellectual property. When an AI tool is used to create a piece of art, the current legislation is unclear on who holds the rights to the work - the developer of the AI, the user who generated the art, or the AI itself. This ambiguity can lead to legal disputes and challenges in protecting intellectual property.
Even more critically, AI models often rely on existing artworks to learn and generate new pieces, which can result in the (un)intentional replication or modification of copyrighted material. This practice can infringe on the rights and livelihoods of original artists and lead to ethical dilemmas regarding the use of their work without proper attribution or compensation. The widespread use of AI-generated art also presents a risk to human creativity and craftsmanship, as the ease of producing art through AI can devalue the effort and skill involved in traditional artistic creation.
The algorithms and models underpinning AI image generators also often reflect and reinforce societal biases and inequalities. If you ask an average AI image tool to generate 10 images of a doctor, a CEO, a politician, or a lawyer, 9 out of 10 images will be middle-aged white men - this might reflect biased perceptions of the world around us, but it does not reflect reality. Until the diversity and inclusion issues of these tools are addressed and they are trained on ethically sourced material that their creators consented to and were fairly compensated for, justifying their use is difficult in a just and equitable society.
AI images and art in academia
To mitigate these concerns and ensure you are using AI in a constructive manner, you should take great care before using AI to generate art - this is relevant for everyone who includes images in their assessments, but especially for students on our arts courses. Just like any other assignment, relying solely on AI to produce finished artworks would violate academic integrity and cheat you out of both the skills and learning that your assessments are in place to help you develop.
If you need to include images in your assessments, there are plenty of free stock photo websites available on the open web, and if you need help finding stock photography to, for example, accompany presentations, you can always book in with one of our Librarians, Academic Skills Advisors, or the Digital Skills Advisor. Additionally, referencing AI-generated images is still not well established, precisely because of these unresolved copyright and intellectual property issues.
If you do use AI in arts-based assessments, it is important that it is either a) done on the directive and instruction of your module tutor, or b) used only as a foundation upon which you build your own creations. For instance, an AI-generated image can serve as a preliminary sketch that you refine, modify, and personalise, adding your unique touch and creativity to the final piece. You could use AI tools to experiment with different compositions, colour schemes, and visual effects in order to gain insights and test techniques that you then apply to your assignments using more traditional methods.
Your AI use in any context should be thoughtful and balanced, bearing in mind the learning outcomes and marking criteria of the work you use it in. Consider also the real-life impact it has on the artists, photographers, videographers, filmmakers, and others who laboured to create the original works on which these AI tools have been trained.
The question remains: is AI the right tool for expressing your unique personality, creativity, and vision, or does using it devalue those skills and accept the exploitation of original artists in favour of cutting corners?
As best practice, you should always save your prompts and AI chat transcripts, and document your non-generative AI use so that if you get questioned about your AI use, you can show exactly what you did, when, and how you applied AI in your academic studies. To learn more about referencing AI tools and seeing ethical AI use in practice, make sure you explore the next page on this hub, which is all about acknowledging AI use to ensure your AI use complies with our academic policies and promotes transparency, integrity, and accuracy.