🔧 The emphasis is on practical exercises; short teaching segments introduce each activity.
This workshop builds on the principles introduced in the lecture. Responsible AI use requires transparency, verification, documentation, equity and continuous monitoring. Participants will practise constructing clear and specific prompts and learn to record prompts and outputs for transparency. Remember not to include confidential or proprietary information in prompts.
Participants should form up to five groups.
Groups will remain fixed throughout the workshop.
https://m365.cloud.microsoft/chat/
Formulate a prompt such as:
“Summarise the UoN policy on the use of generative AI tools”
Each group will be tasked with investigating one AI policy area:
Using Copilot, collect information about your assigned policy area. Document the outputs in a shared sheet or board.
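If your group prefers to keep the shared record as a file rather than a board, the sketch below is one way to log each prompt and output for later verification. It assumes a hypothetical file name (`prompt_log.csv`) and column layout; these are illustrative choices, not part of the workshop materials, and Copilot's response is pasted in by hand.

```python
# Minimal sketch of a shared prompt/output log, assuming a CSV file is used.
# The file name and column layout are illustrative only.
import csv
from datetime import datetime, timezone

LOG_FILE = "prompt_log.csv"  # hypothetical shared file name

def log_interaction(policy_area: str, prompt: str, output: str, notes: str = "") -> None:
    """Append one prompt/output pair to the shared log for later verification."""
    with open(LOG_FILE, "a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow([
            datetime.now(timezone.utc).isoformat(timespec="seconds"),
            policy_area,
            prompt,
            output,
            notes,
        ])

# Example: record a single Copilot exchange for later checking
log_interaction(
    policy_area="Generative AI in assessment",
    prompt="Summarise the UoN policy on the use of generative AI tools",
    output="<paste Copilot's response here>",
    notes="No sources cited; needs verification against the official policy.",
)
```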
Groups should prepare to:
- share the prompts used
- summarise Copilot’s interpretation of the policy
- identify gaps, hallucinations or incorrect claims
- evaluate Copilot’s cited sources (if any)
Groups present initial findings and discuss prompt strategies and observed weaknesses in AI responses.
These are the official policy documents:
Compare Copilot’s claims with the actual policy text. For at least two AI-generated statements, verify:
Hint: possible reasons for inaccuracies include outdated training data, vague prompts, or ambiguous policy language.
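One way to keep the comparison consistent across groups is to record each checked statement in a fixed structure. The sketch below is purely illustrative: the field names are assumptions chosen to mirror the task, not a required format.

```python
# Illustrative record for one verified AI-generated statement.
verification_record = {
    "ai_statement": "<claim taken from Copilot's summary>",
    "policy_reference": "<section of the official policy document checked against>",
    "verdict": "supported / partially supported / unsupported",
    "likely_cause_if_wrong": "outdated training data, vague prompt, or ambiguous policy language",
}
```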
The tone and content of Copilot’s output can vary depending on the role specified. Asking Copilot to answer “as a researcher”, “as a policy adviser” or “as a science journalist” can change the depth, formality and focus of the response. Prompt engineering strategies emphasise that the quality of outputs depends on the clarity and context provided.
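To make the idea concrete, the sketch below simply builds the three role-prefixed prompts. Only the roles and the base question come from the task; the template wording is an assumption, and in the workshop the prompts are pasted into Copilot by hand.

```python
# Minimal sketch of role-based prompting: same question, three role instructions.
BASE_QUESTION = "Summarise the UoN policy on the use of generative AI tools"

ROLES = [
    "a researcher",
    "a policy adviser",
    "a science journalist",
]

def role_prompt(role: str, question: str) -> str:
    """Prefix the question with a role instruction to vary tone, depth and focus."""
    return f"Answer as {role}. {question}"

for role in ROLES:
    print(role_prompt(role, BASE_QUESTION))
```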
Re-ask the same policy question used in Task 1 with three different role instructions.
Record differences in tone, structure and certainty.
Agents allow you to automate multi-step workflows—such as reviewing AI policies—so that the same procedure can be repeated with minimal prompting.
Once an agent is configured with clear instructions (e.g. how to retrieve, summarise and check policy information), you only need to supply the topic (e.g. “Nature”, “Cancer Research UK”, “SAGE Publications”) and the agent will run the full process.
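As a mental model of what the configured agent does, the sketch below walks the same retrieve, summarise and check steps for each organisation supplied. All three step functions are hypothetical placeholders, not a real Copilot or agent API; the point is only that the procedure stays fixed while the topic changes.

```python
# Sketch of the retrieve -> summarise -> check workflow an agent would automate.
# The step functions are hypothetical placeholders, not a real API.
from dataclasses import dataclass

@dataclass
class PolicyReview:
    organisation: str
    summary: str
    flagged_issues: list[str]

def retrieve_policy(organisation: str) -> str:
    """Placeholder: the agent retrieves the organisation's AI policy text."""
    return f"<policy text for {organisation}>"

def summarise_policy(policy_text: str) -> str:
    """Placeholder: the agent summarises the retrieved policy."""
    return f"<summary of {policy_text}>"

def check_summary(summary: str) -> list[str]:
    """Placeholder: the agent flags claims that need human verification."""
    return ["Verify cited sources against the official policy document."]

def review_policy(organisation: str) -> PolicyReview:
    """Run the same three-step procedure for any organisation supplied."""
    policy_text = retrieve_policy(organisation)
    summary = summarise_policy(policy_text)
    issues = check_summary(summary)
    return PolicyReview(organisation, summary, issues)

for org in ["Nature", "Cancer Research UK", "SAGE Publications"]:
    print(review_policy(org))
```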
Create an AI agent configured to:
Test it on at least three organisations.
Note where it succeeds, where it struggles and where human oversight is required.