Hands‑on with AI for Research

Module Overview

  • Module: EDS‑EXT2 – Hands‑on with AI for Research
  • Course: Essential Digital Skills – AI for Research Extension
  • Audience: Early Career Researchers (PhD, Postdoc)
  • Duration: ~20 minutes directed teaching (+100 minutes practical exercises)

💡 Learning Outcomes

  • Practise designing effective prompts for policy‑oriented research questions and document the results.
  • Critically assess and validate AI‑generated summaries against authoritative policy documents and identify inaccuracies or omissions.
  • Explore how role prompting influences Copilot responses and appreciate the limitations of AI advice.
  • Reflect on responsible AI use through group exercises and develop plans for integrating AI into your research practice.
  • Learn how to use agents for repeated tasks.


❓ Questions

  1. How can you leverage Copilot to explore policy‑related questions effectively?
  2. How do you verify and critique AI outputs against official sources?
  3. How does the tone and content of Copilot’s response change when you specify different roles?
  4. How will you integrate AI responsibly into your research workflow after this workshop?

Structure & Agenda

  1. Orientation and Prompting – 5 minutes introduction followed by a 5 minute prompt practice.
  2. Group Research with Copilot – 5 minutes orientation followed by a 15 minute small‑group task and 10 minute plenary.
  3. Critique and Validation – 5 minutes setup followed by a 15 minute group critique and 10 minute plenary.
  4. Prompt Framing and Role Effects – 5 minutes introduction followed by a 10 minute role‑prompting experiment and 5 minute plenary.
  5. Using Agents for Repeated Tasks – 5 minutes introduction followed by a 15 minute agent‑building exercise and 10 minute plenary.

🔧 The emphasis is on practical exercises; short teaching segments introduce each activity.

Orientation and Prompting

Introduction

This workshop builds on the principles introduced in the lecture. Responsible AI use requires transparency, verification, documentation, equity and continuous monitoring. Participants will practise constructing clear and specific prompts and learn to record prompts and outputs for transparency. Remember not to include confidential or proprietary information in prompts.
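A lightweight way to record prompts and outputs for transparency is a simple structured log. The sketch below is a minimal illustration in Python; the `log_prompt` helper and the `prompt_log.csv` file name are hypothetical conventions for this workshop, not part of any Copilot API:

```python
import csv
import datetime
from pathlib import Path

LOG_FILE = Path("prompt_log.csv")  # hypothetical file name for your shared log

def log_prompt(prompt: str, output: str, tool: str = "Copilot") -> None:
    """Append one prompt/output pair, with a timestamp, to a CSV log."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if new_file:
            # Write the header row the first time the log is created.
            writer.writerow(["timestamp", "tool", "prompt", "output"])
        writer.writerow([datetime.datetime.now().isoformat(), tool, prompt, output])

log_prompt("Summarise the UoN policy on the use of generative AI tools",
           "(paste Copilot's response here)")
```

A shared spreadsheet serves the same purpose; what matters is that every prompt and response is captured verbatim, with a timestamp, so results can be audited later.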

Group Formation

Participants should form up to five groups.

Groups will remain fixed throughout the workshop.

Task 1: Prompt practice

https://m365.cloud.microsoft/chat/

Formulate a prompt such as:

“Summarise the UoN policy on the use of generative AI tools”

  • Was your prompt specific enough?
  • Did Copilot cite sources or mention limitations?
  • What uncertainties remain about the policy?

Group Research with Copilot

Assigned Policy Domains

Each group will investigate one AI policy area:

  1. Publisher – Nature / Springer Nature
  2. Academic Society – PSR / SAGE Publications
  3. Research Funder – Cancer Research UK
  4. National Guidelines – UKRI
  5. Professional Body – COPE

Task 2: Small-group work

Using Copilot, collect information about your assigned policy area. Document the outputs in a shared sheet or board.

Groups should prepare to:

  • share the prompts used
  • summarise Copilot’s interpretation of the policy
  • identify gaps, hallucinations or incorrect claims
  • evaluate Copilot’s cited sources (if any)

Groups present initial findings and discuss prompt strategies and observed weaknesses in AI responses.

Critique and Validation

Comparing AI Outputs to Official Documents

These are the official policy documents:

Task 3: Group critique

Compare Copilot’s claims with the actual policy text. For at least two AI-generated statements, verify:

  • whether they appear in the official document
  • whether Copilot added, omitted or distorted information

Hint: Inaccuracies may stem from outdated training data, vague prompts, or ambiguous policy language.

  • How accurate were Copilot’s outputs overall?
  • What patterns of error or bias emerged?
  • What verification processes should researchers apply?

Prompt Framing and Role Effects

Role Prompting

The tone and content of Copilot’s output can vary depending on the role specified. Asking Copilot to answer “as a researcher”, “as a policy adviser” or “as a science journalist” can change the depth, formality and focus of the response. Prompt engineering strategies emphasise that the quality of outputs depends on the clarity and context provided.
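Role framing can be made systematic by templating the prompt, so that only the role varies between runs. A minimal sketch (the template wording and `role_prompt` helper are illustrative, not Copilot-specific):

```python
def role_prompt(role: str, question: str) -> str:
    """Wrap a question in a role instruction so outputs can be compared fairly."""
    return f"Answer as a {role}. {question}"

QUESTION = "Summarise the UoN policy on the use of generative AI tools."

# Build one prompt per role; everything except the role is held constant.
prompts = {role: role_prompt(role, QUESTION)
           for role in ("policy adviser", "researcher", "science journalist")}

for p in prompts.values():
    print(p)
```

Holding the question fixed while varying only the role makes differences in tone, structure and certainty easier to attribute to the role instruction itself.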

Task 4: Role effects experiment

Re-ask the same policy question used in Task 1 with three different role instructions.

  1. policy adviser
  2. researcher
  3. journalist

Record differences in tone, structure and certainty.

  • Compare outputs across groups
  • Discuss how role prompting affected perceived authority
  • Reflect on risks of mistaking stylistic confidence for accuracy

Using Agents for Repeated Tasks

Automating Policy Lookup with AI Agents

Agents allow you to automate multi-step workflows—such as reviewing AI policies—so that the same procedure can be repeated with minimal prompting.
Once an agent is configured with clear instructions (e.g. how to retrieve, summarise and check policy information), you only need to supply the topic (e.g. “Nature”, “Cancer Research UK”, “SAGE Publications”) and the agent will run the full process.
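The same idea can be expressed as a repeatable function: fixed instructions, one variable input. The sketch below assumes a hypothetical `ask_model` callable standing in for whatever agent platform you use; the stub backend lets it run without one:

```python
from typing import Callable

# Fixed instructions: the part of the agent that never changes between runs.
INSTRUCTIONS = (
    "Retrieve the organisation's generative-AI policy, summarise it, "
    "identify disclosure and authorship rules, and flag uncertainties."
)

def review_policy(organisation: str, ask_model: Callable[[str], str]) -> dict:
    """Run the fixed review procedure for one organisation.

    `ask_model` is a placeholder for any chat/agent backend that takes a
    prompt string and returns a response string.
    """
    prompt = f"{INSTRUCTIONS}\nOrganisation: {organisation}"
    return {"organisation": organisation,
            "prompt": prompt,
            "response": ask_model(prompt)}

# Stub backend so the sketch runs without a real agent platform.
results = [review_policy(org, lambda p: "(agent response placeholder)")
           for org in ("Nature", "UKRI", "COPE")]
```

The design point is the separation: instructions are written once and reviewed carefully, while the per-run input shrinks to a single organisation name.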

Task 5: Build an agent to automate AI-policy review

Create an AI agent configured to:

  • accept a single input: the name of the organisation (e.g. Nature, UKRI, COPE)
  • retrieve and summarise the organisation’s generative-AI policy
  • identify disclosure and authorship rules
  • flag uncertainties or possible hallucinations
  • present information in a consistent structure

Test it on at least three organisations.
Note where it succeeds, where it struggles and where human oversight is required.

  • Compare agent designs
  • Discuss benefits of automation vs. risks (e.g. propagating errors)
  • Reflect on where agents can fit into research workflows responsibly

Further Information

📚 Key points

  • Major publishers (e.g. Nature) prohibit AI authorship but allow limited use of AI tools with disclosure.
  • National and international bodies (UKRI, European Commission) provide explicit guidelines on responsible use of generative AI.
  • Universities issue their own integrity guidance — always check local policy.
  • Prompt engineering skills help improve clarity and reduce ambiguity in AI responses.
  • Ethical use requires verification, transparency and acknowledgement of limitations.

Hints