Data Science and AI Education local interest group at UoL: 2nd meeting notes
===
###### tags: `2nd meeting` `Data Science and AI Education UoL`
:::info
- **Call time and day**: Thursday 8th September, 12:00-13:00 (GMT+1)
- **Meeting host**: [Luisa Cutillo](https://eps.leeds.ac.uk/maths/staff/5526/dr-luisa-cutillo)
- **Meeting facilitator**: [Paul Baxter](https://medicinehealth.leeds.ac.uk/medicine/staff/117/professor-paul-d-baxter)
- **Call joining link**: [MS teams meeting](https://teams.microsoft.com/l/meetup-join/19%3ameeting_YmY3NTg5YjUtYzhmMy00OTRmLWI2YjktZDEzODA3M2JkZGI1%40thread.v2/0?context=%7b%22Tid%22%3a%22bdeaeda8-c81d-45ce-863e-5232a535b7cb%22%2c%22Oid%22%3a%225059c8a0-43a5-44f0-b487-8ec42fe4bea5%22%7d)
- **Private Github repo**: [DS and AI Education Local Interest Group UoL](https://github.com/luisacutillo78/DS-and-AI-edu-UoL-private)
:::
<br />
<br />
<br />
<br />
<br />
# 2nd meeting
## Agenda
:::success
| Agenda | Speaker | Time | Status |
| ------------------------ | ------------------------------- | ------------- | -------- |
| Intro and overview | [Luisa Cutillo](https://eps.leeds.ac.uk/maths/staff/5526/dr-luisa-cutillo) | 12:00 - 12:05 | Complete |
| Presentation of the new MS Team | [Luisa Cutillo](https://eps.leeds.ac.uk/maths/staff/5526/dr-luisa-cutillo) | 12:05 - 12:20 | Complete |
| Intro to Topic Discussion: How To Assess Coding Assignments | Everyone | 12:20 - 12:25 | Complete |
| Collaborative exercise | Breakout rooms | 12:25 - 12:50 | Complete |
| Reflections, discussions and Q&A | [Luisa Cutillo](https://eps.leeds.ac.uk/maths/staff/5526/dr-luisa-cutillo) | 12:50 - 13:00 | |
:::
### Collaborative exercise
*up to 25 minutes in breakout groups of 3-5 people*
#### Group 1
*Please use this space to take notes from your breakout group discussion.*
- How do you efficiently assess programming coursework? Please tell us best and worst approaches based on your experience.
- In the School of Computing we use Gradescope, which assesses only the output, not the code itself or the coding style (a minimal sketch of this output-only approach appears at the end of these notes).
-
-
-
-
- What challenges have you faced with programming assignments, and how do you address them?
-
-
-
- How to deal with issues of plagiarism/academic integrity with programming assignments?
- Ask the students to explain what is happening in the code, although this may reduce the scope for automatic grading
-
-
- Can you share any resources that could be useful for setting and assessing programming assignments (ideally in Minerva)?
-
-
-
- Can you share any automated assessment examples that worked well for you?
-
-
-
- What would you like this group to discuss in the next meetings?
- Feedback to students: how to give feedback that is specific and efficient, as it is currently very time-consuming.
-
-
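
A minimal sketch of the output-only marking approach mentioned under the first question above. Everything here is hypothetical (the script name, the model answer and the tolerance), and this is a plain-Python illustration rather than how Gradescope itself is configured:

```python
# Output-only marking sketch: run a submitted script and compare what it prints
# against a model answer, ignoring how the code was written.
# The file name, expected value and tolerance are all hypothetical.
import subprocess

EXPECTED = 42.0    # hypothetical model answer
TOLERANCE = 1e-6   # allow for floating-point rounding

def mark_submission(path: str) -> bool:
    """Run a student's script and check its printed answer only."""
    try:
        result = subprocess.run(
            ["python", path], capture_output=True, text=True, timeout=30
        )
        answer = float(result.stdout.strip())
    except (subprocess.TimeoutExpired, ValueError):
        return False  # timed out, crashed, or printed something non-numeric
    return abs(answer - EXPECTED) < TOLERANCE

if __name__ == "__main__":
    print("PASS" if mark_submission("student_script.py") else "FAIL")
```

Because only the printed answer is checked, the approach and style of the code stay invisible to the marker, which is exactly the limitation noted above.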
#### Group 2
*Please use this space to take notes from your breakout group discussion.*
We considered the difference between formative and summative assessment, but mostly concentrated on summative assessment.
- How do you efficiently assess programming coursework? Please tell us best and worst approaches based on your experience.
- Use of teaching assistants to support marking
- Design to allow automation (not open ended) - unit testing
- Peer to peer assessment
- This is only possible for formative assessment.
- Provide non-functioning code that the students have to fix. The fixed code should then pass tests (see the test-file sketch at the end of these notes).
- What challenges have you faced with programming assignments, and how do you address them?
- Automating assessment is often hard as:
- It is difficult to make subjective judgements on how well something has been done.
- There can be error carried forward issues (e.g. part b relies on answer to part a)
- A lot of effort can go into designing automated assessment which is only likely to be worth it for large cohort sizes.
- Students can sometimes find the work of previous cohorts quite easily, as it is often shared, e.g. on GitHub.
- How to deal with issues of plagiarism/academic integrity with programming assignments?
- Randomised version of data set etc. to different students
- Timed closed book examinations
- Asking for a Jupyter notebook that describes the code helps ensure understanding
- Use of repository history
- Viva style assessment
- Can you share any resources that could be useful for setting and assessing programming assignments (ideally in Minerva)?
- Resource on plagiarism in coding
- https://www.geog.leeds.ac.uk/courses/computing/info/plagiarism.html
- Java Maven tool is good at checking documentation
- Can you share any automated assessment examples that worked well for you?
- No
- What would you like this group to discuss in the next meetings?
- Setting learning objectives in data science courses
- Marking groupwork
- Cross faculty teaching / research projects
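
As a rough illustration of the unit-testing ideas under the first question above (design for automation, and non-functioning code that must pass tests once fixed), here is a minimal pytest-style sketch. The module name `submission.py` and the `mean` function are hypothetical, not something any group said they use:

```python
# Hypothetical marking tests for an assignment where students are given a broken
# `mean` function in submission.py and must fix it until these tests pass.
import pytest

from submission import mean  # the student's (fixed) code

def test_simple_average():
    assert mean([1, 2, 3, 4]) == pytest.approx(2.5)

def test_single_value():
    assert mean([7]) == pytest.approx(7.0)

def test_empty_list_raises():
    with pytest.raises(ValueError):
        mean([])
```

The handout would contain a deliberately faulty `mean` (say, an off-by-one in the denominator), and each submission can then be marked automatically by how many of the tests it passes.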
#### Group 3
*Please use this space to take notes from your breakout group discussion.*
- How do you efficiently assess programming coursework? Please tell us best and worst approaches based on your experience.
- Work in groups (5ish)
- Short presentation of their work instead of marking code directly
- Unit tests, with students writing the functions?
- Submit or not submit code? Is "pretty" code and good RSE practice something to focus on in teaching?
- Testing functionality vs the maintainability / cleanness of the code
- Peer assessment (pass code to your neighbour for comments)
- Find examples of "bad" code and ask students to find the issues (a hypothetical example appears at the end of these notes).
- What challenges have you faced with programming assignments, and how do you address them?
- Numbers in large courses
-
-
- How to deal with issues of plagiarism/academic integrity with programming assignments?
- Change assignment slightly for each group (i.e. different datasets)
- If code itself isn't part of the submission, but results are, this becomes less of an issue?
- Peer evaluation?
- Can you share any resources that could be useful for setting and assessing programming assignments (ideally in Minerva)?
-
- Kaggle examples (loading, training, metrics etc)
- Advent of code
- Can you share any automated assessment examples that worked well for you?
-
- Find examples of "bad" code and ask students to find the issues. Could be multiple choice
-
- What would you like this group to discuss in the next meetings?
-
- Robust testing? Internal / external validation? Overfitting? Pre-registration of analysis plans?
-
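
A hypothetical example of the "find the issues in bad code" exercise suggested above. The comments flag the planted problems; in the student-facing version they would be removed, or turned into multiple-choice options:

```python
# "Spot the issues" handout sketch (entirely hypothetical).

def append_reading(reading, readings=[]):      # issue 1: mutable default argument shared across calls
    readings.append(reading)
    return readings

def load_scores(path):
    f = open(path)                             # issue 2: file is never closed (no `with` block)
    return [float(line) for line in f]

def average(list):                             # issue 3: shadows the built-in name `list`
    total = 0
    for x in list:
        total = total + x
    return total / len(list)                   # issue 4: ZeroDivisionError on empty input
```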
#### Group 4
*Please use this space to take notes from your breakout group discussion.*
- How do you efficiently assess programming coursework? Please tell us best and worst approaches based on your experience.
- One of us has taught very simple first-year undergraduate "How to use R"-type material. Programming is tested by checking simple "one number" answers -- marking the answers rather than the code itself.
- Different datasets for each student and slightly different tasks (randomised)
- Worst experience: failure of automated marking - check that it works!
- It is possible to give much quicker (although less detailed) feedback this way.
-
- What challenges have you faced with programming assignments, and how do you address them?
-
-
-
- How to deal with issues of plagiarism/academic integrity with programming assignments?
- For low-stakes assessment (coursework worth a small proportion of the module mark): do almost nothing to combat plagiarism.
- Randomised datasets avoid the most naive copying (but not much more); a sketch of how per-student datasets might be generated appears at the end of these notes.
- We thought about how easy it would be to randomise the task -- sounds tricky but maybe doable.
- Can you share any resources that could be useful for setting and assessing programming assignments (ideally in Minerva)?
- If it's possible to mark simple numerical answers, then a Microsoft Form is sufficient. (It's probably possible to do this natively in Minerva if everyone's answers should be the same.)
- I've used a MailMerge to return answers and very basic feedback to students by email.
-
- Can you share any automated assessment examples that worked well for you?
-
-
-
- What would you like this group to discuss in the next meetings?
-
-
-
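
Several groups mentioned issuing each student (or group) a slightly different, randomised dataset. A minimal sketch, assuming the data are seeded from the student ID so the marker can regenerate them exactly; the column names, ID format and file layout are all hypothetical:

```python
# Per-student randomised dataset sketch. Seeding the generator with the student ID
# keeps the data reproducible for marking; everything else here is hypothetical.
import csv
import hashlib
import random

def make_dataset(student_id: str, n_rows: int = 200) -> float:
    """Write a small regression-style CSV unique to one student; return the 'true' slope."""
    seed = int(hashlib.sha256(student_id.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    slope = rng.uniform(0.5, 2.0)  # per-student parameter the marker can recompute
    with open(f"dataset_{student_id}.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["x", "y"])
        for _ in range(n_rows):
            x = rng.uniform(0, 10)
            y = slope * x + rng.gauss(0, 1)  # linear signal plus noise
            writer.writerow([round(x, 3), round(y, 3)])
    return slope

if __name__ == "__main__":
    make_dataset("sc21abc")  # hypothetical student username
```

Because the seed comes from the ID, the marker only needs the generator script (not a copy of every file) to reproduce any student's data and its underlying parameters.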
## Reflections and Q&A
If you have any questions or thoughts, please feel free to add them below:
- Questions/thoughts:
- Links:
- Chat: