Inf1B Proposal - comments by P. Stevens
=====

- I am not reassured by the further detail, especially as regards the integrity of the assessment. For example:
-- When I took over Inf1OP, it had lab exercises that were submitted for a small amount of credit, as proposed here. At the end of the course I looked at the marks, and also at the code, comparing coursework with exam. It was very clear that *many* students had, somehow, submitted coursework exercises containing correct use of language constructs they had *no idea* how to use in the exam. In other words, such submission does not effectively check student learning. It does teach students that they can cheat with impunity.
-- MOSS similarity detection software finding similarities among 400 (or even 100) submitted solutions to a short exercise suitable for beginners will not reliably detect cheating. Even trivial changes will blur the line between "these students copied" and "these students took similar approaches to an easy question" to the point where we can prove nothing.
-- Indeed, in this course it is hard to see how you would even explain to well-intentioned students exactly what constitutes plagiarism. How will a student who struggled to get their code to work, and eventually succeeded with help from teammates or others, know whether the degree of help they have had is allowed? If, on reflection, they think they have had too much help, because they couldn't get their code to work without it, then what should they do?
-- Moreover, MOSS is no defence whatever against purchased solutions. See for example https://www.fiverr.com/gigs/java-assignment (prices start at £4.08).
-- "Require students to explain their code and answer questions during the demonstration" -- i.e. in the 20-minute "group demonstration of the running code to a tutor". But most tutors will themselves be students - maybe even undergraduate students, certainly including MSc students who are with us for only one year.
It is simply not a reasonable expectation that they will be either able or willing to identify students who have not written their own code in such a setting.
-- Note also that the online code-selling services often include detailed code comments and/or separate explanations, so the fact that a student seems to understand how their code works is no guarantee that they wrote it.
-- As many investigations have found, it is *not* only a small minority of students who will cheat given the chance. Even if you don't care about this in the present course (and you should, because we will all suffer if students pass a programming course without learning to program), you should care about the general lesson we are teaching students, i.e. that we don't much mind them cheating.
- The proposal is to spend more student time on learning OO programming, and to have correspondingly expanded learning objectives. The proposers argue that these expanded LOs require assessment methods beyond an exam. Fine. But what is the argument that a programming exam should not be used *in addition to* whatever else is done, to ensure that all students do learn basic programming? I would be considerably reassured if every student who passed this course had definitely demonstrated that they can program independently under exam conditions, to at least the standard required to pass Inf1OP. That should be a low bar.
- Moreover, the resourcing required here is a major concern. Even if it is possible to recruit the extra TAs/tutors/markers required, that will surely be at the expense of other teaching support roles. (I say this as someone currently struggling to teach a 20-point course alone, with two of the three agreed teaching support positions unfilled for lack of applicants.)
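- To make the similarity-detection worry above concrete, here is a sketch of two entirely honest, independently written beginner solutions to a hypothetical Inf1OP-style exercise ("return the sum of the integers 1 to n" -- my invented example, not an actual course exercise). They differ only in identifier names and one cosmetic operator choice, i.e. exactly the sort of "trivial changes" at issue; on an exercise this short, a MOSS-style match between them is not evidence of copying.

```java
// Two plausible, independently written beginner solutions to a
// hypothetical short exercise: "return the sum of the integers 1..n".
// (Illustrative only -- not an actual course exercise.)
public class SimilarityExample {

    // Solution as student A might write it.
    static int sumA(int n) {
        int total = 0;
        for (int i = 1; i <= n; i++) {
            total = total + i;
        }
        return total;
    }

    // Solution as student B might write it: identical structure,
    // different identifiers, `+=` instead of `total = total + i`.
    // A similarity tool sees a near-perfect match either way.
    static int sumB(int n) {
        int result = 0;
        for (int k = 1; k <= n; k++) {
            result += k;
        }
        return result;
    }

    public static void main(String[] args) {
        // Both solutions behave identically, as expected.
        System.out.println(sumA(10) + " " + sumB(10));
    }
}
```

For a problem with essentially one natural beginner solution, "these students copied" and "these students converged" are indistinguishable by similarity scores alone.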