WitCH 48: Thesis Not Good

This combo WitCH comes courtesy of mystery correspondent, tjrb. They flagged three multiple choice questions from the 2018 Algorithmics exam (here, and examination report here), and we’ve added a fourth. tjrb also remarks, “There are probably a lot more errors in this paper (and the other algorithmics papers), but these were the most strikingly incorrect”.

For Q2, the examination report indicates that 41% of students gave the intended answer of A. By way of explanation, the report then remarks,

“Cobham theorised that problems that are feasibly computable (also known as easy problems) are those that are decidable in polynomial time.”

For Q6, the report indicates that both A (51%) and C (33%) were “accepted”, but is otherwise silent.

The report is silent on Q12 and Q16, except to indicate the intended answers: C (94%) and A (66%), respectively.

20 Replies to “WitCH 48: Thesis Not Good”

  1. The examination report comment is wrong. Cobham’s thesis is that problems are feasible only if they are of class P; it does not imply that all problems of class P are feasible. Not only is the report’s reading wrong, it’s also just impractical. Just think about it: a problem in class P may still be incredibly difficult to solve on current hardware. There is no control on its exponent, the number of terms, and so on.

    This is similar to a question on one of my exams for a comp sci subject during my undergrad, by the way. None of the answers to Q2 are correct.

    Q6 is just bizarre: none of the options best states the difference between a list and a dictionary. C is just wrong; there’s no reason at all why the keys of a dictionary can’t be integers. Item A needs more qualifiers before it can be considered correct. Is the dictionary sorted? Is the sorting of the sorted list with respect to whatever the “value” of the objects is? And why are we suddenly talking about sorted lists when we are supposed to be talking about the differences between two abstract data types?

    Where on earth is the statement that a list is a numerically indexed collection of objects and a dictionary is a collection of key-value pairs?
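
    In case it helps, here is a throwaway Python sketch of that distinction (my own illustration, not anyone’s official pseudocode):

    ```python
    # A list is a numerically indexed collection: positions 0, 1, 2, ...
    shopping = ["bread", "milk", "eggs"]
    print(shopping[1])            # access by integer position -> "milk"

    # A dictionary is a collection of key-value pairs: access is by key.
    prices = {"bread": 3.50, "milk": 2.20, "eggs": 5.00}
    print(prices["milk"])         # access by key -> 2.2

    # And nothing stops the keys themselves being integers,
    # which is why option C is simply wrong.
    year_counts = {2015: 12, 2018: 41}
    print(year_counts[2018])      # -> 41
    ```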

    I suppose whoever wrote Q12 doesn’t understand the Big Theta notation. Resource requirements? “Best”? “Worst”?

    Decidability and verifiability… that’s some pretty hard logic going on there. They really teach this stuff? Goedel and all? Or are they just collecting a bunch of random facts without giving the unifying theory behind it all?

    The complexity classes here are classes of decision problems, so they are automatically all decidable: P, NP, all of them consist purely of decidable problems. I’m assuming that when they use the word “verified” they don’t mean verifying the program; they just mean checking that the output is correct (that it lies in a given set), i.e., the time to decide correctness. Read that way, yes, the answer is A.

    However, it really is very stupid that they use the word “verified”, yes, even when they specify the output. To verify the output of an algorithm against a method of proof is to check, using that method of proof, that the output it produced is correct. That is, if the program does not implement the method of proof correctly, then verification can return FALSE while the decision on correctness (whether the output is in the given set) could be TRUE.

    This annoys me.

    1. Thanks, Glen. I’m not a comp science guy and I always consult my local expert, Mister Combo, before posting. But even I could see Q2 was screwed. I’ll do some homework and checking, but I think your criticisms are consistent with tjrb and Mister Combo.

      I assume that your “They really teach this stuff?” question was rhetorical.

  2. Yeah the Algorithmics papers have a history of bizarre or totally wrong content. That DNA computing question, anyone?

    Q2 A is just bizarre. P \subseteq NP, so saying “all NP problems are hard” while “all P problems are easy” is completely nonsensical. Perhaps the author intended to say NP-complete or NP-hard, which would make a little more sense, though the issues Glen discusses still very much exist.

    Q6 continues in the theme of the author clearly having very little idea of the actual content of the subject, given that they patently fail to grasp the difference between an ADT and a concrete data structure. Not to mention, of course, VCAA’s idiotic mechanism of admitting fault without admitting fault.

    Q12 isn’t actually quite as bad. Maybe I’m missing something, but “best case” and “worst case” are perfectly normal terms in this context, and the fact that 94% of the cohort got the intended answer seems telling. Of course “resource requirement” is a bizarre phrasing, but not that big an issue in itself.

    Q16 is just a natural consequence of the subject’s particularly clumsy treatment of this topic.

    1. Thanks Nondescript. What was the DNA computing question? And, perhaps this is a naive question, but if the Algorithmics exams are so predictably infested with error how has the VCAA not been hammered for it?

      (Just to make the distinction, Methods and Specialist don’t have *that* many errors, or at least not that many large ones. The main problem is that Methods and Specialist exams, and syllabi, are ludicrous. That’s perfectly hammerable, but the whole VCE maths culture is so weak, most teachers and students are not even aware of the ludicrousness.)

      Your point on Q2 is the clear howler, which I think is what prompted tjrb’s email to me. Your comments on Q6 seem pretty much the same as Mister Combo’s, although I’ll have to look carefully. The issue tjrb had with Q12 and Q16 was, I think, pretty minor; Mister Combo missed it on a first reading, but agreed when the issue was pointed out. That’s not to say, of course, that there aren’t other issues.

      1. Well, IMO, the whole discussion of computational complexity in Algorithmics is pretty awful. The study design, the exam questions, the assessments… and it was painfully clear to me that my particular teacher had absolutely no idea what they were talking about. And from conversations with other students, it seemed like other teachers didn’t either. I think it’s a subject that tries to be too many things and, in typical VCAA fashion, succeeds at almost none of them.

        As to errors, IIRC the enrolment for Algorithmics is only around 100 students, and when I did it last year, most students/teachers I spoke to had never even heard of the subject – so there’s simply not much attention paid to it (seemingly by VCAA too).

        1. Yeah, my teachers were at one point very clearly confused about the differences between best, worst and average case complexity, and between big O, big Theta and big Omega. This is not helped by the field’s general tendency to use big O when it really means big Theta.

          I think you’ve articulated the main fundamental issue with the subject. At an outline level, it tries to address an impressively ambitious spread of concepts, but without the appropriate depth or rigorous grounding to actually succeed (particularly in the more theoretical topics). It doesn’t even cover a rigorous definition of what asymptotic complexity really means.

      2. The DNA computing question was Q11 of Section B on the 2015 exam: 8 marks, with a state average of 2.9. The main issue, I think, was the overly specific (and unstated, of course) requirements for full marks, in a question with such a high mark value, not to mention the absolute vagueness of the phrase “a threat to computer security”. It was presented to my cohort as an example of the kind of oddness one might expect from the exams.

        To be clear, when I say “a history”, I just mean that you can point to a reasonable set of examples. I wouldn’t say the papers are _infested_ with wrongness, but given how small the subject is (as far as I know, only two schools teach it) and how relatively new, I would suspect that the resources and expertise directed at its exams are perhaps not as significant as for some others, leading to the kind of issues we see here. And, of course, that small cohort would contribute to a lack of outcry. The bizarre-but-technically-not-incorrect (like the DNA computing question) is more common than the totally wrong.

        Ah, is the issue with Q12 the fact that \Theta(n \log n + n) should really be \Theta(n \log n)?
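
        If so, that simplification is at least immediate, since n = O(n \log n):

        \[
        \Theta(n \log n + n) \;=\; \Theta(n \log n).
        \]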

        1. Thanks, nondescript. Maybe four dodgy MCQ on one exam isn’t an “infestation”, but if they were termites I’d definitely be calling the bug man.

          That wasn’t tjrb’s issue with Q12, but I’m not saying it’s not an issue. (These questions, except for the Q2 howler, are outside of my world, and I haven’t tried to decipher them yet.)

          1. Hm well I’ll keep guessing. Is it the fact that it is specified nowhere that A, B, and C solve decision problems?

            Or perhaps the fact that complexity classes apply to problems, not specific algorithms?

    2. My issue with the words “best case” and “worst case” is the following.

      In complexity theory, yes, these exist as jargon. We use “best case” for bounds from below, and “worst case” for bounds from above. There are some other standard ones too.

      When talking about Big O notation, it is a bound from above. That means it is automatically the worst case. It is just redundant to say “worst case O(whatever)”. I hope they don’t use that when teaching the material.

      When talking about Big Theta notation, that’s a bound from above and below. For that one, they just say “Theta(2^n) best case time”. What is that supposed to mean? It appears to be saying that the algorithm is not bounded on both sides by a constant times 2^n, which is wrong.

      For the third point, it’s just redundant (like the first).
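
      For reference, the definitions I have in mind are the standard ones, for functions f, g from the naturals to the non-negative reals:

      \[
      \begin{aligned}
      f = O(g) &\iff \exists\, c > 0,\ n_0 \ \text{such that}\ f(n) \le c\, g(n) \ \text{for all } n \ge n_0,\\
      f = \Omega(g) &\iff \exists\, c > 0,\ n_0 \ \text{such that}\ f(n) \ge c\, g(n) \ \text{for all } n \ge n_0,\\
      f = \Theta(g) &\iff f = O(g) \ \text{and}\ f = \Omega(g).
      \end{aligned}
      \]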

      This all goes back to my amazement that whoever wrote the exam (and checked it?) doesn’t even know the very basics of complexity.

      1. My understanding of the terms is that they are somewhat orthogonal categories. Big O/Theta/Omega refers to an asymptotic bound on a particular function, whereas worst and best case refer to particular characteristics or situations which may produce different behavior, i.e., entirely different underlying functions. You can see this both in the Wikipedia pages for common algorithms and in every algorithms textbook I’ve encountered thus far.

        Worst-case O(whatever) means that the worst-case behavior of the algorithm is described by some function, which has that particular upper bound.

        Similarly, best-case Theta(2^n) means that the function describing the best-case behavior is bounded above and below by constant multiples of 2^n. It suggests that the worst-case behavior might actually be a completely different function.

        For a real example, quicksort is usually described as having O(n^2) worst-case time complexity, and O(n \log n) average-case and best-case complexity (with a sensible choice of pivots).

        I am given to understand that this is the common definition. For example, it is used this way in Introduction to Algorithms (CLRS — a pretty canonical first-year algorithms text), which describes in chapter 3 “merge sort, with its \Theta(n \log n) worst-case running time, beats insertion sort, whose worst-case running time is \Theta(n^2).”
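
        If a toy demonstration helps, here is a rough Python sketch of my own (nothing official): a naive first-element-pivot quicksort, where already-sorted input realises the quadratic worst case and shuffled input sits near n \log n.

        ```python
        import random

        def quicksort(a):
            """Naive quicksort, first element as pivot; returns (sorted list, comparison count)."""
            if len(a) <= 1:
                return a, 0
            pivot, rest = a[0], a[1:]
            left = [x for x in rest if x < pivot]
            right = [x for x in rest if x >= pivot]
            sorted_left, c_left = quicksort(left)
            sorted_right, c_right = quicksort(right)
            # Count one comparison per element partitioned, as a single partition pass would make.
            return sorted_left + [pivot] + sorted_right, c_left + c_right + len(rest)

        n = 300
        already_sorted = list(range(n))          # worst case for this pivot choice
        shuffled = random.sample(range(n), n)    # a typical case

        print(quicksort(already_sorted)[1])      # 300*299/2 = 44850 comparisons
        print(quicksort(shuffled)[1])            # usually a few thousand, around n log n
        ```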

        1. That’s interesting, and worrying. I still think it is redundant, even with this take on the definitions. For example, you’d say O(quicksort) = O(n^2), right? So then saying “quicksort has O(n^2) worst case complexity” is redundant (like they did in the question).

          As an aside, this is not the way I’ve seen the terms used in talks I’ve been to on complexity, and I think there is a real issue with this kind of definition: it is not algorithm-independent. What is “best” data for one sorting algorithm is not “best” for another. How big should that category be allowed to be? I can make the fastest sorting algorithm in the world; watch this. My sorting algorithm has O(1) best-case complexity. My “best” data includes all lists that are already sorted. My algorithm returns the input in the best case. In all other cases, hey, it can do bubble sort. I already have the fastest algorithm, so I don’t care.
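
          Here it is as a throwaway Python sketch (my own toy code; strictly, even the sortedness check is linear, so the honest “best case” is O(n) rather than O(1), but the point about hand-picking your “best” inputs stands):

          ```python
          def worlds_fastest_sort(a):
              """Contrived sorter whose 'best case' is exactly the already-sorted inputs."""
              # Check whether the input is already sorted (this check is itself linear).
              if all(a[i] <= a[i + 1] for i in range(len(a) - 1)):
                  return list(a)  # "best case": hand the input straight back
              # Otherwise fall back to bubble sort: O(n^2).
              b = list(a)
              for i in range(len(b)):
                  for j in range(len(b) - 1 - i):
                      if b[j] > b[j + 1]:
                          b[j], b[j + 1] = b[j + 1], b[j]
              return b
          ```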

          Of course this is a contrived example, but you can see the issues that would emerge when you use these notions to compare a new algorithm to an existing one. That’s why the definitions of big O etc are (as far as I knew) understood without reference to an algorithm-dependent space of data. (If you take a space of data that is uniform across algorithms, that on the other hand makes sense. What I have seen in active use is this, where a researcher might say that their sorting algorithm on a certain subspace is big Theta of whatever.)

          I’m not an expert in the field, but I suppose this is why the terminology isn’t so useful and why I haven’t seen it used very often.

          Maybe they do it like this to give students a sense of the “different performance across different data sets” idea that *is* common practice. But I think this is a very clumsy way to do it. Better to just say explicitly which data set you are running your algorithm on, instead of saying “best case”.

          The exception to all of this, of course, is “average case”. That notion makes sense for any data set, and it is defined, at least as far as I am aware, using distributions, which seems way too advanced for a school student. How are they supposed to make technical sense of the term at all?
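
          (The usual definition, as far as I know, is to fix a distribution \mathcal{D}_n on the inputs of size n and take the expected running time,

          \[
          T_{\mathrm{avg}}(n) \;=\; \mathbb{E}_{x \sim \mathcal{D}_n}\big[ T(x) \big],
          \]

          most commonly with \mathcal{D}_n uniform over all inputs of size n.)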

          In practice you are given an algorithm A on a data set D, and you can talk about it being big O or big Theta on this data set. You can also talk about the average being big O or big Theta (with possibly different estimates). It’s really nasty lumping in this “best” and “worst” case stuff with the average, making it look like they are similar when they aren’t at all.

          Makes me sad.

  3. I know almost nothing about Algorithmics, but I am curious to learn more.

    What sort of background is expected of teachers? Teachers with a degree in computer science must be harder to find than teachers with a degree in mathematics.

    Do any Master of Teaching programs offer this as a method?

    Is Algorithmics really another mathematics subject in disguise?

    Could I complete my VCE with Specialist Mathematics, Mathematical Methods, Further Mathematics, Algorithmics, and English?

    1. > What sort of background is expected of teachers? Teachers with a degree in computer science must be harder to find than teachers with a degree in mathematics.

      Speaking as a student who undertook the subject: at least something relevant to computer science, along with a dash of mathematics. Landau notation is poorly taught, even at university level, and someone who knows how to program, teach mathematics and problem-solve would be good for the subject. This subject deals with the theory of computer science, which is heavily rooted in mathematics (Turing machines, graph theory, the correctness of algorithms, etc.). My friends have told me horror stories about “IT” teachers attempting to teach coding, where the end result leaves a lot to be desired. Is Algorithmics another maths subject in disguise? I suppose there was a bit of overlap when I studied it alongside SM, but the subject had its differences from a mathematics unit.

      > Could I complete my VCE with Specialist Mathematics, Mathematical Methods, Further Mathematics, Algorithmics, and English?

      This is the route I took (+- a subject). I undertook this exam (2018) and then proceeded to do SM, MM, Physics, MUEP Maths and English. So yeah, very possible.

  4. Re the questions:

    Q2 is a very, very large oversimplification of what is being said. “Hard” and “easy” problems is a strange descriptor for feasible computability, which is itself a vague notion. Of course, as the others have pointed out, the wording is lousy to the point where the question loses meaning.

    Q6 refers to the List ADT, which comes with a specific set of operations (indexing, head, tail and the like), and the Dictionary ADT. The implementation may be very different from what one might assume is a sensible implementation for each data structure. The typical example is nodes linked to each other (a linked list) versus a contiguous region of memory representing an array. And that’s not the end of it: one could do something ridiculous and still implement a List with the correct specification. The point is, the implementation determines the complexity of the operations, not the ADT itself. On a pretty funny side note, this reminds me of a uni coding assignment where we had to iterate over a hash table (similar to a dictionary), meaning we effectively accessed the elements with an integer index…
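
    To make the point concrete, here is a rough Python sketch of my own (nothing to do with the study design): the same List operations sitting over an array and over linked nodes, with very different indexing costs.

    ```python
    class ArrayList:
        """List operations over a contiguous array: get(i) is O(1)."""
        def __init__(self):
            self._data = []

        def append(self, x):
            self._data.append(x)

        def get(self, i):
            return self._data[i]      # constant-time index arithmetic


    class LinkedList:
        """The same operations over linked nodes: get(i) is O(n)."""
        class _Node:
            def __init__(self, value, nxt=None):
                self.value, self.next = value, nxt

        def __init__(self):
            self._head = None

        def append(self, x):
            node = self._Node(x)
            if self._head is None:
                self._head = node
                return
            cur = self._head
            while cur.next is not None:
                cur = cur.next
            cur.next = node

        def get(self, i):
            cur = self._head
            for _ in range(i):        # walk i links: linear in i
                cur = cur.next
            return cur.value
    ```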

    While we’re at it, Algorithmics was supposedly made with Monash University and UniMelb. Take that as you will…

    1. Thanks, Sai. Your point about Monash and Unimelb being involved in creating the subject is important, and it would be good to know what (and whether) they think. Involving academics, however, is no magic wand. Plenty of the “mathematicians” at these unis are fifth rate, and I would assume the same is true in all disciplines. When I call for mathematicians to be involved in VCE, I always make it a call for competent and attentive mathematicians.
