They are better.
You end up having to compare two-digit decimals to one-digit decimals.
My major objections now boil down to pedagogy, on which point I understand that I will make no progress with Khan Academy, so I won’t make the effort. I’ll leave that to Frank Noschese.
If you accept that people learn mathematics by doing lots of multiple choice exercises, then all I have left are technical details.
They are these:
- In the U.S., money is a good enough model to get students through two-digit decimals. It is not uncommon for children to be able to reason about two-digit decimals but fail to generalize to three- and four-digit decimals.
- These exercises are randomly generated, I assume, and the probability of drawing a pair where the two-digit decimal is larger than the one-digit decimal seems artificially low. As I ran through a bunch of them, I began to build a model in which I could (1) treat comparisons with the same number of decimal places as whole-number comparisons, and (2) claim that the one-digit decimal is larger. So I ran an experiment: twenty exercises using my model. I got 90%. (See the video of a repeat of this experiment; I don’t know that I did twenty that time, but I did a bunch and only got one wrong.)
- Related to this, there is no need to click through the hints. None of the decimals came out equal (i.e. no 0.1 vs. 0.10). So when I got an answer wrong, I just chose the other inequality. Pattern matching and process of elimination let me avoid instruction of any kind and still earn an A.
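A quick sanity check on that experiment: under a fair generator, the "one-digit decimal is larger" guess shouldn't score anywhere near 90%. The simulation below assumes a hypothetical uniform generator (Khan Academy's actual generator is unknown, which is rather the point) drawing a one-digit decimal 0.d against a two-digit decimal 0.xy.

```python
import random

def heuristic_picks_correctly():
    """One trial under an assumed uniform generator: a one-digit
    decimal 0.d vs. a two-digit decimal 0.xy. The heuristic always
    picks the one-digit decimal as larger; it is correct iff
    0.d > 0.xy, i.e. 10*d > xy. (Ties like 0.2 vs. 0.20 count as
    wrong, since the heuristic claims strictly larger.)"""
    d = random.randint(1, 9)     # one-digit decimal: 0.d
    xy = random.randint(10, 99)  # two-digit decimal: 0.xy
    return 10 * d > xy

trials = 20_000
accuracy = sum(heuristic_picks_correctly() for _ in range(trials)) / trials
print(f"{accuracy:.2f}")  # ~0.44 (exactly 4/9 in expectation)
```

Against a uniform generator the guess wins only about 44% of the time, so scoring 90% with it suggests the exercise generator is skewed heavily toward pairs where the one-digit decimal happens to be larger.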
See, here’s the thing. Teaching requires a mix of knowledge and assumptions on which to base decisions. When everything is pre-programmed, creating meaningful instruction requires deeper knowledge, not just more analytic data.
Carnegie Mellon is working on a deep model for diagnosing student misconceptions with decimals [pdf] (and presumably in many other domains). Again there’s the pedagogy thing, but I am impressed with the effort to build a solid theoretical foundation for their work. Here is a sample from the taxonomy of decimal misconceptions they have developed.
Without that deep knowledge base, all that’s left are assumptions. Which is fine, as long as the assumptions are not flawed.