Richard Skemp wrote, in “Relational Understanding and Instrumental Understanding,” about *faux amis*—those pesky words in other languages that *look like* words you are familiar with, but which mean something else entirely. Skemp argues that the word *understand* is like this—different people use it to mean completely different things. This leads to misunderstanding.

And so I fear it is with *the standard algorithm*.

I have heard it said that the use of this phrase (repeatedly) in the Common Core State Standards was a compromise (although I cannot find a source for this—leave any breadcrumbs you can find in the comments, won’t you?). The phrase would satisfy some parties who believe that the standard algorithm is an essential seawall against the encroaching fuzzy-math tide, while leaving the precise nature of *the standard algorithm* unspecified would appease those who argue that alternative algorithms are helpful in developing and maintaining children’s number sense.

But if a compromise owes its precise nature to the fact that different parties will interpret the terms of the compromise differently, has there really been a compromise? Have we really made an agreement when we disagree about its meaning?

### What is an algorithm?

Karen Fuson and Sybilla Beckmann, in their “Standard Algorithms in the Common Core State Standards,” cite a CCSSM Progressions document.

> In mathematics, an algorithm is defined by its steps, and not by the way those steps are recorded in writing.

Hyman Bass, in his *Teaching Children Mathematics* article “Computational Fluency, Algorithms, and Mathematical Proficiency: One Mathematician’s Perspective,” agrees.

> An algorithm consists of a precisely specified sequence of steps that will lead to a complete solution for a certain class of computational problems.

So far, so good. We have accord on the meaning of *algorithm.*

### What is the standard algorithm?

The definite article in the phrase *the standard algorithm* seems to be important to the alleged compromise I referred to.

Here, for example, is Hung-Hsi Wu on standard algorithms.

> [T]he essence of all four standard algorithms is the reduction of any whole number computation to the computation of single-digit numbers.

Wu states the following steps for the standard algorithm for multidigit multiplication.

> To compute, say, 826 × 73, take the digits of the second factor 73 individually, compute the two products with single-digit multipliers—i.e., 826 × 3 and 826 × 7—and, when adding them, shift the one involving the tens digit (i.e., 7) one digit to the left.

He explicitly allows for moving left-to-right, as well as inclusion of zeroes instead of *shifting*. But explicit attention to place value in the process of working the algorithm seems to be proscribed.
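Wu’s steps can be rendered as a short sketch (mine, not Wu’s; Python stands in for the paper-and-pencil record). One hedge: the line `a * digit` hides the single-digit arithmetic that Wu’s algorithm spells out, so the sketch only captures the bookkeeping of partial products and shifts.

```python
def standard_multiply(a: int, b: int) -> int:
    """Multiply a x b in the spirit of Wu's description: one partial
    product per digit of the second factor, each shifted by its place."""
    total = 0
    for place, digit_char in enumerate(reversed(str(b))):
        digit = int(digit_char)
        partial = a * digit             # e.g. 826 x 3, then 826 x 7
        total += partial * 10 ** place  # shift by the digit's place
    return total

print(standard_multiply(826, 73))  # 60298
```

Note that the partials are combined as we go, matching the right-to-left written record; Wu allows left-to-right as well, and the sum comes out the same either way.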

Contrast this with the following figure from Fuson and Beckmann.

This figure is labeled “Written methods for the standard multiplication algorithm, 2-digit × 2-digit.” Note in particular methods D (lower left) and F (upper right). Method D shows that we are thinking *6 × 9 tens* as we work the algorithm. Method F suggests that we are thinking *6 × 90* as we work.
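As a sketch of what methods D and F record (my own code, not from the paper): decompose both factors by place value and keep every partial product whole, so the *6 × 90* thinking stays visible and nothing is combined until the end.

```python
def partial_products(a: int, b: int) -> list[int]:
    """Decompose both factors by place value and list every partial
    product whole (e.g. 94 x 36 -> 4*6, 4*30, 90*6, 90*30)."""
    a_parts = [int(d) * 10 ** i for i, d in enumerate(reversed(str(a)))]
    b_parts = [int(d) * 10 ** i for i, d in enumerate(reversed(str(b)))]
    # skip zero digits; they contribute no partial product
    return [x * y for x in a_parts for y in b_parts if x and y]

products = partial_products(94, 36)
print(products)       # [24, 120, 540, 2700]
print(sum(products))  # 3384
```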

But wait. The lattice method is an example of *the standard algorithm*?

Recall that *an algorithm is defined by its steps*. In Wu’s standard algorithm, you may proceed from left to right, or from right to left; either is acceptable. The lattice has both left/right and up/down steps, and *you may do the single digit multiplication steps in absolutely any order*.
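That order-independence is easy to demonstrate in a sketch (mine, not from either paper). The grid is omitted; summing along the lattice diagonals amounts to combining by place value, which is how it is written below, and shuffling the single-digit steps never changes the result.

```python
import random

def lattice_multiply(a: int, b: int) -> int:
    """Lattice method, minus the grid: do every single-digit product
    in an arbitrary order, then combine by place value (the work the
    lattice diagonals do for you)."""
    a_digits = [int(d) for d in reversed(str(a))]  # index = place
    b_digits = [int(d) for d in reversed(str(b))]
    cells = [(i, j) for i in range(len(a_digits))
                    for j in range(len(b_digits))]
    random.shuffle(cells)  # the single-digit steps commute
    total = 0
    for i, j in cells:
        total += a_digits[i] * b_digits[j] * 10 ** (i + j)
    return total

print(lattice_multiply(94, 36))  # 3384
```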

I cannot imagine that Wu would count the lattice as a standard algorithm, and I seriously doubt he would count partial products (method D) in that category.

All of this got me thinking about whether there are any *non-standard* algorithms for multi-digit multiplication from the viewpoint that Fuson and Beckmann present. Pretty much every multiplication algorithm I know is in that Fuson and Beckmann figure. Every one except the Russian Peasant Algorithm, that is.
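For contrast, here is a sketch of the Russian Peasant Algorithm (my rendering). It proceeds by halving and doubling, with no base-ten place value anywhere, which is exactly why it falls outside every method in the figure.

```python
def russian_peasant(a: int, b: int) -> int:
    """Halve a and double b repeatedly, summing the b-values that
    sit beside an odd a. No base-ten place value is involved."""
    total = 0
    while a > 0:
        if a % 2 == 1:   # keep rows where the halving column is odd
            total += b
        a //= 2          # halve, discarding any remainder
        b *= 2           # double
    return total

print(russian_peasant(94, 36))  # 3384
```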

### An alternative

I have argued that the compromise of using *the standard algorithm* but not specifying *the standard algorithm* in the Common Core is problematic because different people mean different things by it. The lattice is explicitly included under *the standard algorithm* by Fuson and Beckmann, but our agreement on what constitutes an algorithm (*a precisely defined series of steps*) implies that the lattice constitutes a different algorithm from (say) partial products. Both cannot be *the* standard algorithm.

But here is an alternative. What if Common Core, instead of using the language of *the standard algorithm*, used the following construction: *an algorithm based on place-value decomposition*?

In this case, 5.NBT.B.5 would read:

> Fluently multiply multi-digit whole numbers using an algorithm based on place-value decomposition.

This construction would seem to include all of the algorithms in Fuson and Beckmann’s figure; it would make clear that the Russian Peasant Algorithm does not count; and it would be more transparent than *the standard algorithm*.

Until and unless I receive cease-and-desist notifications, I will go ahead and use this version in everything I do.

For your convenience, I have rephrased the various citations below. You can thank me later.

> 4.NBT.B.4 Fluently add and subtract multi-digit whole numbers using an algorithm based on place-value decomposition.

> 5.NBT.B.5 Fluently multiply multi-digit whole numbers using an algorithm based on place-value decomposition.

> 6.NS.B.2 Fluently divide multi-digit numbers using an algorithm based on place-value decomposition.

Thank you! Thank you! Thank you!

The mention of “the” standard algorithm in the CCSSM has bothered me since I first read it. Since most of the CCSSM content standards focus on conceptual understanding, I was surprised when those words were used. I figured that it was a cave-in to pressure from traditionalists (whatever that means).

I much prefer your wording and will follow your lead, using your version of 4.NBT.B.4, 5.NBT.B.5, and 6.NS.B.2. If “they” tell you to cease and desist, will you let me know?

@Christopher, what is your objection to saying that the lattice method is a manifestation of the standard algorithm? I studied all the items in that figure carefully, and they appear to me to be mathematically equivalent; i.e., each involves computing four products (9×6, 9×3, 4×6, 4×3) and adding the results, with attention to place value.

Random thought: I’ve seen Algebra teachers at my school use a version of the lattice method for teaching students how to multiply binomials — picture (x+4)(x+6) rather than (90+4)(30+6). What is especially neat about that way of writing the work down is that the like terms are pre-organized for you — just add along the diagonals. This is transparent when working with polynomials, but opaque when working with numbers. For that reason I would teach “the lattice” to students last.

If I were a K-5 math teacher, I would favor this order: 1) using unit cubes/blocks of ten/1000 cubes, then 2) using the very explicit area/array at the top left of the figure, and upon mastering that proceeding to 3) the method shown in Figure F, essentially an abbreviation of the previous method, then 4) doing partial products as shown in figure D, then (maybe) 5) showing the lattice method and discussing connections to #4.

Method E, which is closest to “the standard algorithm” as I was taught, is my least favorite. I hate the carries. And I feel that it is the least natural of the methods presented. No one thinks in terms of “94 times 6”—but we all know 9×6 and 4×6. Yes, students should develop an understanding that 94×36 means “94 sixes and 94 thirties,” but that understanding does not have to dictate the main algorithm we teach students.

As an aside, I will say that I have very little use for multi-digit algorithms as a high school teacher. (Note: I didn’t say they are not important, just that they are of little use in my high school classroom.) Most of the time, multi-digit arithmetic is not required by the problems I assign, and when it is, I tell students without hesitation to wheel out their calculators. This is exactly the sort of thing calculators are good for, in my judgment. I would consider it a poor use of my class time to watch them compute 94×36 by hand. My concerns are much more basic: the percentage of students who are able to 1) perform single-digit multiplication accurately and quickly in their heads, and 2) perform integer operations (-20 < x < 20) accurately and quickly in their heads is…less than 100%. Much less. I won’t even mention fractions…

Final thought: as an adult, on the rare occasions when I have to perform multi-digit arithmetic by hand, I favor method D (partial products). I can’t stand to carry, and, as I said, it feels unnatural to me to do so. But I’ve only started doing this because teaching algebra has given me additional insight into the nature of these algorithms. :-)

I agree about rewriting “standard algorithm” in the Common Core standards. I think using place value to decompose makes so much more sense. I have been working for my district to write 6th grade math curriculum and I just could not force teachers to teach what I call the “old school” algorithm for division. Partial quotient just makes more sense and helps students keep the value of the numbers in mind rather than simply doing a “digit dance” they really don’t understand.
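The partial-quotients idea Stephanie describes (subtract friendly multiples of the divisor, keeping whole-number values in view rather than dancing digit by digit) can be sketched in code. This is my own rough rendering; the choice of “friendly” chunks below, powers of ten times the divisor, is one convenient option among many.

```python
def partial_quotients(dividend: int, divisor: int):
    """Divide by repeatedly subtracting easy multiples of the divisor,
    recording each chunk of the quotient as a whole number."""
    chunks = []
    remainder = dividend
    while remainder >= divisor:
        # take the largest power-of-ten multiple that still fits
        chunk = 1
        while divisor * chunk * 10 <= remainder:
            chunk *= 10
        chunks.append(chunk)
        remainder -= divisor * chunk
    return sum(chunks), remainder, chunks

q, r, steps = partial_quotients(945, 7)
print(q, r)  # 135 0
print(steps)
```

The `steps` list is the record a student would keep in the margin; a child might instead pull out chunks like 100, 20, 15, and the algorithm still lands on the same quotient.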

@Stephanie: tell me more about “partial quotient” vs. the “old school algorithm.” I don’t know what you have in mind for either of these.

Cool post!

Opinion: I don’t like Wu’s take on standard algorithms. Take 23 * 4. Since he defines the standard algorithm as requiring use of only one-digit multiplication, wouldn’t he exclude an algorithm that sees this as 20*4 + 3*4 from the camp?

James, two things. First, on the lattice: if an algorithm is *defined by its steps*, the lattice seems to me to require different steps than the partial products algorithm. In partial products (as Michael points out), we multiply 20•4 in the process of calculating 23•4. In the lattice, we multiply 2•4. That’s a different step. In what Wu and I agree to be *the standard algorithm*, we combine multi-digit results of single-digit multiplications in the midst of our multiplying. In the partial products and lattice algorithms, the results of the single-digit multiplications are not combined until all of the multiplication is complete. The steps to describe these methods are different, so they are different algorithms by definition. If what we really mean to point to, though, is their similarity, then let’s name that similarity. All of these (unlike the Russian Peasant Algorithm) are *algorithms based on place-value decomposition*.

Second thing: you can learn more about the partial-quotients algorithm that Stephanie refers to here.

Michael, I actually agree with Wu on this point. Here’s a thought experiment (which could turn into a real one): Have adults solve the problem 94×36 (the one in the Fuson and Beckmann diagram) with paper and pencil. I venture a guess that 90% make the marks on paper that match Method H on p. 25 of the article. Then demonstrate partial products (Method D) and see whether they agree that this is the *same algorithm* with the *same steps* as the ones they just performed. Then do the same with the lattice.

@Michael: I think he might argue that 20×4 is really just 2×4. i.e. 20 is “2 groups of 10,” so the answer is 2×4 tens.

Chris, I plan to reveal the complete history of this just as soon as I’ve been dead for 10 years. In the meantime, I did want to point out that there is an intentional evolution of language 1.NBT.4 -> 2.NBT.7 -> 3.NBT.2 -> 4.NBT.4, going from “strategies based on place value, properties of operations, and/or the relationship between addition and subtraction” (1.NBT.4, 2.NBT.7) to “strategies and algorithms based on place value and the properties of operations, and/or the relationship between addition and subtraction” (3.NBT.2) to “the standard algorithm” (4.NBT.4). The phrase “algorithms based on place value and the properties of operations” was intended to capture what you are trying to capture with the phrase “algorithm based on place value decomposition” (I think). Note the distinction between strategies and algorithms here; strategies work in special situations, algorithms are general.

I agree with you that the lattice algorithm and the partial products algorithm are both algorithms based on place value and the properties of operations, but not the standard algorithm.

There is a similar evolution for multiplication and division, 3.NBT.3 -> 4.NBT.5 -> 5.NBT.5.


Thanks for stopping by, Prof. McCallum. I agree wholeheartedly with the equivalence of the two phrasings you point to.

This serves to make the original point, right? Namely that the *standard algorithm* compromise isn’t really a compromise at all. It’s a phrase that is quite clearly being taken to mean different things by different parties (yours as an example of one meaning, and Beckmann and Fuson’s as an example of a conflicting one, and all three of you are on the Progressions writing team!). This would seem to serve to obfuscate rather than to clarify. Unavoidable, I am sure. But regrettable nonetheless.




