
Map, then reduce

Added the Appendix section on .

I've been thinking about map and reduce lately. Nothing too erudite, though. Something I've realised is that it's sometimes too easy to conflate the two into a single reduce, when instead it might be clearer to have a very dumb map followed by a very dumb reduce.

Here's a simple example:

['a', 'b', 'c']
  .reduce((acc, k) => ({...acc, [k]: []}), {});
// => { a: [], b: [], c: [] }

It's not obvious at first, but that reduce is doing the job of a map as well as its own:

  1. Map: it generates new data for each element of the list (the empty arrays []).
  2. Reduce: it aggregates the data into a new piece of data (the resulting object).

These can be separated as follows:

['a', 'b', 'c']
  .map(k => [k, []])
  .reduce((acc, [k, v]) => ({...acc, [k]: v}), {});
// => { a: [], b: [], c: [] }

This is longer to write, but I think it makes the reduce more readable by turning it into a common idiom, a "pairs to object" reduction if you will. With that in mind, you can focus on the mapping function separately and better understand what's being produced.
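
As an aside, modern JavaScript ships this very idiom as the built-in Object.fromEntries, so the same split works without any library; a minimal sketch:

const pairs = ['a', 'b', 'c'].map(k => [k, []]);
// "pairs to object" via the built-in Object.fromEntries (ES2019)
Object.fromEntries(pairs);
// => { a: [], b: [], c: [] }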

Functional programming tools can help me explain this better. For example, using Ramda I could implement the above as follows:

const createLists = R.compose(
  R.fromPairs,
  R.map(k => [k, []])
);
createLists(['a', 'b', 'c']);
// => { a: [], b: [], c: [] }

With R.fromPairs, we have turned the "pairs to object" idiom into a single, self-describing function invocation. Now that's easy to read. Compare it to the Ramda version of the initial code:

const createLists = R.reduce((acc, k) => ({...acc, [k]: []}), {});
createLists(['a', 'b', 'c']);
// => { a: [], b: [], c: [] }

I think now the separation between map and reduce becomes more apparent, and so does the benefit of enforcing it.

Appendix

After I first published this post, I discussed this topic with my friend Rosario. He reminded me that map, filter, forEach, and all those fellows are just special cases of reduce. For example:

const reduce = (f, init, coll) => coll.reduce(f, init);
const map = (f, coll) => reduce((a, b) => a.concat([f(b)]), [], coll);
const filter = (f, coll) => reduce((a, b) => f(b) ? a.concat([b]) : a, [], coll);
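
The same trick covers forEach as well; one possible sketch, reusing the reduce defined above:

// forEach as a reduce that ignores the accumulator and calls f purely for its side effects
const forEach = (f, coll) => reduce((acc, b) => (f(b), acc), undefined, coll);
forEach(x => console.log(x), ['a', 'b', 'c']);
// logs 'a', 'b' and 'c'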

Thinking about this, I reached another conclusion: don't use reduce unless you must. It's possible that your library already has a function that does what you need, some other special case of reduce, just as Ramda's R.fromPairs covered my reduction needs above. Use that instead. It will be easier to read and understand, both by saving you a few braces and brackets and by providing a more descriptive name.
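
Ramda alone ships plenty of these; a couple of examples along the same lines as above:

// existing special cases of reduce, used instead of a hand-rolled one
R.countBy(word => word[0], ['apple', 'avocado', 'banana']);
// => { a: 2, b: 1 }
R.groupBy(word => word[0], ['apple', 'avocado', 'banana']);
// => { a: ['apple', 'avocado'], b: ['banana'] }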

And if you don't have such a reducer at hand, create one: a descriptively named function, tailored to your use case. Wrap the reduce in a named piece of code and use that instead of sticking it in a longer chain that may already be difficult enough to follow. Like that long sentence I just wrote.
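
For instance, a hypothetical totalBy helper along these lines keeps the reduce behind a descriptive name:

// hypothetical helper: the reduce lives behind a descriptive name
// rather than inline in the middle of a longer chain
const totalBy = (valueFn, coll) =>
  coll.reduce((sum, x) => sum + valueFn(x), 0);

totalBy(item => item.price, [{ price: 2 }, { price: 3 }]);
// => 5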