
You're right.. all of these functions require more memory. "Allocate" is the wrong word... let's use shallow vs deep (significantly different expense). Pointers use a bit more memory than the original; shallow copies use more, but only the size of a primitive (best case) or a pointer to an object (worst case.. often the same size); deep copies can use much, much more.

> arr.slice(10, 20).filter(el => el < 10).map(el => el + 5)

  slice  = shallow
  filter = shallow
  map    = deep
(2s+d)

As you point out, many engines optimise shallow copies with copy-on-write (zero cost).. so just 1 allocation, at map.

> arr.values().drop(10).take(10).filter(el => el < 10).map(el => el + 5).toArray()

  values = deep
  drop   = shallow
  take   = shallow
  filter = shallow
  map    = deep
  toArray= deep
(3s+3d) .. significantly worse performance

Note the shallow copy:

  const arr=[{a:{b:"c"}},{d:"e"}];
  const [s]=arr.slice(0,1);
  s.a.b="f";
  console.log(arr);//=[{a:{b:"f"}},{d:"e"}]


You’re counting completely the wrong thing. Shallow versus deep is about the items inside, but we care about the costs of creating the collection itself. As far as structured clones are concerned, none of the operations we’re talking about are deep. At best, it’s just the wrong word to use. (Example: if you were going to call it anything, you’d call .map(x => x) shallow.)

Array:

• Array.prototype.slice may be expensive (it creates a collection, though an engine may be able to implement it in a way you can't observe).

• Array.prototype.filter is expensive (it creates a collection).

• Array.prototype.map is expensive (it creates a collection).

So you have two or three expensive operations, going through up to the entire list (depending on how much slice and filter trim out) two or three times, creating an intermediate list at each step.
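You can see those intermediates by unrolling the chain. A sketch with made-up sample data:

```javascript
const arr = Array.from({ length: 30 }, (_, i) => i % 15);

// Each step allocates a fresh array before the next step begins.
const step1 = arr.slice(10, 20);           // new array: [10,11,12,13,14,0,1,2,3,4]
const step2 = step1.filter(el => el < 10); // new array: [0,1,2,3,4]
const step3 = step2.map(el => el + 5);     // new array: [5,6,7,8,9]
```

The one-liner `arr.slice(10, 20).filter(el => el < 10).map(el => el + 5)` does exactly this; chaining just hides the intermediate arrays from view, it doesn't avoid them.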

Iterator:

• Array.prototype.values is cheap, creating a lightweight iterator object.

• Iterator.prototype.drop is cheap, creating a lightweight iterator helper object.

• Iterator.prototype.take is cheap, creating a lightweight iterator helper object.

• Iterator.prototype.filter is cheap, creating a lightweight iterator helper object.

• Iterator.prototype.map is cheap, creating a lightweight iterator helper object.

• Iterator.prototype.toArray is the thing that actually drives everything. Now you drive the iterator chain through, going through the list only once, applying each filter or transformation as you go, and only doing one expensive allocation of a new array.

In the end, in terms of time complexity, both are O(n), but the array version has a much higher coefficient on that n. For small inputs, arrays may be faster. For large inputs, iterators will be faster.



