I agree, but if you look at my slowest implementation, it looks like O(n log n):

```
const nextCollatz = n => (n % 2 ? 3 * n + 1 : n / 2)
const naturals = n => [...Array(n + 1).keys()].slice(1)
const max = s => s.lengths[s.maxIdx]
const collatzLengths = n =>
  naturals(n).reduce(
    (state, current) => {
      const calcLength = idx => {
        if (idx === 1) return 1
        if (idx < current) return state.lengths[idx]
        return calcLength(nextCollatz(idx)) + 1
      }
      const length = calcLength(current)
      const lengths = state.lengths.concat(length)
      const maxIdx = length > max(state) ? current : state.maxIdx
      return { lengths, maxIdx }
    },
    { lengths: [0], maxIdx: 0 }
  )
const longestCollatz = n => {
  const { maxIdx, lengths } = collatzLengths(n)
  return { max: maxIdx, length: lengths[maxIdx] }
}
const result = longestCollatz(1000000)
console.log(`Longest sequence at ${result.max}, length ${result.length}`)
```

All the complexity is in this line:

```
// grows the cache. Costs MUCH more than it saves in lookups
const lengths = state.lengths.concat(length)
// replace with the following to get around 1000 times faster for 1000000:
// const lengths = state.lengths; lengths.push(length)
```
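To see why that one line dominates, here is a small sketch (my own micro-benchmark, not the original code; names and `N` are illustrative and timings vary by engine) contrasting the two strategies for growing an array:

```javascript
// Illustrative micro-benchmark: copy-on-append vs. in-place push.
const N = 10000

let t0 = Date.now()
let copied = []
// concat copies all i existing elements on each append: O(N^2) total
for (let i = 0; i < N; i++) copied = copied.concat(i)
const concatMs = Date.now() - t0

t0 = Date.now()
const pushed = []
// push is amortized O(1) per append: O(N) total
for (let i = 0; i < N; i++) pushed.push(i)
const pushMs = Date.now() - t0

console.log(`concat: ${concatMs} ms, push: ${pushMs} ms`)
```

Both loops build the same array; only the growth strategy differs, and the gap widens quadratically as N grows.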

---

100% agree with this point of view

---

IMHO the algorithm is everything, not just the top-level sketch. That's why completely different algorithms have emerged for sorting in memory, sorting data on disk, and sorting data on tape: the algorithm takes into account whether access to a single element is constant, linear, or even worse.

Just my point of view, but that might be totally off.

---

That's what computer scientists like to say. But different hardware can be several orders of magnitude faster.

In my case the hardware didn't change, only the code.

version 1 (not the actual code)

`lengths.push(x)`

version 2 (also modified; the original was not mutating any data):

`lengths = [...lengths, x]`

Now, you could argue that since the array grows and I copy it on every append, this becomes quadratic, but only if you give up the pretense that only the algorithm matters and not the implementation. BTW, the vanilla quadratic version ran in around 2.7 s, so this O(n log n)-turned-O(n^2) version was a full 1000 times slower than that
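The two versions boil down to two ways of appending to the cache. A minimal sketch (the helper names here are mine, purely for illustration):

```javascript
// Version 1: mutate in place. Appending is amortized O(1),
// so n appends cost O(n) overall.
const appendMutable = (lengths, x) => {
  lengths.push(x)
  return lengths
}

// Version 2: copy-on-write. Each append copies the whole array,
// so n appends cost 1 + 2 + ... + n = O(n^2) overall.
const appendImmutable = (lengths, x) => [...lengths, x]

console.log(appendMutable([1, 2], 3))   // [1, 2, 3]
console.log(appendImmutable([1, 2], 3)) // [1, 2, 3] (new array; input untouched)
```

Same result either way; the difference is purely in how much work each append does as the cache grows.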

---

I think the O() notation tells you whether a faster computer can help you. If you have a high-complexity algorithm, it doesn't make much sense to improve your hardware.

On the other hand, with such a big difference in your timings, I would think the algorithm is not the same, since memory handling/moving is part of the algorithm, at least if you look at sorting algorithms.

Cheers,

Peter

That depends on the algorithm.

And O is not everything: my fastest solution runs in 50 ms, the slowest in about an hour (on a Core i9). And they use the same algorithm, except the latter uses immutable arrays and spends 99.9% of the time copying them around 🙂

---

Add

`// noprotect`

to your code on jsbin.com; it'll work now.

---

Thanks for the tip! I've removed the jsbin.com example, just in case ...
