I'll switch to bools for the calculation flag in my memoization system. I weighed the pros and cons, but I can't find any real disadvantage except higher memory consumption. The bitset stuff requires just too many operations and data-type size dependencies, and thus even more operations to make up for them. In the end it doesn't solve any problem well, it's rather insufficient. So I'll stick with normal bools instead, that's OK. But I didn't give the idea up completely. The idea of converting a 0...1 range into a number that's splittable and thus sub-indexable is just too interesting. Think about an array of pointers to other arrays: you have your index variable with a set number of bits, and you split it into two parts - one to identify which array pointer to choose, and the other as the index into the pointed-to array. If you don't allocate all arrays up front, you can allocate them on the fly and thus expand on demand. That'd be useful for stuff with a HUGE range of possible values but only a few actually used. So splitting it into a bazillion of arrays can save a lot of memory you'd never need to allocate at all. I like the concept and its function, so I'll first make a simple memoization array and then continue integrating the technique I just explained. It's somehow the "missing link" I was looking for. It doesn't consume lots of operations, and only the memory you really need. It's not as fast as a fully preallocated array, but smaller, so it stands between realtime calculation and a memoized lookup table.
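To make the split-index idea concrete, here's a minimal sketch of what I mean, assuming a 16-bit index split into an 8-bit page number and an 8-bit offset (the class and method names are just placeholders I made up for illustration):

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <memory>

// Sketch of an on-demand paged array: the index variable is split into
// two parts - the upper bits choose which sub-array pointer to follow,
// the lower bits index into that sub-array. Sub-arrays are only
// allocated when something is actually written, so a HUGE index range
// costs almost nothing as long as only a few values are used.
class PagedArray {
public:
    double get(std::uint16_t index) const {
        const std::size_t page   = index >> 8;    // upper 8 bits: which array
        const std::size_t offset = index & 0xFF;  // lower 8 bits: index in it
        if (!pages_[page]) return 0.0;            // never allocated -> default
        return (*pages_[page])[offset];
    }

    void set(std::uint16_t index, double value) {
        const std::size_t page   = index >> 8;
        const std::size_t offset = index & 0xFF;
        if (!pages_[page])                        // allocate this page on demand
            pages_[page] = std::make_unique<std::array<double, 256>>();
        (*pages_[page])[offset] = value;
    }

private:
    // 256 page pointers, all initially null; memory grows only as used.
    std::array<std::unique_ptr<std::array<double, 256>>, 256> pages_{};
};
```

Splitting is just a shift and a mask, so the lookup only costs a couple of extra operations compared to a flat array - that's the "stands between realtime calculation and a lookup table" trade-off.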
In order to integrate it as efficiently as possible, I think I'll need to make a set of classes with mathematical functions that share the same interface but differ in how the values get calculated. Number one would be the C function variant: it just calls the functions you'd use normally, nothing else to be done. Then we get the lookup-table-based variants, which use the C functions to precalculate their tables (and other things like factorial and power functions, up to a previously set range). The last one would be the memoized variant, again using the C functions for its calculations. I dropped my previous plan of writing Taylor approximations because it's simply a boring thing to do if you precalculate one way or the other anyway. It doesn't really matter which one, EXCEPT that by using C's standard functions you keep the possibility of the compiler optimizing your code with special assembler instructions replacing the original function calls. I learned a while ago that there are actually sine and cosine instructions built right into processors. So it's in all cases better to use these and not your custom ones. Same for square roots and everything else. May the powers I don't know be with me, cause they do more than a lazy nature like mine.