A while ago, Wikipedia brought a technique called memoization to my attention. I've found it fascinating ever since, but never really had an idea of whether it would be useful. Memoization, in general, means that when a function always returns the same result for the same input parameters, you store each result after calculating it, and before calculating it again you check whether you've already done so. At first I wondered whether it's just a small evolutionary step beyond precalculation, hardly seeing any case where it would be useful. On the one hand, you can always perform the required calculation again. That doesn't buy you anything, but depending on the calculations involved, it can be faster than looking up the result in the storage you put it in before (usually some kind of hash table). On the other hand there is precalculation: depending on what you calculate, this can be very, very slow up front. You get immediate access to the results, but depending on how many parameters you need to feed your storage with, you end up with a not insignificant amount of memory to allocate and access.
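To make the idea concrete, here is a minimal sketch of memoization in Python (the `memoize` decorator and `square` function are just my own illustration): results are stored in a dictionary keyed by the input, and the function body only runs on a cache miss.

```python
def memoize(func):
    """Wrap a single-argument function so each result is calculated once."""
    cache = {}

    def wrapper(n):
        if n not in cache:       # check whether we already calculated it
            cache[n] = func(n)   # only calculate on a cache miss
        return cache[n]

    return wrapper

@memoize
def square(n):
    return n * n

print(square(12))  # first call: calculated and stored
print(square(12))  # second call: looked up from the cache
```

Note the trade-off from above: the second call avoids the calculation, but still pays for the dictionary lookup, which may or may not be cheaper than just recomputing.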
And that's where memoization kicks in, like a mix of both, with equally many advantages and disadvantages. First, you don't need to take the time for precalculation. That's a bonus, since each additional parameter would blow up the problem and slow down precalculation even more. When calculating normals on demand, for example, you may end up with a million polygons or so, and then you'd have to wait the whole time, generating annoying delays you could spend otherwise. Using memoization, we don't need to wait: we calculate on demand and look the results up later to avoid recalculation. Though depending on what storage type you use (a hash table, a plain array with one slot per possible value, and so on), random access can be slower, since we also need to compute the position inside the storage. If you use a plain array instead of a hash table, you can count the number of calculations already done and decide whether to call the memoizer or read directly from the array. So if you have to calculate n values and the array is already full, you don't need to call the memoizer every time; instead you decide once before a whole block of accesses and "brute-force" all reads inside that block. Further splitting could result in segmented arrays, though you only benefit in special cases where you really only access specific portions of the array. Going even further, you could track the average number of cells precalculated over the last n accesses: if that average stops changing, you can fill in all remaining cells to complete the array, so that further accesses become faster on average, provided the function accessing the memoization array checks for array completeness first.
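The array-based variant with a fill counter, a completeness check, and a brute-force fill step could look like this sketch (the `ArrayMemoizer` class and its method names are my own invention for illustration, not an established API):

```python
class ArrayMemoizer:
    """Memoize func(i) for i in [0, size) in a plain array, tracking how
    many cells are filled so callers can skip the per-access check."""

    def __init__(self, func, size):
        self.func = func
        self.values = [None] * size
        self.filled = 0  # count of calculations already done

    def get(self, i):
        """Per-access path: check the cell, calculate on a miss."""
        if self.values[i] is None:
            self.values[i] = self.func(i)
            self.filled += 1
        return self.values[i]

    def is_complete(self):
        return self.filled == len(self.values)

    def fill_remaining(self):
        """Brute-force the missing cells, e.g. once the fill rate stagnates."""
        for i, v in enumerate(self.values):
            if v is None:
                self.values[i] = self.func(i)
        self.filled = len(self.values)


memo = ArrayMemoizer(lambda i: i * i, 8)
memo.get(3)                # calculated on demand
if not memo.is_complete():
    memo.fill_remaining()  # complete the array once...
fast = memo.values         # ...then read it directly, skipping the check
```

This is the "decide before the block" idea from above: after `is_complete()` returns true once, a tight loop can index `memo.values` directly instead of calling `get` every time.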
So yeah, it's an interesting technique to reduce precalculation delays and iteratively fill huge amounts of memory for random access, as long as you choose the right ways of storing and accessing. It won't always be easy to determine the right combination, but every bit helps. The more you check before you proceed in the form of further "brute force" blocks, the more complex but parallelizable it becomes, and then effectively faster.
However, another good use for it is caching search results. That's when you search for specific strings or even simple numbers that have no efficiently precalculatable range. That speeds things up a lot, especially because string comparisons are veeery slow compared to comparing simple numbers. I'm trying to implement a versatile system for that; I'm sure it will help in further assignments or even in games! I can imagine a lot of cases where I'd have really heavy calculations that I don't use all the time, yet which are still too slow to do in realtime. And well, precalculation isn't always the most useful solution in this case, even if it ends up serving the data management system rather than the game itself. There is surely a place for it; I trust it will prove useful eventually. Plus, I'm interested in having a simple technique I can explain in one or two words to my lecturers. I remember this semester was full of explanations because I don't use standard technology. In the end it was faster and more versatile, otherwise I wouldn't do it!
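As a sketch of the search-result case (the record list and `find_all` function are made up for illustration): the expensive part is comparing the needle against every stored string, so we pay for that scan once per distinct query and answer repeats from a dictionary.

```python
# Hypothetical data set; in practice this would be whatever you search.
records = ["apple", "banana", "cherry", "banana"]

search_cache = {}

def find_all(needle):
    """Return the indices of all records equal to needle, memoized."""
    if needle not in search_cache:
        # slow path: one string comparison per record
        search_cache[needle] = [i for i, r in enumerate(records)
                                if r == needle]
    return search_cache[needle]

print(find_all("banana"))  # scanned once, returns [1, 3]
print(find_all("banana"))  # served from the cache, no comparisons
```

The cache lookup hashes the needle once instead of comparing it against every record, which is exactly why this pays off when string comparisons dominate.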