2.21.2011

Awesome

After a while of excessive grumpiness and translating math algorithms into useful functions, I discovered that it's possible to take a look at all the standard C functions defined in gcc's libc. It's good to know how it approximates sin, exp, cos and so on. I thought I could find a nice algorithm to implement that's a bit quicker than the usual approximations, but that's a total myth: it isn't much faster except with complete precalculation. And that's where the magic kicks in.

As I explained in a previous post, a system of memoized values of mathematically complex, CPU-consuming functions could be a good tradeoff between the speed of precalculation and flexibility at runtime. Yesterday I came up with a really interesting concept: a dynamically expanding storage that can deliver and calculate values at more or less resolution. For example, you could precalculate 360 sine values but need more for special cases - no problem with my solution, it simply fills the missing gaps on the fly. It requires some extra checks and jumps, though. At the moment, accessing an already calculated value takes one multiplication, one division, some null checks depending on how deep you want to dig, a few bit operations and finally an array access. The little bit of basic math you need there is also required for normal lookup access, so that overhead is there anyway; what comes on top is the series of null checks and bit operations, and those are really not the slowest operations around. I guess it's much faster to use this system than to calculate everything in realtime... Plus you don't need to build HUGE tables for all the values you could possibly need during execution (including all the values you simply never access). Instead, you can start with a small resolution and expand it as much as you want, if you want. (There's a rough sketch of what I mean at the end of this post.)

Step by step you notice that you can chain all kinds of operations in a way that speeds up almost every calculation you'd normally do with lookup tables. Yes, a plain table lookup is faster, but a 1000x1000x1000 cube of formula-based data is a bit too much to calculate or even store. So let's assume this cube is a three-dimensional, highly detailed wood texture you can use to create dynamic wood surfaces for furniture, cut trees etc. You don't need ALL the elements of the cube, just the ones affecting the polygon areas. So why precalculate it beforehand, at low or high resolution, if you can generate exactly the details you want while accessing them later? That's the idea, and I really need to flesh it out a bit more. (The second sketch below shows how such a lazily filled cube could look.)

The basics are there and it should work fine in real life, though I haven't yet tested whether the numerical base for the access is 100% correct and safe. That's probably no big problem anymore. I'm glad I've discovered this solution, and I won't share it unless I decide to make all my code GPL. Of course I need to test it first, but I'm certain it will work. I can imagine it looking awesome in a video game made only of polygons and vertex colors, and it's also something I could use in my raytraced roguelike if I ever want to continue developing it. I can also imagine altering the system to make the memoization even more flexible: feed it completely different data than just numbers and let it try to approximate whatever you gave it. That's highly experimental at the current stage, so I shouldn't praise it too much here. I know stuff can go wrong.
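To make the expandable table idea a bit more concrete, here is a minimal C sketch of how such a two-level, lazily filled sine table could look. The names (memo_sin, BUCKETS, SUB_BITS) and the exact layout are just illustrative assumptions, not my actual implementation; the point is the access pattern of multiply, divide, null check, bit operations and array access described above.

/* Sketch of a lazily expanded sine table: a coarse array of buckets,
 * where each bucket is only allocated and filled on first access.
 * All names and sizes here are hypothetical, chosen for illustration. */
#include <math.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define BUCKETS  360                 /* coarse resolution, one bucket per degree */
#define SUB_BITS 4                   /* each bucket holds 2^4 = 16 finer steps   */
#define SUB_SIZE (1 << SUB_BITS)

static double *bucket[BUCKETS];      /* NULL until a value in that range is needed */

/* Memoized sin(x) for x in [0, 2*pi), at BUCKETS*SUB_SIZE total resolution. */
double memo_sin(double x)
{
    double step = x / (2.0 * M_PI) * (BUCKETS * SUB_SIZE);  /* one *, one /   */
    unsigned idx = (unsigned)step % (BUCKETS * SUB_SIZE);
    unsigned b   = idx >> SUB_BITS;                 /* bucket index (bit op)  */
    unsigned s   = idx & (SUB_SIZE - 1);            /* slot inside bucket     */

    if (bucket[b] == NULL) {                        /* null check: expand here */
        bucket[b] = malloc(SUB_SIZE * sizeof(double));
        if (bucket[b] == NULL)
            return sin(x);                          /* fall back if out of memory */
        for (unsigned i = 0; i < SUB_SIZE; i++) {
            double t = (double)(b * SUB_SIZE + i) / (BUCKETS * SUB_SIZE);
            bucket[b][i] = sin(t * 2.0 * M_PI);
        }
    }
    return bucket[b][s];                            /* final array access     */
}

(Compile with gcc and link the math library, i.e. -lm.)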
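And a similarly rough sketch of the lazily filled cube: a virtual 1000x1000x1000 volume whose values come from a formula, split into small blocks that are only allocated and calculated when something actually reads them. The block size, the names and the placeholder "wood rings" formula are all stand-ins I made up for the example, not the real texture function.

/* Lazily evaluated 3D "texture cube": only the blocks that are actually
 * touched (e.g. by polygon areas) ever get calculated and stored.
 * Names, block size and the wood_formula() placeholder are assumptions. */
#include <math.h>
#include <stdlib.h>

#define N    1000                        /* virtual cube is N*N*N cells      */
#define BLK  16                          /* cells per block edge             */
#define NBLK ((N + BLK - 1) / BLK)       /* blocks per cube edge             */

typedef struct { float v[BLK][BLK][BLK]; } block_t;

static block_t *grid[NBLK][NBLK][NBLK];  /* all NULL until first touched     */

/* Stand-in for the expensive per-cell formula (fake concentric wood rings). */
static float wood_formula(int x, int y, int z)
{
    double r = sqrt((double)x * x + (double)y * y);
    return (float)(0.5 + 0.5 * sin(r * 0.25 + z * 0.01));
}

/* Value at (x, y, z), each in [0, N); blocks are filled on first access. */
float cube_at(int x, int y, int z)
{
    block_t **slot = &grid[x / BLK][y / BLK][z / BLK];

    if (*slot == NULL) {                 /* first access: calculate the block */
        *slot = malloc(sizeof(block_t));
        if (*slot == NULL)
            return wood_formula(x, y, z);            /* fall back, don't cache */
        int bx = (x / BLK) * BLK, by = (y / BLK) * BLK, bz = (z / BLK) * BLK;
        for (int i = 0; i < BLK; i++)
            for (int j = 0; j < BLK; j++)
                for (int k = 0; k < BLK; k++)
                    (*slot)->v[i][j][k] = wood_formula(bx + i, by + j, bz + k);
    }
    return (*slot)->v[x % BLK][y % BLK][z % BLK];    /* cached value          */
}

This way memory only grows with what the renderer actually samples, instead of the full N*N*N table.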
