Surprise, surprise

Raiding Wikipedia for more information about how to approximate sine and cosine, I discovered that I don't need to do that - modern processors already HAVE sine and cosine built-in! That's astounding, seriously! I didn't know that. And I didn't know that GCC is able to detect this and insert fast assembly for it. Just amazing, really, simply amazing! I didn't know, and I'm glad I don't have to implement a faster approximation. I simply need to use some special flags and everything's faster, like magic I guess. Wow. Simply wow! And the best part is that I can keep using C's standard library for that. It's good to know that this will work without any problems.

Geez, I already thought I'd need to start doing some CORDIC stuff (a weird rotation-based technique for calculating all kinds of things) or so. But that's just too much for my head, and I'll skip everything related to more complex theory if it isn't already in the form of an algorithm. Anyway, I wanted to start my memoization experiment, and what did I do? Read about how to make your own sine/cosine functions. Hmpf. I even implemented a pretty decent algorithm for Pi approximation (BBP, if someone's interested), but I really shouldn't use it, since it doesn't make much sense to calculate a constant if there's already one defined for me. I should really put more effort into learning all of GCC's command line options; it's worthwhile discovering more interesting things to make your program faster! But then, I wanted to make stuff on my own (it feels good to know that there's something NOT depending on special hardware!), so I can port to whatever thing is out there and be happy with it. Plus, the concept of my memoization idea is not to worry about one-time calculation, but rather about the access of already known values. So it'd only matter later, when I want to improve things. And thinking about how I'd need to chain all outputs, it wouldn't make much sense to design all these functions standalone, since I want to profit from what's already precalculated. I think it's better to first create the system and then add the maths to it. There are many things I'll need to consider, especially for "open end" functions like factorial or power. But well - I think I can limit these to a specific range. At least the factorials. I don't need much precision for what I do, so it'd be cool to have an algorithm with low powers/factorials and a consistent amount of detail added per iteration. That way I can predict how much I would need and how large I should make the arrays for such less detail-driven calculations.

Yeah, that might be a good thing to consider. However, now I remember why I wanted to implement my own versions! If I've already precalculated parts of, for example, the sine formula, I can reuse them and get a quicker calculation. Theoretically, I could buffer everything and only add things here and there to get a complicated formula done. Not sure how this will go; someone wish me luck with that. Could be a fucking awesome tool to have for all things math-related.

But it's really a shame to sacrifice this new knowledge for such a completely different technique... I find myself getting jealous of anybody who doesn't always feel the urge to discover new techniques to make something less sucky. Too bad I live for that.
