In computing, there's always a limit for data, especially for numbers. When a calculation involves huge, very detailed real numbers, it usually suffers from immense precision loss, which is often compensated by storing the numbers as strings. This is, of course, even more bound to memory limitations, because the string representation of a bignumber takes considerably more memory than, for example, its integer counterpart. Personally, I think a step towards precision-loss-free calculation would be to work with the formulas directly. Taking only a few fixed-precision numbers and a dozen formulas as input, an algorithm capable of simplifying a huge formula down to tiny pieces keeps more detail than losing precision everywhere and making the result more than wrong. Using formulas as numbers, and thus also inside calculations, is an interesting way to do maths in programs. Imagine that: most mathematical objects stay formulas throughout the calculation, even the result. Every operator generates only other, probably smaller, formulas, except the ones that put out Boolean results, True and False. I'm not good at math. But with increasing CPU power we will be able to actually make this true and solve most limited-number problems.
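To make the idea concrete, here's a minimal sketch of what "operators generate only other, smaller formulas" could look like: a tiny expression tree where numbers stay exact (`Fraction`s) and simplification just folds constant subtrees together. All names here (`Num`, `Add`, `Mul`, `simplify`) are made up for illustration; real computer algebra systems like SymPy do this far more thoroughly.

```python
from dataclasses import dataclass
from fractions import Fraction

# A hypothetical, minimal formula representation: every node is either
# an exact number or an operator over two sub-formulas.

@dataclass(frozen=True)
class Num:
    value: Fraction

@dataclass(frozen=True)
class Add:
    left: object
    right: object

@dataclass(frozen=True)
class Mul:
    left: object
    right: object

def simplify(expr):
    """Recursively fold constant subtrees into single exact numbers."""
    if isinstance(expr, Num):
        return expr
    left, right = simplify(expr.left), simplify(expr.right)
    if isinstance(left, Num) and isinstance(right, Num):
        if isinstance(expr, Add):
            return Num(left.value + right.value)
        return Num(left.value * right.value)
    # Sub-formulas that aren't constant stay symbolic.
    return type(expr)(left, right)

# (1/3 + 1/6) * 2 simplifies exactly to 1 -- no rounding anywhere,
# which is exactly where floating point would already have lost bits.
expr = Mul(Add(Num(Fraction(1, 3)), Num(Fraction(1, 6))), Num(Fraction(2)))
print(simplify(expr))  # Num(value=Fraction(1, 1))
```

The point is that the "result" is itself just another (smaller) formula, so precision is never thrown away mid-calculation.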
I feel pretty boss thinking this might be my idea. Me, the ever-hating math denunciator. If I ever meet someone who's awesome at math and a programmer too, I'll try to convince him to build such an "engine". Or maybe I'll build one myself. It shouldn't be too hard, because I can break every more complex operation down into its exact formula representation, such as sine being just an infinitely long trail of operations (if I remember correctly).
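That memory is right: sine really is an "infinitely long trail of operations", namely its power series sin(x) = x - x³/3! + x⁵/5! - …. A quick sketch (the function name and term count are my own choices) that sums the series with exact fractions, so the only rounding happens at the very end:

```python
from fractions import Fraction
import math

def sin_series(x, terms=10):
    """Approximate sin(x) from its power series, keeping the partial
    sums as exact Fractions so nothing is rounded until the end."""
    x = Fraction(x)
    total = Fraction(0)
    term = x  # first term of the series: x^1 / 1!
    for n in range(terms):
        total += term
        # each next term is the previous one times -x^2 / ((2n+2)(2n+3))
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total

# Ten terms already agree with the library sine to well beyond
# double precision for small arguments.
print(float(sin_series(1)), math.sin(1))  # both ≈ 0.8414709848
```

So truncating that infinite trail at some finite depth, as late as possible, is exactly how such an engine could trade formulas for concrete digits only at the very last step.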