If I think about what I did there, it boils down to defining everything by inventing states. Looked at abstractly, it's a) formally creating all possible states of a processor word and b) defining the output of every elementary operation for each of those states. As I mentioned in my previous post, I thought about building a more or less mechanical computer using LEGO. I wouldn't mind using Mindstorms technology to translate stuff like addition, division and so on into truly elementary operations. Without my experiments with preprocessor macros, it wouldn't have been possible for me to even think of it. Grasping this stuff on my own usually gives me some kind of glamorous feeling. Solving such "secrets" is what keeps me awake night and day. It seems I'm slowly unfolding all the things that always fascinated me and that I used to write off as magic. And still, the incredible chains of elementary operations on numbers that form formulas never cease to... AMAZE me.

Though I dislike the definition-heavy side of maths, I dig all the basics behind it and its practical use. I'm a "doer" when it comes to that: a person who simply says how stuff works, not one who describes it in a very, very formal language. It took me time to learn something different from normal human language (I mean how we usually form sentences and express things to give information to other people), especially the language of computers. I don't know when I really started to think that a computer speaks in simpler words than a mathematical formula does, but since then I've always found maths... difficult. More difficult than before. When I began to figure out what I could do with computers and programming languages (under the watchful eye of my inner wannabe game developer), I started to see maths as computer commands instead.
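To make the "states plus lookup tables" idea concrete, here's a minimal sketch in C preprocessor macros. It's my own illustration of the technique, not the actual macro set from the earlier post, and the names (`INC`, `CAT`) are mine:

```c
/* Every possible value of a (here 4-bit) word is its own literal state,
   and an elementary operation like INC is just a lookup table over those
   states: one macro per input state, mapping it to the output state. */
#define CAT(a, b) CAT_I(a, b)   /* indirection so arguments expand first */
#define CAT_I(a, b) a##b

#define INC_0 1
#define INC_1 2
#define INC_2 3
#define INC_3 4
#define INC_4 5
#define INC_5 6
#define INC_6 7
#define INC_7 8
#define INC_8 9
#define INC_9 10
#define INC_10 11
#define INC_11 12
#define INC_12 13
#define INC_13 14
#define INC_14 15
#define INC_15 0   /* the 4-bit word wraps around */

/* The CAT indirection makes nesting work: INC(INC(3)) expands to 5. */
#define INC(x) CAT(INC_, x)
```

The `CAT` indirection is the one subtlety: a plain `INC_##x` would paste before the inner `INC(...)` had a chance to expand.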
I translated a lot of the "things" I knew into commands I knew from programming and figured out that EVERYTHING is an algorithm and that, to me, there's no real difference or falseness in that statement. Independent of the fact that "real" algorithms are of infinitely expanding complexity (see, the universe grows all the time), I understood how few problems a computer has in theory and how it only needs to understand its own language. Geez, the time!
Anyway, this simple, always-understandable language is essentially what should be easy enough to realize mechanically. Addition, subtraction and multiplication can be described with incrementing and bit flipping (aka NOT). The point where I asked myself what to design next for my macro set came when I tried to simplify division. And, well, that's not so easy, you know. Basically, a division is a (theoretically) possibly endless loop, depending on how big the dividend and how small the divisor is: you subtract the divisor over and over, and the loop stops when the remaining value is smaller than the divisor (or bigger than before, because of wrapping unsigned ints). That remaining value is the modulo, but the result of the division is the number of subtractions, so a division actually requires a second value to store its result! I knew that division was slow; now I really know WHY. This makes my understanding of computers better by default and gives a real feeling for what a pain it is for a computer to calculate such stuff.

So my task is clear: create macros for comparison. Before I came to that I had also tried to create some if-like things, but wasn't really successful. I must admit, I took a look at boost's macros to get an idea of how to make this decision-like thing. I had no idea they'd actually done it like everything else: concatenate one or more bool results and call a different function for each of them. I also saw that they convert normal numbers into bools by mapping everything non-zero to one, much like I defined my "incrementation table" for the INC macro. It didn't make sense to me at first, so I set it aside, but I found the boolean aspect of it all the more interesting: how do we get a 0 for false and a 1 for true in an equality test? By subtraction! Not exactly, but that made me realize the ingenuity of boost's approach (whether they actually do it this way or not, I didn't dig deeper than that).
That means you subtract both numbers and convert everything non-null to one and the rest to 0 (or in the case of boost, 0 for null and 1 for non-null). Man, that's one awesome solution. And probably the only way to do it in such a world of minimal instruction sets (hm, seems I'm currently designing one... man, I feel so awesome today). I'm sure there's a way cooler and more performant implementation in modern processors; otherwise this stuff would take ages to calculate! Anyway, inequality is just the inversion of it, and less/greater could be done similarly. Using bitwise operations on single bits, ~i&j tells you whether i is less than j and i&~j whether i is greater; for whole unsigned ints you'd have to apply this bit by bit from the most significant position down, since the first differing bit decides the comparison. Unfortunately, I couldn't find a way to implement bitwise ANDing, because I don't even have bits in my definition of numbers. Though I call it "4bit", it's actually just a set of literals from 0 to 15, chosen for power-of-two's sake, with no bit relation (a NOT is pretty easy, because you just need to invert the ordering). AND also takes two parameters, so simply copying a generated table from a bit-based computer is not very elegant. I guess there's another solution out there, based on INC, NOT and bool conversion tables. I'll find it.
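Both tricks from this paragraph can be sketched in C. This is my own hedged illustration, with helper names (`to_bool`, `eq`, `less`) that are not from the original macro set:

```c
/* Equality via subtraction: with wrapping unsigned arithmetic, i - j is
   zero exactly when i == j, so boolifying the difference and inverting
   it yields the equality test described above. */
unsigned to_bool(unsigned x)        /* everything non-null becomes 1 */
{
    return x != 0;
}

unsigned eq(unsigned i, unsigned j)
{
    return !to_bool(i - j);         /* may wrap, but is 0 only when i == j */
}

/* Less-than from the single-bit ~i & j rule: scan from the most
   significant bit down; the first position where the bits differ
   decides the whole comparison. */
unsigned less(unsigned i, unsigned j)
{
    for (int b = 31; b >= 0; b--) {
        unsigned ib = (i >> b) & 1u;
        unsigned jb = (j >> b) & 1u;
        if (~ib & jb & 1u) return 1;  /* j has the bit, i doesn't: i < j */
        if (ib & ~jb & 1u) return 0;  /* i has the bit, j doesn't: i > j */
    }
    return 0;                         /* all bits equal */
}
```

The per-bit scan is what makes the `~i&j` idea correct for whole words: a set bit in a lower position can never outweigh a differing higher one.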