3.31.2011

Free at last!

0 Comments
So the very last exam of this semester is over and the new one starts on Friday... More importantly, this means I'm free of all this maths shit for a while (about three semesters until I face it again) and can fill my head with other, more interesting things. I'm not keen on wasting all my time with aimless programming either. I'll enjoy some days of completely blatant thinking, random thoughts and wildly illogical actions in the wonderful world of homestead anarchy. OK, a bit extreme, but still - it feels gooood to know it's off my plate for a while. Gone are the times of forcing myself not to enjoy deeply pointless reads of interesting Wikipedia articles just to stay focussed on maths. Oh, and Minecraft really helps with not having it in my head all the time. It's a bit of Zen when you play it. Only blocks and you. And the fucking pig jumping across your lawn.

I remember mentioning that I wanted to continue work on my Lego bolt-action rifle. So far I only need to figure out a good locking mechanism, preferably one combined with the push/pull action. I love making guns! They're such a direct and forceful thing to design (OK, mine are NOT). So tempting and pleasing for the mean, archaic part of me. Until I accidentally fire one and demolish something I didn't mean to, but that rarely happens since I usually take them apart once they work.
But it's really difficult this time - there are so many restrictions and such a powerful spring force behind it that I need a strong mechanism that can't come loose easily or by accident. The usual designs probably won't work well, because they'd require my currently not-so-compact bolt to have a twist at the end, which would probably also hurt stability. Something different could of course solve this problem while bringing only new ones. A straight-pull action doesn't work - my current design has no room for gears, rotating parts and the like (the bolt isn't round due to the blockiness), so that's no real alternative. I had some other, more origami-like actions in my head, but those require crazy twists in the stock design and whatnot - they don't fit the bolt-action model.

So maybe it's better to find some way to keep the normal lock from coming loose. Magnets could help there... And then there's still the question of whether it's better to pull it with a lever instead of from the back. Hey, I could combine both. Pulling/pushing with the bolt and locking with a lever? Damn, that'll be nasty to operate. There's no way around some tricky idea here.

3.29.2011

Home as castle

0 Comments
I'm thinking about moving out of my super sugar cane factory in Minecraft and into a new home. I'd like to express a bit more luxury and comfort with a more "phallic" building on top of a mountain. Shortly before building the sugar cane farm, I also created a massive, penis-like tower in the middle of the ocean, but I deleted that world because I couldn't find my home base anymore after getting grilled by a lava pit while harvesting obsidian. So I decided to let that be and rather use all the materials I have to create buildings. All in all, I just want a castle for myself with a fair bit of extension up into Minecraft's cloud layer. Since I wasn't able to find a mountain on my current map, I'll have to leave my current wooden house behind and hope the mountain isn't too far away (you know, spawn points and such). However, I already have a plan for what it should feature besides the usual bed, storage and smelting rooms. I did some more lighting tests with redstone, and it turns out you can at least light a room enough to see what is what and where. Not as nice as normal torches, but if Notch makes his threats real, I'm better off using redstone torches, since they probably won't burn out in future updates. You need a lot of space to switch a bunch of torches with a lever or pressure plate, though. A dome with approximately 30 torches inside required a separate construction around it just to get a simple on/off mechanism... A pain in the ass, but easier if I build normally shaped walls.
So this time I have cable trays in mind, running inside the walls and between ceilings and floors. Though it takes a huge amount of space to fit them in while still being able to walk around (build first, wire later), I can picture an equally HUGE (in the sense of HUUUUUUUGE) manor or something to make up for the otherwise obviously thick walls. Three blocks of depth is the minimum (walls included), but claustrophobic as hell. You could argue it needs to be five blocks deep, concealed with bigger doors, flower pots and so on.

Yeah, this makes sense. Combined with big rooms, gorgeous staircases and the like. And I think the generally dimmer light can be compensated for by simply placing switchable lights here and there, optionally with hidden wiring in the cable trays if really desired (though I doubt I'd need them for anything beyond normal room lighting).

3.28.2011

Redstone

0 Comments
After a few days of playing Minecraft again, something clicked and I understood how its redstone magic works. I built some NOT and OR circuits and combined the two into an AND gate. That made me wonder: can I now also build a decrementer like I did with C's preprocessor? It proved to be a bit harder to "invent" such a mechanism, but after writing out and analysing the algorithm for bitwise addition by hand, I noticed that I'd need an XOR gate and didn't know how to build one. So I took a look at Wikipedia, created my first, rather large design, and saw how incredibly big - and thus useless for anything more complicated - it was.

So you get it: I figured out the right architecture but didn't find it worth testing. I mean seriously, who wants to build anything more complicated than basic logic if it can't be reduced to a manageable size? However, the fan-driven Minecraft wiki showed me some more compact XOR designs, and I saw how differently you can interpret and build an XOR gate. They vary quite a bit in size, function and output position. In the end I managed to build a basic decrementer, but for some reason(s) certain numbers didn't decrement correctly, and I gave up on building even bigger circuits. Designing is easy; testing requires simulation, building it in the game, or simply checking on paper whether it works. And geez, that's just too cumbersome for me. It's one thing to work with Lego bricks, but another to combine them atom by atom.
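Just to keep the logic itself straight, independent of any redstone layout, here's a quick sketch at the boolean level (nothing Minecraft-specific, only the gate composition I mean):

    #include <cstdio>

    // XOR built purely from the OR, AND and NOT gates I already have;
    // the redstone layout is a separate problem entirely.
    static bool NOT(bool a)         { return !a; }
    static bool OR (bool a, bool b) { return a || b; }
    static bool AND(bool a, bool b) { return a && b; }
    static bool XOR(bool a, bool b) { return AND(OR(a, b), NOT(AND(a, b))); }

    int main() {
        for (int a = 0; a <= 1; ++a)
            for (int b = 0; b <= 1; ++b)
                std::printf("%d XOR %d = %d\n", a, b, (int)XOR(a, b));
        return 0;
    }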

So yeah, unless redstone gets some more useful updates, I don't see any reason to build more than simple logic or pressure-plate-triggered lighting effects. I bet that's why they implemented it in the first place - not to let you build your own computer inside a game on your computer. It's still a shame that there's no official way to convert redstone into kinetic energy except via pressure plates and dispensers or doors (and the latter aren't good for anything else). But at least I got an idea of how to make a remote minecart release work. That's untested, though, and would probably not beat simply running.

*sigh* So back to ceiling lights... I wonder whether MC will ever get some simple logic elements that don't waste huge amounts of space and redstone. The redstone repeater is only one step, and I remember seeing a mod with logic blocks and so on. I'd like to pick one up this time. However, today I realised how simple those computer systems can be to grasp. The execution is just error-prone and hard to pull off with no ability to copy components on demand. That's the nice thing about real-life circuitry: buy some chips and you're done. No wire-by-wire placement, no atom-by-atom build-up. Just drop it in and link it up. I think I got a bit closer to understanding computers and circuitry. It's good that you can see how Minecraft's circuitry works, otherwise I wouldn't be able to grasp a thing.

3.25.2011

Hm

0 Comments
The article posted earlier somehow made me realize that it's best never to stick to only one language - it depends on what you want to code and how. If I want to code code generators (what a phrase), I'd better turn away from C/C++. If I want to do something relatively platform-independent and OOP-based, I can stick with Java. I've spent so much time doing things C/C++ weren't designed for; it's weird that I never noticed.
0 Comments
Found this quite entertaining article. It's basically a typical "C vs. C++" piece that, in the end, only makes the basic point that it's not a decision between C and C++ but a decision about which algorithm to choose. I don't quite agree with everything said there, mostly because the idea of "give me a better algorithm and C is faster" is plainly stupid. On top of that, he tried to build a completely equivalent C++ version but forgot that his C++ code is a) probably inlined, since it's defined in the class body, and b) does additional things and thus isn't really equivalent. Dude, if you're going to compare C and C++, you'd better start from the C++ variant and try to rebuild it in C. In a nutshell, the C++ version ends up with faster code because of things done implicitly that aren't done in the C version but could be. Especially inlining by the compiler (I remember some compilers inline rather automatically when optimizing) is a factor you can't ignore here. If you don't ask for it in C, it mostly doesn't happen; in C++, though, it happens for methods defined inside the class body. So all in all, one should compare against an inlined C version, or one with the code inserted by hand, instead of just writing it the way one is used to.

Such a touchy topic, and yet he does it so wrong. It's a marginal rant, yes, but he even mentioned himself that people write C++ either right or wrong... Ignoring facts like inlining rather makes him look not that capable of writing C++ when it's about pure performance. Oh man, I've clearly read too much about inlining and the optimization of function calls. I don't think I'm an expert in optimizing algorithms themselves, but inlining is by far the thing I know a fucking lot about when it comes to that. Of course, I can't decide in every situation whether it happens or not, but using a C++ program compiled one way to benchmark against a C program compiled another way is just wrong by default. However, I totally agree with what he wrote about the actual features C++ gives you. And seeing how amazingly easy the use of the STL looks there... Geez, I might use it in my next assignment for component-based development. At least I hope I can do something easy this time and not a whole PGN parser and model renderer....
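I don't have his code in front of me, so here's only my own sketch of the asymmetry I mean - not the article's benchmark:

    // The C++ member function defined inside the class body is implicitly inline;
    // the plain C-style function only gets inlined if the compiler decides to
    // (or if you mark it inline/static and turn optimization on).
    struct Accumulator {
        int sum;
        Accumulator() : sum(0) {}
        void add(int x) { sum += x; }          // in-class definition -> implicitly inline
    };

    typedef struct { int sum; } c_accumulator;
    void c_add(c_accumulator* a, int x) { a->sum += x; }   // ordinary external function

    int run_cpp(int n) {
        Accumulator a;
        for (int i = 0; i < n; ++i) a.add(i);      // prime inlining candidate
        return a.sum;
    }

    int run_c(int n) {
        c_accumulator a = { 0 };
        for (int i = 0; i < n; ++i) c_add(&a, i);  // may stay a real call without optimization
        return a.sum;
    }

Compiling both with and without optimization and diffing the assembly shows whether the call survives - which is exactly the variable his comparison leaves uncontrolled.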

gcc, ld, as, g++

0 Comments
I found some nice documentation on the GNU Assembler ("as"), and after a few more links and articles I can now understand most of the instructions and directives GCC generates when compiling a C program. But geez, it's highly specific stuff that will look totally different if you compile for another platform. Fortunately, I can now see exactly what happens whether you inline or not. Got to say I misjudged GCC's smartness... It turns out it makes no difference whether you write an inline function using addresses or references. A small test program that modifies a passed variable produced completely identical output either way, so the distinction isn't worth keeping. In summary, the only thing left that's more useful to me in C++ than in C is templates. And templates are, to me, just a stricter way of inserting types and literals (though the template processor can compute everything you could also do at run time). Well, I kind of didn't expect that. I always assume things are more complicated than what I have in mind and won't usually work that well. Turns out I proved myself wrong, hm...
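The test itself is tiny - roughly along these lines (a reconstruction from memory, not the exact program):

    // Two ways to let a function modify the caller's variable. With g++ -S -O2
    // both typically compile to the same instructions, since a reference is
    // implemented as an address under the hood anyway.
    void bump_by_pointer(int* value)   { *value += 1; }
    void bump_by_reference(int& value) { value  += 1; }

    int main() {
        int a = 41, b = 41;
        bump_by_pointer(&a);
        bump_by_reference(b);
        return a + b;   // use the results so nothing gets discarded outright
    }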

Whatever, I learned a lot and now want to use this knowledge for my personal peace of mind and later use. Whatever compiler-generated assembly pops up in front of me, I should now be able to understand the instructions if I have an appropriate reference at hand. Though I'm not sure I'll need it outside of debugging and optimization comparisons later in life, it probably improves my value as a developer. Not every programmer knows how this almost stupidly simple stuff works (really, it only takes a few hours to get comfortable with reading and understanding it). So you effectively level up without much effort, even if you still need a few million XP for the next level. Oh, and it shows me that the generated code is actually better than anything I'd write on my own. Yes, I could do it myself, but I don't think it's useful in my case, because it would be machine-dependent. Take the Pandora, for example. It's ARM, my laptop is x64. So I'd have to rewrite everything if I wanted to port it to the Pandora. Also, everything I could do can be done by the compiler too, and probably with more context to be intelligent about it. A certain degree of computer engineering comes into it as well... The more you know about the inner workings of your CPU, the better you can tailor your assembly code to it. The benefits of hand-written code are marginal for me - there are more people with more experience in computer architecture than there are dirty thoughts in my head (and yes, there are many of those). So phew, I may consider using more C instead of C++ if I ever want to make my own programming language real. Unlike the game project, it's something with a fixed algorithm and feature set, not a monolithic monster. I was also able to figure out the command syntax details I'd need to fill things with actions. That came quicker than I imagined, suddenly popping up during a train ride. Thinking about it, it should also be possible to make the generated code as fast as what I'd write myself in assembly. Looking up which segment each kind of C program data goes into may also be valuable when designing the code generator.
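On the segment question, the usual picture for a typical ELF/Linux build (from memory, so worth double-checking against the actual assembly) is roughly this:

    // Where data conventionally ends up in an ELF binary:
    int         initialized_global = 42;    // .data   (initialized, writable)
    int         zeroed_global;              // .bss    (zero-initialized, takes no file space)
    const char* message = "hello";          // pointer in .data, string literal in .rodata
    const int   table[3] = { 1, 2, 3 };     // .rodata (read-only)

    int use(int n) {
        static int call_count = 0;          // .bss here; .data if it had a non-zero initializer
        int local = n * 2;                  // stack, or just a register
        ++call_count;
        return local + table[n % 3] + initialized_global + zeroed_global;
    }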

3.24.2011

End of development

0 Comments
I've come to the conclusion that I don't like personal coding if I can't control everything. Many threads of my thinking converge on this, most notably that I'm not doing it for money but out of personal interest and for the learning aspect. I've always been like that, never satisfied with something once I've learned more about its inner mechanics. So it's only natural that I don't want to code such a huge project without getting any money for it. My list of stuff to build for it is very, very, very long. Longer than anything I've had in mind before, and it keeps growing. Using a programming language's specific features does ease the work, but it's still a huge amount of work, and I completely overlooked that at the beginning. Of course my expectations were different then, because I wasn't as smart about it as I am now. The more I practiced, the more I learned, and the more I noticed that my astronomically long and creeping list of features is just the tip of an iceberg. The list is still there, and so is my vision. But I'm afraid I'll die inside if it doesn't work out as expected. My ideas were too ambitious, there were too many things I wanted to implement - stuff that nowadays gets computed on the graphics card, not on a general-purpose CPU. Even the most optimized software renderer can't just accommodate another, equally demanding component and still perform well. I have to face it: my idea failed hard because I had too many ideas I wanted to see in it.

It's not about how to achieve any single thing, but about the sheer work I'd need to implement everything in my mind. I take personal projects personally. Seriously personally. Commercial projects, on the other hand, I take professionally, meaning I follow completely different steps instead of randomly trying to achieve whatever I happen to have in mind. I could start right now like I did years ago and keep pumping out code as required. It would only take time, plus boredom on my side - it costs my free time and I don't get anything back. The results would be debatable, since it's "just" an ASCII game with slightly pimped graphics, yet it needs enormous amounts of code and tools to become the game I wanted to have. If I got an appropriate amount of money for it, I'd totally start right now. If I got more people who were also paid and thus motivated enough, I'd keep developing even if it turned out I couldn't implement everything.

So whatever I do personally, I shouldn't waste it this way. I mean, just look at your usual random game written in C/C++/Java/no-idea using SDL/OpenGL/Allegro/whatever! It's simple in shape, doesn't try too hard to squeeze in every possible feature, and sticks to what was fun for the developer. Is it fun for me to play? Probably not, I'm a spoiled gamer. Was it fun for the programmer? Probably - simple code, simple things, stuff to relax with, and he probably got what he wanted out of his free time, because PHP programming is probably not his favourite activity at home. Did he try to meet some standards? I don't know. Do you? He probably knows, and I bet he just set some guidelines to make it fun for himself, not to make it "perfect". But if you aren't even comfortable with the tools you use, you won't have fun coding with them.

So we're back where we started: not being able to control what you want when you want to control everything. I should never take a lead programmer job; it would probably end in the same shit, since I'm currently my own lead programmer, designer, regular programmer, game designer, manager and whatever else you'd need in a development studio. See, I'm someone who likes to write really, really good code. That doesn't mean I want to code everything, I just want to do it the best way I can, giving the best possible result the time allows. So where's that time now? Right, it's sitting in my cup of hot chocolate, doing me every favor but giving no pleasure. When time is no limit and money doesn't exist, nobody gets anything done.

So why the fuck am I coding this stupid game? I'm only whining and swearing all day because nothing works the way I want it to. I don't like it anymore; it's become ever more bullshitty since I started it. You know what? I won't freeze it, I'll give it up. I'll be free at last! It's a bad project with too much perfectionism behind it. I'm a code monkey, and I can only deliver bigger work if there's a goal set by another person or a team behind me. It's simply too much for me and I should drop everything related to it... The basic premise was to get everything perfect, and that's the worst possible starting condition for a project you want to have fun with.

I failed hard but learned a lot. That can only be good, considering how much could have gone wrong if I'd had this attitude during an assignment, a bachelor project or just a regular job. It's crucial for me to know where my practical limits are, not the theoretical ones. Never give me the freedom to implement anything in any way I want; it'll go downhill as soon as I start... That said, I'm probably at my best working in a small team and getting direction on what needs doing.

I mean, just look at this huge pile of blog entries about how bad coding in C++ supposedly is - it's actually not bad, it's just not perfect. Same with Java: it's not perfect for anything I had in mind, but for the NXT it's quite convenient, because it lets you profit from what's already specified in the Java technology itself. I decided to settle on it because there was no other way I could get things to work. But on an everyday computer? You have so many tools and languages that the search for something perfect is endless - it's the major platform among all platforms (technically), and you have more choices than you can swallow. On the NXT, on the other hand, there's only a limited set of compilers, and the most worthwhile one in this case was LeJOS for executing Java bytecode. On the PC I'd probably choose the most performant option, which unfortunately is also the one I know isn't capable of doing what I want the way I'd like. So the circle closes and I end up annoyed and frustrated that I can't find the perfect solution on the PC. And since my only personal motivation for doing something on my own is music generation and pretty graphics, it boils down to either choosing a limited platform or looping endlessly through the voids of /dev/null.

What I did like was reading a lot of stuff on Wikipedia. It was the only thing I enjoyed during those stupid jumps between squeezing syntactic sugar out of C++ that it doesn't have and actually coding something I could use later. I only framed it as "game development" to tell myself I was working on it. But I wasn't, and never will be, because of all this. I'm personally just not interested in making games without getting money for it. I see good games as something you pay people for, because they invested a shitload of time and nerves to deliver the best quality games they could come up with. That's why I pay for everything I play more often and longer than just a few hours. Of course there are bad apples everywhere, but you need to pay the good ones for what they invested and keep up their good work! I see where this is going. I wouldn't start game development without getting money for it, because I usually give money for games... Hm. That's psychologically interesting. You give, you get. Circle closed.

So yeah, normally I give and I get. But since I won't get anything for this, it's done. However, I am interested in learning more about the inner workings of programming languages and how they implement their features, concepts and so on. I'm currently trying to figure out how GCC's assembler works, so I can indulge in the most low-end thing with the most control I can ever get. That's basically bedrock. Oh, and I'm also playing Minecraft again. Its simple, zen-like gameplay did help me get over my little programmer crisis.

3.22.2011

Forced pause

0 Comments
Whatever happens in the next few days, I'm now forcing myself not to code anything for the following two weeks (at least). If I want to make something, I'd better grab a piece of paper and draw, build something out of LEGO, or read a book or a comic instead. There is so much more out there than forced coding and creepy stuff like that. Life's not a pack of "DO THIS TIL YA DEAAAAD". I haven't had much fun for a while, so I'd better consistently keep myself away from it.

Final decision

0 Comments
I took a few minutes to think about what I've tried to accomplish over the last weeks, not to say months, and the immediate success of choosing simple, procedural approaches is a bit creepy.
I invested so much time in building those solid blocks of code and object-related things that I totally forgot about simply coding stuff to make it work. I remember the time when I coded solely in PureBasic and effectively started learning C/C++ because I wanted more speed. I didn't need objects or classes, because I could figure things out without them. I'm not saying I'll drop classes completely, since so much of my code insertion depends on them, but I'll try not to focus on overly multifunctional ones and use them for syntactic sugar or just memory allocation instead. What do you get when you force one concept everywhere? Java's forced OOP, Lisp's brackets, C's strictness (though the latter is more due to C not being as updated as C++). I'm sick of this. I wasted the whole term break doing stuff that was garbage in practical use, confusing myself to no end about what to use and how to design my programs. I lost my identity in all of this, and the only thing floating in my mind now is to create a mixture of many, many functions in nested namespaces and a few classes for construction/destruction of memory.
I can't help it, I've seen no use in OOP anymore since I began this immensely shitty quest. Everything important to me now turns out to be something I can only achieve by inserting whole code blocks (emulated using function classes/class functions), which isn't possible without creating a class for each function. I miss something in between, something like passing declarations of what to call rather than addresses and numbers of other functions. More than ever, I long for something that simply isn't stupid by design. I don't want to build my game with anything other than this in-between thing. Is it normal not to be able to find something like that?

I feel like I've fallen into a scorpion pit built to capture souls like mine, on a journey that's gone on too long in search of a programming language that fits my very own needs. I'm tempted to completely drop everything that's giving me headaches, but that won't work at all. For example, I can't just drop classes entirely, as I'd still need a way of inserting inline functions. However, declaring everything as structs, disabling RTTI, keeping the optimization level at maximum and making only a few exceptions for things like direct insertion via class functions should do the job, shouldn't it? *sigh* I'm depressed. Totally depressed. The world is playing darts with me, and I'm the little bug crawling across the bullseye during a world champion's shootout.
Might it be better for me to drop even more and go back to plain C? Maybe limit myself to fixed data types and such? No. I don't want to. I'm not so desperate that I'd do something like that. What about all the work I'd save with templates? No, this is all just plain old bullshit. It seems I'm the only person on this fucking earth angry enough about programming to not actually do it at the moment while still being quite knowledgeable and insightful about it.

Fuck, there must be a strict concept that suits my needs. Even if it's just using typical OOP for memory allocation and deallocation. I'd better stop coding for two or three weeks and focus on my last exam while playing games and watching movies.

3.21.2011

Merge, merge, merge, merge, merge

0 Comments
Put all the color, position and other dimensional value classes into one, since separating them doesn't really make anything better or easier. And since it has no use outside these applications, it doesn't make sense to keep it so fragmented this time. Some folks may think differently, but this has an essentially practical reason: it makes the whole thing easier to manage. Almost every operation on an n-dimensional vector is also used for color operations and vice versa; you gain nothing except class separation. Seriously, what's the point of adding another level of inheritance for only three new functions... You wouldn't even need those anyway. It also feels good not to pollute the project folder with a bunch of tiny files with no real content. Better to keep it all in one place and separate it later if necessary. In case I split it later, I could simply typedef some aliases... Hm. No idea. Maybe it helps, maybe not. But I'll probably just leave it alone.
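A minimal sketch of what I mean by one merged class (placeholder names, not the actual code):

    #include <cstddef>

    // One n-dimensional value type covering positions, colors and the rest;
    // the real class has more operators, this is just the shape of it.
    template <typename T, std::size_t N>
    struct Vec {
        T v[N];

        T&       operator[](std::size_t i)       { return v[i]; }
        const T& operator[](std::size_t i) const { return v[i]; }

        Vec operator+(const Vec& o) const {
            Vec r;
            for (std::size_t i = 0; i < N; ++i) r.v[i] = v[i] + o.v[i];
            return r;
        }
        Vec operator*(T s) const {               // uniform scale, doubles as brightness
            Vec r;
            for (std::size_t i = 0; i < N; ++i) r.v[i] = v[i] * s;
            return r;
        }
    };

    // The "separation" shrinks to a pair of aliases instead of a class hierarchy.
    typedef Vec<float, 3> Position3;
    typedef Vec<float, 3> Color3;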

The more I think about it, the more I lean towards just doing everything more or less procedurally next time. With templates and references, which you don't get in plain old C, it's much more versatile and gives you the same freedom for purely public data structures. I even extended the function class/class function concept with some more ideas and tests, so that I can fairly easily set up a small set of pre-defined variables inside a loop and then work on them with no problem at all. I won't be able to do any deep optimization there, as the array contents always have to be loaded every iteration. All in all I'm more pleased with it than before, because it's now easier to link things together. Looking at my very, very old concepts made me realize that inheritance only made it harder, so I've figured out a nicer way of doing it differently.

Yeah, after all that, I'd FINALLY like to start with the rendering right now... It's getting boring to just think and plan instead of getting results. Another reason to stop caring about things like that and go code. But meeeh, the semester starts in two weeks and I still have to do some maths for another attempt at the maths exam. It would be easier not to have to worry about stuff like that all term break long. I hope I get it done and can concentrate on the more important parts of student life.

Cleaning up

0 Comments
Started to systematically rework my old raytracer code with new ideas, better structure, etc. It's creepy to know that I was (and still am) able to code all kinds of stuff without putting it into comfortable classes. But since it's been a REALLY long time since I last worked on it, nobody gets to judge. I already had good approaches, but now I can see why everything turned out so confusing and overwhelming for me: it lacked compactness and maintainability. A bunch of random and not-so-well-thought-out structures. I've had many ideas involving a complete separation of rendering, physics and everything else. After a quick look at the source code, I had a lot of ideas for optimizing it and hopefully getting rid of all the slowness it had before. With my n-dimensional vector magic, I can turn this into a good-looking algorithm that's coder-friendly to read. For extensibility's sake it will handle contrast better by using HDR color ranges by default. I won't use anything other than floats, because they're the most convenient format for graphics calculations (and the performance loss isn't a problem, especially since you gain free cycles from needing fewer operations compared to normal integers). While that's just a numeric, "code reduction"-related area, I'll also need to create some tools for game content - most notably map editors for creating test scenes, adding objects and so on. I underestimated how important it is to keep the map, physics and game logic data distinct yet combinable. It's a huge problem for me to keep things apart once one part is built. So I'll separate everything from the beginning: one data block for colors/graphics, one for physics data, and then others for... other stuff, you know! I'll try to keep the game logic free of special areas that trigger events and such; everything should work like an in-game feature, to blend smoothly with the idea of a fully destructible map (in theory). That's a) less work for me to keep all my other ideas in sync with it and b) it gives the player a special degree of freedom to play with, like in some very open, cutscene-less RPGs or action-adventures. Two design features from one technical decision! If that's not a useful reduction for hobbyish game design. Another aspect is easing the work for multithreading later. If everything turns out well, it should be possible to split off a set of time-consuming threads for maximum performance, with only minor changes in the computed data from frame to frame. I can't say anything experience-based about that yet. So it's a start into something new and possibly an improvement (so far I've only used threads for separating input and the like).

Also, the more I expand these n-dimensional classes, the more other classes stop making sense and can be replaced by them. It all seems to be a matter of having a versatile base. Better not to focus on "possibly useful one day" but rather on "what does this actually make better or more comfortable". I'd forgotten that.

Aimless coding does not make one happy.

3.17.2011

Deleting

0 Comments
I had a creative high last night and got a quite nice and convincing minimalistic RTTI implementation working. Unfortunately, there's a problem with deleting an arbitrary object that could have any possible type. I have a single delete function for each RTTI-wrapped class, which I can call by retrieving a reference to the object's RTTI structure (oh, and I made the assignment operator private, so you can't overwrite it while still being able to access methods from there - quite comfortable). The problem so far is that this function works like delete: it frees the passed address. So I can't just call it from the destructor of an arbitrary object, because an object can't delete itself that way. So I thought about overloading operator delete globally (or at least in its namespace) for these RandomObject types to make sure I can delete the way I want. And that's the problem right there: operator delete only gets a void* and there's nothing you can do about it. To make sure an object gets deleted properly without knowing its type at compile time, you have to enable C++'s built-in RTTI, which makes the whole exercise pointless. I'd love to call constructors on my own, but that's not possible either.

Once again I'm stuck with a stupid "feature" of C++. Fine. Just fine. Sometimes I think it would be better to drop C++ entirely, but then again I can't find an alternative, nor do I want to rewrite everything I've accomplished so far. It sucks, totally. And again, it's something I could only implement in my own language. ARGH, THIS STUFF IS PISSING ME OFF. Oh man, this language sucks on so many points. It would have been better not to implement it in an "all or nothing" way. However, I can still implement my own function outside the class and make it available namespace-wide. So there's at least one way to keep it possible to delete objects depending on their real type. Hm, no, that doesn't work either... I can't make it a normal function, since C++ still allows implicit casts to a base class, so it may be ambiguous which overload gets called. So I guess there's nothing I can do about it. Writing x->rtti().deleteObject(x); instead of delete x; is just inconvenient.
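For the record, the shape of the idea looks roughly like this - a rough sketch, not the actual implementation, and it assumes single inheritance with the base at offset zero:

    #include <cstdio>

    // Hand-rolled minimal RTTI: every wrapped class carries a static descriptor
    // whose deleteObject() knows the real type, so deletion works without C++'s
    // built-in RTTI and without a virtual destructor.
    struct Rtti {
        const char* name;
        void (*deleteObject)(void* object);
    };

    class Object {
    public:
        explicit Object(const Rtti& r) : rtti_(&r) {}
        const Rtti& rtti() const { return *rtti_; }
    private:
        const Rtti* rtti_;
    };

    class Sprite : public Object {
    public:
        Sprite() : Object(kRtti) {}
        ~Sprite() { std::puts("~Sprite"); }
        static const Rtti kRtti;
    private:
        static void destroy(void* p) { delete static_cast<Sprite*>(p); }
    };

    const Rtti Sprite::kRtti = { "Sprite", &Sprite::destroy };

    int main() {
        Object* x = new Sprite();
        x->rtti().deleteObject(x);   // instead of `delete x;` - exactly as clumsy as it looks
        return 0;
    }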

3.16.2011

Coding Style in general

0 Comments
The more I read Google's C++ Style Guide, the more I wonder about their sometimes just silly formatting conventions. Technically it's very well explained and taught me some worthwhile facts I'll follow in the future. But when it comes to naming conventions and formatting... geez, these guys are really nasty. An 80-character line limit? Personally, I don't see any reason for that in my projects. I'm not remotely editing files on a space station somewhere in the depths of nothingness using vi or the like; I usually sit in front of one or two files and benefit from having a possibly fat block of high-density code that wouldn't be comfortably readable at only 80 characters. OK, OK - I'm not someone who likes one-dimensional, lengthy code. I prefer spanning it across all the dimensions my head offers me for understanding, including the informational depth itself. In their defense: I don't write as much "diverse" code as they do - I'm one person writing stuff for graphics and video games in a highly generic and reusable manner. I like to keep it condensed to its functional core instead of making it "too readable" for others. It's not that I don't encourage commenting code - I comment a lot, mostly because I often forget what I wrote in the past, so I can recover anything and what it was meant for, like simple delete functions I may find inappropriate later even though they play an important role right now. I also use operators a lot, mostly because I work with a shitload of maths nowadays, which calls for syntax you can understand just by looking at the symbols. So yeah, I probably don't really understand such "conventions". I only keep reducing lines of code vertically when it's highly redundant code with only small changes, I use abbreviations for always-the-same parameter variables, etc. It's all a matter of what you're coding, I say. If I wrote only C code, I'd probably name identifiers differently, since there's no way to "batch name" whole blocks of definitions/declarations. This encourages me to think more about my personal guidelines, because I tend to forget useful things I did in the past - though most of the time I discover new, better things that work better by default. But so far I'm fairly solid on formatting and technique in general.

Anyway, it makes your head hurt to read stuff you totally disagree with. How great it is not to work for Google. I'm not squeezing my code into silly 80 characters, dumbass. Especially not repetitive stuff like mathematical operators and so on. But thinking about it, I started to consider creating a LinkedList set again instead of my self-managing one. The concept of having "tracers" for an array, which cover a set of entries based on settings and call callbacks on a template-class basis, is quite convincing for other classes too, I think. There isn't much to define when talking about "storage classes", as I like to call arrays, lists, etc. You've got arrays, lists of all kinds, and then special tree variants (though a list can be thought of as a tree, too). So since I've nailed down an interesting way of mass-altering/scanning objects in every kind of such storage, with a concept well suited to implement-once-and-never-change code, I guess I can keep designing later storage classes in the same way. It took me some time to reach a point where it's close to an all-inline implementation but with high flexibility at compile time. So it's a good idea to continue this... Unfortunately, I'm kind of a lazy bum these days. And that won't change until I feel like it. It's great to have term breaks!

Google C++ Style Guide

0 Comments
I found this coding style guide from Google a while ago. I agree with many of its points, and told myself that some of the "new to me" ones are really worth practising. It might take some time to get used to, but it's good to be able to say that your code follows a strict style guide instead of changing all the time depending on what you currently think is good. I remember how fluently I was able to code with a strict style guide the first time I used one. It's really not hard to tell yourself "Do it this way!" and keep on typing. Also, I'm often a bit inconsistent, especially when it comes to comments. I often use them as a kind of scratchpad for initial ideas, to keep them connected to the code where I got the idea or whatever was related to it.

3.13.2011

0 Comments
Decided to finish the 4-bit preprocessor "computer" in one or two days and do something different for a while. I felt like I wanted to do something more substantial than just pushing bits from A to B - something like continuing the half-finished, single-shot LEGO bolt-action rifle lying around here! I just lost interest in it over time, but I knew that some day I'd want to continue making LEGO guns! Today is that day, so it's best to finish what I'm currently working on and make something different, so that I don't get grumpy and lose interest in that, too.

So yeah, a few more improvements to my preprocessor computer (I've called it "PPC") and I'll get on with other stuff. Can't wait to get my hands on a LEGO gun, for some reason... That must be the "latent militant" in me. From time to time I break from my rather peaceful attitude and turn into a somewhat strange person who likes to have some kind of "fake action" in his life, building stuff that's usually an area of compromise and ignorance in real life. Like guns, or weapons in general. I think that's something hidden in every human being (at least the male ones). And depending on how you feel about violence in general, it can result in a more or less harmful situation. I'm usually very peaceful, expressing anger and fury in words instead (like in this blog) or by systematically ignoring selected individuals. I've never really done anything physically or psychologically bad to other people (though I like to tell myself how pleasing the imagination could be). And then there are days where I feel a bit pissed off about all this peace and talk, so that it gives me some kind of mild "thrill" just to think or read about guns. Though after getting into the dirty corners of guns (which includes the killing of living beings, threats, war and whatever else), I usually decide to step away and be a totally peaceful person again.

I guess it's totally human to "spice up your life" a bit when every day is the same. And where isn't it, in industrialized countries? If you're doing well and have a nice occupation that's not too stressful but still not boring, there's always time for something different. Unfortunately, that's also the reason so many people end up watching idiotic cat torturers on the internet and so on. I'm glad my imagination is vivid enough to get creeped out by even the smallest hint of harm of any kind. On the other hand, this is how I've found a compromise with most video games. There's a certain, many-sided line between what counts as a good or a bad kind of virtual violence to me. But first and foremost it always depends on the quality and level of polish. For example, I'd never include overly brutal things in my own games - some, yes, but only if they fit the general look and feel perfectly. Anything else feels rather dumb and "for violence's sake". And when it comes to a good game feel, I have high standards, not infrequently tied to unique settings and massive amounts of high-quality art.

Theoretically

0 Comments
You could, in theory, describe all parameters of preprocessor macros as other macros and only get actual content from literals and concatenations. That's quite interesting, almost like interpreting each macro as a row/column combination in a normal source code file. I thought about taking only such parameters and evaluating them later, but I couldn't muster the discipline to write things that way. All in all, it forces you to keep creating different variants of basic commands like IF and so on. I'm thinking about reducing it even further, to just DEC for decrementing, NIL for null checks and JMP for jumping on a zero parameter. I'll back up my current work and alter it a bit to have exactly those commands under the hood. Maybe I can also replace the current FINF/FINFDP macros and simplify their notation to get a better overview of what's possible.
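To pin down what that reduced set means on the preprocessor side, here's a shrunk sketch with a 2-bit range so the tables stay short (the real PPC runs from 0 to 15):

    // DEC and NIL as plain lookup-table macros over 0..3.
    #define CAT(a, b)   CAT_I(a, b)
    #define CAT_I(a, b) a##b

    // DEC: decrement with wrap-around at the bottom of the range.
    #define DEC(n) CAT(DEC_, n)
    #define DEC_0 3
    #define DEC_1 0
    #define DEC_2 1
    #define DEC_3 2

    // NIL: 1 if the value is zero, 0 otherwise (the null check).
    #define NIL(n) CAT(NIL_, n)
    #define NIL_0 1
    #define NIL_1 0
    #define NIL_2 0
    #define NIL_3 0

    // There is no real JMP in the preprocessor; branching gets emulated by
    // concatenating NIL's result onto a dispatcher macro instead.
    // Examples: DEC(DEC(3)) expands to 1, NIL(DEC(1)) expands to 1.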

Cool stuff you can read on Wikipedia - you learn a lot about what's theoretically possible and then discover it's REALLY possible by implementing it yourself. It's amazing what you can make such a thing do with only decrementing, zero checks and conditional jumps. The more I code with them, the more realistic it becomes that I could build a LEGO computer on a mechanical basis with NXT-controlled input (you don't want to enter the million commands it requires by hand, trust me...). The only thing I don't know is how to achieve a comfortable memory system. A pure Turing machine is no real option, since it needs just too many memory cells for bigger programs. Instead, I'm thinking about a system of complete bytes as single cells with input/output "ports" to decrement and check the value for null. That's relatively simple, I think, and could solve the seemingly common "where to put all the gears..." problem. Then you just need a machine in the middle that takes control signals from the NXT - possibly with a few memory cells of its own that you'd need to navigate to an absolute memory address.

Maybe it's a good idea to work more on how to combine the interfaces between the NXT and the other memory cells... Well, that's all still pie in the sky for now. Let's put that idea aside and do something different for diversity's sake (though it might not last long...).

It's getting better all the time

0 Comments
Implemented EQUAL, and now I can nicely create some small programs for outputting values and strings. But I had to rethink the way my macros differentiate between evaluation BEFORE the macro call and evaluation AFTER it. That's a big problem, especially for IFs. With only one IF syntax I could only pick one of the two behaviours, so I implemented a whopping total of eight different IF macros. Why so many? Well, an IF takes three parameters: condition, macro on true, macro on false. So how do you pass them in? Right, with a little sequence of three characters signalling whether each one is evaluated beforehand ("E") or called explicitly ("C"). It seems like the right approach, because it lets you write IF_CEE(CALL_ME_FOR_CONDITION,"true","false") as well as IF_ECC(EQUAL(4,5),MACRO_ON_TRUE,MACRO_ON_FALSE) - you don't always have to write a new macro or always evaluate every expression. Of course, evaluating beforehand doesn't spare you anything, since it gets evaluated up front regardless. Still, this lets me keep more complex operations separate while inserting the smaller ones via all these super-tiny macros.
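The core dispatch underneath all eight variants is the same token-pasting trick; here's a boiled-down sketch of just that part, not the real IF_xxx set:

    #define CAT(a, b)    CAT_I(a, b)
    #define CAT_I(a, b)  a##b

    // The condition must already be reduced to 0 or 1; pasting it onto IF_
    // selects which branch survives.
    #define IF(c, t, f)  CAT(IF_, c)(t, f)
    #define IF_1(t, f)   t
    #define IF_0(t, f)   f

    // Minimal "to bool" table so a small number can drive the IF directly:
    #define BOOL(n)  CAT(BOOL_, n)
    #define BOOL_0 0
    #define BOOL_1 1
    #define BOOL_2 1
    #define BOOL_3 1

    // IF(BOOL(2), yes, no)  ->  yes
    // IF(BOOL(0), yes, no)  ->  no

The E/C distinction then boils down to whether an argument is allowed to expand during that argument scan or gets passed through by name and called explicitly inside.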

I think this little macro collection needs its own place and name... It's small but very useful when used carefully and correctly. You can't just catch errors like with a normal problem; everything depends on whether you know what to insert and how it connects to the other stuff, etc. Quite complicated without any knowledge of it, but it should be manageable once you're used to intense macro-ing. The only thing bothering me is that I'll need to prefix everything one day... It looks ugly writing code that way. But as I said, I'll add the prefixes later, once I've finished almost everything I'll need. It's fine to do that when it's finished, but as long as most of it isn't final, I don't think it's necessary. The next step would be adding the call indicators to all the other macros (not only IF), starting to implement bool-controlled for loops, and maybe tackling division later. Let's see where this goes - but first I need some sleep....

Those... "elementary things" leave me quite exhausted from time to time. Especially when it's something as fundamental as what I'm doing at the moment. Oh, and maybe I can also get rid of the FINF macro I use for subtraction...

3.12.2011

Gotcha

0 Comments
So I introduced a new FINFDP macro that passes two parameters to the macro it calls, so that I can create expressions like m(m(m(x,y),y),y) etc. That makes multiplication pretty easy to implement. It's a bit like functional programming (at least from what I can remember), because you can't use control loops or jumps in a flow of commands. You can only rely on inserting functions into other functions and so on. For cases like multiplying or adding zero, I added another parameter with an expression that gets called when a "null iteration" is inserted. So you can safely use MUL(0,3) or ADD(0,0) or whatever with no problems.

And I was also able to remove the NOT table! I only have six basic macros so far: MAXNUM, MINNUM, DEC, CAT, FINF and FINFDP, where the last one is the macro explained above. CAT handles concatenation, and it evaluates a macro before it gets concatenated with the ## operator. It seems this is the key to nested loops: you have to make sure you ALWAYS evaluate macros before they get inserted into their own body (where they'd become unreachable, because the preprocessor blanks out the macro currently being expanded). If you carefully code around this limitation, it's possible to define linear command sequences, as opposed to FOR loops that depend on counter values and thus on evaluation during another macro's evaluation. If you can evaluate them before you enter a macro that also needs to evaluate them, it's totally possible to nest them. If not, you have to define separate loops. It's interesting how this mirrors the call stack of higher programming languages (at least higher than C's preprocessor): each evaluation of a macro inside a loop amounts to a jump somewhere else, pushing a return address on the stack. Currently I only have one loop with n possible iterations, as every iteration is composed of all its next/previous iterations. Therefore you need to evaluate a huge set of macros! To spell it out: each macro evaluation is a jump. Thus each iteration is a jump, and each iteration needs its own place on the stack. If the stack is full, you've probably reached the end of your loop. Additionally, you can't jump to a position that's already on the stack - it forbids recursion by default. So what can you do? Right, add more macros, and thus more possible jumps and loop iterations. And because you can't just call stuff that's already on the stack, you can't nest loops with only one set of loop iterations.
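To make that "always evaluate before it enters its own body" rule concrete, here's a shrunk-down sketch of the repeated-application idea (not the actual FINF implementation):

    #define CAT(a, b)   CAT_I(a, b)
    #define CAT_I(a, b) a##b

    #define DEC(n) CAT(DEC_, n)
    #define DEC_0 3
    #define DEC_1 0
    #define DEC_2 1
    #define DEC_3 2

    // SUB_k(x) = x decremented k times. Each "iteration" is its own macro,
    // because a macro can never re-enter itself.
    #define SUB_0(x) x
    #define SUB_1(x) DEC(SUB_0(x))
    #define SUB_2(x) DEC(SUB_1(x))
    #define SUB_3(x) DEC(SUB_2(x))

    // SUB_2(3) -> 1, SUB_3(3) -> 0. Had DEC pasted its argument directly
    // (a##b without the CAT indirection), SUB_1(x) would glue into the broken
    // token DEC_SUB_0 instead of expanding first.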

It's quite an interesting concept in theory. It seems to be a Turing-complete (if limited) system. Reading about it on stackoverflow.com, someone has actually proven this. Hey, I could create an esoteric language out of this! Though I'm afraid one already exists (and no, I don't mean the preprocessor itself). Anyway, this makes my Lego computer less complex to build. I only need some kind of memory tape, decrementing, conditional jumps to address x, and an input tape for instructions! Yeah, I think that should be buildable. Maybe I can even reduce it to a pure Turing machine and let it run on a belt-fed jump system. A Turing machine seems to be just a system of two belts: read from belt one, decide, write to belt two, move belt one, move belt two. But programming it could be hard, as I'd need to figure out how to translate subtraction and so on into those commands. However, Rome wasn't built in a day. And the only way to comfortably feed this mechanism is to translate higher commands like addition, multiplication or even division (algorithm, watch out!) into those instructions.
0 Comments
My mind is weak today. I made a macro that alters a variable x by nesting a single function repeatedly, generating stuff like DEC(DEC(...DEC(x)...)) and so on. Plugging DEC in as the command, it decrements a value n times, making it an effective subtraction, albeit with a lot to write out. The cool thing is that you can also nest it as often as you want, because the x gets evaluated before it enters the loop itself. I also completely replaced all INCs with DECs and now form any further INCs and ADDs as NOT(SUB/DEC(NOT(x))). Since I use decrementing more often in loops and comparisons, I benefit from having less complex DEC operations. I also found out that I'll need to "catch" the overflow that occurs when subtracting a bigger value (b) from a smaller one (a) in order to check whether b is greater than a. Luckily, a Stack Overflow user once asked how to achieve this using gates in electronic circuits. So I have proof that I can do this comfortably with just decrementing and zero checks.

New born

0 Comments
So I decided to leave all this macro shit alone for a longer while and get into more interesting things. I mean, I totally drifted into realms I ended up disliking because of their limitations. So I should just ignore it for some time and continue somewhere else... I couldn't quite get past all of it, though, and found some interesting bit hacks (most notably this site) and other things. I'm looking for complete algorithms rather than hints, so I hope to find more of these bit manipulation tricks for later use, or just to warm myself up for building a Lego calculator thingy. The most interesting things I've found so far were XOR swapping, XOR linked lists and some other things around that operation. Their practical use is... debatable. But sometimes it's good to know about them - and to have them handy in case you need them. And since I currently don't have to watch deadlines or milestones, I can safely implement stuff I'd never use normally. And implementing already existing algorithms in general should heal my currently battered brain a bit.
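The XOR swap itself, for the record (the standard trick, nothing of my own - and note that it breaks when both operands are the same variable):

    #include <cstdio>

    // Swap two integers without a temporary. If a and b alias the same object
    // the value gets zeroed out, hence the self-swap guard.
    static void xor_swap(unsigned* a, unsigned* b) {
        if (a == b) return;
        *a ^= *b;
        *b ^= *a;
        *a ^= *b;
    }

    int main() {
        unsigned x = 3, y = 5;
        xor_swap(&x, &y);
        std::printf("x=%u y=%u\n", x, y);   // prints x=5 y=3
        return 0;
    }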

Note to self: don't try too hard.

Forget it

0 Comments
OK, forget it. It didn't work as expected. From dislike to failure - what a great thing.
0 Comments
I think I found a good solution. My INC/DEC wraps around if you increment the maximum or decrement the minimum. So when that happens, you know which value is bigger or smaller. I just don't know how to wire that up within this fucking macro system. I should finish the subtraction first, as everything depends on it. Then, once I can use its result, I can implement "i equals j" and go from there. The macro system kind of sucks because you have to repeat yourself all the time and can't really take all your previous "efforts" and combine them freely. It's tricky, and I'd much rather have the ability to recurse. But then, shouldn't I just write my own preprocessor? *sigh* Nothing ever gets comfortable with the standard tools and such. I think I went too far today and started to dislike all this rather complicated shit. I even started looking things up on Wikipedia for help, but you never get to read about actual implementations, only about this one variant and that other concept... At this point I think I've stumbled into something I don't want to implement completely. Only partially. There's some weird shit going on in the heads of instruction set designers (there are even sets with just one instruction. Yes, ONE FUCKING INSTRUCTION. There are also zero-instruction machines, but those don't work like software anymore), and I don't want to read any more about it. Which should also send me back to where I came from, to continue working on more productive things. If I ever do anything like this in Lego, I'll keep it binary and work with ANDs, NOTs, ORs etc. all the fucking time. I really never want to do this with literals again. It makes your head boom and your heart break.

Never trust instruction set designers.

3.11.2011

0 Comments
If I think about it, what I did with those macros was to define everything by inventing states. Look at it abstractly and you'll notice it amounts to a) formally creating all possible states of a processor word and b) defining the output of each elementary operation for every defined state of a processor word. As I mentioned in my previous post, I've been thinking about building a more or less mechanical computer out of LEGO. I don't mind using Mindstorms technology to translate things like addition, division and so on into truly elementary operations. Without my experiments with preprocessor macros, I wouldn't have been able to even think about it. Grasping this stuff on my own usually gives me a kind of glamorous feeling. Solving such "secrets" is what keeps me awake all night and day. It seems I'm slowly starting to unfold all the things that always fascinated me and that used to be magic to me by default. And still, the incredible chain of elementary operations on numbers that forms formulas never ceases to... AMAZE me. Though I dislike the definition-heavy side of maths, I dig the basics behind it and its practical use. I'm a "doer" when it comes to that, a person who simply says how stuff works, not one who could describe it in a very, very formal language. It took me time to learn something different from normal human language (I mean the way we usually form sentences and express things to pass information to other people), especially the computer's language. I don't know when exactly I started to think that a computer speaks in simpler words than a mathematical formula does. Since then I've always found maths... difficult. More difficult than before. When I began to figure out what I could do with computers and programming languages (under the watchful eye of my inner wannabe game developer), I started to see maths as computer commands instead. I translated a lot of "things" I knew into commands I knew from programming and figured out that EVERYTHING is an algorithm and that, to me, there's no real difference or falseness in that statement. Leaving aside the fact that "real" algorithms are of infinitely expanding complexity (see, the universe keeps growing), I understood how few problems a computer has in theory, and how it only needs to understand its own language. Geez, the time!

Anyway, this simple and always understandable language is essentially what should be easy enough to realize mechanically. Addition, subtraction and multiplication can be described with incrementing and bit flipping (aka NOT). The point where I asked myself what to design next for my macro set came when I tried to work out how to simplify division. And, well, that's not so easy, you know. Basically, a division is a (theoretically) possibly endless loop, depending on how big the dividend is and how small the divisor is. The loop stops when what's left after the repeated subtraction is smaller than the divisor (or bigger than before, in the case of wrapping unsigned ints). The value left over is the modulo - but the result of the division is the number of subtractions, so you need a second value to store the division result! I knew division was slow - now I really know WHY. This improves my understanding of computers by default - and gives a real feeling for what a pain it is for the computer to calculate such stuff. So my task is clear: create macros for comparison. Before I got to that point I had also tried to create some if-like things, but wasn't really successful. I must admit I took a look at Boost's macros to get an idea of how to build this decision-like thing. I had no idea they'd actually done it like everything else: concatenate one or more bool results and call a different macro for each of them. I also saw that they converted normal numbers into bools by mapping everything non-zero to one, much like I defined my "incrementation table" for the INC macro. It didn't make any sense to me at first, so I set it aside, but I took the boolean aspect of it as the more interesting part: how do we get a 0 for false and a 1 for true in equality tests? By subtraction! Not exactly, but it made me realize the ingenuity of Boost's approach (whether they actually do it this way or not, I didn't dig deeper than that). Meaning: you subtract the two numbers and convert everything non-null to one and the rest to 0 (or in Boost's case, 0 for null and 1 for anything else). Man, that's one awesome solution. And probably the only way to do it in such a world of minimal instruction sets (hm, seems I'm currently designing one... man, I feel so awesome today). I'm sure there's a way cooler and more performant implementation in modern processors - otherwise this stuff would take ages to calculate! Anyway, inequality is just the inversion of that, and less/greater could be similar. With bit-wise operations you can supposedly check for lesser/greater using ~i&j for i less than j and i&~j for i greater than j (exact for single bits, at least), with uints of course. Unfortunately, I couldn't find a way to implement bitwise ANDing, because I don't even have bits in my definition of numbers. Though I call it "4bit", it's actually just a set of literals from 0 to 15 (chosen as a power of two) with no bit relation at all (a NOT is pretty easy, because you just have to invert the ordering). AND also takes two parameters, so simply copying a generated table from a bit-based computer isn't very elegant. I guess there's another solution out there based on INC, NOT and bool conversion tables - I'll find it.
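Written out as ordinary code, the loop I mean looks roughly like this - the schoolbook idea, not how a real divider works:

    #include <cstdio>

    // Division as repeated subtraction: the quotient is the number of
    // subtractions, the leftover is the remainder - which is why one division
    // naturally produces two results. Divisor must be non-zero.
    static void divide(unsigned dividend, unsigned divisor,
                       unsigned* quotient, unsigned* remainder) {
        unsigned q = 0;
        while (dividend >= divisor) {   // loop length depends on the operands
            dividend -= divisor;
            ++q;
        }
        *quotient  = q;
        *remainder = dividend;
    }

    int main() {
        unsigned q, r;
        divide(14, 4, &q, &r);
        std::printf("14 / 4 = %u remainder %u\n", q, r);   // 3 remainder 2
        return 0;
    }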

INCrementing, DECrementing, NOT(h)ing

0 Comments
I thought about how to achieve addition and subtraction with a minimum of syntactical repetition and immediately remembered what I learned about the two's complement. I knew that one day I'd benefit from knowing about it! Especially cause I started to think about making a mechanical arithmetic machine with Lego... HOWEVER, implementing a NOT for my 4 bit preprocessor toolset was quite beneficial: I remembered how you can avoid implementing a special subtraction command by NOTing the number you want to subtract, incrementing it and then adding both. So it came to my mind that I should be able to effectively delete all decrement macros by replacing them with this two's complement "trick". It's essentially the same as ~((~num)+1)&15u in C/C++. Since I can create a decrement using nothing but NOT and the addition, it really decreases the amount of stuff one needs to write. The only thing bothering me at the moment is that you can't namespace preprocessor macros. I began to use a prefix, but it looks so fucking ugly and hard to read... you also can't use that many characters to separate or distinguish the calls, as they'll just be inserted as characters. I think as long as there's no reason to release it, I'll stick with no prefix. I just dislike reading it, that's all. Makes stuff a lot harder to differentiate.
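
A minimal sketch of the idea, with macro names of my own choosing (INC, NOT and DEC here are illustrative, not my actual definitions): the "numbers" are just the literals 0 to 15, NOT and INC are lookup tables, and DEC falls out of the two's complement trick, x-1 == ~(~x + 1) (mod 16):

#include <cstdio>

#define CAT_(a, b) a##b
#define CAT(a, b)  CAT_(a, b)

// bit-flip table for 4-bit values
#define NOT_0 15
#define NOT_1 14
#define NOT_2 13
#define NOT_3 12
#define NOT_4 11
#define NOT_5 10
#define NOT_6 9
#define NOT_7 8
#define NOT_8 7
#define NOT_9 6
#define NOT_10 5
#define NOT_11 4
#define NOT_12 3
#define NOT_13 2
#define NOT_14 1
#define NOT_15 0
#define NOT(x) CAT(NOT_, x)

// increment table, wrapping at 16
#define INC_0 1
#define INC_1 2
#define INC_2 3
#define INC_3 4
#define INC_4 5
#define INC_5 6
#define INC_6 7
#define INC_7 8
#define INC_8 9
#define INC_9 10
#define INC_10 11
#define INC_11 12
#define INC_12 13
#define INC_13 14
#define INC_14 15
#define INC_15 0
#define INC(x) CAT(INC_, x)

// decrement derived from the two tables: ~(~x + 1) == x - 1 (mod 16)
#define DEC(x) NOT(INC(NOT(x)))

int main() {
    std::printf("%d %d %d\n", DEC(5), DEC(0), INC(15));   // prints "4 15 0"
    return 0;
}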

Nested loops without recursive preprocessing

0 Comments
It's just too fascinating what you can do with it, so I couldn't just drop it without making some kind of nested loop construct. I have so many things to repeat, it makes it a lot easier to generate code with two or even three nested loops. But it won't work without making multiple loop macros or so. I decided on a 4 bit number range for everything my little preprocessor macro collection can calculate. That said, I only need 16 different loop definitions and don't have to worry about updating too often. And it lets your preprocessor parse less than with just too many numbers you don't really need at all. Yeah, so how many loops could be useful? Two? Three? Four? Or more...? I'll stick with three I think. Sounds like an acceptable amount, though I'll probably ALSO need at least one loop for adding and subtracting.
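
To sketch what I mean by needing multiple loop macros (all names here are invented, not my actual collection): since the preprocessor refuses to expand a macro inside its own expansion, the inner loop has to be a second, textually separate set of definitions:

#include <cstdio>

#define LOOP_0(M)
#define LOOP_1(M) M(0)
#define LOOP_2(M) LOOP_1(M) M(1)
#define LOOP_3(M) LOOP_2(M) M(2)
#define LOOP_4(M) LOOP_3(M) M(3)
#define LOOP(n, M) LOOP_##n(M)

// a second, independent loop so it can be used *inside* the first one;
// if ROW below used LOOP again, the preprocessor would simply not expand it
#define LOOP2_0(M)
#define LOOP2_1(M) M(0)
#define LOOP2_2(M) LOOP2_1(M) M(1)
#define LOOP2_3(M) LOOP2_2(M) M(2)
#define LOOP2(n, M) LOOP2_##n(M)

#define CELL(j)  std::printf("%d ", j);
#define ROW(i)   LOOP2(3, CELL) std::printf("| row %d\n", i);

int main() {
    LOOP(4, ROW)   // prints 4 rows of 3 cells each
    return 0;
}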

Besides this, I'm thinking about rewriting the loop a bit to enable number mapping. This is useful if you want to make for-loops or so. Hm, gonna test some ideas then.
0 Comments
Couldn't get far with this one. It seems that the only way to nest macro loops in loops is to generate more loops... secondary loops. I read some stack overflow posts and yes, they're using a technique where they combine #ifdef and macros in some kind of woah way (I'd call it ugly cause it's probably ugly to read, but since they can do more with that than me - I can't say anything!). So they can generate more and more macros with this trick and get their much more versatile thing done. I don't like to admit it, but I may consider taking a different look at it. It's so much work, not rarely extreme amounts of work, I see right there from source to license. I still don't like it at all, it's too much stuff I disagree with. Too many too complicated template constructs the compiler needs to instantiate and evaluate. I mean, take a look at all these functionalities and tell me why this doesn't look like somebody wanting to integrate a special syntax for something you can also get otherwise... It seems forced, born out of "we can do it" and completely detached from reliability. Reliability in the sense that you can't rely on it being simple enough to ever work as minimally as it could. I don't know. I still dislike it.

And that's also the end for today and probably for macro programming, too. I have better things to do.

Yessssss number 2

0 Comments
Got my macro loop and my number system done. Now you can say how many times you want to repeat, which macro to call using () as the separator and which macro to pass the index to as an argument and then insert it. That should be enough for today, it's quite a tricky thing to strip off all the oddities and understand how to structure your macros in which way. One thing I found of great use: you can simply pass a macro name as an argument and insert it using (), as long as the macro was defined with an empty pair of brackets (otherwise it'd be evaluated right where you passed it - making a huge and not so nice mess in the preprocessor output, "gcc -E" in my case).
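
For illustration, here's the gist of that "pass the name, append the brackets later" behaviour, with made-up macro names (not my actual loop): a function-like macro passed without its parentheses isn't expanded at the call site; the () appended inside the other macro is what finally triggers it:

#include <cstdio>

#define HELLO() std::printf("hello\n");

// M is expected to be a macro defined with an empty pair of brackets;
// the () appended here is what makes it expand, not the call site below.
#define REPEAT_3(M) M() M() M()

int main() {
    REPEAT_3(HELLO)   // expands to three printf statements
    return 0;
}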

Yeah, I think I can work with it comfortably. I really like the idea of using this macro stuff for later excursions to "C only" platforms. We don't have templates there, so it might be of use to insert N vector operations using macro constructs instead! Cool stuff, really cool stuff. The more time I take to explore the depths of C and C++, the more I know how to achieve things that save a lot of time and work. And as I learned last semester, the more of it you save, the better you feel. And if it's also efficient besides being otherwise useful, you've got the greatest deal there. Ok, ok - not everything is soooo perfect that it eases your work to no end while staying efficient. I didn't say that, but from time to time I like to see my recent toolkit additions as something like that. And my current loop macro does a great job of keeping options open for less trivial and complex constructs (in theory, at least).
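
For instance, something along these lines (a sketch with names I just made up) is how I imagine stamping out those N vector operations on a "C only" platform, where templates aren't available:

#include <stdio.h>

/* Generates a fixed-size, component-wise add for vectors of length N. */
#define DEFINE_VEC_ADD(N)                                    \
    void vec##N##_add(float *dst, const float *a,            \
                      const float *b) {                      \
        for (int i = 0; i < N; ++i) dst[i] = a[i] + b[i];    \
    }

DEFINE_VEC_ADD(2)
DEFINE_VEC_ADD(3)
DEFINE_VEC_ADD(4)

int main(void) {
    float a[3] = {1, 2, 3}, b[3] = {4, 5, 6}, c[3];
    vec3_add(c, a, b);
    printf("%g %g %g\n", c[0], c[1], c[2]);   /* prints "5 7 9" */
    return 0;
}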

Geez, I'm so narcissistic.

3.10.2011

0 Comments
Couldn't stop thinking about it and began to play with macros instead. In the case of macros, Boost has a cool set of really impressive macros to deliver. However, I didn't want to spoil all the fun I could have on my own and tried out my own preprocessor-based iterations, loop constructs and so on. In the end, they seem to achieve it similarly, as C preprocessor macros basically work with jumps and concatenation. So if you want to write a macro that generates a sequence of tokens for you, you can only achieve this by defining each loop iteration as a single macro. You end up with as many macros as you need iterations and then create a macro that concatenates the name prefix with the number you've given. As long as you indexed all your macros in incrementing order, it maps its index parameter directly to a macro name. You can essentially do nothing else cause recursion doesn't work here - it's not possible with the way macros get evaluated in the preprocessor.

That said, you can do the same thing for creating increment and decrement operators. Something like "#define INC2 3", "#define INC3 4" etc - really atomic stuff you'd usually never bother about. So if you combine this with your loop, you can end up with an all-same expression for each iteration and you just need to copy the line and increase the number for more iterations (as opposed to always concatenating and hardcoding the previous loop iteration). Quite convenient I have to say. And really interesting! If there's incrementing and decrementing, you can also add, subtract, divide, multiply etc... I really enjoy this kind of stuff at the moment as it's something fundamental, something that makes you scream "OH GEEZ, IT'S WORKING" when everything's fine. And as a result, it'll enable me to generate indexed sequences, typedefs etc... As I noticed that I'll need to define a larger amount of only incrementally different template parameters to get a universal module interface done, I had the desire to create functions that make sequences of C code. It bears some problems of course. You can't concatenate ",", ";" or "=" for example; those are just not valid tokens to paste and need to be included otherwise. It's OK I think, but stuff's getting about as hard to write using cpp macros as it'd be using templates. However, the preprocessor is the next instance above templates (as templates are the next instance above normal code). Therefore there's no way around cpp macros in C++ if you want to actually create real code without optimization on. I'm trying to keep it simple and essential in functionality. There are good chances that it can solve some problems I had before.
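
As a tiny sketch of what I mean by generating indexed sequences (names invented for illustration): each iteration macro emits one complete declaration, so no ";" or "," ever has to come out of a ## paste (which wouldn't form a valid token anyway):

/* The indexed macros are the "loop"; DECL is the per-iteration code. */
#define DECL(i)   typedef float vec##i##_t[i + 1];

#define SEQ_1(M)  M(0)
#define SEQ_2(M)  SEQ_1(M) M(1)
#define SEQ_3(M)  SEQ_2(M) M(2)

SEQ_3(DECL)   /* expands to the typedefs vec0_t, vec1_t and vec2_t */

int main(void) {
    vec2_t v = {1.0f, 2.0f, 3.0f};   /* vec2_t is float[3] */
    return (int)v[0] - 1;            /* just to use the generated type */
}
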
0 Comments
Oh man, that mix of macros and templates I used to get some kind of template-based formula done really is a mess. It'd be possible to define all kinds of formulas in such a construct, but I think it's just better to create class-internal functions instead and call them from there. Really, you don't want to see that. I can cope with templates in templates and so on, but not if the templates aren't actually templates in the template sense but templates in the sense that you'd normally solve it by using templates FOR templates - except you can't, cause there are too many stupid dependencies and you're forced to use template parameters just like parameters you already know and which you can't iterate, cause you always need to give a fixed number of parameters. I thought about using stdarg, but there wouldn't be any type check and thus you wouldn't be able to tell what parameters to pass! Plus you'd need to invoke functions during compile time for that, making it totally useless for strongly typed stuff...

Well, it was an idea; good to know that I now know it wasn't really a good one.

Another boost part

0 Comments
Reading the German Wikipedia entry about metaprogramming, I feared that my new "invention" (haha, what a coincidence...) of code generation (not execution) during compile time is an old hat and already implemented somewhere more effectively than my own (and thus a potential inspiration). Being extremely paranoid about breaking my head over stuff that's already done as well as I want it, I noticed something really stupid. Metaprogramming can be interpreted as a) compile-time execution of code or b) compile-time code generation. Either way you can generalize it as compile-time execution, but it's essentially different in concept and use. I took a second look at Boost's Meta Programming Library (MPL) and was kind of suspicious - what does it do? Is it really such an annoying set of extreme template nesting? Or is there something actually compile-time based? So I downloaded it, studied the source code and was glad that it's really just a bunch of template classes with no real code relation, but macros and so on. I almost fell into some kind of deep hole without knowing it. It was just an extremely short time I feared my time was already wasted, but still... such multiple possible interpretations mix up my whole world, always, all the time. Seriously, I UTTERLY HATE such stuff in programming or technical/scientific stuff in general. It's like saying your dad died but then adding that dying is a slang term for having bad sex or so... Whatever, it almost shocked me. Need to cope with that and not judge before looking at the actual library implementation. So I can keep on programming my really completely different approach!

Oh, and I think I found a possible solution to give some caller function a set of variables, define their input/output mode and then pass them correctly to a series of other functions depending on what you said in the template parameters. It's a bit smudgy in syntax, but should be relatively flexible. So it should be possible to implement new formulas and data flows based on template parameters and implicit instantiation. Since all this stuff will NEVER work without inlining or basic optimization (like always replacing "(0>5)?(10):(3)" with "3" cause 0 will never be bigger than 5 and such), I can rely on the compiler's smartness for such simple constructs. It's not possible without it, so any simple rule based on constant expressions and inlining is valid here. Oh, and I also don't apply callbacks to such funcs or RTTI to their classes, so it's totally possible to get inlining "all the time" (there are still limitations, but these shouldn't be too restricting). It's good that I know how to cool myself down and tell myself that everything's right, that I don't need to worry about it. All this "maybe inlined, maybe not" makes me constantly nervous in moments of doubt and paranoia. That doesn't happen when everything's working fine, of course. Geez, the inner depths of programmers are quite dirty.

New concept

0 Comments
Today I had a lengthy train ride and a lot of time to think about and test the best possible ideas and concepts for my operator system. I noticed how inconvenient and limiting it can become to not have an implicitly parameterized operator. For example, logic operators! If you take the references A and B, you'd normally define the return type as A, but for a logic operator as bool. So if you have your class imitating operator behaviour, you'll need to implement a new kind of operator returning a bool or bool-alike value cause the other operator functions always return type A for compatibility. That's just one side effect, but one generating a very, very inconvenient-to-program interface (which is also annoying to use - wouldn't you prefer only two almost equal operators instead of three, or even four different ones?). The second side effect I noticed is that you may have division and modulo, which are basically the same operation, but you can only get one value back in C/C++, making it impossible to reduce them from two operations to one. You'll never really be able to change that, except by using inline assembler maybe - it's just an example that may be performance-critical for string numbers, custom data types and others. So using normal operators you need to make two operations out of it or simply write your own functions. My plan now is to create a set of functions with in, out and in/out reference parameters to always have as many implicitly parameterized output variables as possible. I wasn't quite sure whether it's harder for the compiler to optimize then or not, so I did a quick and not that thorough comparison of some more and less lengthy terms requiring anonymous or explicitly defined temporary variables. Both assembler outputs are exactly the same, so I really don't need to worry about performance at all (as long as optimization is turned on, of course). It's really basic and simple stuff, essentially involving inline functions only, but every function is inside a class you can parameterize while still having the same function call layout. This gives an interesting flexibility in terms of chaining and per-element operations on storage classes (but I think I repeated that often enough in the last posts to not write about it again... I'm getting tired of it...). It's a bit like using a modular synthesizer: you have your modules, filters, etc. and a variable is a wire or generator. So in order to make stuff happen, you can create programs this way, essentially ending up with some kind of "assembler-like" syntax of, well, "set A to B" or so. Yeah, that sounds really cheap and simple, but it's only the most basic part of it. You still need to provide a full class interface for custom numerical types, operator functions for your storage classes etc... However, stuff's getting more and more interesting, so I might create some very special case modules to chain operations like you'd chain them with normal mathematical operators. Dunno how to generalize that, but I guess I can do it without any real problems. However, the input/output/io idea is the most final one I think. Theoretically no overhead, more possibilities than by simply using such one-directional C/C++ operators and as much custom shit as you want.
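
A stripped-down sketch of the in/out parameter idea (divMod and its signature are mine, just for illustration): one call hands back both results, which a plain operator/ or operator% can never do in a single operation:

#include <cstdio>

template <class T>
inline void divMod(const T &a, const T &b, T &quot, T &rem) {
    quot = a / b;   // both results come out of one call;
    rem  = a % b;   // for custom number types this is where the saving happens
}

int main() {
    int q = 0, r = 0;
    divMod(17, 5, q, r);
    std::printf("%d %d\n", q, r);   // prints "3 2"
    return 0;
}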

Yeah, not much to say about that. But I can think of it easing implementations of synthesizer formulas, image renderers etc. It's only as cool as the stuff I come up with for it, so nothing is won by just making it available as a concept. When I've finished everything needed to make template-generated code as quick as possible to design, I'll go back to where I came from - applying it to n-dimensional vectors and then arrays.

3.09.2011

Good idea

0 Comments
So, after a day of completely NOT programming-related activities (I just did too much of this stuff lately...) I feel refreshed and clear in mind; it's great to completely shut off from time to time (I like the term "brain afk" for some reason). I realised that my operator system is also an almost perfect solution for all the usually iterator-based memory concepts. Provided you can create, access and delete single elements through a basic storage class, one should be able to use this operator system for EVERY set with n elements. Even better, you provide a set of functions to work on a single, serial or whatever else sequence, taking an operator to apply. This makes things more interesting; it'd be a possibly "equal" (though only in a static sense) version of what STL's iterators/containers can do. Yeah, I think I've finally found what I was trying to achieve in the past and up to the present. It's a good feeling to have a cool solution for stuff like that. It's of course still more complicated to write, but it enables you to really do a lot of different things with theoretically the same performance as if you'd written it all directly. So far I've finished all basic operators, so I can move on to experimenting with them for all kinds of classes. In theory, I could apply it to everything that stores more than one element - vectors (the mathematical ones), arrays, lists, bitsets, trees... whatever; I just need to write a set of covering template functions with the basic input types and then the user is free to insert whatever operator he wants. I'll apply it to all related classes I've finished so far and see what other basic operators besides the standard ones could be useful. Things like power-of-two sums would be a matter of calling a sum function and passing your power operator. Or power series in general (it's just a great example of iterative formulas), simply made easy. Oh, and it'd also be MUCH easier to create all my previously planned mathematical formulas for approximation AND link them to a memoization array! Wow, that's quite a fucking thing. Shit yeah, this eases my rather visionary ideas to no end, except for the fact that it's just not as comfortable to write. But well, better a consistent and simple interface giving you all freedom to combine than an overloaded but limited monster. I also prefer the module-like technologies, they tend to blend so much... smoother. And are less hard to nurture.
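
As a small sketch of the "pass your power operator to a sum function" idea (class and function names are mine): the per-element operation is a template parameter, so a sum of squares, a power series term or whatever else is just another operator class:

#include <cstdio>

struct Identity { static double exec(double x) { return x; } };
struct Square   { static double exec(double x) { return x * x; } };

// Sums Op::exec over n elements; Op is resolved and inlined at compile time.
template <class Op>
inline double sum(const double *v, unsigned n) {
    double acc = 0.0;
    for (unsigned i = 0; i < n; ++i) acc += Op::exec(v[i]);
    return acc;
}

int main() {
    double v[3] = {1.0, 2.0, 3.0};
    std::printf("%g %g\n", sum<Identity>(v, 3), sum<Square>(v, 3));   // 6 14
    return 0;
}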

3.07.2011

Yessss

0 Comments
I've done it baby, I've totally done it. It's just awesome to see stuff working as expected, PLUS the system is ready for serious metaprogramming! Fuck, I never thought I'd ever get it to work in C++. My current experiment involves a set of "function classes" that provide a static execution function with a set layout, possibly implicitly parameterized when using a template function instead. In the current test environment it's a set of operator functions, similar to globally overloaded operators. They are only compound operators, but combined with a class providing two functions that take these function classes as template parameters, you can a) call the exec() function of the provided function class and b) generate a normal operator for EACH defined compound operator. It's like a normal operator call, just with a different name and usage.

Usage is quite simple, though much more complex to read. This simple statement:

c = a + b

Would be written as the following:

c.compOp<Set> ( a.op<Add> (b) );

You see, it's basically the same but in function notation, using names for the operator type and a template parameter for the operation to execute. The awesome thing about it is that the operation you pass can be executed INLINE, thus no overhead as opposed to calling functions RTTI- or callback-based. Provided your compiler settings enable excessive inline use, you can pass any function class matching the calling convention into the compOp/op method without any overhead. That's fucking great for a number of advantages, not only limited to operator-style use:
  • write filling functions for arrays without repeating for-loops over and over again
  • create a couple of "shape tracers" for your array and fill them with whatever special filling function you like
  • save time implementing more features for the two above
  • insert any code you want anywhere
  • ensure consistency for code that's shared by a series of functions in general (like alpha vs. additive blending or so)
  • etc etc etc
That said, you can do some really interesting things - limited, but still amazingly useful. And for you debugging fetishists: you can still deactivate optimizations and use stack tracing etc for even more sophisticated debugging (as opposed to "reincarnating" shared code over and over again, generating mistakes if you change one copy and forget to do the same for the rest).
And yes, it's fucking typesafe, though it only complains when you instantiate the function. Much better than I expected. You can even parameterize a function class to execute another parameter-defined function class inside the previous function class. So it's all up to you to chain whatever you can imagine. Geez, I love that. Using this I can continue my old work right from where I stand! That was a huge step closer to my dream of getting the quickest and most flexible software renderer I can write without getting too deep into lowlevel optimization, case-specific assembler code and so on. If I think about it, there are so many more things I can simplify using this. First of all, it's the "poor man's" polymorphism during compile time, but with no overhead during execution (provided you activate optimization and inlining of course). That means I can use it to do all the stuff I was only able to do using virtual members before I created this little awesome technique. I'm not even scratching the surface, nor do I really know what else to use it for other than array operations etc. Time will show its real use, I bet. So far, it does seem to do a great job for what it was created. I'll write a series of function classes I can think of and see what else comes to mind. Geez, this way I can implement so many things I couldn't before. That's almost everything I was ranting and beefing about before...
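
To give a flavour of it, here's a heavily stripped-down sketch (Value, Add, Sub, Set and the member names are my own stand-ins, not the real classes): the operation is a type with a static exec(), handed over as a template parameter so the compiler is free to inline it:

#include <cstdio>

struct Add { static int exec(int a, int b) { return a + b; } };
struct Sub { static int exec(int a, int b) { return a - b; } };
struct Set { static void exec(int &dst, int src) { dst = src; } };

struct Value {
    int v;
    // binary operation: hands both operands to the function class
    template <class Op> int op(const Value &rhs) const { return Op::exec(v, rhs.v); }
    // compound operation: applies the function class to this value
    template <class Op> void compOp(int src) { Op::exec(v, src); }
};

int main() {
    Value a{2}, b{3}, c{0};
    c.compOp<Set>(a.op<Add>(b));   // roughly the "c = a + b" example above
    std::printf("%d\n", c.v);      // prints "5"
    return 0;
}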

Wait, take a break. Gonna digest the whole thing first. It's no good idea to feel rushed by possibilities rather than by actually finished stuff... However, I can't wait to use it!

Too many operators

0 Comments
Oh, I totally underestimated the amount of operators required to implement all the operations I planned for the multidimensional array. For a one-dimensional array it's easy to specify: a range inside a one-dimensional array has a starting point and a size or length. Just operate on the elements in the range (or outside it, if you want a reversed range) and everything's fine. Though you'd need to call operators explicitly due to having more than one parameter, it's really easy, cause you probably only have one type for the range. With two-dimensional arrays it becomes more work: how do you store a two-dimensional range? Either as a vector, as an array with two elements, as a scalar value representing an all-same vector or by specifying the parameters directly. The last possibility is a bad one, cause you a) need to specify each parameter by hand and b) can't implement it in a generalized way. So as I already distilled, the best way to specify such vectors is to give a scalar value, an array of values (very useful if you want to hardcode values) or your own struct. I could implement it simply by always converting the array/scalar content to a vector, but that requires creating a temporary and so on. A bad idea if it's done in realtime. Converting beforehand is not an option either; some libraries use arrays instead of vectors or even their own format. It doesn't make any sense not to include them - it's too versatile and keeps it open to other interfaces and so on, blabla.

It seems that the only real "solution" to this problem is that I need to specify each version of the operator. Instead of one operator with only one type, you'd need four overloaded operators for two types and eight overloaded operators for three types. That's a SHITLOAD of operators! Imagine that! C++ gives you 29 operators (maybe I forgot some) you can use to combine two objects. That's a total of fucking 232 operators! Even if you write them all in a line, it's a bit of a shame when you only need to make such small changes between them. So I'm thinking about another solution, probably involving some kind of template metaprogramming to make classes with only functions getting inserted inline during compile time. That way I can keep it down to only 29 single, component-wise operations. Hope I can also find a way to conveniently pass settings and so on, making it useful for all kinds of other, more special functions.

But geez, 232 is a BIG number... I'd rather go the hacky way this time; I think it's just too much annoying work to do for many, many rather simple changes.

Const call-by-references vs. const call-by-value

0 Comments
Started to rewrite some (not to say ALL) parameters of a bunch of classes to be "const". That, and I'm beginning to doubt the usefulness of my "ConstRef" meta class (it gives you either a reference type or a plain value type depending on what's smaller). I used it to decide on minimum data transfer during compilation time, but I have my doubts whether it's better than just using const T&. Even if the resulting type is smaller, you need to invoke the template machinery, whose output will maybe not be as optimized as the const reference variant. Think about it: when inlining a function, you'll probably want to "directly insert" a constant reference if you only read and never change the variable or depend on a type conversion before the function body starts. So using a constant reference directly tells the compiler much more about how it could possibly insert it directly. And well, a constant reference SHOULD be the optimal case: no changes made to it and theoretically no stack operation required. And what about a constant value as a parameter? Looks bad. It's constant, yes, but it's a copy of the variable you got and always will be. Unless your compiler is smart enough to detect it automatically, it's still a variable to push on the stack. And what if you use a normal value parameter for implicit type conversion? The compiler can't know exactly, so it'll either push a copy on the stack or insert it directly, creating something unexpected when compiling fully optimized. That's a point, eh? Also, I think copying something small can become rather ineffective, since you need to bitmask stuff, do type conversions and so on if it's an int, char or the like. I learned that using the system's native word size is the most performant way to do operations. So if the compiler doesn't inline the function for a number of reasons, it'll simply copy an address which has the size of a word. Though you'd then need to load the pointer value and then the variable value... It's more work, but it's probably much cleaner for optimizers to look at. And cause I'll never again code something CPU-heavy without optimization on, I should risk it and be happy.
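
For reference, the kind of "ConstRef" helper I'm talking about boils down to something like this sketch (my own reconstruction, not the actual class): pick plain T if it's no bigger than a pointer, otherwise const T&:

#include <cstdio>

struct Big { double d[8]; };

// Chooses between by-value and by-const-reference at compile time:
// small types travel by value, big ones by const reference.
template <class T, bool Small = (sizeof(T) <= sizeof(void *))>
struct ConstRef { typedef const T &Type; };

template <class T>
struct ConstRef<T, true> { typedef T Type; };

// usage: the parameter type is picked per instantiation
template <class T>
void consume(typename ConstRef<T>::Type value) {
    (void)value;
    std::printf("consumed a T of %u bytes\n", (unsigned)sizeof(T));
}

int main() {
    Big big = {};
    consume<Big>(big);   // instantiates consume(const Big&)
    consume<int>(42);    // instantiates consume(int)
    return 0;
}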

You can only give hints to your compiler. It still knows better than me what to do with a large chunk of code! It knows what my computer's able to do - I do not, except that I know some random Hz number and memory sizes, but not much else. I didn't even know there's an fsincos command in assembler, so I'm just a random guy being concerned about his code running quick and fine. In a more special sense, I'm still more of a videogame programmer than anything else, so I can take that as an excuse to let others do the real lowlevel optimization work.

More dimensions

0 Comments
After some bits of annoyance and anger about bit manipulation and how questionable it is to use memoization for a game's essential part, I simply decided to freeze it until I need it or find it more necessary than other stuff. I'm putting my array class into more than one dimension and must admit that I can't see ANY reason for having a separate one-dimensional class then. Slowly everything unfolds; I'm figuring out how to make certain "special", usually one-dimensional operations multidimensional. In combination with loop unrolling, I hope I can use it for simple tasks like adding/subtracting a smaller array component-wise from a bigger array. Such an array is basically just an ordered set of vectors, thus it must be possible to define things like adding and subtracting, too. Of course, a pointer/string array or so doesn't have all the number operations. I abused template instantiation in my dimension class to get warnings about incompatible operators only when instantiating such an impossible function. I really don't care about it being compatible with other compilers or so, but for me it's a useful behaviour I currently don't want to miss in my codebase. That said, I'll just define the same set of operations I have for my dimension class for my dimension array class, too. You could use it for almost everything then. Need additive blending in your software renderer? Just call dst.add(src,pos,size) and you're done. Sounds nice, eh? It makes stuff a lot easier this way. I'm kind of re-implementing older stuff in a more versatile and effective manner. It's a cool concept I can use for everything game-related, not only grid-based games. I'll extend the concept to feature not only rectangular/box-shaped filling, but a filler system in general. Not sure how to exactly integrate it, but it should work once I've implemented it successfully. This also makes it fundamentally easier for me to create an animation system in a grid-based game environment (it'd essentially be like rendering shapes into the grid and thus getting all physics/collision information combined with the data and so on). Tackling it n-dimensionally gives a lot more ideas how to push and copy stuff from A to B and so on.
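
As a taste of what dst.add(src,pos,size) would boil down to (a naive sketch with my own fixed-size Grid type, not the actual dimension array class): a component-wise add of a smaller block into a bigger one at a given position, e.g. for additive blending:

#include <cstdio>

struct Grid {
    float v[8][8];
    // adds an sx-by-sy block of src (taken from its top-left corner)
    // onto this grid at position (px, py), component-wise
    void add(const Grid &src, int px, int py, int sx, int sy) {
        for (int y = 0; y < sy; ++y)
            for (int x = 0; x < sx; ++x)
                v[py + y][px + x] += src.v[y][x];
    }
};

int main() {
    Grid dst = {}, brush = {};
    brush.v[0][0] = 1.0f;
    brush.v[1][1] = 1.0f;
    dst.add(brush, 3, 3, 2, 2);   // "render" the 2x2 brush at (3,3)
    std::printf("%g %g\n", dst.v[3][3], dst.v[4][4]);   // prints "1 1"
    return 0;
}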

However, this is a bit of future music and before anything happens this way, I need to make the class ready for setup.

3.06.2011

Random

0 Comments
I noticed that g++ is rather "free" in its choice of which compiler settings to honour and which not. For example, the success of disabling RTTI with -fno-rtti also depends on what's used in the source code. I couldn't disable it with "virtual" keywords in the code, just as a test. Sometimes it is convenient to force specific classes to keep an interface for consistency reasons. It shouldn't be something too hurtful for the compiler to check whether a non-RTTI class has defined some of its interface methods. However, there doesn't seem to be a way to force the class to behave like that without including the RTTI machinery. Again, something annoying you need to manage yourself. I like to use templates the way most Java programmers I know like to use RTTI-based OOP. If you want to pass an object that can have different implementations (take STL containers for example) but all of them use a single, always equal interface, you'd usually derive each implementation from a base class. You can't do this without virtual dispatch, and since I rarely feel the desire to change the implementation during runtime or use them STL-like, I'm solving it by using a template parameter representing the class to use. This is a) statically known and b) thus doesn't drag RTTI through the whole program (elementary design principles ensured). The drawbacks: a) no obvious dynamic implementation possible (acceptable) and b) you can't specify an interface without using the virtual keyword. It's sad that they force you to do so, but I can't change it. I need to find an alternative to OOP for my own language. As I already said, what you can do with OOP can always be done without; it's a question of how much discipline you have and how comfortable it is to write an alternative in the language you've chosen. Personally, I believe it's a good way to explore template- or macro-based programming in my own language. It probably results in more flexibility but also in more complex use. Anyway, programming more and more in C++ improves my concepts and builds an ever cleaner vision of how and what to design with my own programming language. As I've seen yesterday after a rather "forced" session of brainstorming combined with just too much anger about C++'s inflexibility, I shouldn't try too hard to make it super multifunctional. I thought too much about making an efficient bytecode instead of designing the language's specification first. Don't optimize if it works great without - something I'm not always able to do. Usually cause I have too few goals to follow except elementary precepts. It's a bit of a shame that I don't always direct my knowledge and abilities into something defined and set. Guess that'll happen later when I work at a company or so, making the software they develop there. Maybe I'm just too strict with my personal projects. I can't really say, cause such a high degree of effectivity is what I seek to achieve when I don't need to focus on deadlines, specifications or customer wishes. Nahh, fuck it. Everyone has their kind of special personality when it comes to personal projects. So there isn't much to worry about.
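
A tiny sketch of the pattern I mean (Logger and the sink classes are invented for illustration): the "interface" is only a convention both classes follow, and the implementation is picked by a template parameter at compile time, so no virtual functions are involved:

#include <cstdio>

struct FileSink { void write(const char *s) { std::printf("file: %s\n", s); } };
struct NullSink { void write(const char *)  { /* discard */ } };

// Sink is the statically chosen "implementation"; calls are resolved and
// inlined at compile time, no vtable anywhere.
template <class Sink>
struct Logger {
    Sink sink;
    void log(const char *msg) { sink.write(msg); }
};

int main() {
    Logger<FileSink> a; a.log("hello");
    Logger<NullSink> b; b.log("ignored");
    return 0;
}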

Fable 3 is coming for PC!

0 Comments
Yes! That's what I WAS WAITING FOR. May 19 is the EU release! I'll be the one to grab my box from the first pile in reach. And I'll buy every collector's edition I can find there! No matter the cost. At least I wish for a collector's edition, since I'm such a big fan of it. And if there's no such edition, I'll be pretty, pretty disappointed! However, the game itself will help me to overcome it...

Done

0 Comments
I took the classic callback approach and it seems to work very well. I also added an incrementing counter to see how many values are already filled and so on. Now I have two memoization arrays: one based on preallocated objects (aka a simple array with bool flags) and one based on dynamically allocated objects. The second variant uses the stored pointers as status flags. Combined with returning a pointer to constant memory, I get nice read-only functionality while keeping it open for really big objects to store. And again, in this case it's rather "the C way", if only C had references... That's probably also why I prefer to code in C++ - too many nice extras you don't get in classic C code. However, since this is solved, I need to get my third variant running. It'll be based on the idea I've been carrying around since I began implementing memoization. And I've now sorted out how to merge it into a better system with less frequent "chunk" allocation, depending on how the binary pattern of your index changes from call to call. One variant gives fewer allocations when the lower bits don't change frequently, the other works similarly but with the higher bits. So in the end it depends on what you want to get from it. I'll stick with the first variant, since I think this is more common than the other version. I thought about using even more binary splits, but where's the point then? It'd be just more bit masking with no real use, except in combination with a single flag indicating whether a chunk is fully completed, thus marking its child chunks as complete and speeding up such accesses. Hm, not a bad idea! But kind of lame, cause no matter how hard you try, it's always faster to simply make two checks with only two splits instead of n splits and even more checks. Simplicity is the key here; don't make anything too fancy - you'll regret it when it comes to performance and compatibility.

3.05.2011

C++ template function callbacks

0 Comments
I'm having problems implementing a flexible memoization class in C++. The point is you can't make it clean without RTTI support if you want to specify a function somewhere in another class. I tried making a special memoization entry class that holds the necessary type information, but this somehow does NOT work, as I'm trying to do something like T::Type where T is a template parameter. I haven't seen a solution to this problem and want to keep it "as inline as possible" this time. But thinking about how rarely I'd actually swap the calculation function itself, I may consider using a callback in that case. At first I searched the internet to see whether it's actually possible to do so, but found nothing of use. So I just tried it out, and it works by implicitly AND explicitly instantiating a template function when assigning its address to a function pointer:

#include <cstdio>

template <typename T>
void func(T i) { printf("%d\n", i); }

typedef void (*templatefunc)(float);

int main() {
    templatefunc tf = func /*<float>*/;   // T is deduced as float from the pointer type;
                                          // writing func<float> explicitly works as well
    tf(-2);                               // the %d/float mismatch is what triggers the warning below
    return 0;
}


You see, GCC outputs a warning about i being a float, which means the template instantiation was successful! I don't know whether it's standard, but I bet it is. It works in all cases and it's OK as a "fallback solution" for when my template ideas don't work due to C++'s fucky template system. I need to push my own language one day.

Design change

0 Comments
Changed the colors again. Also made the post border visible; it just looks better with some border around. I'd have kept it with straight lines, but the dots make it somehow lighter. So it's not all that bad to have a template with a rather strange appearance instead of actually valuable aesthetics...

Noise

0 Comments
I totally forgot about my fine but very small collection of noise music. I'm currently listening to Bad Sector's "The Harrow"; I can remember ordering it cause it wasn't available for free on his website. It's a really cool thing when artists do that, though it did prevent me from buying the stuff he'd already made available for download... But well, I don't listen to it that much at all. I really like what he did with "The Harrow", especially the opening. Short bursts, awesome sounds - that's what I love about it. It sounds cool enough to motivate me a bit more to finally buy an ASIO-supporting sound card and start making some music again. I haven't done that for a while; maybe it'll please me a bit more than just switching between gaming and programming (and listening to music all the time or watching speedruns). I also have a little sample snippet from Gridlock's "Scrape" lying around here (found it on Wikipedia a LONG while ago) and must say I could listen to more like that. Surely, I may not like most of the Power Noise out there (usually too "dumb" in sound for me), but this one kind of "intrigued" me to find some more of it, hm... Maybe it's hard to find the good stuff by just looking at google pages, eh? Nah, drop it. I can live without it. Nobody needs everything.
0 Comments
I'll switch to bools for the calculation flag in my memoization system. I weighed the cons and pros, but I can't seem to find any disadvantage except higher memory consumption. And that bitset stuff requires just too many operations and datatype size dependencies and thus more operations to make up for it. And in the end, it doesn't solve any problem well; rather insufficient. So I'll stick with normal bools instead, that's ok. But I didn't give it up totally. The idea to convert a 0...1 range to a number that's splittable and thus sub-indexable is just too interesting. Think about an array of pointers to other arrays: you have your index variable with a set number of bits and then you split it into two parts - one for identifying which array pointer to choose and the other for the index into the pointed-to array. If you don't allocate all arrays beforehand, you can allocate them on the fly and thus expand on demand. That'd be useful for stuff with a HUGE range of possible values but only a few actually used. So splitting it into a bazillion of arrays can save a lot of memory you'd never need to allocate at all. I like the concept and its function, so I'll first make a simple memoization array and then continue to integrate the technique I just explained. It's somehow the "missing link" I was looking for. It doesn't consume lots of operations and only the memory you really need. It's not as fast as a previously allocated array but smaller, so it stands between realtime calculation and a memoized lookup table.
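
A bare-bones sketch of that split-index idea (sizes, names and the sine example are mine; the real thing would sit behind the memoization class interface): the high bits of the index select a chunk pointer, the low bits index into the chunk, and a chunk is only allocated - and here, for simplicity, filled completely - the first time something inside it is requested:

#include <cstdio>
#include <cstdlib>
#include <cmath>

#define CHUNK_BITS 8u
#define CHUNK_SIZE (1u << CHUNK_BITS)
#define NUM_CHUNKS 256u               // covers indices 0 .. 65535

static double *chunks[NUM_CHUNKS];    // null pointer == chunk not allocated yet

double memoSin(unsigned index) {
    unsigned hi = index >> CHUNK_BITS;        // which chunk
    unsigned lo = index & (CHUNK_SIZE - 1u);  // position inside the chunk
    if (!chunks[hi]) {
        chunks[hi] = static_cast<double *>(std::malloc(CHUNK_SIZE * sizeof(double)));
        for (unsigned i = 0; i < CHUNK_SIZE; ++i)   // fill the chunk on first use
            chunks[hi][i] = std::sin(((hi << CHUNK_BITS) | i) * 0.001);
    }
    return chunks[hi][lo];
}

int main() {
    std::printf("%f %f\n", memoSin(12345), std::sin(12345 * 0.001));
    return 0;
}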

In order to integrate it as efficiently as possible, I think I'll need to make a set of classes with mathematical functions that share the same interface but differ in how the values get calculated. Number one would be the C function variant. It just calls the functions you'd use normally, nothing else to be done. Then we get the lookup-table based variants which use C functions to precalculate the tables (and other things like factorial and power functions up to a previously set range). The last one would be the memoized variant, again using the C functions for precalculation. I dropped my previous plan of writing Taylor approximations cause it's simply a boring thing to do if you precalculate this way or that way. It doesn't really matter which one, EXCEPT that with C's standard functions you have the possibility that the compiler optimizes your code with special assembler commands replacing the original functions. I learned a while ago that there are actually sine and cosine instructions built into processors. So it's in all cases better to use these and not your custom ones. Same for square roots and everything else. May the powers I don't know be with me, cause they do more than a lazy nature like me.