2.06.2011

Thoughts about the dimension thing

I'm not quite sure how to proceed from here. I've got all kinds of working functions for combining objects of multiple dimensions at compile time, but well - there are four different variants that should be one single method. The problem is that I still can't just specialize template methods within classes without messy overloading constructs (I'll sketch the workaround below). And no, metaprogramming didn't really help as intended. It works for compile-time calculations and algorithms, but not for simply calling a different method, as that would require replacement of symbols etc. It's just not possible without making it too complicated to work with and thus nullifying the benefit.

I originally planned it to a) replace n operations for n dimensions with just one line, b) mix 1D, 2D and 3D points without always rewriting the type and c) keep stuff generalized enough to work with other dimension-based data structures. Well, part of this came true, but there are also things I forgot about over time: usability and code size. My current Point3D class is rather massive in functionality since it implements almost all operators you can overload. My new attempt at dimensionality requires specifying them all again, but with no content. So I have to copy every function I have - a bit much for my taste - can't it be solved differently? *sigh*, I really don't know how. So maybe it's better to drop the whole shit? Even if I get everything done, I still need to use the new setup for EVERY method I have in my Point classes, so plenty of stuff to rewrite - and that for "just" a bunch of small functions. *doublesigh* Nothing I'll get done with any of the ideas I have.

Or perhaps just plain old array access with static indices (also sketched below)? That's the only version left I can think of that's possibly easily optimized by the compiler. In the end, every object/structure member access is made by taking the object's address and then adding the relative offset. So I guess the compiler can do the same for array accesses with static indices, too. It's the last idea I can think of.

It sucks to never really know what gets optimized and what doesn't. In the end I still think it's best to have pre-compile steps where you can write all the "high-level" optimizations such as loop unrolling, static precalculation etc. yourself. And then you can still let your compiler do whatever it thinks can be pre-calculated. This annoying thing is driving me insane, especially because only I know what I believe is right to optimize. I don't trust compilers in general. They do a lot of stuff and you never know when and why - the only thing you can do is rely on the flags you set, the specifiers in the code and whatever else is in it. I really, really hate it.

I'm tempted to completely switch to C for a moment. However, I know how to write everything I code in C - though I dislike how I'd have to use it there. So maybe I should step away from my "special plans" for dimensions and so on and keep them away from me for a while - as long as it takes me to write my own C++-to-C converter. Yes, I think this is a nice idea. Basically, I still don't see my code as really object-oriented. I plan in such a way, yes, but I never really go beyond using OOP as a set of functions for specific structures. I know a bunch of people designing their code with concepts in mind that require typeIDs, virtual tables and so on. Personally, I always have my doubts about the "real" use of that. But I'm biased; I learned that only fast and efficient code is of use.
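Back to the specialization problem for a second, to show what I mean. Member function templates can't be partially specialized, so the usual workaround is overloading a private helper on a tag type that carries the dimension. A minimal sketch - dim_tag, add and so on are made-up names for this entry, not my actual Point code:

    #include <cstddef>

    template <std::size_t N> struct dim_tag {};  // carries the dimension as a type

    template <std::size_t N, typename T>
    struct Point
    {
        T v[N];

        // one public method; the actual variant is picked at compile time
        Point operator+(const Point& o) const { return add(o, dim_tag<N>()); }

    private:
        // one overload per dimension; unused ones are never instantiated
        Point add(const Point& o, dim_tag<2>) const
        {
            Point r = { { v[0] + o.v[0], v[1] + o.v[1] } };
            return r;
        }
        Point add(const Point& o, dim_tag<3>) const
        {
            Point r = { { v[0] + o.v[0], v[1] + o.v[1], v[2] + o.v[2] } };
            return r;
        }
    };

    int main()
    {
        Point<3, float> a = { { 1, 2, 3 } };
        Point<3, float> b = { { 4, 5, 6 } };
        Point<3, float> c = a + b;   // picks the dim_tag<3> overload
        return 0;
    }

It works, and 2D and 3D points share one public operator+ - but every further operator needs the same tag boilerplate again, which is exactly my copy-every-function problem.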
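And the plain-array fallback, sketched the same way (again with made-up names): since the loop bound is a compile-time constant, the compiler can unroll the loop and resolve every v[i] to a fixed base-plus-offset access, just like a named member.

    #include <cstddef>

    template <std::size_t N, typename T>
    struct PointA
    {
        T v[N];  // v[0] = x, v[1] = y, v[2] = z, ...

        // one generic method instead of n hand-written component lines;
        // N is a compile-time constant, so the compiler can unroll the
        // loop and turn every v[i] into a fixed offset from 'this'
        PointA& operator+=(const PointA& o)
        {
            for (std::size_t i = 0; i < N; ++i)
                v[i] += o.v[i];
            return *this;
        }
    };

    int main()
    {
        PointA<3, float> p = { { 1.0f, 2.0f, 3.0f } };
        PointA<3, float> q = { { 0.5f, 0.5f, 0.5f } };
        p += q;  // ideally compiles down to three plain adds - hopefully
        return 0;
    }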
I'm not a guy for quick coding of half-arsed concepts relying on lazy implementations and so on. If I want to make something with typeIDs, I simply write a special class for it. I see no reason for typeIDs in a system other than, for example, GUIs or projects with just a few elements you don't need to use all the time. It makes programs slower, structures bigger and programmers lazy and unaware of the truth behind what they just typed. And NO, don't come at me with this "we have such fast computers that it doesn't really matter" line. You know what? Everyone saying this is too stupid to understand that more speed means more possible calculations - calculations you throw away for silly shit that's just not worth having active all the time. I understand that some of these special OOP concepts ARE useful, but not for most code. The majority of software today wastes CPU time and RAM. All the fucking time.

So to end this rage, you can see my point and goal clearly - making OOP less inefficient and more performance-oriented. My goal is to get pure C code that's written the way you'd write it if you didn't have OOP: structures for objects, functions for changing the data (taking the object's address as the first parameter) and so on. I think inheritance can be done by simply introducing a new structure with the base class as a member variable. Which constructors/destructors to call can simply be decided at compile time, I think. Yeah, that's the most basic approach I guess: decide it at compile time and then emit minimal code with no overhead. Taking this as a base, I'd have all the stuff I normally use in C++. Templates can be replaced using my compile-step concept. So combining both should make it possible to implement OOP concepts on demand - adding features by writing a new step. Even RTTI is easy this way. And you could also write the whole OOP support as such steps. So I only have to make such a system and a converter. Hmmm, didn't I say that before?
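To make that a bit more concrete, here's the kind of code I'd want the converter to emit - a minimal sketch with made-up names (Base, Derived and so on), not actual converter output, and it's the shared C/C++ subset so it compiles either way. The base structure goes in as the first member, so a Derived pointer can safely be treated as a Base pointer, and the constructor chain is fixed before the program even runs:

    #include <stdio.h>

    /* the base "class" and a derived one; because the base is the
       first member, a Derived* can be treated as a Base*           */
    typedef struct Base    { int id; }                  Base;
    typedef struct Derived { Base base; float extra; }  Derived;

    /* "methods": plain functions taking the object's address first */
    void Base_construct(Base* self, int id) { self->id = id; }

    void Derived_construct(Derived* self, int id, float extra)
    {
        Base_construct(&self->base, id);  /* base constructor first */
        self->extra = extra;              /* then our own members   */
    }

    int main(void)
    {
        Derived d;
        Derived_construct(&d, 42, 1.5f);  /* the whole call chain is
                                             decided at compile time -
                                             no vtable, no dispatch   */
        printf("%d %f\n", d.base.id, d.extra);
        return 0;
    }

No typeIDs, no virtual tables - everything is resolved at compile time, which is exactly the minimal-overhead result I want from the converter.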
