A general linux distribution frustration

Once in a while I decide to install a new and updated Linux version. And geez, it brings out one hell of evil energy every time I do. The phrase "never change a running system" should mark everything I do with Linux, but apparently that's not enough to stop me from doing stupid things over and over again. So far, the only Linux distribution I manage to properly flatten and clean up every time is Linux Mint 9. Version 13 is a total mess of bastard apps, and everything else based on Ubuntu does the same. Who cares about GNOME or MATE if you can have Xfce? Then again, the majority of Xfce distributions do elementary things wrong... I don't know what to do with my system right now. I don't want a complete DIY distribution, because it's horrible to set up all the things that make a usable operating system. There are a few live CD systems I'd like to have as directly installed operating systems, but these also suck at giving competent instructions on how to do so. Guess I'll tinker around and grab a specialist distribution with a modular system like the one used on the Pandora. It's the only functioning Linux system I have right now, so I'm just gonna type all my stuff this way...


Distraction! Or rather multiplexing.

I'm jumping between my personal projects right now because I don't want to spend my free time chasing hard-to-find bugs (I'm already doing that at work...). No wonder I'm having different thoughts every day. And it's great to have a blog like this to form words around all these topics.

Anyway, I have a couple of things I realized yesterday - especially ones that help me find those annoying bugs in my multithreading system. First of all, it can only be a problem in the event code itself. I never used it except for waiting on tasks and for ending the whole system - I remember that exactly this problem troubled me before. Sooo, I can keep my existing code and just need to worry about how to solve this particular problem. The event idea itself is perfect for multithreading and tasks. It enables me to efficiently manage problems. However, I know that some parts of the event system have to work, because I've already done successful tests with thousands of threads under very broad and very narrow conditions. Yep, it has to be the exiting of threads. It works in debug mode due to more time between events (I guess). I'll probably use the next week to pinpoint this damn problem and find a solution for it. There HAS to be something I don't see or notice at all. The system is too tightly designed to allow any unknown memory access problems, so I suspect it to be something that depends on thread lifetime. Better double-check whether task waiting and so on works as intended... Too bad I can't use my debug output then!

The second realization is that if I want to find a memory management system for my programming language, I'll have to find a different approach rather than keep believing that arbitrary code merging makes it possible to properly manage memory without constant malloc use. An idea I got yesterday was to define a structure for each function or piece of code. This structure is not just made of plain old data but also of a number of different sizes and bits of information required for efficient code merging. First of all, we have local data like variable-pointed data and literals. Second is the code used besides variable declarations - essential for merging functions. Third is a couple of a) the amount of space needed for local variables and/or bound parameters and b) the stack size we need for temporary calculations. That's a first step to separate different properties and make structures and functions ready for merging. Also, this would make it possible to create a function that can output whole structures (possibly new ones) with initializer code for each property. The idea of separating all kinds of stuff sounds very awesome to me because of that, but it's also hard to combine with my idea of how a simple programming language should look. I'd need to sacrifice my current minimalism for a more descriptive setup when it comes to stuff like what gets into the resulting structure and what doesn't, how to mark this and that, what to do about pointers and external objects and so on. The more interesting and flexible it could be, the further it moves away from a simple and clean approach. While writing this I thought about how Lisp solved function merging, but I can't exactly remember - except that there's no proper way to do it other than building lists of function calls.
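To make the per-function descriptor idea concrete, here's a minimal sketch in C. All names (`merge_unit`, `merge`) are invented for illustration; the point is just the separation of the three properties described above and how two units could be combined:

```c
#include <stddef.h>

/* Hypothetical per-function descriptor for code merging. */
typedef struct {
    const void *literals;    /* local constant data and literals       */
    const void *code;        /* body code, minus variable declarations */
    size_t locals_size;      /* space for locals / bound parameters    */
    size_t stack_size;       /* stack needed for temporary calculation */
} merge_unit;

/* Merging two units: combined local space, but only the larger of
   the two temporary stacks, since temporaries don't overlap. */
static merge_unit merge(merge_unit a, merge_unit b)
{
    merge_unit m = { 0 };
    m.locals_size = a.locals_size + b.locals_size;
    m.stack_size  = a.stack_size > b.stack_size ? a.stack_size
                                                : b.stack_size;
    return m;
}
```

Whether locals really just add up (or could be overlapped, too) is exactly the kind of question such a descriptor would let you answer per merge.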
Actually, my syntax looks sorta list-like right now, so I'll probably do another brainstorming session today and organize my collected ideas around my current main concern: how to combine function, structure and member/method definitions in one package. I WILL think about how I would realize a type system in a Lisp-like syntax for both a static and a dynamic context, as well as what's easy to realize and practical while coding. It's all a syntax question, I think - what features I want and how I want the data flow to be is something separate from how to describe it. The best approach seems to be to just collect all the awesome ideas that are not bound to syntactical sugar and try to build on those. The classical C way of defining and operating on stuff is rather disadvantageous. I've been thinking too much without focusing on what's important to me - mindmapping ahead!

The third realization - in conjunction with programming languages - is some more thinkage about standard libraries. There are different approaches to what makes a standard library; a lot of stuff I'd put under "general library" finds its way into RAD or scripting languages, while simpler languages like C or Erlang have a very distinct definition of a standard library, as well as of what language the library is written in. Personally, I'd describe a standard library as something that's not only used by almost every piece of software but that's also completely portable and has a minimum of system-specific properties. Hardware dependencies are another important factor, as the language and the compiler should be able to generalize away overly heavy differences. Yeah, I know - my beloved C fails at this point, but not many languages succeed there. My view of standard libraries makes them a set of stuff written in the language itself to ease common, abstracted tasks. And I/O is common for sure, but very different between applications, so a better approach would be to have middleware code with a user-implemented interface to the underlying system. That's a bit like Java, though Java isn't really able to do something good with it due to the language's/interpreter's design.
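The "middleware with a user-implemented backend" idea could look something like this in C - the library codes purely against a small function-pointer table and the user plugs in one per target system. Everything here (`io_backend`, `io_echo`, the in-memory stub) is a made-up illustration, not an existing API:

```c
#include <stddef.h>
#include <string.h>

/* The interface the portable library code is written against. */
typedef struct {
    size_t (*read)(void *handle, void *buf, size_t n);
    size_t (*write)(void *handle, const void *buf, size_t n);
} io_backend;

/* A portable helper that only ever touches the interface. */
static size_t io_echo(const io_backend *io, void *h, void *buf, size_t n)
{
    size_t got = io->read(h, buf, n);
    return io->write(h, buf, got);
}

/* One possible user-supplied backend: plain memory, for testing. */
typedef struct {
    const char *src; size_t pos, len;   /* input side  */
    char out[64]; size_t outlen;        /* output side */
} mem_io;

static size_t mem_read(void *h, void *buf, size_t n)
{
    mem_io *m = h;
    size_t left = m->len - m->pos;
    if (n > left) n = left;
    memcpy(buf, m->src + m->pos, n);
    m->pos += n;
    return n;
}

static size_t mem_write(void *h, const void *buf, size_t n)
{
    mem_io *m = h;
    memcpy(m->out + m->outlen, buf, n);
    m->outlen += n;
    return n;
}
```

The same `io_echo` would run unchanged on top of a file, socket or embedded UART backend - that's the whole point of keeping the system-specific part behind the table.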

Yeah, that's it for now. A lot of thinkage, but I enjoy thinking about it. It makes me feel productive and more connected to what I'm ranting about every time I program. This way I can get better at it while finding ways to destroy anything that might annoy me in the future. At least if it's about homebrew hobby stuff...


The quest continues

I didn't have thaaaat much stuff to inline. Actually, only two headers, because the rest is numeric or algorithmic with generic typing. So I started to fix the bugs in my multithreading system again and BAM - there it was, the full miraculous deadlock problem along with a few very distinct bugs. All these problems don't occur when adding a few lines of user code to prevent too-early tasker thread waiting. Weird. I mean, I know the problem, but now the picture becomes even stranger, because the code is so squeezed and atomic that I don't see ANY possibility for an error to come through. I know it worked before, and it seems that it doesn't work when using itk_app as a convenience module. Sure, this would imply that itk_app is faulty, but I don't see any bad code there either! The bugs themselves occur in places they shouldn't occur and pop up randomly, setting pointers to 0x1 where they shouldn't, and so on. I had a lengthy GDB session this morning and it jumps to smashed values from one break to another. All errors that happen seem to have a single source within the event code where I use a temporary local variable and pass it to another function. The point is that this object is still alive all the time and isn't used anymore in any way after triggering thread death. Even more, the local pointer inside the event using the externally local object should still have the same content, but it doesn't. Instead it's filled with 0x1's, and I have no idea why this is happening! One idea I had was that the item iteration is faulty. The only thing I fixed there was some wrong copy direction I didn't notice before, but all this doesn't explain why local content got changed without the local variables being passed anywhere. So the only possibility left is a sneaky stack smash. I could try programming a new test file that doesn't use itk_app, but that will take some time - time I don't want to sacrifice right now. Also, it'd be one hell of a code block...
I also thought about tackling a different approach by integrating an "I'm dead" event into the taskers, so that they'll be finished exactly the same way I finish them in my work-around. I don't like this approach, as it should theoretically work without it, but in the end it's a way around the problem. Will do some more tests and check what exactly works and what doesn't. Separating bugs from features.



I've been thinking about my progress in my own programming stuff. I'm working around 8h per day, sleeping 8h and spending around 3h getting to and from work, leaving only about 3h of free time. That's not much if you want to put a lot of time into clever macro constructs and planning in general. In fact, programming with macros requires extra care in thinking about what to evaluate before you can pass it as an argument, so I have to think about how to use it, too.

It's a bit depressing, because I don't make any progress, and those days where I feel just too pissed to code because of everything else in the world make it impossible to advance any further. Thus, I've been thinking about a different approach I dropped before: inline functions. Inline functions are not guaranteed to be inlined, but I've learned so much about not using hardcoded types and avoiding memory copying that I think it's totally possible to use them without any symbol or type passed via macros. I also believe I should study GCC's inlining conditions further to know what makes it easy for GCC to optimize. There are still things like vector macros that can't be changed much, but that's totally ok. I just don't want to have to think about everything before starting to code.

Anyway, I became quite good at separating actual type-independent logic from code that's always the same. I believe GCC has a very simple philosophy about inlining. For example, calling malloc in a function prevents it from being inlined (in my experience, of course), as does taking the function's address anywhere. Also, I've seen const types being an optimization criterion for which GCC will create unique function versions. Since I'm already prefixing macro parameters with i_ for input, o_ for output, io_ for both and t for temporary variables, I'll rely on const marking the input type, and t can be dropped. I wonder under which circumstances GCC can optimize out function pointers. I mean, if you pass const function pointers everywhere and you're still inside the same module, it should be possible for GCC to detect this. Though I bet you'll have to use static functions then. Hmmm, that's difficult to say without reading about it. And that's time-consuming, too. Well, ok, I could just look it up on the internet and browse a bit, but the only 3h of internet I get fall inside my working hours, and I don't want to do this while working - too much distraction.
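The const-function-pointer pattern described above can be sketched like this; both functions are static and live in the same translation unit, which is exactly the situation where a compiler has a chance to devirtualize and inline the call (the function names are made up, and whether GCC actually inlines it is something to verify by inspecting the `-O2` assembly):

```c
/* A small static function we'll pass around by pointer. */
static int add_one(int x)
{
    return x + 1;
}

/* The pointer parameter is const, so the callee can't reseat it;
   with a known static target in the same module, the indirect
   call may be turned into a direct (and then inlined) one. */
static int apply_n(int (*const f)(int), int x, int n)
{
    for (int i = 0; i < n; ++i)
        x = f(x);
    return x;
}
```

Semantically this behaves the same whether or not the optimization fires - the const is a hint and a safety belt, not a guarantee.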

On a sidenote, today I realized how well-defined a simple function call structure is. See, if you have a really huge C++ project with lots of classes, objects, interfaces and whatever else, combined with multiple libraries giving you overridable interfaces, the number of possible errors multiplies due to the underlying virtual table system. And that's not just some purist's pointless rant - callback-based systems are similar, as you won't always be able to properly trace stuff back to where it happened, why it happened and especially what happened before. I always try to minimize this kind of problem by exposing only the minimum amount of needed callback stuff. I strongly believe that all programming tasks can be solved properly without any of this. Errors can happen not only because you don't know whether a library checks for false data, but also because you don't really know how the data will be changed in the process - especially true for commercial libraries.

Well, once again it seems that I'm not interested in solving errors that shouldn't happen in any way, but rather in expressing what I'd do to not let them happen. One can of course see this as impractical, but who's the one struggling with spaghetti OOP and too-scattered bug indices? Well, I certainly don't, since I have very strict program flow guidelines for which only I am responsible. Oh my, I'm sounding sorta snobby right now.


Did it

I'm quite sleepy right now, but I think I did it. I designed a rather simple macro setup for a bunch of loops that comes in either an expression-based version evaluating into a FOR loop header with custom code following after, or a BEGIN/END-style version for instruction-based variants. It's all very open and you can pass whatever arguments you want to your macros using complete argument lists. I even managed to generalize my beloved XDIM macro for virtually endless levels of simulated nested loops, so I can utilize it to a greater extent, too. However, this macro has a limitation: you can only use one array or pointer as an iteration argument. Can't do this differently if I still want to have a dynamic number of nesting levels, but I'm already working on statically leveled loop nesting using a new set of variable argument macros.
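For readers who haven't seen the two styles, here's a rough, heavily simplified reconstruction of what such a macro pair can look like (the macro names `RANGE_FOR`, `RANGE_BEGIN`, `RANGE_END` are illustrative, not the actual ones from the post):

```c
/* Expression style: expands to a for-header, user code follows. */
#define RANGE_FOR(i, n) for (int i = 0; i < (n); ++i)

/* BEGIN/END style: opens a scoped block for instruction bodies. */
#define RANGE_BEGIN(i, n) { int i; for (i = 0; i < (n); ++i) {
#define RANGE_END         } }

static int sum_to(int n)
{
    int s = 0;
    RANGE_FOR(i, n)        /* header style */
        s += i;
    RANGE_BEGIN(j, n)      /* block style  */
        s += j;
    RANGE_END
    return s;
}
```

The header style keeps single-statement bodies terse, while the BEGIN/END pair gives the macro a scope of its own to declare helper variables in - that's the trade-off between the two variants.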

Wow, I didn't think of just generalizing loops before. I mean, all I had in mind was this annoying OOP iterator concept, which never really turned me on (figuratively speaking). But inserted loops fill exactly the gap I need, one that's also very inconvenient to reach via OOP concepts due to the non-local location of the loop code.

So yeah, I'm almost done with this, and I think I'll have to rework a lot of code using these macros. They are so atomic and simple that I don't think I'll need more of them in the future. Or at least I don't think they'll be any different.


Just like a warm stream of irony

I've been tinkering with my graphics engine buffer for just too long and decided that it's time to do something about it and get awesome results. However, macros (or rather C's limited-for-a-reason syntax) can't do all the stuff I want to do. Especially when it comes to iteration, where I'd like to have a FOR loop inside another FOR loop header, because I'm iterating over multiple elements at the same time but want a single if/while/for block where the user can decide whether he wants to use an expression or a code block for the iteration code.

I know that's sort of micro beefing, but I don't want to have any of this in my very own library. Even in C++ you can't abuse classes and templates to insert anonymous code blocks inline. You could only create classes that carry parts of the iteration, which is very, very inconvenient if you want to keep related code where it belongs: inside the function/method/algorithm that's using it. *sigh*, I can't do anything else but introduce a system that works like some sort of macro interface for loop iteration. Iterators are completely different from loop expressions, where all the logic lies inside the code to insert. You could of course still use (possibly inline) functions if you want to keep more iterator-style code, but most of the time I need more loop statements than iteration fragments in my code.
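One way to get the "anonymous code block inserted inline" effect in C is to pass the body itself as a variadic macro argument, so commas inside it survive. A minimal sketch (the macro name `FOREACH2D` and the example function are invented for illustration):

```c
/* Nested iteration with the body passed in as __VA_ARGS__;
   the user-supplied code lands right inside the inner loop. */
#define FOREACH2D(x, y, w, h, ...)        \
    for (int y = 0; y < (h); ++y)         \
        for (int x = 0; x < (w); ++x)     \
        { __VA_ARGS__ }

static int count_cells(int w, int h)
{
    int n = 0;
    FOREACH2D(x, y, w, h, n += 1;);
    return n;
}
```

Unlike an iterator object, the body here compiles in place with full access to the enclosing function's locals - exactly the "keep related code where it belongs" property argued for above.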

Sooo, I'm going to regret this one day, but I can't leave it in its current state. Just not perfect enough to please my needs!


Some thoughts about string termination

I don't like working with strings, as they tend to be highly important and memory-consuming by nature. However, this is only partially true - especially considering that I'm used to null-terminated strings from C. Another concept, one I didn't know about until I read about it, is to have the first element of a string be the number of elements in the string. This may be a good idea - no counting of string lengths and so on - but it also makes string definitions in C very uncomfortable, as you'll always have to count before using one.

Anyway, both concepts have very interesting advantages. The classic C string makes for high-density code in C itself, along with a whole range of processing-only algorithms that don't need to care about counters or length limiters. Also, splitting strings only requires placing a null somewhere, compared to length-terminated strings where you need to know where the string starts to correct its previous size before inserting the new one. Ultimately, you can have very long strings without having to care about how big the string length type is. You don't need to insert connector elements that lengthen the string, and you don't need to care about the final length when you create or write one - static or dynamic.

But now let's talk about the good things about length-terminated strings: fast (re)allocation for concatenation, very quick length iteration (even with 256-byte chunks for 8-bit chars), quick iteration over many sequentially stored strings, conversion from symbol-separated strings with later iteration, the same iteration as with arrays, no need to double-check string content on iteration, as well as very well-defined string bounds for related functions.
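To make the concatenation advantage concrete, here's a tiny length-prefixed ("Pascal-style") string sketch; storing the length in a full size_t header sidesteps the classic 255-char limit. The type and function names are made up, and the fixed 64-byte buffer is just to keep the example allocation-free:

```c
#include <stddef.h>
#include <string.h>

/* Length-prefixed string: the size lives in front of the data. */
typedef struct { size_t len; char data[64]; } pstr;

static pstr pstr_from(const char *cs)
{
    pstr p;
    p.len = strlen(cs);          /* O(n) once, O(1) ever after */
    memcpy(p.data, cs, p.len);
    return p;
}

/* Concatenation knows both sizes up front - no terminator scan. */
static pstr pstr_cat(const pstr *a, const pstr *b)
{
    pstr r;
    r.len = a->len + b->len;
    memcpy(r.data, a->data, a->len);
    memcpy(r.data + a->len, b->data, b->len);
    return r;
}
```

Contrast this with strcat, which has to walk the destination string to find its end before it can copy a single byte.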

Both are equally powerful constructs, and I might consider doing a rewrite of string.h in case they come in handy for some stuff. Just thinking about my Lego NXT and its tendency to be very, very slow. In the end it might be faster to do string operations this way, though your compiler can't optimize it out.


I'm such an idiot! Worked so long on my buffer thingie and now I notice how I simply can't use the many complex macros I wrote for it. Fuck! Great, now I'm stuck with even more things to calculate. Well, at least the flexibility is on my side... Geez, that's really annoying me. It's so typical: you have what you think is a great idea and then - right in the middle of your most creative phase - you notice how it just doesn't work for you.

Oh man, I hope tomorrow will be better, cause today was pretty shitty.

Another renderer redesign and a bit of a rant

Actually not a complete redesign, but it's just too much code to write if you want to create an optimized loop for every possible special render command. This is especially true for combinations of alpha or palette changes, as I'll have to iterate through buffers very often, meaning a lot of code to check and a lot of loops that profit from dropping the rather heavy buffer macro logic.

Instead, I'll drop the palette stuff and provide a multi-stage alpha renderer that'll draw a colored background rect followed by n images taken from the image input buffer. You can of course also do multiple render commands for one image, but as it's common to color graphics or blend them with others of the same size, this process can be sped up a bit. This makes the renderer smaller and the parameters to set in a command far fewer for the user. Large grids or series of images you don't want to generate huge arrays of positions for can be done using matrix operations before each image render command. Actually, I think it would be more beneficial to implement some sort of loop command for n-fold repetition, so that you don't need to bind repetition to the display list or the command string. This makes it possible to fully use matrix operations, position arrays or a mix of both.
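The repeat-command idea could be sketched like this: one command carries a start position, a per-iteration step (standing in for the matrix operation) and a repeat count, and the renderer expands it into the positions it would draw at. Everything here (`repeat_cmd`, `expand`, reducing the matrix op to a translation) is an invented simplification of the idea, not the actual renderer code:

```c
typedef struct { float x, y; } pos2;

/* One command drawing the same image `repeat` times, stepping
   a translation between repetitions instead of storing a huge
   per-instance position array. */
typedef struct {
    pos2 start, step;
    int  repeat;
} repeat_cmd;

/* Expand a command into the draw positions, bounded by `max`. */
static int expand(const repeat_cmd *c, pos2 *out, int max)
{
    int n = c->repeat < max ? c->repeat : max;
    for (int i = 0; i < n; ++i) {
        out[i].x = c->start.x + i * c->step.x;
        out[i].y = c->start.y + i * c->step.y;
    }
    return n;
}
```

A real version would apply the current matrix each iteration rather than a fixed step, which is what makes mixing matrix operations and position arrays possible.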

Yep, I guess that's the best approach I can think of right now. I don't have much time besides doing rather dull QA work every day at work. I'd be glad to get to the programming part of my internship... I mean yes, QA surely is an important task in this particular company (and this has more than a triple meaning...), but I'm not made for the kind of information overload I'm getting right now. They'll start to use Scrum in the next few days, so I hope that stuff gets more ordered and less last-minute. It's not that I'm too simple a thinker to cope with QA work, but I just can't digest too much ever-changing information or do many things at once. I can perfectly focus on one thing and get it done very well - in essence just like a Unix program. So if nothing interesting or fulfilling comes out of it in the next few weeks, I'm gonna check whether I can start doing 50/50 QA and programming, or even tool programming work. I can't even properly study their code base (undocumented) or their engine (somewhat documented, though with horrible concepts) when doing QA stuff, so it just takes too much time. Ok, it's been two weeks out of six months, so what am I complaining about... I know how fast I lose motivation if something isn't made for me, and escrow phases are exactly the kind of thing where I don't like to do QA, because there you really need to do QA.

I'd rather find and reproduce hard-to-find bugs and point out the mistakes made during development. That's way more rewarding than once again testing whether the designers managed to hit the right commit button!!


Why && is no replacement for ?: or if

I've seen one particularly ugly example of C++ code where a fellow programmer used something like pre_condition && check_func(params) to prevent the function from being executed unless pre_condition held. Well, since it's standard behaviour, I can't argue about it operating properly when doing so. But, and that's the point of this post, I find it bad style to use as a replacement for normal conditions. First of all, && is meant for series of logical checks. To reduce the number of checks to be done, it doesn't continue if the first operand is false - good idea! However, it's limited to expressions evaluating to either true or false - you can't use it like ?: after all, and it's not a good idea to use it for conditions on a regular basis. It's also often ambiguous what the programmer was thinking when using it - did he want the side effect of execution or just the final value? Do the associated expressions contain operations or function calls that shouldn't be called? You can't always decipher what it's meant to be, and using ?:, if or switch is always a better way of clearly stating WHAT it is.
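The two forms side by side, with a call counter to show that they behave identically while reading very differently (the function names here are invented to mirror the post's example):

```c
#include <stdbool.h>

static int calls = 0;   /* counts how often check_func runs */

static bool check_func(int param)
{
    ++calls;
    return param > 0;
}

/* Short-circuit form: the guard only works because the reader
   knows && skips its right operand when the left one is false. */
static bool terse(bool pre_condition, int param)
{
    return pre_condition && check_func(param);
}

/* Explicit form: "guard the call" is stated directly. */
static bool guarded(bool pre_condition, int param)
{
    if (!pre_condition)
        return false;
    return check_func(param);
}
```

Same semantics, but in the second version the intent to suppress the call is visible at a glance instead of being a side effect of operator semantics.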

I have to admit that it's tempting to use the operator for quick checks. But honestly, I find it way harder to untangle complex formulas when && is used for actual conditions. ?: is a very clean way to state things - use it wisely!

Edit: Also, ever thought about what might happen if you put this stuff into an assertion you can't catch during debugging? Right, you'll never know which operand evaluated to false. Another reason not to use it this way.


macros for bool's sake

Found some very interesting things that can be done with variable macro arguments. If everything goes right, I can do some limited, though compile-intensive template macros for generating C functions. Disadvantage: you can't debug them properly, as macro expansions always count as one line. But well, it's a proof of concept after all. I'm working quite a lot with much-needed clever macro constructs that utilize anonymous temporary variables inside expressions, process argument lists and so on. Quite a bunch of good things have come out of it so far. It may be even better to summarize this and do a complete post about the good practices I've found so far. Yep, this way there may be people out there finding my blog somehow and improving their thinkage around macros. After all, macros are the only way to really add something to C. Pure C code on its own, without the preprocessor, is hard to upgrade in any way. All code stays direct, and no clever construct can be reused except via functions, though that makes inlining more important than what C compilers actually deliver. Sooo, macros are crucial for syntactical improvement of C code without writing a new language.

However, have I told you yet WHAT I've achieved so far, aside from totally generic gibberish about how great this and that is? Templates in C! Yep, though a bit limited. Templates in C can be made very easily by defining macros, but you'll have to put the whole code into one macro, waste preprocessor space with more symbols and obfuscate the code itself with a lot of \'s and whatever else. It's also not visible to the user, you'll have to instantiate the functions before using them (really stupid if you intend to create a library with all sorts of premade functions but only provide the macros!), and you can't just put them into one place and say that only these functions with these types are possible. All in all it may be of use to those wanting to provide a way for really EVERY type out there, but the more library-bound type of template instantiation was more important to me. Those are rather simple to use, though a bit complicated compared to templates made from macros, but they provide extra options for symbol generation and so on. It also requires a lot of preprocessing - maybe not optimal for a whole lot of instances. I'll freeze my work on IGE for a moment and try putting all this macro stuff into ITK, fix some bugs in ITK and its thread system and release a new version of it. Part of this will be my macro-based OOP system, for which I can utilize my new macro sets, too - it'll definitely profit from what they're capable of. Handling argument lists using __VA_ARGS__ really is the best base for truly amazing stuff; I'll do a write-up of it as soon as I get the time. Too bad that I'll have to attend a ridiculous "meat feast" during which I'll probably starve from not being able to eat anything, due to some stupid relatives not knowing how to cope with people different from their own nut-sized brains.
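In its most basic form, the "templates in C" trick is a macro that stamps out one typed function per instantiation, with the caller choosing both the element type and the generated symbol name. A deliberately tiny sketch (`DEFINE_MAX` and the instance names are illustrative, not from ITK):

```c
/* One macro = one function template; each expansion is a
   separate, fully typed C function. */
#define DEFINE_MAX(T, name)      \
    static T name(T a, T b)      \
    {                            \
        return a > b ? a : b;    \
    }

/* Instantiations -- these must exist before first use, which is
   exactly the limitation complained about above. */
DEFINE_MAX(int,    max_int)
DEFINE_MAX(double, max_double)
```

Real template macros grow extra parameters for symbol prefixes, qualifiers and so on, but the mechanism stays this simple: the preprocessor is the instantiation engine.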


Some words about code cruelty

Starting at a company that does some sort of programming usually also means coding conventions and trade-offs between your own coding style and the one used by everyone else, as well as the conventions your lead programmer defined for use in every project. To be honest, I'm quite a code nazi when it comes to my own work - though I try to minimize that sort of behaviour when it comes to code written by others. But this time I can't hold myself back; it's just code cruelty in its adolescent state! No politeness will be spared, no convention left unkilled by logic and analysis. First of all, member variables. Normally you don't bother, because you access object.member or object->member and you'll definitely know that it's a MEMBER and nothing else. Methods will be called using object./->member(), and static members are called via class::member, because using an object to dereference a static member is most likely prone to evaluation side effects. Thus, you'll ALWAYS see what it is, and can use camel case or lower case or whatever to make it out very directly, without needing to prefix or postfix anything. But you know what? They still prefix everything with "m_" and "s_". Wtf? There's no argument for duplicating the obvious, unless you're dumb enough not to be able to read at least C code (where it's the same thing). Another point is ToUseUberlyLongWordsThatReallyDontTellYouAnythingAboutTheClass. I mean seriously, it's all such plain, simple, dumb code that it doesn't make sense at all - it just adds obfuscation. It's one of the reasons why I'll never wonder about this company's non-existent list of completed projects. I've seen more unique bugs in the last week than in my whole programming lifetime (which makes it clear how I try to avoid anything making the code unstable and unreadable instead of obfuscating it even more). An engine should ALWAYS catch stuff, check errors and report properly.
It's creepy and disgusting to imagine how programmers produce so many hack-driven blocks of poorly documented code. I'm more than glad that I'm officially in the QA department and not there solely for expanding stuff. Seriously, I wouldn't have wanted to work for their programming department if I'd known how horrible their coding conventions are. Knowledge about platforms and 3D aside, this is simply a no-go, but you'll have to stick with it because you can't change it. I even asked why they do it this way, and the discussion ended quite fast with the lead programmer stating that he's making the rules, so I'd have to cope with it. I'd call him a bad loser for this one. Next time, this discussion should really be a bit more fruitful, shouldn't it? Well, at least I know at least one source of the problem: code completion. Name me a single OOP programmer not working with code completion in heavy C++ or Java OOP environments. No one, eh? Yeah, that's just it. OOP programmers tend to clutter everything while stating that minimal and short-named symbols don't express what they do. They forget that a small subset of functions and structs is designed to BE exactly that - small and compact - to avoid literal noise. I can't believe how ignorant and typical this is. And then they don't even check errors, and can't live without debuggers because of that. It's a wonder that anything works. And no, writing unsafe, obfuscated code that regularly fails to work is NO sign of a good programmer. It's as if they never took the time to really study a programming language and see why there are people out there (like me) avoiding a long list of antique coding conventions that don't make sense. I'm not talking about old conventions that make sense - those are totally respectable. But the whole world of OOP programmers is like a Swiss cheese filled with acid. Yeah, call me mad for that, but the sheer utter truth lies behind the shiny walls of corporate identities. It's sad. Very sad.
But don't think I'm gonna drop this company. There's plenty of room for subtle change, which I'll hopefully introduce in one way or another. Especially when it's about bugs needing to get fixed - which I'll gladly do to prove that a lot of existing things just obfuscate code and programmers. However, to be clear, I'd never doubt their professionalism. There surely is a great pool of knowledge behind it all, but when it comes to programming... Well, let's just say that the qualities of a pro do not necessarily result in logic, but rather in predictability and knowledge of one's skill.


In the pool with you!

So I had a great very first day in the gaming industry (and no, I won't tell anything to anybody), but looking at all the stuff in our project - the scripting, the engine, the engine enhancements, shaders and associated bugs - made me wonder whether it's really such a good idea to invest in all that massive tech to control everything. Just think about it: what's the greatest and most basic source of bugs? Code. It's always code. Thus, people tend to think that reducing the amount of code to write by buying and expanding an existing engine will help them get their things done. Frankly, that's simply not the truth. The more you get from other places and the more stuff you add, the more of it can be as error-prone as totally custom additions. And it won't make a game better by default. Let's take a mechanically basic game like Legend of Grimrock. I know more games than I can count that I don't prefer over Legend of Grimrock because they don't have its degree of polish. I love polished games, especially when they put emphasis on atmosphere in whatever way. So what's the problem with other games having bigger engines, more possibilities and so on? Well, I can't tell for sure WHY a lot of those games barely reach playability while dropping polish. I never updated Grimrock, and it still plays like a game after months of intense patching. Any other game would have random scripting errors, animation bugs, missing stuff and so on. See the difference? Grimrock is so simple in nature that there aren't any of these at all. It's a system of its own, like most games with more classic mechanics, needing no complex resource generation and so on. Animation tools are one thing, but putting stuff together in a way that's bound not to smaller rule sets but to pure scripting is a bug source of its own again. I can't say that I'm a real fan of engines that try to please everyone with scripting everywhere and full-blown piles of software engineering methods.
It's like telling a janitor how to tell other janitors to do his work: lots of communication errors! I mean, you need programmers to let other programmers program the stuff your game designers design. No problem in small games with some limiting rules: you need to get creative about how to achieve stuff. And limitation is one of the most underestimated factors of creativity. I haven't yet seen a game that amazed me that wasn't based on a rather limited rule set. It's all in your head after all, so it might just be a different preference of mine. Who knows?


Merge back

Did some more thinking and realized how useful the idea of having hardware and software representations of different buffers and display lists is. Not exactly for textures, but think about models and animations: an animation is usually a couple of fixed vertices on bones and joints and a number of flexible vertices, or just a number of fixed/flexible ones rotated a few times. But still, the number of flexible elements is quite low compared to the rest, which doesn't change at all (even physics-bound chains only change their position and rotation, which is just a single specialized matrix multiplication for each element). Soooo, it's obvious that uploading the fixed parts as display lists will only require us to do the joints via client-side calls. I'm not even close to being an expert in realtime 3D animation, but there's no real way around this, except maybe with shaders. To be honest, I can't quite see shaders as something that should be used for stuff like that, except if the client doesn't need to do much feedback. Personally, I tend to think it's nicer to have a bit of client-side control over it because there are so many events controlling animation steps. Anyway, I'm even more pleased right now after realizing the possibilities of this idea. Keeping in mind that I need to use compilable display list commands will also limit command strings to a healthier and more stable set of features, like not changing buffer pointers before execution and so on. I'll try to keep the spirit of this concept for 2D, too, so that everything goes the same way in both dimensions. To complement the 2D rendering I think I'll implement some software rendering routines as well as the atoms of my ASCII raytracer, so that I only need to call a few things without having to implement this shit a million times again. Most graphically complex programs working with grids or voxels will need a bunch of software operations, I can assure you. 
And most of the time, these are achievable with some basic rendering routines or just some very specialized ones. So yes, that's it for today. A lot of good things came to mind! Always revitalizes my programming spirit.

Merge merge merge

Found some more time to clearly think about the management of images, textures and IGE's model for loading stuff in general, and came up with a nifty idea. Usually one only needs to keep one copy of a texture and then just apply different stuff to change it. I, however, prefer to utilize multiple copies for different operations, maybe even doing some things in software before uploading as a hardware buffer. That's sorta exotic, but I know that there WILL be circumstances where it's useful to have the ability to upload and download different versions this way. However, ever thought about doing something more separated? In essence, there are SDL surfaces and OpenGL textures. Vertex data is of custom format, so this doesn't count until I find some format support for it (custom loader for a custom or existing format, I don't care at all). So we've got the general data type and we've also got the location: RAM, VRAM (PBO, VBO, HW surface), external (file, net) and nil (not loaded). Binding a name/path to images/models makes it possible to give an external location to each of them, and handles and/or pointers to their RAM or VRAM representations are also no problem at all. This setup makes it possible to do loading, releasing, converting and also copying/overwriting with just one or two functions, passing different types and/or locations for each object we work with. I really like this concept as I can do everything in one place and avoid redundant copy operations, because SDL and OpenGL seem to be quite compatible in pixel formats. It makes me happy that I found a fitting solution for this sort of untidy thing. One solution for a lot of problems - exactly my taste. It also makes it possible to handle images and models through the same interface, having the same properties for bounding boxes, aspect ratios etc. Quite nice indeed. And render commands can have their hardware representation as display lists - so these can be put into a file format, too. 
Having buffers and commands simply makes it possible to describe models, skeletons, whatsoever in one file, where changing a few values at fixed positions will simply rotate or move parts of the model. It's like having 3D scenes and model data with joints in one file! Very nice, indeed. Once again I'm happy about my decision to take the time I want for it. I also noticed that working on something like this in a randomly timed, step-by-step fashion helps the perfectionist in me to accept trade-offs for everything that's not a personal project. Very important if you ask me: would you like to explain to your boss that you need more time to make something perfect though it's not necessary?