Quite a few ideas

I had a lot of ideas and concepts over the last couple of days, and most of them should make perfect additions to IGE and ITK. I haven't updated anything since the initial 0 versions, but I'll probably do that once I've tested them well enough. Since I usually connect all the new stuff to things I actually want to achieve with it, it'll be bound to IGE as well. But where to start with all this stuff? Well, first of all I've taken my previous idea of file operations to the next level: a small but completely sufficient set of multi-target memory operations supporting files, memory, scalars and repeated buffers as input and output (though with some restrictions). It's not urgent, so I'll move it to a later point when I'm in the mood for such rather tedious programming tasks (though I like the idea because it merges two or three very similar modules I already moved out of ITK for being superfluous). Directly related to this, I've got a final draft of some highly generalized and lightweight serialization I've been dragging around with me for quite a long time. The current concept involves three or four files to quickly dump type information, memory blocks and mapping tables without any complex memory assembly or redundant file operations. But I don't intend to add more than dumping binary data and marking pointers to be updated later. Nothing fancy, just a quick dump and restore of scattered binary data. The third concept is a less program-related notation of data, similar to an extremely stripped-down XML file based on control bytes. The idea is to have the sort of tree-like model found in XML documents, but encoding a branch's begin/end using bit flags that also include information about the data that will follow: whether an additional integer or string identifier is included in this branch, whether the data is purely binary, a string, or whatever else.
The idea is to avoid the large amount of text data in human-readable files when storing image or vector graphics data, while still having the stuff ready for iteration with identifier support and so on. You can even store bitmaps with ease in such a format, as a single pixel line only requires the line size and a preceding control byte with both end and start flags set. I really like the concept, especially because it's very easy to generate and parse. In combination with the generic memory/file transportation I described above, it'll be easy to write this in a very compact way. Even better, you can nest any data you want within it and could even support simple RLE compression by just using the few empty bits and other stuff I haven't yet planned. Well, it's a bit limited with only 8 possible flags, but what more would you need? And last but not least I worked on IGE, of course, with some interesting aspects all leading to a really nice generalization of image and vertex/position data I can use for quite a lot of things I'm not yet completely aware of. All in all, stuff's getting more interesting and I'm glad that I don't work on it most of the time. I mean, I'm starting my internship in just a few days - 40h a week - so I can recharge my thinking cap while doing other stuff. After all, I won't be working there on the stuff I'm working on right now, so the contrast is there and I get time to think about it on my way to work. In fact, I doubt that it's good for a video game company to use the tech I use - macros are hard to debug, it takes time to combine and, most importantly, it takes ages to fully design since I try to inform myself about every aspect and existing solution to form a better one. And that's one hell of a slow process, I can tell you.
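To make the control-byte idea a bit more concrete, here's a minimal sketch of what such a node header could look like. The flag names and bit assignments are my own invention for illustration - the post doesn't fix an exact layout:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical flag layout: one control byte per node, 8 flags describing
// branch begin/end and what data follows.
enum NodeFlags : uint8_t {
    NODE_START  = 1 << 0,  // a branch begins here
    NODE_END    = 1 << 1,  // the branch ends here (start+end set = leaf)
    NODE_INT_ID = 1 << 2,  // a 32-bit integer identifier follows
    NODE_STR_ID = 1 << 3,  // a zero-terminated string identifier follows
    NODE_BINARY = 1 << 4,  // payload is raw, size-prefixed binary data
    NODE_TEXT   = 1 << 5,  // payload is a zero-terminated string
};

// Append one leaf node (both start and end flags set, as for a pixel line)
// with an integer identifier and a size-prefixed binary payload.
void write_leaf(std::vector<uint8_t>& out, uint32_t id,
                const uint8_t* data, uint32_t size) {
    out.push_back(NODE_START | NODE_END | NODE_INT_ID | NODE_BINARY);
    out.insert(out.end(), (uint8_t*)&id, (uint8_t*)&id + 4);      // identifier
    out.insert(out.end(), (uint8_t*)&size, (uint8_t*)&size + 4);  // payload size
    out.insert(out.end(), data, data + size);                     // payload bytes
}
```

A parser just reads the control byte and knows exactly which fields to expect next, which is what makes the format so easy to generate and walk.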


Some thinkage about file operations

I had a quick idea today that involved a certain number of file operations, and I wondered how well I could implement arithmetic operations on files using C++ operators. Sometimes it's really fun to tinker around with stuff like that if it won't really weigh down the program at all, but looking at how all this would be realizable using just operators, it clicked and I realized that this is exactly how I'd have to define it in my own scripting language design. However, it's still interesting to think about simply mapping file operations to C++ operators. Compound operators operate on the files themselves, and normal operators create temporary files using tmpfile from the stdio lib. Interesting, isn't it? I mean, after all, you often need to separate files or just need a temporary one to do stuff. Maybe some day I'll convert all my C stuff that can be easily realized using classes and templates to C++ and utilize operators on everything. I'm having quite some fun imagining the faces of my lecturers when I demonstrate how easily I can outmatch their lengthy image processing program with just a few lines of cleverly constructed matrix and file operations. Bwahaha, it must feel absolutely genius to then also have the guts to make an assignment out of it. But I guess I'll never do so since I know quite a bunch of disadvantages that come with doing it. Well, at least it would make a lot of stuff more compact to read and write.
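Just to show the mapping I mean, here's a tiny sketch: a compound operator appends in place, a plain operator concatenates into a temporary file via tmpfile. The File class and its semantics are made up for illustration, and error handling is omitted:

```cpp
#include <cstdio>

// Hypothetical wrapper mapping file concatenation to C++ operators:
// += appends to the file itself, + builds a fresh tmpfile() result.
class File {
public:
    FILE* fp;
    explicit File(FILE* f) : fp(f) {}

    // Compound operator: append the other file's contents to this one.
    File& operator+=(const File& other) {
        fseek(fp, 0, SEEK_END);        // write position: end of this file
        fseek(other.fp, 0, SEEK_SET);  // read the source from the start
        int c;
        while ((c = fgetc(other.fp)) != EOF)
            fputc(c, fp);
        return *this;
    }

    // Normal operator: concatenate both into a temporary file.
    File operator+(const File& other) const {
        File tmp(tmpfile());
        tmp += *this;
        tmp += other;
        return tmp;
    }
};
```

The same pattern extends to other operations - subtraction as "remove matching blocks", division as splitting, and so on - which is where it starts resembling a scripting language design.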


Optimization of Computer Programs in C

Very often I wonder whether the effort I put into very specific things is really worth it and whether it actually saves time, whether others may be right in ignoring this stuff until the program's done, and so on. This article brought me back to the fact that I definitely know what I'm doing. I've already written programs and games featuring the cases I optimize for, I've already done my own part of reducing algorithm execution time and all the other things exploding your performance checks. Thus, I know what I do and can target specific problems that WILL happen when I use my own code. The linked article was very pleasing to read as it proves what's been in my head for a long time. And it makes me feel better about my knowledge. Sometimes I feel bad for not having already worked with a billion engines, a trillion OOP patterns and a bazillion varied projects. But most people who DID work on so many things are either old(er) (which gives me something to look forward to) or limit themselves to upper levels where this is totally normal. But I prefer the bottom-up approach of learning since it makes my base and my judgement of things more stable and closer to what actually works. I don't care how old the article is (in fact, some higher-indexed topics are sorta out of date since the article dates back to 1997), but programs barely change what they're made of - nobody rewrites compilers for every new CPU coming out, thanks to the established instruction set and computation behaviour. However, did anyone notice that Valve seems to be porting Steam to Linux? Fabulous!

Things that had to be done

Did some mental clean-up and reworked some bits of my code. It looks way nicer now and my mood improved by a certain amount. Also, I finished the last bits of my Nox Let's Play and uploaded it to Youtube. I guess I'm gonna link it under "output" on the left side of this blog, though it's all in German and thus sorta out of scope for those reading my blog (if there are any out there, which I rather doubt). So yeah, got a few things done, but felt that I needed a new Let's Play to do. I really like commenting on video games, it pleases the talker in me - something I can't completely satisfy when just talking with people because they either ignore me or don't show any interest in what I'm talking about. That's why I'm having my blog, too, so I can write whatever I want... Anyway, I've chosen to make a Let's Play of Alien Shooter 2 (Alien Shooter: Vengeance in the states) and played through half the game in one session - considerably faster than my Nox Let's Play, which is 26 hours long in total for mage and summoner (though I wasn't able to finish it with the summoner due to broken save games). Previously I recorded video and sound using Fraps and Teamspeak up to a certain amount of free harddrive space. I combined the videos into one series in Avidemux, extracted the audio and merged comments and audio using Audacity with a fixed mix ratio. At last, I started rendering audio and video together for every single recording series. Fraps' way of splitting the avi every 4 GB is way annoying, but who cares if it works in the end. This process always took very long and I couldn't just do a few clicks and then something else while waiting, as I had to mix and render all the audio again and again (worse for editing). So I've come up with a nice little workflow: First, insert all session videos into Avidemux and extract the audio at Teamspeak's 48000 Hz sample rate.
Then mix comment and video audio for the first time in Audacity and choose an amplitude mix ratio (for me, it's around -4/-5 dB for the game audio and +10 dB for comments). Then just insert and convert for every session; it's usually fast enough to not hinder or occupy me so much that I can't read or browse meanwhile. The advantage of this is that you can just grab your files from the explorer windows and drag'n'drop them. This works the same way for video, so once the audio has been rendered you can theoretically remove the source audio/comments (I don't mind recording comments later if something happens - did this to cover some recording mistakes in my Nox Let's Play). Now the rest is just drag-dropping videos into Avidemux and selecting the appropriate audio render. Plain and simple, I can queue up the next render and program while it does its busy work. Add a bit of music or a low-res Youtube video and you can do fruitful things, too! This works way better than the old method and doesn't occupy me all day. If only Avidemux supported segment splitting. That could solve even more problems and I wouldn't need to render individual videos. But well - this also requires full-length audio files, and I know that my way of recording comments tends to run a bit longer than the video's audio. That said, it's still wayyyy better than before!


That's all just bullshit

Nah, my renderer system is sort of bullshit right now. I don't like the way it's done internally. I like the idea of giving it a list of commands and repeated inputs, but each command requires the same amount of all-same setup work to be quick for single commands as well as repeated ones. I'm not sure how to continue from there, but I need something that's cleaner to realize. Need to think about this. It's not like I want it to be right now. I still believe in the original idea, but I need to organize it better. It's a combination of how I form screen content and how I imagine a proper domain-specific VM. But I shouldn't try too hard on this - after all, I only get into such moods if there's a trade-off to be made. And I'll probably go back to a more pipeline-like architecture where similar things happen at the same spots, so that only a few things have to be repeated. Yeah, I know, no one else has this sort of problem when programming stuff, but I do. I'm a perfectionist and I need to follow this somewhere before everything gets nasty. Currently, I'm not even interested in continuing the work on this, so imperfect it is! I'll go back to redesigning everything from top to bottom in a way that's easy to use and easy to program while still being efficient. A complete conceptual rewrite without forcing a reuse of old features or so. Gaaahhhh, this is driving me nuts. Also, why in hell does it take so long for good games to come out this year? I've been waiting for quality for a very, very long time.


Didn't do much productive programming the last few days, which totally bugs me. One of the "advantages" of working on stuff completely alone with no real schedule is that you can just drop stuff and start on something different without finishing it. A real problem for productivity if you ask me, and I'm not enough of an organizer to make strict schedules for all the stuff I could do when working on my own projects. Anyway, I once again realized that those moments always occur when I don't like how stuff is done in the code - in the current case, it's about not having a well enough defined structure for my renderer input. It's all quite unstructured and it's really no good idea to continue this way. It's so trivial but so easy to not do. To sorta quote the dude from TheJapanChannel on Youtube: the hardest choices are usually the better ones. Thus, I'd better take the time tomorrow to do something properly defined. Something so structured that you won't find ANYTHING that's not documented via symbols.

Random rants about family

I usually try to limit this blog to technical stuff, awesome video game-related things, and some other things I do as hobbies (Lego, music and so on). But sometimes I can't help but need to express myself in a place where I can do so because it's anonymous and none of the involved persons would look it up.

So yeah, this is about annoying things happening in a house of weirdos, the result of two completely different and significantly sized families. One side is conservative and specialized, not very open but quite smart and rather tolerant about other people's business as long as it doesn't get in their way. The other is a huge pack of foolish everyday people swallowing everything the media bombards them with while ignoring everything they can't cope with or comprehend. So it's a clash of different worlds that only matches when it comes to food. Conversely, me as a vegetarian and my sister as a vegan will need to sit with all of them and have some sort of lunch in a very ugly, arch-conservative restaurant serving traditional German food, to celebrate a great-grandma's 90th birthday. Honestly, that German cuisine is a true nightmare for the two of us. The only option we will have is to eat dull potatoes and overcooked broccoli or so. All in a terribly hunter-themed room with trophies hanging all over the place. Combined with this horrible company of stupid TV techno zombies, it's a triple horror show we won't like to endure very long. I mean, I'm used to "celebrating" dull birthdays of other relatives like grandparents, but my great-grandma is actually 90 years old and LOVES this stuff we totally dislike (with the few active brain cells she has left). I won't try to hide my own attitude towards this topic: relative, parent, random people or not - I don't care what sort of relationship people have to me, it's what they are, what they do and what they think that counts. I know that they don't care about vegetarians, I know that they disrespect other living beings that are not like them (may they be animals or humans, I don't care anymore). Thus, I don't care either and would prefer to spend my day with an actual free day and not have to sit there, endure stupidity and also travel the whole day with my parents, who are either exactly like the rest or so deviant that it hurts to live with them.

All in all it's a big chunk of frozen shit wrapped in a nazi flag. As if that weren't enough, the process of getting information about whether it's possible to get at least vegetarian food creates bitter days of discussion with all family members. It's my great-grandma's 90th birthday after all, and I can't really say no because it would be sort of inappropriate. She doesn't really have the power to change what she is now and thus, I don't want to step in front of her and tell her that I won't come because I don't like the deer heads she can't even properly see anymore. Those moments always make me dislike the differences between generations. But totally independent of my great-grandma is the fact that all her offspring are just completely foolish and ignorant. I mean, I could live with just doing the same and not seeing my family anymore. But this wouldn't make me better, nor would it help making their minds greater than a bean.


More total Awesomeness

Did I mention how much awesomeness is in today? First I discovered this massive VM execution performance boost for "batch" operations on many elements, then I officially joined my internship company (I was stunned from the very first moment - already loving the professional start procedure) and then I even got around to defining a good type system which surprisingly supports fully blown OOP. I didn't plan it like that in the beginning; it rather came out of a wish to describe structures in more detail with patterns. Stuff like telling that this structure has a memory of cells with as many entries as the value of a property of this struct. It's essentially for describing dynamic structures and how the size differs when certain values are known. But how to describe how many elements an array has if its size depends on a multitude of other member values? Simple: override the size operator for the given struct member (yep, you can define operators for members without storing them in the actual object, defined for the wrapping type as well) and insert custom code that functions like a method, taking an operator's prefix parameters (those being on the left side) as a this pointer and the postfix parameters (those on the right side) as operator input. I've already got a quite detailed concept, so it's hard to explain everything here. However, having operators telling a member's count as well as element access (done via the @ operator), you can create quite a varied range of structures and avoid allocating stuff by hand. Just imagine you want to allocate a flexibly-sized struct. Usually, you'd create a struct with settings and pointers pointing to the memory location. This can be avoided by allocating only the fixed portions first and then using a builder function to re-allocate it using the fixed-size properties and the size operators. Isn't that interesting?
I mean, if you don't want to use this feature, you can still say that you want to have a pointer (which is called a variable in this language, since I wanted to avoid RTTI on the objects themselves, leaving data as stripped as it should be) or just give a fixed size, in which case allocation is done only one time and you can still manage it yourself.
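For comparison, the manual version of the "fixed portion first, payload sized from its properties" pattern looks like the classic C struct hack. The names here are made up for illustration, and this is just a sketch of the pattern the size operators would automate:

```cpp
#include <cstdlib>
#include <cstdint>

// Fixed-size header plus a payload grown past the struct at build time
// (the classic one-element-array trick; C99 would use a flexible array member).
struct Image {
    uint32_t width, height;  // fixed-size properties, known before allocation
    uint8_t  pixels[1];      // payload, actual length is width * height
};

// Builder: one allocation for the fixed part plus the derived payload size,
// so no separate pointer member is needed.
Image* image_build(uint32_t w, uint32_t h) {
    Image* img = (Image*)malloc(sizeof(Image) + (size_t)w * h - 1);
    if (!img) return nullptr;
    img->width = w;
    img->height = h;
    return img;
}
```

The language idea above would replace the hand-written builder with a size operator on the member, but the memory layout it produces would be essentially this.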

One downside of this dynamic concept is that you will only notice size mismatches during execution (though only for non-fixed sizes, since fixed ones are known). Well, you can't really check before the real data kicks in - invent something that does and you can sell it for big fortune teller money. There's no real function call overhead compared to builtin ones since every parameter passed to an operator will be generated once on the stack and can be left/altered there using the multitarget notation I described in the last post about this topic. If you don't pass any parameter, nothing will be generated. If the function doesn't need to use or check them, nothing will happen. So yeah, it seems to flow together in a very natural way. Still a lot of stuff to design and define, but since a lot of basics are done now (the type system really being a major one, along with the multitarget idea), everything else should come as well. Surprisingly, indeed. I wonder why.


Awesomeness on rails

While waiting for an annoyingly delayed train cycle to kick in, I did my usual random thinking about the stuff I like to work on. More specifically, I came to an interesting realization about my graphics engine's VM and the way it handles input via buffers. I did mention that this concept might be useful for more than this sort of high-level calculation - and indeed: it could be quite an interesting way to handle formulas in interpreted or VM-based languages that need to do repeated operations consisting of a number of all-same sub-operations. Just think about this: a formula with n operations, for each of which you have to parse (binary or text) and dispatch the command to execute. If you're using identifiers and jump tables, you can do this as often as you want. If not, if your compiler does not optimize switch cases well enough, if you're using function calls or something worse (like interpretation), you'll have quite an overhead. For the general case, function pointers and minimal memory usage are the way to go - they bring acceptable performance and you'll definitely know that no identifier iteration is done. But still, if you have to do this operation a few hundred times, you'll need to repeat this process over and over - though you simply know what to do, and you might do better with dynamic compilation methods if you're really after this sort of stuff.

All this is usually impossible to prevent - each command requires a matching piece of code. That's a lot of stuff to check, even for simple alpha blending operations. Anyway, ever thought about splitting the whole formula into several batch operations and doing each block n times in an optimized loop? No, because it may require more memory when done naively? Well, that's certainly true, but so many operations are done on many elements anyway, so it's up to the language designer or scripter to do the loop manually. But still: 5 operations done together 100 times requires 500 command iterations and just a few variables - maybe some stack operations, too, if it's not a register machine or requires additional data. When doing each of the 5 operations 100 times in a batch, we get 5 command iterations and 500 operations, as optimized as possible (when split and unspecialized, of course). We're talking about non-native execution, so it's always better to keep portions optimized inside the VM. Even better, this invites the idea of specialized commands for everything, even if it still won't run faster than a real program. I need to integrate this concept when implementing my multitarget system for my scripting language.
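The dispatch arithmetic above can be made concrete with a toy VM. The opcodes and both execution styles here are invented for illustration: the scalar version pays one dispatch per instruction per element, the batched version dispatches once per instruction and loops inside the opcode:

```cpp
#include <cstddef>

// A two-opcode toy instruction set, terminated by OP_END.
enum Op { OP_ADD, OP_MUL, OP_END };

// Per-element VM: running this for n elements means n full dispatch passes.
float run_scalar(const Op* prog, float x, float y) {
    float acc = x;
    for (const Op* p = prog; *p != OP_END; ++p)
        switch (*p) {
            case OP_ADD: acc += y; break;
            case OP_MUL: acc *= y; break;
            default: break;
        }
    return acc;
}

// Batched VM: one dispatch per instruction, then a tight loop over the block,
// so n elements cost (instructions) dispatches instead of (instructions * n).
void run_batch(const Op* prog, float* acc, const float* y, size_t n) {
    for (const Op* p = prog; *p != OP_END; ++p)
        switch (*p) {
            case OP_ADD: for (size_t i = 0; i < n; ++i) acc[i] += y[i]; break;
            case OP_MUL: for (size_t i = 0; i < n; ++i) acc[i] *= y[i]; break;
            default: break;
        }
}
```

The memory trade-off is visible too: the batched form needs whole arrays for its intermediates, which is exactly the "more memory when done naively" concern.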

Pre-post edit: It works! Totally awesome. I tested this stuff with a simple VM that only knows basic commands on single values as well as on full array blocks, and got execution time decreased by around 60% on average for simple alpha blending operations on a few million elements. It's the same ratio when using a specialized alpha blending formula, so it's even more worth doing (in which case it seems to take just a quarter of the original duration). It's also roughly the same ratio when turning all optimizations on, so yeah: this stuff works very well for VMs. I decided to start designing the VM first and then put parser and code generation on top of it. This way, I can clearly separate what's part of the syntactic sugar and what's not. I'll try to keep the instruction set to the most useful minimum with a possibly human-readable format, so that it's possible to write it on your own.

Excitement! So, once again, taking your time to think about stuff DOES bring good ideas. This also works very well for my multitarget concept as mentioned above, so I can directly generate proper VM instructions by splitting the formula properly. The optimal case would be a syntax where I can completely drop handmade loops and leave most algorithmic execution time to the VM. Functions will still be sort of slow to optimize, but this makes inlining and code generation even more valuable.

Portal 2 Soundtrack

I haven't played Portal 2 (just watched a Let's Play) and almost failed to finish Portal 1 - I didn't manage to beat the bonus maps. However, what always fascinated me was the music, and I was even more fascinated when I listened to the full Portal 2 soundtrack. It's on the same awesomeness level as Half-Life 2 and even boosts it with a few chiptune-esque elements. It's so damn unique, I can't repeat how highly I recommend it. Just get it from somewhere, it's worth your precious attention.


Network Attached Storage

Me bought a network attached storage! Yep, I always wanted one. Why? Well, I can directly save bigger downloads on it, can share the stuff between all my computers, can give others FTP access via their own accounts, etc. and so on. It's a cool thing, but the transfer rates are sort of low compared to what it felt like with a little USB stick. I don't know enough about the actual differences between HDD, Ethernet, USB protocols and flash memories (an area totally not my profession since I'm just a programming nut), so I can't say much about it and just need to test it. I get around 11 MByte per second when writing from my HDD over Ethernet to the NAS, which seems to be quite fast according to a review site. This speed is enough to run backups of my old HDD data which currently resides on my laptop only. Ever since a mainboard fault, I've been concerned about losing my backup data, so I hope that I can move my data tomorrow while testing how many other PCs I can simultaneously use to copy data onto it.

I've been thinking about using it as the target for my recordings with Fraps. A 1680x1050 RGBA screen is about 6.7 MByte per screenshot, so I can roughly get 2 frames per second with an 11 MByte rate. Not really great, but I'm not gonna record at such high resolutions. Also, Fraps probably does some minor lossless compression, so it should be less. Recording at half the size would bring a factor of 4, meaning I could record at 8 frames per second with compression. Taking a real-world example with 4 minutes clocking in at 4 GB and 60 FPS at a bit more than half resolution, I'd need 17 MByte per second when recording. The usual size for 1024x768 seems to be 5 minutes per 4 GByte, so the required rate would clock in at around 13.7 MB per second at 60 FPS. Hm, recording at half the screen rate should cut the required rate in half, gonna try this right now...
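The arithmetic above can be written down as two one-liners (assuming MByte means MiB here, and uncompressed RGBA frames):

```cpp
// Uncompressed frame size in MiB for a given resolution and pixel depth.
double frame_mb(int w, int h, int bytes_per_pixel) {
    return (double)w * h * bytes_per_pixel / (1024.0 * 1024.0);
}

// Frames per second a given write bandwidth can sustain, ignoring compression.
double max_fps(double bandwidth_mb_per_s, double frame_size_mb) {
    return bandwidth_mb_per_s / frame_size_mb;
}
```

With these, 1680x1050 RGBA comes out at roughly 6.7 MiB per frame and around 1.6 raw frames per second at 11 MiB/s, matching the "roughly 2 fps" estimate once Fraps' lossless compression is factored in.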

Anyway, I have a new toy to play and work with. It makes me much happier to finally have a proper backup place for stuff like that. Hooray!

Edit: Did a bit of testing and it seems that I can record Nox at 1024x768 and the native 30 Hz with no problem. However, I'll still record Teamspeak audio via the internal harddrive to keep the bandwidth low. Very, very fine, but increasing the resolution and framerate makes stuff a lot slower. I tried it with Legend of Grimrock at full resolution and 60 Hz. This brought me around 8 frames, so I was roughly right. However, half framerate and screen resolution was perfect for low resolution recording, and playing was also absolutely acceptable.

So since I'm against full HD recording of new, ultra-detailed games (you should play the game yourself if the developers can still profit from it), I'm gonna limit myself to older games that don't run at such high resolutions and framerates. Or just scale it down. I'm still looking for a new game to Let's Play after Nox... I'm almost through; the last recordings will be done using the NAS, so I can't run out of harddrive space. And I don't think that I'll exceed the 1 TByte any time soon...


Some thoughts about expanding expressions

I've been thinking about my own programming language today and worked on the type system, but also spent some time on something I call "multi target" in expressions. My first idea was that one could create sequences like a;b;c that will be expanded until no sequence elements remain. So "a;b;c = 1" becomes "a=1, b=1, c=1" and so on. But let's just think about it: how would this sequence behave when used multiple times in an expression? Would "a;b;c = d;e;f" become "a=d, b=e, c=f" or "a=d;e;f, b=d;e;f, c=d;e;f"? It's a question of how the "unrolling" of those sequences behaves.

I've been tinkering with this for quite a while now and came up with two different models: The first is that all sequences created with "&" that share the same expression level (no brackets separating them) will be unrolled in parallel, so "a&b=1" becomes "a=1, b=1" and "a&b = c&d" becomes "a=c, b=d". To expand on this, there's a second sequence type built with "|" that behaves the opposite way, working as a multiplier. In combination with brackets, inner brackets will be unrolled first and then unrolled against the outer expression. So "|" is effectively the same as a bracketed "&" and doesn't need to be used if brackets on the same expression level are solved one after another, generating a large amount of code from just the original expression.

The second model takes a different yet similar approach by omitting the "|" operator's superfluous function and replacing it with the same behaviour as "&", just applied after "&" has been applied. This generates some interesting behaviour because it gives you the multiplicative effect of a purely multi-bracketed "&" expression while combining each singularly unrolled bracket into one that'll be unrolled in parallel after "&".

The second approach is superior because it makes additional unrolling possible. I've taken a look at different formulas and all of them share the same layout as matrix multiplication with a fixed dimension: one completely stretched range on every column and one with a repeated pattern, both alternated as often as the matrix has columns. Think about it: if you take this setup of different operators, you can create any kind of formula extremely quickly. Even more interesting is the idea of taking not only those two operators but a bracket pair that indicates the resolution level. This way, multiple "loop levels" are possible and you can effectively create several nested loops where each iteration maps to the n-th element of every sequence sharing the same level. Quite abstract, isn't it? Well, I'm very happy about this find as it makes it possible to cram an amazing amount of code into a single function without wasting any time on loop iteration. Furthermore, this can be linked to an unroller for static content and BAM, you've got a lot of code merged into a single formula with no iteration needed or so. It's not exactly the unrolling thing that makes me excited: it's the possibility to remove loop constructs that usually obscure reading. I mean, just look at mathematical functions and tell me where the loop iteration is. Well, there's just none except a few sum symbols or so. Stuff can be written as it is and is very easy to read. Personally, I find it easier to understand algorithms and operations by unrolling them manually and then understanding the logic behind them. It's harder to understand the pure algorithm, which often has little to do with the original formula.
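The two unrolling modes can be illustrated with plain string expansion. These helper functions are mine, not part of the language design - they just show parallel (zip-like) unrolling next to multiplicative (product-like) unrolling:

```cpp
#include <string>
#include <vector>

using Seq = std::vector<std::string>;

// Parallel mode, as in "a&b = c&d" -> "a=c, b=d". A single right-hand
// element is repeated, so "a&b = 1" -> "a=1, b=1".
Seq unroll_parallel(const Seq& lhs, const Seq& rhs) {
    Seq out;
    for (size_t i = 0; i < lhs.size(); ++i)
        out.push_back(lhs[i] + "=" + rhs[i % rhs.size()]);
    return out;
}

// Multiplicative mode: every left element paired with every right element,
// which is what nested "loop levels" expand to.
Seq unroll_product(const Seq& lhs, const Seq& rhs) {
    Seq out;
    for (const std::string& l : lhs)
        for (const std::string& r : rhs)
            out.push_back(l + "=" + r);
    return out;
}
```

Stacking the product mode over the parallel one is exactly the matrix-multiplication-like layout described above: one stretched range, one repeated pattern.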

So what I'm trying to explain is that I've found something truly awesome that can be used in any VM- or assembly-based language in equal ways, unrolled or not. So, you see, it pays off to tinker with and think about this stuff. I'm gonna try some of this with C macros and some ways of iterating variadic macros with known argument count that I discovered while working on OOP macros. So this could work very nicely in C as well, though with quite a perverted syntax.



Added a bunch of useful macros to ITK, along with a generic floating point matrix stack à la OpenGL for 2D, 3D and ND, as well as a new "buffer" datatype that'll be used as the sole input method for IGE's render strings. Works quite nicely, but has a certain overhead for single values. However, I'm certain that this input method will be very useful for other systems as well. It's really quite a good idea in my opinion! I mean, what else would you want for an in-code VM that will never be fed any language, just a few commands with indices as parameters? Though I have to be honest that I don't yet know what other uses there might be.

ITK is growing nicely so far. Everything I want to use in some other code that's not too specialized will find a place in ITK. Especially the vector and matrix stuff is quite varied. Though I'm once again convinced that C++ templates are better for anything type-related. However, I'm still not convinced by the whole inlining thing, which is imho more important than the generalization interface. After all, function calls in inappropriate places totally suck. Another reason to code in C and get more creative about how YOU define your algorithms and interfaces instead of complaining about them. No one said that clever C programming is not about puzzling.

Also, I'm going to implement a little OOP macro collection in C and probably add it to ITK because some things just don't profit from not choosing classes and virtual tables. I found myself a bit rebellious when my internship interviewer insisted on C not being able to do most stuff C++ can do. Well, it's true that a lot of C++ features are not possible in C, but those are rather syntactic sugar or parser steps like namespacing. There are also some compiler-specific lowlevel optimizations that can't be done in classic C, but those only add efficiency for certain paradigms. The processing will always be the same for both, independent of what you want to do. Anyway, to prove that OOP is possible (with a certain comfort of course), I'll clean up and expand the macros I'm currently playing with and try to add as much as possible. It won't be as smooth as a builtin syntax of course, but if you really want to utilize it along with normal C code, it's absolutely no problem. The only concern I'm having is that since it's all on a macro base, the used symbols should be as short as possible. Symbols like I, V, CP, P, VV and whatever else are perfect if you know what they are and quick to use. But they're still global, you know. Nothing that can be called clean.
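The classes-and-virtual-tables point can be shown without any macros at all: a struct of function pointers plus struct embedding gives you dynamic dispatch and inheritance in plain C. This is a generic sketch of the well-known pattern, not the author's macro collection; the `shape`/`rect` names are invented.

```c
typedef struct shape shape;

typedef struct {
    float (*area)(const shape *self); /* one "virtual method" */
} shape_vtable;

struct shape {
    const shape_vtable *vt; /* every instance carries its vtable */
};

typedef struct {
    shape base; /* embedding as first member = inheritance */
    float w, h;
} rect;

static float rect_area(const shape *self) {
    /* Safe downcast: base is the first member, so the addresses match. */
    const rect *r = (const rect *)self;
    return r->w * r->h;
}

static const shape_vtable rect_vt = { rect_area };

static rect rect_make(float w, float h) {
    rect r = { { &rect_vt }, w, h };
    return r;
}

/* Dynamic dispatch: the call site never knows the concrete type. */
static float shape_area(const shape *s) { return s->vt->area(s); }
```

A macro layer would mostly just shorten the boilerplate of declaring vtables and constructors.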

Some thoughts about flexibility of game engines

I've been thinking about this for a while now and I can't help noticing that there are at least two different approaches common in video game development regarding how much flexibility is wanted in a video game engine. On one side you've got the modern approach of a simulation base connected to a scripting system and high-level access to stuff like when and how to apply what sort of shaders to which objects and so on. This requires a lot of work and suits companies doing only this and nothing else, or just a very small number of games to show off engine features without adding any sort of nice gameplay (like Epic Games, just saying). On the other side there's the specialized engine made to feature a fixed set of things, as seen in countless game series by studios that only do a few different games. Personally, I always preferred the specialized approach because it brings a certain sort of philosophy: that there are things that will always be included, that it is known what will be featured in a game. I will never know what sort of engine my favorite games have besides those where the studios are more open to informing their fans or so. Or small indie studios that blog their process and so on. But especially for console games, I just don't believe that they use interpreted scripting languages or stuff like that just to feature more flexibility. I simply don't believe in it. PC games can feature quickly interpreted languages with small amounts of scripted code because PCs are usually faster and more powerful. But since interpreting always brings shitloads of commands to interpret (well, using a scripting language's syntactic sugar is the key to keeping it quick, I guess), it's a total waste of execution time and also memory. Just take S.T.A.L.K.E.R. for example: it parses stuff almost constantly and you'll notice that when modding it a bit.
There are mods showing how incredibly wasteful a single script can be, and it also explains why heavy lags often occur when triggering quests and so on. I can't tell for sure, but as far as I remember, more is done in scripts than in the engine itself. The first part was sort of acceptable, the second was a true nightmare in terms of lagging events and the last one reduced every scripted AI to a minimum to have almost immediate scripting response. That's what I noticed because the engine core was still the same.

There are also other examples that are based on VMs. Thinking about it, Minecraft is a good example of what's possible to "script" with a VM-based setup. Essentially, all Java games can be seen as VM-based games. Most flexible-to-script engines either utilize their performance very well by not using scripting that often or limit themselves to very basic and fast features. There simply isn't much of a different trade-off besides using non-interpreted languages - you have to sacrifice stuff either way. That said, I'm more a fan of specialized engines that only have the stuff that's really needed for the game. This is also the way I'm thinking of video game creation: choose a set of things your engine should feature and then make a game out of it. Well, I'm also quite conservative about the games I play, so I can't say that this works for those game designers usually doing the work in professional games nowadays. Both extreme approaches have their advantages and I will not try to judge which is worse and which is better. My personal vision of having a VM-based scripting language with my very own set of stuff and commands is still what I strive for. The bad side of both approaches is that they take a shitload of work and thought, so they are both equally demanding on the way...


So I played Legend of Grimrock

First of all, there are very few games I look forward to and most of the time I'm preordering them. Grimrock was no difference, but it's one of those games where not customizing everything is quite a fail by default. The first hour I was rather happy with everything that happened to me, until I died because I couldn't cope with the default party. I died many more times, so I started my own set of characters with two minotaurs and two mages, which worked way better. Grimrock can be quite tough if you don't watch your movement and actions. This wouldn't be a problem for me with round-based gameplay, but this is realtime, so it's more difficult for me. I mean, I'm used to roguelikes and games where you can somehow take a pause, take a look at your possibilities and not worry about dying the next moment.

Sooo, I have mixed feelings about the gameplay, but everything else is just superb. Greatly optimized graphics output (rather easy with those large, repeated grid elements), wonderful animations and a nicely done set of interaction ideas. However, there seems to be no music but some dungeony growling in the background. The sounds aren't really fitting to what's happening (snails still sound like they hit you and the minotaur's grunt doesn't exactly sound like "I'M HIT"). Would've been nice to have it a bit more fitting. Sure, with no music you have to do your show somehow, but confusing sound choices are rather... Well, let's just say it could've been done better. Though the legionary's marching sound is totally terrifying - I bet stuff like that makes it good again.

Anyway, it feels great to play nonetheless. I haven't played that many pure dungeon crawlers besides roguelikes, and the last one I finished was Orcs & Elves on the Nintendo DS. Grimrock is a totally different level, way higher and with a lot of polishing done to the graphics. It simply looks correct, that's it. Just like a modern dungeon crawler should look. There's nothing I can say against it, the game doesn't give me any possible weak point besides the default party to nitpick about. Then again, I'm a very systematic player and can't cope with too mixed strategies. I always preferred to specialize in something if it's not too generic.


Life, Universe and Debugging

The last few days were quite eventful to say the least. Best of all, I got my contract for the internship I was waiting for for so long. It's awesome, a real-world game studio and I can be a part of it to get into the industry! Like most other people they wondered why I was mostly using C for my stuff and not C++ and I wonder myself, too, why everyone seems to wonder. Just because it's oldschool doesn't say anything. Anyway (I shouldn't always add random comments on this), one of the interviewers pointed out that my SDL mutex wrapper in ITK is not threadsafe. At first it felt sort of embarrassing (imagine the situation - in an interview and you applied with this stuff as a sample), but then again it doesn't actually change the state of being bugged. I removed the unnecessary, non-threadsafe bs but it's still the same result. However, using gdb once more and checking the stack for absolute perfectness, I got another rather simple point: the error happens with a list iteration in the event listener resumer (itk_event_happen). Right in the iteration, nowhere else. Since the values are totally consistent, the pointed-to addresses might not be - they actually reference local variables from other threads. So the real problem is one I've had for ages and didn't think could be related to this. Well, debuggers are great for showing you stuff, but they still don't help you actually figure it out. And in such an amount of high-density code, it's easy to overlook.
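The class of bug described - iterating a listener list while other threads mutate it - usually comes down to locking each node access instead of the whole walk. This is a generic illustration of the fix, not the actual `itk_event_happen` code; pthread is used here instead of SDL's mutex wrapper to keep the sketch self-contained, and all names are invented.

```c
#include <pthread.h>
#include <stddef.h>

typedef struct listener {
    void (*fn)(void *arg);
    struct listener *next;
} listener;

typedef struct {
    listener *head;
    pthread_mutex_t lock;
} listener_list;

/* Test helper: a callback that counts its invocations. */
static void count_cb(void *arg) { (*(int *)arg)++; }

static void listener_add(listener_list *l, listener *node) {
    pthread_mutex_lock(&l->lock);
    node->next = l->head;
    l->head = node;
    pthread_mutex_unlock(&l->lock);
}

/* The crucial part: the lock is held across the *entire* iteration,
   so no other thread can free or relink nodes mid-walk. */
static void event_happen(listener_list *l, void *arg) {
    pthread_mutex_lock(&l->lock);
    for (listener *it = l->head; it != NULL; it = it->next)
        it->fn(arg);
    pthread_mutex_unlock(&l->lock);
}
```

Locking per node instead would leave a window where `it->next` points into freed or reused memory, which matches the "consistent values, bogus addresses" symptom.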

That said, everything went awesome and the future's safe for now! Also, I managed to get a nice Let's Play recording routine, and video splitting/converting also works well. The resulting image quality is a bit off, but that's just a matter of time until I get it right. Too bad that the Let's Play I started (Nox) had some sort of savegame fault, so I'm not able to continue. This is highly annoying and I wasn't able to find any way to fix it. A shame, totally. But this gives me enough room to start something else...

Also, if you haven't noticed, Dark Souls is coming out for PC! Wave yer flags and sharpen your spears: this is definitely something I'll Let's Play after Legend of Grimrock (which I haven't yet touched in any way but the menu).


Some more thoughts about scene assembling and light

In my previous post I already mentioned my preference for using blocks and octrees for static scenes, and I wondered whether I can combine it with my ASCII raytracer. It was of rather poor performance, and even with some improvements the main problem of passing through the same tiles over and over again didn't make it faster. Thus, I was thinking about using an octree for it, too, but quickly noticed how problematic this concept becomes when using destructible terrain. For rendering, it's probably a good idea to use an octree with opaque and transparent walls, so that the CPU only needs to pass the stuff that's potentially displayable. But the more destroyed the terrain, the bigger and more complex the octree becomes. So it's suitable for my raytracer, BUT (and that's the point where it's actual scene assembling), you can precalculate stuff for single blocks/scenes and reuse it later. Scenes made up of smaller scenes seem to be less common in modern video games. Well, there are similar techniques for producing assisted hidden surface detection on arbitrary polygon sets, so it's no wonder artists prefer to just sculpt stuff. However, I like the approach of assembling maps from premade blocks. In the particular case of orthogonal ASCII raytracing, it could be used to precalculate single blocks at different angles and then just quantize the ray angles. This way, you only need to take data from pre-rendered content and no rendering would occur. Well, this won't work for dynamic colors, only for static data. It might be possible to find a way of dynamically generating those precalculated chunks. But I don't think that the required work to achieve this is worth it - in the end, you'll probably end up doing more management than good for the performance.
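The growth problem with destructible terrain can be made concrete with a minimal octree node: a uniform region is one leaf, but every destroyed block forces a split into eight children. This is a hedged sketch of the data structure being discussed, with invented names, not code from the project.

```c
#include <stdlib.h>

/* A node is either a uniform leaf (all solid or all empty) or has
   eight children covering its octants. */
typedef struct onode {
    struct onode *child[8]; /* all NULL for a uniform leaf */
    unsigned char solid;    /* leaf payload: 1 = opaque block */
} onode;

static onode *onode_leaf(unsigned char solid) {
    onode *n = calloc(1, sizeof *n); /* children start out NULL */
    if (n) n->solid = solid;
    return n;
}

static int onode_is_leaf(const onode *n) { return n->child[0] == NULL; }

/* Destroying part of a uniform region: the leaf must split into
   eight copies of itself before one octant can change. This is
   exactly why heavy destruction makes the tree grow. */
static int onode_split(onode *n) {
    for (int i = 0; i < 8; i++) {
        n->child[i] = onode_leaf(n->solid);
        if (!n->child[i]) return -1;
    }
    return 0;
}
```

Merging eight identical children back into one leaf would be the corresponding shrink operation, but terrain rarely becomes uniform again.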

All in all I'm wondering how I could speed up my own renderer with this. Whichever way I turn it, it simply doesn't fit and I never get much out of it. Seems that raytracing and destructible terrain is a combination nowhere near any speed-ups.

Completely untitled post

Added a few macros for fixed-size matrices and vectors that work the same way as ITK_VEC_L1 and its variants. They can be used like normal compound operations (evaluating to the address of the target object) or like temporary values created using {} brackets. This way it's possible to create temporary anonymous matrices and vectors if needed. It's really interesting to see how easily compound and non-compound operations can be done with pure C code. Expressions really become your friends in such situations (if they aren't already). A lot of stuff should be possible with this, but it does of course increase compile time due to quite excessive macro use. Flexibility comes at a price, as usual.
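The {}-bracket trick described here is C99 compound literals. This is a minimal sketch of the two usage styles - value-producing and compound (address-returning) - with invented `vec2` names, not the actual ITK macros.

```c
typedef struct { float x, y; } vec2;

/* Non-compound style: takes and returns anonymous temporaries. */
static vec2 vec2_add(vec2 a, vec2 b) {
    return (vec2){ a.x + b.x, a.y + b.y };
}

/* Compound style: writes through the target and returns its address,
   so operations can be chained on one object. */
static vec2 *vec2_add_to(vec2 *dst, vec2 b) {
    dst->x += b.x;
    dst->y += b.y;
    return dst;
}
```

Call sites then look like `vec2_add((vec2){1, 2}, (vec2){3, 4})` - the temporaries never need a name.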

Anyway, this should ensure that I can do stuff like with normal operators (though in a somewhat different syntax). At least I hope so, because I'd like to finally continue the stuff with actual output that's not just made of pure helper functionality. Some further analysis of the current command string render system made me realize how flexible it is. You can feed the system via sorted or unsorted linked lists, via arrays with long, generated command strings or by using a repeat parameter (and thus profit from faster execution because there's just one command to execute). Very nice indeed, but it has no way of visibility clipping or scene assembling, which should be done before feeding the renderer. Personally, I prefer to split stuff this way because it's already a bit bloated with this command string stuff.

Guess that's another project after I've done the basic 2D/3D stuff for IGE. There surely is still a lot to do, but I like to know what should be done next. The more options I have to expand my own tech for future stuff, the better it'll merge with new stuff. At least I hope so... Whatever. Creating some sort of scene assembling will be tricky. My understanding of virtual worlds consists of cubes in cubes or rects in rects. Some sort of boxing is always present, but I wonder how visibility detection can properly be done in combination with the graphics card. In any case, I'll at least have to know what the frustum is in 3D or the screen rect in 2D. For non-perspective, axis-aligned rendering it's in fact trivial - that's not the difficult part. The difficulty is to combine my block arrangement with a possibly arbitrary visibility volume. Hm, in theory I could just interpret the "view volume" as a convex polygon and then apply a specialized rasterization to determine what part of the block/box space is visible. That may be a good idea! An octree would be perfect for this: strong, all-same box alignment and a flexible number of resolutions for adding a lot of detail. One could even combine animated octrees when using the matrix used for the rotation... Yep, that could work. All I need is "a bit more time"(TM). But I think that's the perfect way of creating levels and 3D scenes for the stuff I do. Guess RPGmaker left its mark on me with its tile-based world creation...


OpenGL Performance Optimization

There's a very useful article about pitfalls when using OpenGL and how to avoid them. Most of it was already clear to me, but some stuff was new and I'm considering all mentioned things when starting to create more high-level features. Currently I'm not sure whether I want to provide many low-level functionalities like those usable in direct OpenGL or rather give a set of more managed features. I'm really not sure about this, but it'll probably be rather simple in what it does. After all, there are so many ways to optimize stuff that it clearly depends on what and how you are rendering your screen output. Since I don't want to limit everything to a specific way of building screen content, I can't afford ruling over the whole setup. Thus, I'll probably focus on drawing only and leave those features out that are more dependent than others.



I spent the whole day writing vector functions that were missing in the past (which means that ITK now has all the typical vector functions I'll need for graphics programming) and, most importantly, finished writing the raw code for n-dimensional, 2D and 3D matrices. Can you imagine how annoying it is to expand matrix multiplications in the third dimension? Holy shit, this was some mess. However, the pain was worth it because that many zeroes and ones can be optimized very nicely, cutting out more than half of the usual work. I won't need 3D matrix operations for my graphics engine, but if I want to create my own software 3D renderer in the future, I'm gonna need them. So yeah, there's a lot of macro code right now, but it's equally messed up as the bucket sort macro.
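The zeroes-and-ones optimization can be illustrated with one concrete case: composing a translation onto a 4x4 matrix. A general 4x4 product costs 64 multiplies, but since the translation matrix is identity except for one column, only 12 remain. A minimal sketch (column-major layout as in OpenGL; not the ITK macro code itself):

```c
/* m = m * T(tx, ty, tz). Because T is identity everywhere except its
   fourth column, only m's fourth column changes:
   m'[.,3] = m[.,0]*tx + m[.,1]*ty + m[.,2]*tz + m[.,3]. */
static void mat4_translate(float m[16], float tx, float ty, float tz) {
    for (int r = 0; r < 4; r++)
        m[12 + r] += m[0 + r] * tx + m[4 + r] * ty + m[8 + r] * tz;
}
```

The same reasoning applies to rotation and scaling matrices, which is where the "more than half" saving comes from once it's all expanded.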

What's next is to test it properly, check functionality for 2D, 3D and ND, and integrate it into the VM. I'm feeling much better now; not having my own matrix code has bugged me for years! After having it in front of me all day, after writing, analysing and optimizing, I finally got comfortable with it. I even saw connections that never revealed themselves before. All in all this was a very necessary and worthwhile day.


Why didn't I buy another console

I've been stuck with PC gaming after the Wii came out. I realized that I'm not part of the targeted audience and that I strongly dislike gestures in video games and how they were used in almost every game. However, I love Nintendo games because they are just quality in most cases and I didn't want to miss the new Mario and Metroid games. So I stuck with the Wii and used it when a good game came out (I can count them on just two and a half hands). Everything else was PC gaming and quite often I regretted my decision. I didn't like the XBox so much, a Playstation 3 was still expensive and the games didn't interest me. But now that everyone's spreading rumours about a new console generation, there are so many interesting small games on the PSN network that I somehow regret not being able to play them. Somehow sad. I wish they would release interesting stuff in the beginning and not just the random raw starter titles. Anyway, I bought the slim PS2 version years after its prime time and had a few nice games for it (including the Jak and Daxter games and the odyssey of Okami). I hope they will release a slim XBox or Playstation 3 so that I can finally play all those games I always wanted to play but couldn't. After all, I'm a loyal fan of Nintendo, though they made decisions with the Wii I didn't like. I'm looking forward to their new console and what studios they can motivate to program for it.



Oh man, I can't remember how often I've had my very special frustrations about video editing, recording and cutting, as well as recording my voice in parallel. Anyway... all this is nothing compared to the utter shame of not pushing the recording button. Yep, I did that. Three hours of Let's Play recording but no video and no audio. I guess you can imagine my current frustration and also the reason why I won't continue the one I was working on. I already did something similar before, and with a game that only supports auto-save... no. Not to me, not to me. This made me think, and I decided to only let's play games that don't have a lot of plays on Youtube. Or just those nobody was interested in when they came out or so. There's a bunch of games that won't get or never got that much attention, so it feels good to show the world how awesome they are.

But still, it's so damn frustrating! For sheer compensation reasons I started playing and recording Nox and got three hours of video and audio material. Lower resolutions make recording easier since Fraps's fps counter is way more visible... Geez, days like these totally suck. I wish there'd be some sort of automatic recording start when speaking for the first time. Could make it easier not to miss stuff.

Matrices the second

I'm writing quite a lot about this stuff (in fact, nobody's probably reading it anymore due to that), but here's something new I found interesting to mention. I've analysed OpenGL's own matrix use, researched a bit about the few things I couldn't remember from my own math lectures and thought more than once about how to efficiently apply any of the matrices. See, just creating a static matrix isn't possible - they have unique parameters each time. Always creating a local array on the stack is also stupid because it wastes a lot of memory. Specialized multiplications with left-out factors etc. are not readable enough (imho) and fail to be of use if the matrix to multiply is already so full with previous multiplications that you can't use the same code for all multiplications (well, the latter one's actually not that valid after writing it). Anyway, the essence is that you need to store this stuff somewhere. Those matrices can be used on and on again if the parameters don't change. Therefore, it's a better idea to generate a matrix one time and then apply it for all following steps. My current approach is just like that and, combined with a direct multiplication, this actually works well considering the fact that my graphics engine is now more like a VM that can possibly optimize multiple matrix commands into a single one, avoiding unnecessary calculations and so on. This might seem like nitpicking from the outside, but when you can get less calculation time for free, you should possibly take it. The more that's possible with one engine, the more awesome it can be in the end - think about it! And the "easiest" way to make things look better is to make everything as fast as possible so that more details can be done in time. I'm having quite a lot of ideas what to do with more free CPU time, so it's never wasted (even if it's just some more wobble effect or an additional light source).

I'm beginning to like this for some reason. Finally there's a whole room of possibilities that doesn't limit itself like multithreading or so. Plain and simple! Just how I like graphics programming. Ideally, it's also straightforward and logical since it should be quick for the system to execute. Oh boy, I'm looking so damn forward to doing more of this stuff in the future.

OOP vs. State Machine vs. VM

I noticed that my render concept bears similarities to a little virtual machine. I added goto and a conditional jump so that command string indices can be inserted via buffers and so on. So you don't need to call and fill new stuff and can utilize the command string itself. I don't exactly know what to do with it right now, but it'll probably fulfill its role at some point.

I mean, just think about it: a VM is quite responsive and fixed code written for it is just like a shader executed on graphics cards, only done via the CPU. I'll probably put all commands and things into it because it simplifies the whole writing of it. I already have buffers I can set (which function like repeating read-only memory slots) and everything else is just calculating and calling. I might add some more specialized commands for future stuff. Since I can, for example, also add shaders to the pipeline, this might become an interesting way to assemble them. Yeah, that wouldn't require any calculation functionality, but for pure CPU calculations, I could also add commands for normalizing sets of vectors and other common minor tasks you'd need to do before starting the renderer and so on. I could also abuse shaders in the pipeline to execute those commands and flush the cycle once to get them back. Most interesting indeed, if you ask me.
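A command-string VM with goto and a conditional jump can be boiled down to very little code. This is an invented toy instruction set to illustrate the dispatch-loop shape, not IGE's actual opcodes: each command is an opcode followed by its fixed operands, and `OP_JNZ` rewrites the program counter.

```c
enum { OP_END, OP_SET, OP_ADD, OP_JNZ }; /* JNZ: jump if register != 0 */

static int vm_run(const int *code) {
    int reg = 0; /* single accumulator, enough for the sketch */
    for (int pc = 0; code[pc] != OP_END; ) {
        switch (code[pc]) {
        case OP_SET: reg  = code[pc + 1]; pc += 2; break;
        case OP_ADD: reg += code[pc + 1]; pc += 2; break;
        case OP_JNZ: pc = (reg != 0) ? code[pc + 1] : pc + 2; break;
        default:     return reg; /* unknown opcode: stop */
        }
    }
    return reg;
}
```

With `OP_JNZ` pointing backwards, the command string loops over itself - e.g. `{SET 3, ADD -1, JNZ 2, END}` counts down to zero without the host calling anything in between.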

Anyway, I've yet to properly experiment with shaders, their compilation and so on. The biggest concern so far really is the base itself. Calling OpenGL or SDL is just copy and paste - API usage after all. Therefore, I'll at first focus on all the basic 2D stuff, experiment with compiled OpenGL display lists and shaders, finish the last bits of the matrix code and then implement all desired OpenGL features. If everything plays well, I'll have some nice little graphics engine combining everything in a more comfortable and effective way. I'm just wondering whether this will work as well as custom code without this layer. More direct usually means less latency, so it'd be slower on one side, but it does the same as a typical performance loop, so it's not that big of a latency added. Logical thinking tells me that this stuff can't be worse than a custom loop for standard stuff like sending a bunch of polygons and rendering them. It's just looping after all - no need to care about making the optimal case more optimal than it is.


So that's it or what?

Matrices seem damn trivial to implement to me for some reason. You can find all needed base operations, can multiply them for quick later calculations, and multiplying them with vectors is even simpler. I don't complain at all, it's all great cause I don't like to spend most of the time caring about implementing math. If everything works out right, I should be able to implement positioning via SDL and OpenGL this night and spend the weekend on adding the graphics stuff. I'm very surprised about this since it's one of the major "creep factors" about math in my life. Well, it seems similar to getting over fears like being alone in the dark as a child, or learning how to do something you always wanted to be able to. But the most awesome thing ever is that I get more and more ideas about how to optimize all the shit in my previous OpenGL projects and what new stuff is possible right now in 2D. Matrices are THE shit, I tell you! If you haven't quite got around to this stuff and feel you want to in some way, just do it when the moment's right. It's worth having a custom library at hand.



Pulled myself together and made most of the framework for implementing unique render commands in IGE. It's quite a challenge, I have to say, and the more detailed one-time commands you add for each item, the more performance it costs. It's similar to how you fill OpenGL or SDL with stuff, so in the end it's just an added convenience layer. However, SDL requires significantly more work and buffer management since it has no "bulk" commands like OpenGL to just pass a vertex array. Everything has to be done by hand, increasing the number of needed function calls. I think I found quite a nice way that's not that different from how you'd do it manually. What I'm concerned about is how command IDs will be checked. I know that GCC is smart enough to translate it to jump tables, but this will only work if every enum has a case in a switch block - or at least a series of IDs with no gaps in between. I doubt they implemented something that does checks before just mapping to a jump table, so it may be a good idea to group stuff later and have two command IDs. This would also simplify the filtering of command categories, as I'm not sure whether I want to allow matrix operations outside of direct display list rendering. Yep, I think two IDs is a good idea. Makes stuff light for the optimizer.
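The two-ID idea could look roughly like this: a category ID selects a small, gap-free group of command IDs, so each switch stays dense and the compiler can plausibly emit a jump table for it. All enum and function names here are invented for illustration; the return values just stand in for "do the work".

```c
enum cmd_cat  { CAT_DRAW, CAT_MATRIX };
enum cmd_draw { DRAW_POINT, DRAW_LINE, DRAW_TRIANGLE };  /* dense: 0..2 */
enum cmd_mat  { MAT_PUSH, MAT_POP, MAT_MUL };            /* dense: 0..2 */

static int dispatch(int cat, int id) {
    switch (cat) {
    case CAT_DRAW:
        switch (id) { /* contiguous IDs: jump-table friendly */
        case DRAW_POINT:    return 1;
        case DRAW_LINE:     return 2;
        case DRAW_TRIANGLE: return 3;
        }
        break;
    case CAT_MATRIX:
        switch (id) {
        case MAT_PUSH: return 10;
        case MAT_POP:  return 11;
        case MAT_MUL:  return 12;
        }
        break;
    }
    return -1; /* unknown command */
}
```

Filtering whole categories (e.g. rejecting matrix ops outside display list rendering) then becomes a single check on `cat` before the inner switch ever runs.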

Also, it's the 1028th post! Though I should've celebrated the 1024th one, I'm gonna sacrifice my last chocolate bar for this unique event.


Swords & Sorcery for PC!

There's a bunch of indie games I remember from my TIGSource times, but few really made me want to play them. One of them is Swords & Sorcery. Can't remember when I first saw it, but since it was in some graphics thread, I hoped for a playable version coming soon. Well, it seems that they've been busy in the meantime and that I completely overlooked everything going on with it. Anyway, I'll probably buy it (for whatever price or quality) just because I simply dig the art style.

Low-rez awesomeness was always one of my weak spots.

Nope, not this semester

Narf, sometimes you've just fooled yourself and you'll have to try stuff again, sooner or later. For me, I made a minor mistake leading to a greater problem, which I probably already explained in a few posts before. Anyway, now I definitely need to delay my bachelor by a semester. It makes me a bit sad, but it also means that I can do a longer internship and thus get more experience. It makes it possible to avoid certain paragraphs applying when doing an internship in time, so the delay actually brings something good I hadn't thought about before. Longer internship, more time for the bachelor, more freedom... Shouldn't worry about it too much. Some students delay their studies by even more than one semester, so I don't think that I should feel guilty now or something like that. In fact, I didn't fail any exams, my marks are quite good imho, and I can even figure out a plan B in case my bachelor topic is not enough for the profs.

All in all, stuff will go on and eventually I will gain more experience with what I'm doing. Some actual references are always good when applying for jobs and internships. Not those custom engines I write - they are more like what needs to find its way out of my head. I'm talking about the time you've spent in video game studios and the professional projects you've worked on.


My own matrix processor

That matrix stuff really isn't as bad as I thought. It's actually quite fun to figure out some compact notations etc. I'll definitely spend some time expanding the generic multiplication formulas for translation, rotation and scaling matrices to get some less naive versions. I also found a nice site explaining the OpenGL matrices in a practical way, so I can use the same calculations if I want to. I won't use my own 3D matrix operations for my engine since OpenGL will definitely provide more optimized ones. However, for 2D these are essential and I'm once again quite excited when thinking about the new possibilities I'll have in the future! Adding projection matrices makes it possible to have my own 3D software renderer in the future! Sure, this will be a project of its own, but I know that I'll need it to fully feel at home when using 3D. I mean, just think about the cool stuff you can write when fully specifying everything on your own. I could even try to get it running on the Lego NXT with some very limited lofi software rendering.

Oh boy, I can't tell you how much better this makes me feel just by thinking about it. I know it's rather trivial stuff and completely unrelated to any typical programmer approach (you know, reading and using), but it's just the way I work and how I can feel comfortable with new stuff. That's probably also the reason why I dislike using random libraries that do the stuff I actually want to do myself: I simply haven't yet done it myself! Strange.

New IGE feature set v2

To summarize my plans and the various blog entries I've written with no apparent relation besides a few uses of the word "OpenGL" and other nerdy stuff, here's a cleaner and less 2D-centered list of the features I'm working on:

Changes to the render system in general:
  • split recursive display list rendering into display lists and display items
  • each item can contain "command strings" describing its visual representation
  • command strings are binary representations of 2D and 3D drawing operations, position via matrices/vectors and input data
  • z sorting will still be kept for display items, so that sorted rendering is still possible
  • screens can either be in 3D mode (OpenGL) or 2D (SDL), disabling their respective commands
  • for easy rendering, not only graphical primitives will be provided but also sprites, tile maps etc

Command string features:
  • 2D and 3D drawing primitives and associated state
  • position via 2D and 3D matrix manipulation like in OpenGL
  • bind float and integer arrays to buffers as command input
  • commands will effectively "pop" a fixed number of values from the buffer, leaving room to fill or load buffers for a whole series of render commands

Stuff that may be interesting for performance:
  • compile command strings as OpenGL display lists when they do not change after n iterations (or at least provide some external function for them)
  • try avoiding repeated OpenGL calls by merging sequential or repeated render commands (if possible)
  • integrate VBOs and PBOs in some way
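The "pop a fixed number of values from the buffer" model from the list above can be sketched as follows. The opcodes, arities and names are invented for illustration - the real IGE encoding is not public in this post - but the shape is the same: each command byte implies how many floats it consumes from the bound buffer.

```c
#include <stddef.h>

enum { CMD_STOP, CMD_VERTEX2, CMD_COLOR3 }; /* pop 0, 2, 3 floats */

static const int cmd_arity[] = { 0, 2, 3 };

/* Walk a command string, consuming operands from buf; returns the total
   number of floats popped, or -1 if the buffer ran dry. */
static int run_commands(const unsigned char *cmds,
                        const float *buf, size_t buflen) {
    size_t used = 0;
    for (size_t i = 0; cmds[i] != CMD_STOP; i++) {
        size_t n = (size_t)cmd_arity[cmds[i]];
        if (used + n > buflen) return -1;
        /* a real renderer would act on buf[used .. used+n) here */
        used += n;
    }
    return (int)used;
}
```

The nice property is that the same command string can be replayed against different buffers, which is what makes repeat parameters and pre-filled buffers cheap.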


Let's make it 2D! And philosophy!

After a rather eventful but disappointing day, I decided to tackle matrices and vectors once again and have to say that, when not focusing on n-dimensionality, it seems very simple to realize and use. I mean, most problems I'm having with mathematics are in understanding the way it was created and separating what's definition and theory from what's practical and of use. Personally, I found OpenGL's matrix, position and rotation handling simply perfect for learning how useful this stuff can be. Now that I'm thinking about how to find some useful positioning in a 2D system as well, I'm having a hard time finding something that's more useful than a stack of matrices and trans/rot/scale commands.

So I've decided to build the same operations for 2D positioning as well. The only difference is that there will be no vertices, just drawing positions for the drawing commands. I feel a bit strange about randomly writing stuff that *may* seem useful later instead of starting to create a complete game. All in all, I just know that I'm not into doing random stuff or grabbing the next best lib that's available, so I can only sit here and code whatever seems like the next step toward having a custom engine for everything.

Those days when you're confronted with your own self-inflicted stuff, even though there was the opportunity to work together with someone on a completely different basis you've never tried and never will, are... disgusting. To say the least. I mean, everyone is what he is and can do what he can do, and I can create little engine thingies that can be used for greater things, so that's effectively what I do. And writing this here, completely unrelated to anything in the first part of the post, may seem like some sort of reflection going on in a young hardcore C programmer's mind. I've never done anything different and I don't plan to. The way I see games and programming is so different from all the other attitudes I've encountered so far that I wonder whether there's actually a place where I'd fit perfectly.

That's what's driving me nuts right now, and for some reason I'd once again like to have some sort of isolated time capsule to go through every aspect of game engine programming and realize them all in parallel. For maximum self-irony, I should start calling myself a "Real Programmer". Just to illustrate this strange, lonely relationship. Hey, that sounds like a good catchy phrase...

The "Real Programmer" legend

I found an article about the so-called "Real Programmer" in a rather random fashion on Wikipedia. It's fascinating to think about: every generation of programmers sees the previous generation, in a humorous way, as more "real" programmers than they themselves are. They make fun of them just as much as those claimed real programmers make fun of non-real programmers. It's a matter of fact that different kinds of programmers tend to discriminate against each other in different ways, some obvious and some more subtle. Just think about what a hardcore old-fashioned C programmer would say when fixing a Java programmer's overblown class construct that actually works fine in production code. For both it's totally annoying: the high-level Java dude getting seemingly corrected by the old fart who doesn't respect his new ways of coding.

As a matter of fact, I rather dislike this sort of thinking, but it always comes to mind when talking with fellow students. They see my code and look away. I see their code and want to fix it. These differences in preferred paradigms and the positive experiences people have with them are something that has followed me quite often in the past. Maybe I'm just too arrogant to use the same stuff others do and see it as inferior to what's in my head. Or maybe they think the same of my stuff.

In any case, there's no real way of not thinking about possible paradigm collisions. And I bet you do your own part by not appreciating other people's methods that differ more or less from your own. Either that, or you just know that one specific kind of programming is not the only thing in your life and that you identify with more than what you're specialized in.

Get real!

Matrices and stuff like that

I've been thinking about using the same transformation commands/methods for 2D and 3D. In the end, I still have some getting-used-to's in the pipeline that are all about matrices and vectors, so it's probably best to tinker with them a bit more rather than just get the job done. After all, writing GLSL seems so simple in the end that the only thing you really need to know is the maths behind it. I had all this stuff in previous semesters, so it's probably no hurdle to add some macros dealing with all the important matrix stuff, some premade matrix operations and so on.
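
To give an idea of what such "premade matrix operations" could look like, here's a minimal sketch of a 4x4 multiply and identity in OpenGL's column-major layout; the function names are my own, not existing API:

```c
/* Hypothetical sketch of premade matrix helpers using OpenGL's
 * column-major layout, i.e. element [col * 4 + row]. These are the
 * kind of routines I'd later wrap in macros. */
void mat4_identity(float m[16])
{
    for (int i = 0; i < 16; i++)
        m[i] = (i % 5 == 0) ? 1.0f : 0.0f;   /* 1 on the diagonal */
}

void mat4_mul(float out[16], const float a[16], const float b[16])
{
    for (int col = 0; col < 4; col++)
        for (int row = 0; row < 4; row++) {
            float s = 0.0f;
            for (int k = 0; k < 4; k++)
                s += a[k * 4 + row] * b[col * 4 + k];
            out[col * 4 + row] = s;
        }
}
```

With the layout matching OpenGL, the result can go straight into glLoadMatrixf without any transposing.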

However, the idea of having the same command set for 2D and 3D simply pleases my inner n-dimensionalizer. I have quite a bunch of interesting ideas for easily building all sorts of nice effects with a few fixed data structs to pass around. The more I think about the possibilities, the less I tend to rethink it. That's a good thing, because it keeps things moving forward and gives me the instant success I need right now.


Not practical right now

After reading that much about how to set up shaders with OpenGL, I felt urged to work a bit on IGE's 3D support and the general input itself. I chose simple functions before because they were quick to write and SDL didn't offer more to support. However, SDL and OpenGL functionality that's a bit more flexible, with all the buffer/x/y/index parameters and structs used to generate screen content, simply produces a huge load of structures. I've started to use anonymous unions instead and hope that this will reduce the declaration load. This stuff won't be used anywhere else; it's really just constructed to feed the algorithm once with almost always constant data. The concept is interesting, but it's "just" some sugar to work around declaration-heavy but small functions. The more I think about how hard it is to dynamically build data structures from a growing context, the more I'd appreciate a compiler that goes through a source file and generates structure content one could then modify. Of course, this doesn't lead to fixed and well-defined data organization, but sometimes, you know, sometimes it would just be awesome to have.
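
For illustration, this is roughly what the anonymous-union trick looks like; the struct and its fields are invented for the example, and note that anonymous unions/structs need C11 (or the older GNU/MSVC extension):

```c
/* Hypothetical sketch of the anonymous-union idea: one parameter
 * struct feeds a draw routine once, and the union overlays the
 * 2D (SDL-ish) and 3D (OpenGL-ish) variants without declaring and
 * naming a nested type for each. Requires C11 anonymous unions and
 * structs (or the GNU/MSVC extension). */
typedef struct {
    int mode;                            /* 0 = 2D, 1 = 3D */
    union {                              /* anonymous: members are
                                            accessed directly     */
        struct { int x, y, w, h; };      /* 2D blit rectangle */
        struct { float vx, vy, vz; };    /* 3D position       */
    };
} DrawParams;
```

Because the union is anonymous, the caller writes `p.x` or `p.vx` directly instead of `p.u.rect.x`, which is exactly the declaration load this is meant to cut.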

GLSL shaders

I'm currently reading an interesting tutorial about GLSL shaders in OpenGL, and I'm beginning to understand why you need to know about 3D graphics when writing shaders, compared to the plain use of built-in features. If a shader really replaces the fixed OpenGL functionality, you have to do everything on your own - that's where knowledge and skill come in. I have to admit I never quite got why all games require shaders, when most of them only use effects and graphics so basic that shaders aren't really needed at all. Of course, render quality is definitely a concern, but it probably doesn't justify the additional development time needed to completely reinvent the old stuff.
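
The most basic thing a vertex shader takes over from the fixed pipeline is the modelview-projection transform - what ftransform() or gl_ModelViewProjectionMatrix * gl_Vertex does in legacy GLSL. Written out as plain C (the function name is made up), it's just this:

```c
/* Sketch of the vertex transform a shader must reproduce when it
 * replaces the fixed pipeline: out = MVP * v, with the matrix in
 * OpenGL's column-major layout (element [col * 4 + row]). */
void transform_vertex(float out[4], const float mvp[16], const float v[4])
{
    for (int row = 0; row < 4; row++)
        out[row] = mvp[0 * 4 + row] * v[0]
                 + mvp[1 * 4 + row] * v[1]
                 + mvp[2 * 4 + row] * v[2]
                 + mvp[3 * 4 + row] * v[3];
}
```

Lighting, fog, texturing - everything else the fixed pipeline did for free stacks on top of this, which is exactly where the "do it all on your own" part starts.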

Well, I'm just learning right now, so 60% of what I'm saying may simply be false. However, I don't think I'll need shaders that soon (except maybe during my internship or something like that...). Everything I have in mind can be created with simple standard OpenGL stuff, so I'm happy about it! Don't make bigger leaps than required.