Let's Plays and quality

Right now, while uploading my first Let's Play videos, I find myself asking how high the resolution should be, how long and detailed the playthrough should be, and so on. After all, I'm filming content that one usually needs to buy in order to see it actually running (especially if there's no gameplay video provided by the publisher/developers). Personally, I only watch Let's Plays of games I suck at (having no fun playing them myself), games I can't play (not having the hardware) or ones where I just like the commentary. Thus, I either own the game, want to own it, or never had any interest in playing it.

Uploading a long and detailed (sometimes even interactive) Let's Play in HD does sort of give people the ability to see everything without buying it. If a game's gameplay sucks, you can still enjoy the graphics by watching a Let's Play without wasting money. Doesn't sound wrong, eh? Well, this is true for random Youtube visitors. Bigger publishers want to prevent users from skipping the purchase of their super-linear games, because buying is the only way they get any value out of them. But even with a big, complex game where no two players will have the same experience, spoiling certain quests or parts simply destroys either buyability or replayability.

I won't try to "protect" a developer's/publisher's plan of getting as many people as possible to buy their game while not giving them a proper gameplay video. In the end, I use Let's Plays to inform myself whether a game is worth buying or not. Thus, I stop watching Let's Plays of a game once I want to (and can) buy it. If someone fails to advertise a game well enough, Let's Plays are the best way to get a genuine gameplay impression straight from an everyday consumer. Therefore, I'll just record as much as is interesting to me and to the project as a whole. Not in high resolution, of course. I'm currently recording at 840x524 pixels with quite visible artifacts. I don't care about delivering too high a visual quality, since I think Let's Plays should be watched in the background and not as a main entertainment source. You can still buy the game if you want high quality, you know.

Nope, I just did it wrong

Ok, I dropped Virtual Dub due to its completely incompetent codec handling and gave Avidemux another try. So far, I was able to render a complete part with some color correction (for some reason, Virtual Dub displayed it exactly how it looked while playing; Avidemux needed the correction). Too bad Avidemux doesn't have a nice splitting feature, so I have to split manually instead. It isn't too bad, though, and gives me some opportunity for smoother endings. However, I think it's best to stop parts after 14 minutes to avoid this completely. Rendering takes so much time - best to reduce the editing steps to a minimum.

So yeah, there's progress (though not as steady as I'd like). However, I didn't record anything in English because I'm not yet "skilled" enough to spontaneously talk English while playing. I tested it while playing Stalker and it didn't end really well, I think. It might be Stalker itself. I mean, the game's not really filled with stuff and I know it inside out, so it's no wonder I'm having a hard time explaining everything I do by default. I guess I'll do Let's Plays in both languages in the end, depending on whether my English is appropriate enough or not. There are also some games I can't and don't want to play in English because I only have the German version or because they're too "random" in terms of language. For example, it took me ages to find out what "pewter" was until I fired up my favorite web dictionary. I wouldn't want that to happen in a live-commented Let's Play.

Video editing sucks

Virtual Dub can't handle codecs at all, Avidemux can't do anything but spit out spastic error messages, and everything else is either too old to work, equally stupid or not free.

I'm tempted to spontaneously spend 50€ on video editing software because of that. The only thing I want to do is merge an audio and a video stream, but no program can even remotely grasp that all that's required is to decode, mix and encode channel data! Seriously, what sort of idiot thinks of anything else and calls himself a programmer? If something doesn't fit, CAST IT to the target type... applicable to data types AND codecs!
For some reason Virtual Dub tends to crash during rendering and I can't really do anything about it. I tried Avidemux before, but the audio insertion didn't work, so I dropped it (video, however, worked totally fine). Now, after exporting and adjusting the recorded commentary with Audacity, the encoding changed and suddenly Avidemux was able to read it! If that's not nice - now I just need to find a good codec to store the videos losslessly and then mix in my commentary. Awesome; after Virtual Dub's repeated failures I thought I'd have to buy a commercial video editor and had lost faith in open source alternatives.

I can't repeat often enough how much I dislike the whole multimedia editing software scene. I mean, it's not that it isn't the same for graphics tools and IDEs, but video editing is the most annoying thing ever. Anyway, I hope Avidemux's results are good enough for my needs. I don't mind storing gigabytes of videos locally, but I want uploads to be as small as possible - good compression is necessary.


Simple Let's Play recording with Fraps and Teamspeak

Long title, short story: I found a very comfortable way of recording Let's Play audio and video that requires minimal effort and syncing. The idea is to choose the same record hotkey for Fraps and Teamspeak, so that both files will be in sync when combined. Using Virtual Dub to extract the in-game sound, Audacity to adjust both channels and then Virtual Dub again to mix it back in works really fine, I have to say. I've been playing with different ways of recording for a while now and this seems to be the best one so far. Too bad that I started recording stuff before I figured this out, so I'll need to post-edit my old videos... However, you can even properly split the rendered result using the "Save as Segmented AVI" menu option in Virtual Dub 1.9.11. For 60 fps video, 54000 frames equal 15 minutes of recording, so just remove a few and you should meet Youtube's length limitation (yep, I'm not interested in giving Google my phone number just for longer videos).

So yeah, the recording setup is ready to go and I only need to cut the old videos. I mean, I really just have to line them up and shift the audio a bit. Shouldn't hurt, I guess?

Oh my gosh, not again

Second time in a day! Further night-time reading of OpenGL stuff revealed that there are also so-called display lists to compile OpenGL drawing sequences in a more efficient, repeatable way. At first I thought this would make my own approach redundant, but it actually doesn't, and I'm thinking about using display lists automatically in the background for certain commands that don't change much between frames. Gonna test this and check whether it's useful or too slow for realtime operation. Once again, the more I read about it, the more ideas I get about what's achievable and how. I thought it was more about writing all the stuff yourself and ending up with more maths and shaders than is healthy for your brain. Sure, that's the case when specially preparing model data, splitting it up into several spaces and so on. But still, I'm amazed how simple it seems to be to use the hardware more directly. Guess I have to face the fact that everything is as simple as I imagined when thinking about how such a render system has to work to achieve this amount of 3D. I'm very glad that I really don't have to worry about it. Makes the whole area more appealing.

What? They did it the same way?

I'm doing my best not to find it embarrassing that I discovered my "own" concept of just specifying user buffers as coordinate inputs already integrated right into OpenGL. It doesn't break my idea of specifying display contents, but I feel a little guilty for not reading about it before. Can't say whether this would've changed my design decisions, but, you know, it could have made me realize the great idea faster. So I'm actually feeling better knowing that I can map this stuff almost 1:1 without any hurdles. Having this sorted out, the next question is whether I can nicely integrate Vertex Buffer Objects (VBO) and Pixel Buffer Objects (PBO) to avoid constant bandwidth usage.

The OpenGL Programming Guide is a great resource for all related stuff, shedding new light each time you read it. I guess I'll simply use a single function converting all the color and vertex data generated by a display list string into a set of buffer objects on the graphics card. To draw it, one would just need to specify it via a command in the render string. Yep, that sounds like a good idea, I guess! I'm in a rather delicate situation right now, since spending my time programming OpenGL stuff is a heavy distraction from the other organisational stuff I have to do. I'm just not able to keep different things running that aren't focused on the same greater goal. Thus, I can either go on with a bunch of different programming things or do some organisational stuff. Not that I didn't try to overcome this in the past, but my head just doesn't work like that (sadly).

Anyway, it comes in rather handy when it IS about programming, though.


Organisation and you

I like to say that I'm "just a crummy programmer", humorously indicating that I don't have many strengths besides my programming skill and a few things I achieved through my programming/abstraction ability (like, for example, creating handmade pixel fractals). One of the many things I'm just horrible at is organization. It's not that I can't remember dates or deadlines, I just need some place where I can read and recall them. So I try to write everything down, which usually works - even for stuff that's not just for me but also for others. One can even tell me to write down stuff from meetings and events, and I'll of course do it and tell them later. Thus, I expect others to do so, too.

However, today was one of those days where I once again forgot that it's very rare for other people to behave the same way. As a result, I'll probably have to do my bachelor thesis next semester. I should've remembered my risk management lectures. I definitely failed at having a Plan B, and now I'm the one with the problems. It happens from time to time that my usual construct of Plans A, B and C doesn't cover those small but important events of an organisational nature - especially when expecting other people to be as reliable as I am when I promise something. Friendships have broken in the past because of this. If something goes wrong, it's usually something vital and irreversible - for which I blame the one responsible. But the worst part is that in such cases I should blame myself, too, because I didn't "risk-manage" everything properly.

I'm not into project management, I'm not into organization or similar stuff, but I try to cover that by doing risk management for the things I do. This only works in the areas I'm proficient in, though, and results in failure and chaos otherwise. I'm just a programmer after all, and this follows me everywhere. I'm not happy about it, and the place for me is probably somewhere I can just focus on my work and get things done properly.

All this won't stop me from getting an internship, of course. I should see it more positively - a late bachelor means more internship time. Plus, I won't need to mention that my university expects paid internships, which are not part of the game business's reality (and I don't expect money, since I want experience and insight into professional game development). If my internship is taken into account, I can still do another one and get even more insight, or just lengthen the old one if that's possible. The studio I wanted to go to hasn't answered so far, and I doubt they will. Darn, I was looking forward to it.

Non-Firefox browsers and ad blocking

Sometimes, as a Linux and Windows user not using Firefox and its associated Adblock plugins, you'll run into some highly annoying advertisements on Youtube and Google products in general. I haven't used Firefox for a while now and am happy with Epiphany on Linux and Opera on Windows. Too bad that sometimes things just go wrong and the ads will simply be there. You can't avoid it, and all browsers except Firefox with plugins react slowly in such cases. Epiphany is usually quite lovely in terms of ad blocking - I haven't had Youtube ads in years, even without applying available updates. I don't know why, but it seems to be the lack of certain web tech support, and I'm more than happy with it. But then again, on Windows, strange stuff pops up that doesn't occur when using Epiphany on Linux. Today was particularly annoying because of an extremely loud Youtube advertisement I couldn't get rid of in Opera. I just had to wait until it was gone. I mean, the channel I was watching just doesn't have enough viewers to seriously attract advertisement, I guess. But still, Youtube tends to send out stuff if it thinks it may be of use.

How pathetic. So much money they got and still no end. Someone should really kick their balls.


Legend of Grimrock preorder!

Legend of Grimrock preorder is online! How fabulous is that? You get lots of awesome concept art and wallpapers when doing so, plus sketches and the soundtrack's main theme! Totally awesome.

Now that's nifty

I've come across a very nice solution for my graphics engine, combined with the previously explained render command string of course: given a set of commands, each with a fixed-size list of integer parameters, one can define special commands that set the source of all used colors/vectors/textures, each associated with an identifier found in a command's parameter list. What this means is that you can precalculate all your colors and vectors in custom memory somewhere and then assemble the whole scene by setting these buffers and specifying which shapes should be drawn next and how often. It's hard to make this sound clear and convincing, but it lets the programmer specify stuff in a very short and simple manner - especially useful when generating screen contents from map data or effects. Adding stuff like repeat options for those buffers makes it possible to let all following objects, for example, share textures and colors implicitly.

I'm quite excited about this and feel even more convinced that this is the best way of feeding my graphics engine. No superfluous function calls, no uber-defined stuff reducing flexibility and definitely no non-generalized content. Very pleasing, indeed. Some more time and I'll do a few first tests, I guess. However, I had to drop a few things because I'm not quite sure how to implement clipping with such loosely defined screen content. For tracing and collision detection, this can be done by using the fragment buffer (I think that's what it's called) and associating an id with each graphical object. Oh boy, that's definitely what I was looking for!


Limited English

I've been watching The Japan Channel for quite some time now, and the more I compare English spoken by non-native English speakers to German spoken by non-native German speakers, the more I wonder why the fuck I understand their English but not their German. It's not just the Japanese (it's quite hard to actually find German-speaking Japanese people where I live) but also Russians, Chinese and whoever else. Taking my last visit to a sushi restaurant as an example, I did not understand a single word before realizing that the staff was speaking German to me. Another couple of guests clearly spoke English, and I was able to understand the staff's English but not their German. This makes me feel rather uncomfortable. I mean, English is simple compared to German and probably the easiest language to learn, internationally. So either their English was just better than their German, or I'm just not used to accents in German. I don't know; it's like not knowing one's own language anymore.

Confusing as hell I tell you. Plenty of food for stereotypical thinking.


Some more thoughts about IGE and OpenGL

I have to admit that I didn't code much, because I didn't quite know the best way to continue without needing to rewrite stuff later. See, OpenGL's API is almost exclusively made up of function calls, and I guess a lot of those functions just fill some buffers with no real function code at all - just stuff that could be done inline. Sooo, attaching ever more properties and function calls to graphical elements becomes slower and slower, and also removes the possibilities that come from mixing OpenGL calls in arbitrary ways. Previously, I described how I thought about using some sort of command string as a more dynamic way of reducing the required memory and function calls. The idea is interesting enough to completely drop any number of fixed settings for display objects, keep the z sorting to separate and order different depth buffer clearings (for rendering separate scenes or sprites), and provide arrays/lists with command identifiers and parameters as the sole input for describing the stuff that should be rendered. I don't know how OpenGL is implemented under the hood and whether this may just be another layer over the same functionality, but that's not important right now. I'm always looking for interesting ways of feeding programs and compilers in my free time, so this fits perfectly. I have quite a lot of interesting ideas that would make the whole state machine thingie OpenGL is made of work a bit more nicely and less state-dependently. It may even integrate well enough to completely drop all inefficient atom-like OpenGL function calls and just rely on given array addresses. Don't know what exactly I can do with OpenGL this way, but I trust my idea to be the perfect way of feeding a render system (yep, I'm quite arrogant today).

Soooooo, I have to design some command sets and related trees/constants to let stuff happen more efficiently! Yep, this includes some more reading and research. But well, more time spent on developing good solutions usually pays off in some way.


A bit of guiltiness

I've been thinking about using OOP for IGE, just because it makes more and more of a difference what data to store for which kind of graphics. I mean, especially in 3D there's a lot more to store than in 2D, and the number of potential graphical object types is not endless, but still quite large. Color multipliers for each quad edge, additional vector data, whether to draw a single big texture or make sprites out of it, whether to draw multiple graphics with the same properties, and so on. All of this affects memory consumption (who doesn't want possibly hundreds of sprites on the screen?) and partly also performance when creating them. I would've simply created a base display structure with some homemade derivation and type-dependent size, but well - I get the feeling this is no good idea, since a bit of good ol' C knowledge tells me another approach may be better. I've been thinking about using "property strings" to define the way objects get rendered. The idea is to have a buffer with commands setting properties and graphics to render, as well as, for example, different matrix operations not behaving like root-relative translation and rotation, etc. This means the user can create all sorts of stuff the way he wants by just lining commands up. Not necessarily the most performant approach, but still quicker than storing a shitload of not-needed information for each element to draw. Most objects drawn hundreds of times are simple to describe with a command string - weighing effectively nothing, just a simple condition. So I'm gonna rewrite the way of drawing stuff with some sort of OpenGL command string. It's simple to design; OpenGL commands usually don't have a lot of different parameter formats.

Yep, I'm more comfortable with this decision. It's not that I'm working too hard on it right now - term break, after all! But it's way better to keep the original spirit and clean, well-thought-out C code than to choose OOP out of randomness. I actually can't find any reason to use OOP in my personal projects. Sometimes coders need their little custom projects with all sorts of peculiarities to be happy. Better than making wrong decisions on serious projects just because of one's own love for self-declared code purity (that does sound like an argument, doesn't it?).


First plan for converting IGE to OpenGL

I've set up a list of features I've started to implement that should enable me to fully utilize OpenGL for 2D acceleration. I was a bit careful to also enable the use of full 3D models and scenes with them. I'll at first implement a less performant variant using glBegin/glEnd and use this as a base to expand with buffer objects later, so I won't have to call functions for each vertex or color change. Anyway, here's the full list of changes so far:

Image properties:
  • load it directly as an OpenGL texture
  • create color-keyed images when loading, apply fixed per-surface alpha before uploading
  • calc a scaling factor < 1.0 depending on which sprite side is the largest (for aspect-ratio-correct scaling/rendering)
Screen properties:
  • define physical screen size (integer) and virtual screen size (float)
  • add OpenGL subsystem initialization
Display properties:
  • drop palette changes, color keying and alpha values (for now)
  • translation and rotation relative to the root element (changeable via flag)
  • use OpenGL viewport to set clipping area
  • add new flags for interpreting translation/scaling relative to the clipping area (for GUIs etc)
  • replace source/destination coords and add translation, scaling and rotation vectors
  • keep an integer z value along with the float z value to separate rendering into different depth buffers (allows scenes rendered on scenes without intersecting their model data)
  • displays with the same integer z will be rendered in the same scene/depth buffer moment (allows more complex scenes and even models made of quads)
  • reduce recursion by only issuing new display render attempts for displays with sub-displays
That's basically what's to do next. I won't have to add anything new to ITK for this one; it's all just OpenGL coding (something nice for a change). Color and alpha setup as well as anti-aliasing and so on will come later - plain rendering is the most important thing right now. I know it's quite a change, but not much will change in the general interface except some different parameters and function calls. Interestingly, displays with the same integer z value can be rendered as quad lists - possibly whole scenes that can be precreated if they don't change much. I'll probably need to add a bit more detail about controlling what gets rendered in which pass to also form scenes out of multiple display list branches, but well - that's a bit too much implementation detail to think about before I know more.

So that's it, off to coding!


Oh, that easy?

I've started rewriting IGE to utilize OpenGL and I'm surprised by how much can be reduced when just using its rendering instead of SDL's. I mean, it's direct, you don't need to cope with palettizing to get some effects, and I can do scaling and rotation etc. without needing to implement them on my own, etc., etc. Also, OpenGL seems simpler and cleaner than I had in mind before. I guess it should be no problem to expand it with complete 3D scenes some day. Until then, however, I'm upgrading it step by step until I need/want to implement more. You know, I like a well-thought-out system that does all the things I can imagine being nice and useful for my project(s?).

I'm wondering what has happened to my assignment. It's been two weeks since then, hm...
Well, if they don't want me, I'm sure I'll have enough stuff done in the meantime to score points elsewhere. Though this would make me quite sad, since I am/was really looking forward to it.

Some plan changes

You know what? I got a bit sick of rendering polygons by hand once again. I mean, I have an algorithm and that's fine and all, but I'm currently more interested in getting things done, so I'll freeze my writings once again and do what I've always wanted to do since I created it: expand IGE to feature full 2D sprites with scaling and rotation! I have a bunch of interesting ideas, and I may also be able to nicely integrate full 3D scenes with all the required pre-processing done, like z-ordering. Have to think about how to handle coordinates and so on. It's good that I'm finally starting with all the OpenGL-related things. I mean, if my bachelor topic doesn't find any fertile ground, I'll have to do something graphical, because everything else is rather boring as a bachelor thesis. Well, the renderer/display list thingies are also interesting, I have to say. It's a bit of a problem if you're only thinking in engines and stuff for games - there's then nothing really different to do than this. I'm probably just thinking too much about this and should finally get my exposé done...

You know I'm not one to see the necessity of paperwork or formal descriptions unless actually required. And an exposé for a bachelor thesis is... well, let's just say that I'd like to do a project and present it in the end, without fiddling with those formal scientific bullshitteries which are so popular around universities.



I'm getting closer to an n-dimensional iteration for polygons, though I don't quite know whether it'll work the expected way. I'd rather have a limited n-dimensional polygon filler or border tracer than something that works totally perfectly in all cases. My idea is to combine the iteration of a polygon's wireframe, surface, volume and their appropriate higher-dimension variants into one macro, since their iteration differences are rather small. Now my XDIM macro comes in handy! Macros make a lot of stuff possible without sacrificing n-dimensionality. I like cracking nuts step by step if time allows it - normally I wouldn't spend the time if more important stuff had to run, too. I wonder what my assignment's doing; haven't heard anything yet.


Describing shapes in computational geometry

I read a bit about computational geometry a while ago and also studied a few source codes related to the topic, and some sort of "ahhh!" struck when I came to the point where the actual operation between two shapes happens. Every time I think about generalizing shape and position descriptions, I remember computational geometry and how cleverly written that one implementation was. It took the idea of describing polygons either clockwise or counterclockwise and only altered their direction when doing logic operations. Though I'm not programming all day right now, I'm taking my time to think about it, because it may be beneficial for all my later projects where I want to describe polygons of artificial or procedural origin. I always wondered how to do this, and now I finally get the idea. In essence, I guess I'd only need to add line intersection and I could create all kinds of shapes. Another point is polygon collision detection. If one can create new shapes by using logic operations on other shapes, there's definitely the possibility of checking whether some shapes collide. Moreover, a detailed collision shape can provide much more information about a collision than a set of true/false-returning mathematical calculations.

It's like getting the time to tackle a few old unpleasant stories once more and becoming successful in the end. I like how creative I've become since switching to C. It gives me more time to focus on my actual problems and their appropriate solutions, instead of falsely tinkering with describing theoretical models and constructs rather than producing useful stuff.


Polygon tackle

Before starting to work on my exposé, I wanted a bit of different coding and started to work on a generic way of rasterizing and rendering data. I don't know whether I mentioned the two macros ITK_CELL and ITK_COMP in itk_misc.h of ITK, but they pretty much give the basic idea of how I'm trying to combine multiple buffer formats with homogeneously typed, randomly arranged data cells. With a little mapping array I can hopefully put aside most code and rely on a not highly performant but otherwise optimal way of combining multiple buffers into a more complex image with alpha and bump mapping data, without doing any combining, copying or repetition of render code. I'm trying to pull it all together and also found an interesting way to abuse my usual n-dimensional array iteration by just altering start and end values, directly mapping both to polygon border positions. This could make rendering polygons the same as rendering rectangles and even provide a first step towards basic stretching and rotation of graphics without 3D acceleration. Feels a bit like discovering Mode 7 on the Super Nintendo. I'm still figuring out the details interesting for generalization, but the idea itself could actually work. If I start working on software rendering once again, then with more style and reusability than before. And I'll pass on the focus on triangles and do the full polygon program, because I can imagine more interesting things with polygons than with triangles alone. Whatever, it's simply easier to think of than trying to squeeze the loop more and more. Plus, n-dimensionality has its price and won't give away optimization for free. Optimizing n-dimensional loops, anyone?

Quick hardware fix

I found the reason for the overall instability of my Pandora unit: swapped screws! The right side had a screw not completely fitting and gripping correctly, so I swapped it with a single centered screw that looked suspiciously like the screw that should be somewhere else. I'm thankful that I didn't need any hack to disassemble it. However, one should just not ram the screws in as tight as possible, because this will fix the shoulder buttons in place and prevent normal operation. Well, since the shoulder buttons aren't a breeze to operate anyway (it's more of a "CLICK" with none of the comfort of the other keys), I don't care that much. The plastic gets worse the more you screw around with it, so I'm once again not fond of the material choice. Anyway, all this will be forgotten when clocking the CPU higher and experiencing games at full speed without needing to plug in a TV screen or console. Really awesome; I'm totally sold. Also, the fact that I can program and compile stuff on it directly makes it even more interesting to me (though I'll need to train myself on the unusual keyboard).


My own Pandora

How long has it been? Four or five years? I didn't expect it to arrive, but a recent mail informed me that my unit was ready to ship! I confirmed and got a lovely labeled package. Now I'm sitting here after a day of experimenting, and I have to say it becomes quite clear what the Pandora is meant to be. Well, I didn't know what the Pandora would feel and operate like. A quality handheld? A miniaturized laptop? A calculator? To be honest, it's a mix of all three. Resembling a laptop most, the joysticks can't hide away all the bulk and weight. It feels like a compact, heavy book with a certain "crackle" every once in a while. The unit doesn't feel very stable on the right side (due to the stylus being stored there), and the screws weren't thaaat properly tightened, but after retightening them I noticed that the shoulder buttons wouldn't work anymore. So it seems that this is "part of the plan". In combination with the strange plastic quality, it feels sort of cheap. But well, nobody can expect Nintendo quality from an open source handheld that's a mix of many, many ideas from a lot of people without much money or whole teams of experts behind it. Not that I want to call the designers bad or anything. I just think the idea of this project would come to its full glory with a company behind it delivering the highest production quality possible. Another real problem, of which I'm quite unsure whether it's related to bad drivers or the hardware, is how the nubs behave over time. The left nub is harder to move than the right one and even gets harder over time. Leaving it alone for a while partially fixes this until it's used more. Using it as a joystick in emulators seems fine, though. So I guess it's part of the generally buggy joypad-to-mouse software they used. I've read about similar problems when moving the nubs during boot, and some other things should work better in the future with improved firmware and integration.

Besides this hardware property, I was a bit confused by the software provided by default. After some initial setup, there's both the more or less console-oriented start menu (which looks very nice but feels like just another app trying to hide the slow-reacting Linux setup behind it) and an Xfce desktop with an admittedly amazing feel, due to the fact that it DOES feel exactly like my own Xfce setup. However, only a small part of a Linux distribution is accessible. There aren't even man pages, probably removed to save space on the small internal memory. Installing Pidgin and Firefox alone would fill it up, and external memory is required to do anything game- or desktop-related. On the plus side, this makes the system more of a stable base you don't need to modify, since installing new software is just copying to a memory card - plug and play like with cartridges, for ANYTHING you can run on it. Impressive, to say the least. This made me quickly realize that this so-called desktop is more of a skeleton with add-on functionality.

But all of this does not even remotely describe what the Pandora is actually good at: playing emulated games. I picked up a mupen build with a Super Mario 64 ROM and was able to run it smoothly on my Pandora after a few tweaks. Simply awesome. No problems with the nubs during this, and a crystal-clear, rich display and great sound make me forget all the minor problems I had with it before. Don't try to use it like a mini computer, use it like a Game Boy. Plug in games and software - that's the nicest way to have fun with it. I can even use gcc on it, setting up a shell in no time and running my code from anywhere. Extremely awesome, to say the least. The more I think about how much time I'll spend playing my favorite games once again in almost their complete former glory, the more I believe that this was one of the best investments I ever made, all those years ago. Not to mention the fact that they almost doubled the current price for some reason...


Just couldn't resist

Though I said that nothing programming-related would come for a while, I just couldn't resist and started coding again. I did not attempt to fix the bug(s?) in ITK's app module but tinkered around with completely replacing C++ keywords and syntax with more BASIC-like keywords. It's interesting what you can do with macros this way. In the end, somebody with knowledge of the good ol' QBasic can read and write code without needing to learn the required C syntax (well, at least some preprocessor directives have to be in there). Interesting but completely useless. I'm sort of glad not to always have to type IF THEN ELSE but to use if(){} or just ?: as the ultimate shortener. I'm using the Elvis operator in macro expressions where it fits without obscuring readability, and would like to see it as a possible if-replacement some day. It makes the whole thing harder to read but reduces a programming language's syntax set to only a few operators, when also shortening loops in similar fashion.

One day I tell you. One day there'll be a full language concept based on this. I'd love to have something like that as a scripting language for my engine. Just for the awesomeness' sake and code squeezability. Gotta write some parser functions...


What ugly

Oh, really. I picked up id Software's "Rage" for about 20€ because I was kind of curious how the whole thing looks and performs in the end (gameplay-wise, it's not worth my attention and especially not the 50€ I would've paid via Steam). And geez, what a crap game this is. They can't get themselves out of their low-quality misery. I mean, streaming textures to reduce loading times and get an appropriate detail-to-distance ratio is one thing. But streaming a game whose detail textures are worse than those of the better GameCube games is a shame for the developers. I mean, there are so few details in this game that streaming via popping-up blocks is just not necessary. If the graphics were simply there, not flickering and constantly changing, I would've accepted this, but not the way it looks right now. Personally, I prefer quality over streaming, and streaming only as far as it doesn't sacrifice visual quality. I guess that constantly streaming new data instead of reusing textures multiple times is not a good idea unless executed with more care and grace. No wonder nobody's interested in "Mega Textures" because of that.

Even with a faster hard drive, even with exactly the same setup the devs use to test their "hires" graphics, I doubt it's worth counting on, because the details are still crappy for everything that's not a gun, a car, or a shootable person. Consistency is still nothing they count on, like most developers whose games I don't care about. How disappointing. Good that I didn't buy it at full price. They'd better think about their steps towards streaming and take it easy. The first time I played Half-Life 2, I believed the game was streamed because the scene transitions were so smooth. But then I noticed how cleverly they placed the level data. Just take a look at Metroid Prime and you know what cleverly set transitions can do. And I tell you, Metroid Prime still looks worlds better than Rage currently does.

That said, I can't believe that some developers seemingly don't think about stuff like that. I mean, ok, comparing a console game to a multitude of different graphics cards is one thing, but a full-priced game that can't manage this is not worth its full price on the PC.



I've uploaded an updated version of ITK with a few changes and full documentation (though admittedly not at an actually detailed level). Also, the new version of the IGE atom demo is up and runs at around 30 fps with ~2000 sprites. I'll stop development for a while now because I need to recharge my batteries after about 4 months of intense programming. This damn error finished me off. Time to calm down and relax.

So yeah, the next weeks won't include any programming content if everything goes as it should. More gaming, more movies and, best of all, more Lego. Yes, I'll start sorting my Lego chest and see whether I can come up with non-gun models. Or I'll continue my still unfinished muzzleloader model. Maybe I can find enough time to photograph my current one. All in all, I won't force myself to keep to any timeline. Just relaxing and comfortable activities!
The error seems to happen within the resource manager once again. It's a bit strange - looking at log files over and over again to understand and recognize the error patterns forming their body in a familiar but still mysterious shape. I can't actually tell where it happens, but it's probably a single function doing one thing very, very wrong. It makes me sad to think about all the time I'm spending right now to find bugs while other people went to sleep more than 6 hours ago. I don't like bugs suddenly popping up, and for some reason I get the feeling that this is not a good bachelor topic. But then again, it's just one bug I'm locating right now, with everything else working totally fine and me not liking the sunlight crawling up my skin.


Oh my gosh, I really hate this day. I'm done documenting almost all modules except the convenience one, and I found a strange feature I didn't really use in the test code but that should totally be used, since it's a very comfortable one. So as I was recoding and executing the test program, I got segfaults and freezes again. And I thought they were gone! But I knew the problem from the day I wrote the test program. It worked without the convenience module before, but I overwrote the previous program, so I can't really look up what I did differently. It should, however, completely work in theory (and it does, but not always). I even used macros for catching critical conditions on EVERY function call, and the only thing I found was a rare event where a loaded resource has a data pointer even though I passed the address of a global variable. It's something I have to find immediately. Can't leave this in there without removing it.

Anyway, I added a few more macros here and there, removed old comments and added new ones. ITK is nearing Release 1, but without having fixed this error, I won't upload anything. Damn, why does this always happen when I think I'm done? Fooking shite!