The mother of all demos

I'm currently watching an impressive demonstration of computer technology from 1968! It's really cool to see someone just demonstrating stuff the way it was back in the late 60s. And more interestingly, he's demonstrating practical stuff, the sort of thing that would have caught my attention if I had to do a lot of thinking and planning every day without knowing about computers. Well yeah, it's really interesting to watch. Though not as weird as the famous Erlang advertisement/commercial video.


Why I just can't into 3D

As I've already expressed in a previous post, I'm not satisfied with everything I write, and stuff takes ages because I'm trying to achieve something that's not the way I want things to be. And as I was thinking about my own personal graphics engine, I realized that I love cool 2D effects more than ready-to-run 3D graphics effects. It's just not the way I want to imagine my stuff, just not the way I feel comfortable doing ANYTHING in game programming. My previous engine went so well, was finished in just a few days and had everything I needed for my roguelike. I haven't really done anything until now that involved actual graphical output. It's a total waste of my time and I can't motivate myself to create my personal visions with 3D acceleration. Every time I start thinking about this stuff it gets worse and worse until I don't do anything at all and feel disappointed about things that aren't as perfect as they could be. It makes me depressed! I had so many ideas today for wonderful little effects with simple 2D blending using SDL and some alpha effects. No need for the full 3D overhead. Also, my two-and-a-half months of experience in the games industry so far taught me one thing: specialists have a lot of work to do. Currently we seem to have just a few 3D programmers and they don't really do anything besides, well, porting 3D stuff from one platform to another. Everything I imagined about how they would work is nullified by how boring a specialist's work is. Thus, I'm going to do what I always did: take everything useful from every aspect of programming and create a toolkit that reflects my personal game programming vision. Normally, I try never to specialize too much because it poisons your view of problems in general. The lead programmer will never realize how important the small details are if he only cares about getting things done.
The graphics programmer is of no use for a wider range of platforms if he never tried to view things on a low-end rig instead of his monster system. Similarly, gameplay programmers need to know how NOT to scatter things across memory, because gameplay data is too atomic and simple compared to any other aspect of engine design. My point is that there are things that are usually more important than blindly specializing. I try to keep an overview of everything, to know where things could suck in the end and how that can be avoided. Even if this means I'm never going to become a guru of AI or shader programming. My first duty is to be a good programmer, and that's not necessarily the one who can pump out new features every day, but the one creating lasting and efficient designs for the situations that matter most.



Had a clear moment today and decided to split the display list into a separate "dtree" (d for depth) and associated iteration. I just didn't like how I cluttered everything, and I'm thinking about just leaving it this way. I mean, in the end you know what you optimize for, and you usually also know what stuff will look like. Having a display list based on depth sorting is always a good idea, but you could use it for so much more than just premade graphics effects. I know that my progress is always slow, but the more I try to make everything uniform, the more facets and reuse situations manifest in my mind. Interestingly, I was able to cross my experience from bytecode execution with this, and my belief that inlining is still the best way to selectively optimize rings true, as I'd also rather drop a fixed system in favor of a flexible, modular setup. I made so many new things while trying to squeeze everything into one package; each time I start to get unsatisfied, I know it's time to recycle what I already have. I should make this a motto of some sort. It's philosophy at a very basic level - a way of overcoming things that bother you. Personally, I never feel better when I strive for something new or artificially challenging. I feel better when thinking about how things are, how things would react, and how much less troubled everything becomes along each path. In the end, I try to choose the path that doesn't bring trouble in the future but brings satisfaction by working just fine. The easiest way to prevent a lot of trouble for myself is to just think and detach from physical matters. In the end, all psychological input has a base of physical input we've all collected since we were born. Not getting involved in anything of this sort usually results in a state of loose focus, often a great sense of voidness and a potentially strong focus on the things you brought into this state.
And sometimes a physical input like music, or a thought expressed somewhere else, can bring you into this state. Also, for you guys and gals who haven't yet discovered it, The Japan Channel's video about the speedy builders over there is simply satisfying. Though I'd get creeped out by the combination of rain and wood during a build, they really know how to build stuff fast.


I'm so dumb

Naahh, goddammit, I knew there were way better solutions to my threading needs than the stuff I did before. You know what? Fuck the thread manager thing, fuck global events, cause it's so much easier and more effective if you just limit events to single threads and let one thread be the global event manager with exactly the same interface as everyone else! I would bite my own ear off out of sheer frustration if I could. I started a very simple redesign based on message passing and multiple message boxes in combination with per-thread events. I know it's not as freely controllable as the previous setup, but it's way simpler and less error-prone in concept. I'll rely on this setup and rewrite everything until my guts leak out. Bah, what a disgusting realization all in all! However, this is food for my bachelor work, as I knew there'd be a simpler setup if I'd only limit myself to single threads with no manager. Global events, and thus pauses, can be done by registering all threads that want to pause with a manager thread, just like the manager. Or just by creating n events that are locked in the beginning and used by the client threads for waiting... And the message box fulfills my idea of tasks you can pass around, though you'll have to create the associated events on your own by deriving from the message structure. You can also pass messages between threads and share, store and reuse them. But if you run out of messages, you should make sure you return incoming messages to their sender, or just ping-pong them between threads. This is really cleaner than my old version, and you don't need to watch out for deadlocks while passing messages, as the message boxes of threads stay alive even if the thread is paused or dead. Though this does not apply to events; those are of course a potential deadlock if you don't watch out. Yeah, that's it for now. Can't believe how stupid I was to fiddle with all this uber-complicated syncing stuff.
I mean, I'm more experienced now and the new design is better than before. And I can create subtypes of threads and so on and so on. Comes in handy during my bachelor time, cause the more non-working concepts you know (with your own faulty implementations), the more confident you are with the overall topic and theory.


I shouldn't use such titles anymore

I think I'm gonna rewrite some bits of the threading logic. Simplify stuff, add failsafe mechanics here and there... I can't find the bug in the current system and that's bugging me to no end! This will take me a lot of time, I think, at least until I've found a good way to prevent all the problems. I'm not sure about it, but something tells me that most of the stuff is right and only a few bits don't work. I know I've been pushing this around for quite a while now, so it's necessary to get it over with. I found a simplified version of the workaround which only involves waiting for the nth fired event, where n is the number of total threads. I mean, everything else works fine and itk_app is bloated anyway. Yeah, this should do the trick. Doing some sync stuff in the end doesn't harm the final product in any way. It even works with around 20 threads, so who cares... Woohoo, problem solved! I'll see whether I can exterminate some crude mechanics while solving everything via the event system. Hm, need to check this twice with some uber-random resource usage before continuing. Still, since it made the old non-working stuff work, I shouldn't hesitate to add it elsewhere, too.

Interesting idea

I got an interesting idea a while ago but only now fully realized the awesomeness of it. Once again, it's about programming languages, dynamic memory allocation to be specific: what's the main problem with it? You want to allocate and release as much and as often as possible with no delay while doing so. Usually, this is done with linked lists on the system layer or other stuff that increases memory load the more things you add, and things get overly fragmented over time. Anyway, I've been looking for something where dynamic memory management isn't necessary, or is at least so simplified that new and delete become O(1) operations, or merged, or something like that. Ever thought about how LISP would do garbage management? I certainly did, while not spoiling the fun by looking it up. I didn't see any possible difference to what's in Java or other garbage-collecting languages until I realized that if every allocation has the same size and the same linking possibility as a linked list, then you only need to re-link lists. For example, let's use most of the RAM for pointer/word pairs and link them in one single list. New would take n elements from the list, delete would move n elements back to the old list. Done! I know, it sounds incredibly stupid and wasteful to essentially allocate double memory, but it's a very simple and stable concept. No null pointers by default! Just manage lists and watch for empty ones. Of course, this is for singly-linked lists, and compound types still have to be listed, too - making property iteration a pain in the ass. Nonetheless, it's very effective in what it does, and I'll consider using it for some parts of my programming languages where it could replace otherwise painful memory management. However, this would only work effectively if you didn't want to access offsets and non-bound properties. All in all, each access to the nth element should be iterated either before or during code block execution. During is bad. Very bad.
Before requires dynamic binding so that you can let your functions work on offsets. I sorta hesitate to use it due to this disadvantage, but maybe I can code something very small utilizing it, or add a feature for quick singly-linked list use. Well, the same idea also applies to doubly-linked lists, so it's probably worth doing that instead due to the increased flexibility.

Hmmmm... it's too bad that most of the cool ideas have severe drawbacks. And you can't change that, cause it's too bound to the base of computing. Anyway, I have a few other concepts for my language, too, that are nice enough to have. But I already know that realizing all the stuff I've planned so far requires a looooot of work I can't afford right now. The simple, realizable ideas are often the slow or wasteful ones. I'm turning my own concept more and more into something less strict by binding all kinds of symbols when assigning code to a symbol so that it can be used. However, it's still creepy to think about the parsing I'll need, as well as the stack management... I need to find a way around this. It's no problem to change a syntax to something easier to parse, but it's problematic to use this syntax later, as I don't really want to sacrifice the possibilities I know from C. And the resulting bytecode isn't optimized at all, so I wonder whether it's of any use to rely on the same mechanics I use in C. Maybe it's a good idea to reduce stuff step by step: stuff that is of use in C, and stuff that can be replaced with some other pattern or concept. Separating high-level parts that don't need to be super-efficient from low-level parts might be a good first step. I just don't want to spend all my time thinking about it instead of finding something that's easy to realize, comfortable to use and quick to execute...

Well, I think my problem is the parsing. The knowledge that every infix notation requires a certain amount of optimization and a complex operator system to work. I know that I'd love a purely symbolic operator base, but the amount of work to parse all this stuff is too annoying to me. So I'll probably need to find something else, but I don't know whether I want to or can cope with something else! Maybe I should combine the possibilities of prefix and postfix notation. This would enable me to sorta emulate the needed infix sugar if I really need it. Once in a while I think about this, and I've realized how much I dislike its disadvantages. But well, what should I do? Can't spend all day wishing for my programming language to pop up behind me and yell "Oh boy, you made it!".

Gotcha, right?

Rejoice! I finally got around finding out why my multithreading system crashes on end and I finally found an interesting answer which actually states since each thread has it's own stack, it's by no means threadsafe to pass these variables to other threads due to context switches. I had my suspension about but found no obvious clue as it would work in a single context (multi core, I smell you!). But I do already know, I can try to make my event code to use to volatile keyword to tell it to not let it sink away on context switches or by making all event request dynamic and delete them via itk_app. Oh man, I hope this is the real solution to my problem as I clearly don't have the time or nerves to always would on it in my freetime. I mean sure, it's all hobby (except for the fact that I'll use it for my bachelor!) but I started this with 100% use of freetime, not counting the stuff I do now need to come down after working Well, let's that this will work. I'll try it on my way to work - awesome to have a laptop for such situations!


Some words about error checking

Bugs. Whatever you program, there'll always be bugs because you did something wrong, forgot to add something here or there, or just because you were pretty dumb the moment you worked on it. Actually, you can prevent most of this by simply adding proper error checks and error reporting methods. Pretty standard so far, but I can't believe how often people don't check function return values or even passed parameters. It's one thing to completely omit it and treat the stuff like a macro or high-performance function. Still, that shouldn't be the normal case, only used in basic stuff that's so deep underground that you may call it a thousand or a million times (and don't do non-inlinable library calls there). Everything else should properly respond to problems and at least shut everything down if something didn't work. Or maybe continue to work and mark the error very explicitly.

Problem is, no programmer I have met so far kept this in mind or even tried to do some sort of quality-assuring overhaul when he knows there's stuff he is responsible for. It always comes from others, though it shouldn't even happen. I'm having a hard time overlooking the problems I see everywhere in the code, cause I don't have the administrative power to just order them fixed. I could actually shit in every shoe of every programmer here and they still wouldn't do it. You're at the bleeding edge of problems when doing QA work. I definitely know that I don't really want to dig too much into their code, as the nightmare continues each time I take a look at it. Even worse, a lead programmer shouldn't try to advise you with stuff you already knew while not programming completely correctly himself, right? Yeah, but it simply happens. It had to happen, of course; it's a human bug after all... Personally, I always try to decide between functions that do such deep stuff that performance is necessary, functions that return valid pointers or NULL, and functions that return a custom logic value, along with debug macros I defined in ITK. If something goes wrong or threatens program flow (not counting options or rare "tolerated" incorrect parameters), it should shut down and give a detailed list of everything that was part of the program. That's not necessarily a stack trace, but a very proper log with all the events that happened. If you do this by default, there isn't really a point where missing files or stupidly ignored script interpreter errors could get past you. I know it's cumbersome and takes time, but even me, the snail of get-it-done and let's-do-it, is able to achieve complicated features in a short time this way.

I don't know when, but some day I'll freak out about this. Some day I'll tell everyone to properly debug if they don't want to get more overhaul tasks than planned. I mean seriously, shit's breaking down due to ignorance and just cause they don't have the balls to tell their boss where everything stinks. Even if the boss knows where it stinks, it's a loss of programming honor, a clear mark of spaghetti. I may over-amplify this, but it's the most annoying thing ever if you want to get into code but notice that everything's just a total mess - conceptually and in practice. I mean, they don't even use exceptions, as if these provided nothing better than plain, simple ignorance of problems. And if you even hesitate to test whether a new piece of code generates an error, asking the QA department to test it for you instead, THEN you're a really lazy bum.

Fuck, I cannot express how frustrating this is. It's like you're coding with a bunch of half-assed students and a lecturer who believes that logic doesn't outsmart his own beliefs. Thus rendering a situation in which someone falsely thinks that new programmers make all the mistakes of older programmers, and applies his "advanced" way of programming. I long for a fully fledged workflow where new concepts are introduced after enough brainstorming, and old concepts abandoned when a better one is found. Too bad that this doesn't cover the huge amounts of artifacts every larger project carries over from older versions. Well, maybe I'm just thinking too broadly to appreciate the concepts in use, maybe I believe too much in my own tech - that could be quite possible.


Let's Lego once again

Did some more Lego-ish things today (all the programming made me sorta grumpy) and created a good replacement for the firing pin my current cartridge-based Lego gun has. The rather lengthy and heavy construct I used so far can be replaced with one of the bigger Lego springs, as they have enough power to trigger a cartridge. I can also combine them with the trigger setup I use, so it's two Lego springs vs. one spring and two rubber bands - an improvement, I can tell you! Yeah, there's not much to say about this except that I still need to find a good way to house the whole thing. I have a bunch of ideas for how to continue from there if I get in the mood, and I also found a promising way to trigger the WIP muzzle loader in the same way I triggered my current model. It's all quite modular and simple to realize, so I just need a good action to implement, too. I'd like to take the next step for my cartridge guns in the next model. A combination of multiple cartridges but one trigger. Or just a clever setup where I can combine cartridges into one block and cycle it. Not sure, not sure. Or I'll once again just stick to something simple, but with an improved cartridge and gun model in general. It'd be nice to have a better cartridge model with the same ballistic improvements I made with the muzzle loader barrel (rolls to stretch the rubber band even further). Yeah, maybe I can achieve this somehow. Shouldn't be too hard, but it'd make the whole cartridge more or less bigger at one point... Don't know. I'll just see what comes to mind and combine stuff in the end. Though I'd really like to have the muzzle loader features in my cartridge... Also, though I wanted a muzzle loader, the barrel requires a shitload of important parts, as I now notice. That's not good by default. I already ran out of parts with my last model. Soo, I'll merge them eventually and build a simple but updated new version of it while still keeping the old cartridge model.
Need to have a history of models, you know! Hm, maybe I should also try to get my hands on a proper sighting system... Something adjustable that doesn't change its state with every bounce. Yeah, why not try to do a fully fledged model with the new simplified and slim designs I created today. That'd be one hell of an improvement if it makes the stuff work nicer, smaller, shoot farther and more accurately.


Back and forth, back and forth

After a bit of testing, I decided to install the newer Crunchbang version as my new distro. It became surprisingly good (even better than before) and it also has all the software tools I use by default. Just a bit here and a bit there and I'm happy. Very happy - to finally have a working laptop. I like using my Pandora, but I can't get myself to type on it for longer than an hour, as it's clearly not made for my big hands. Anyway, I can only wholeheartedly recommend Crunchbang in its current version 11.


Leveled memoization

While randomly browsing through my ITK headers, I found a few that I haven't continued or improved since I created them, so I picked up a few and got a great idea. I've been tinkering with memoization thoughts for quite some time now, and my own set of functions around it was... rather disappointing. I also tinkered with something I called an "update map", which is effectively a set of values and associated levels. Every time a level is below the global level, a new value will be applied on access (passed via parameter). This IS effectively a specialized memoization, and for some reason I didn't realize that. Anyway, what would happen if we applied the same concept, with some extension, to a general memoization approach? You wouldn't need to reroll the whole calculation trigger thing, and you could change the up-to-date status with one single assignment. Of course you sacrifice check speed, but that doesn't count for memoization, as your function should have more overhead than a function call itself to make memoization worth it. Anyway, I really love the concept, but I don't find any good use other than math-heavy stuff or heavy allocation/initialization. After all, it's barely guaranteed that your compiler will inline a function call made through an arbitrary function pointer... So I'm not yet convinced to use it everywhere, but it's awesome enough to have and play with. Should speed up a few things in image processing, I guess, as well as in my game engine (maybe). Yeah... I just love this stuff. How nerdy. Oh, on a sidenote: I made some good progress reworking a lot of stuff and can now continue with IGE. Seeing how ridiculous the engine we use at work is, I love to see how clean and simple all my things are so far. You can see EVERYTHING in the debugger, compared to just too many black boxes and fully scattered random objects...


A number of reasons why OOP combined with careless programming sucks

Usually I try to be polite and not insult programmers writing bad OOP code, but today was one of those days where I once again had no other option but to almost scream at the sheer amount of incredibly horrible code and program design. I don't even know where to start. First of all, a lot of people in this company seem to just not think about how they add stuff to their program. I've seen such truly nightmarish stuff that every proper programmer (whether OOP or not) would fall apart just thinking about it. Secondly, most of it is a result of trying to keep their own code in the same style as the underlying engine. This is a bad idea by default. If you can't keep the same degree of stability or design philosophy as the engine designers did, then you're likely to confuse your own problems with engine problems, as well as doing stuff worse than you would by hand. Also, an engine isn't always an appropriate way to realize your game. In fact, if skill, engine and habits don't fit, a lot of bad mojo results, and ignoring an engine's warnings is bad in general. Every big commercial engine handles errors well enough that you can be sure you really SHOULD catch everything that might happen. I've seen artists uploading half-finished files breaking down the whole system until a programmer noticed that he didn't check his own, the engine's and his co-workers' return values. This brought down the whole system, and it gets worse when coders rework stuff without checking at least whether the game starts or not. It's totally stupid. I mean, they've got all sorts of stuff they are changing and committing without checking it. They turned Python programmers into walking C++ bombs, and god knows how often they blow up (currently, at least once a week). And the worst of all is that they don't have any temporary branch for their own changes. The tactic is to dump in everything they have and only fix stuff efficiently if there's no more time.
When the producer announced the change to run the whole thing via Scrum, I didn't think of how important the many branches were that should come with it. Now that I realize how destructive the whole blocked day was, I'm starting to hope that this setup arrives as soon as possible. Seriously, who in hell is so dumb as to not start the game after implementing new stuff? Compilation goes quite fast right now, so this is NO excuse. Anyway, back to OOP: it has no use if you just create singletons, as that can be done with global variables and simple initialization order as well. It disgusts me to just think about how carelessly these programmers work. That's why you shouldn't let convinced script programmers get their hands dirty without a change of mind. I mean, what's the use of code you don't document if you're only able to detect bugs in it if you coded it recently? What's the use of scattered objects and temporary lists inside expressions if you can't watch their results afterwards, cause your debugger can't track them that way? Right - there is no use. It's totally dumb and uneducated. Seriously, now I understand why we don't get much money - lack of care and utter ignorance!