Just a random, totally unrelated and untitled note: I managed to write a little batch script that automatically mixes, converts and concatenates Fraps and Teamspeak recordings, like when doing Let's Plays. This is something I've wanted since I started all this stuff, and now I can let the script run all day while I'm at work and it'll convert, arrange and rename all files just as I would manually. Awesome! This will make my life a lot easier and increase the potential amount of video material I can record. It also increases the possibility of doing totally random Let's Plays (possibly in English, too!) as I don't need to worry about it anymore.
Yeah, I totally wanted to do this. I thought about a portable C version, but naah, you'd need to reinvent all the awesome shortcuts you get otherwise. I'll tune the script a little to also optionally clean up the temporary data once it's done, so I don't run out of disk space while at work. Finally some good use of the batch skills I acquired at work!
I know I'm not writing as frequently as I sometimes tend to, but I'm nonetheless working on my stuff around two or three hours per day! One result of this is that I finally got around to adding the cool render algorithms and getting code done, step by step with the usual care. Interestingly, the engine department at the studio I work at has been doing similarly basic stuff because the bought engine didn't cover it. Not complete rasterizers, but the things needed to implement some effects. The day they presented their results was sort of weird. At first, I was able to see how little my CPU-based work differs from platforms that may have a weak GPU but better or more complex CPU features. I mean, in the end it's not that different; you just know what's faster on which side and how much memory you can move from A to B. But in the end I wouldn't want to do graphics programming exclusively. I know I prefer basic and simple graphics engines with a few stable features over everything else, because you can use the otherwise wasted power for other stuff, or at least push the resulting features to their maximum effectiveness, or just focus on making a good game instead of merely a good-looking one. I'm more on the simulation side of development with this. Anyway, the graphics dev department seems to be the only team doing its work properly (more or less), so it's no wonder they utilize all the possibilities they have. But I'd still only use the graphics card for platform-specific visuals and only add CPU support if it's also useful for other logic, too. Sure, I'm doing my stuff CPU-only, but that's because I love the purity, want full customization and dislike the added complexity and amount of work when doing everything I want with shaders or so, not to mention the dependencies. But I sorta feel bad because I can't constantly work on graphics stuff.
There's so much programming wisdom useful to video game development beyond graphics or scripting that it's no wonder everything goes downhill if nobody knows about it. It starts with the very small building blocks of any program (memory management and algorithms, for example) and ends at having the discipline to at least follow your own coding conventions and understanding why error checks in general result in long-term and short-term quality assurance. I don't mean useless unit tests or garbage like that; I'm talking about a programmer's ability to consistently reflect on his own shit (drastic word, I know, but that's what happens). There's a lot more stuff I usually criticize, but that's what makes the difference between an expert reliably producing in a very specific area and someone trying to figure out why nothing's happening though the days pass by, as well as avoiding situations like that every time. Anyway, I'd like to have something less frustrating later. Something where I can actually do something about lazy bums instead of just pointing at them. Or just doing their undone work for a change.
Thinking about it, a clipping operation that draws from a source rect in buffer A to a destination rect in buffer B is actually a rather complex operation. First of all, both areas can lie outside their actual buffer size/area. You need to perform a plain clipping to prevent out-of-range values, negative array indices and so on. Once that's done, you need to make sure that you transfer the source area from source space to destination space (though that's a pretty simple addition operation on a few vectors), build the intersection between both (the area we can iterate later to blit stuff) and create a translation vector to be added each time we want to convert from destination space to source space during destination iteration. So it's something you could theoretically solve using matrix transformations and a preceding validity check. What's interesting about it is that this generalization would make it possible to solve other clipping problems the same way, or even problems I'd never have thought of solving using this method. At least I can freely combine my two clipping functions with raw blitting operations to get optimal performance. Anyway, I always love the insight and the general connection to a deeper understanding of the very basic blocks that make up everyday computing features. Way more interesting than sticking existing stuff together.
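The steps above (clip src against its buffer, translate to dst space, intersect with the dst buffer, derive the dst-to-src translation) can be sketched roughly like this in C. All names here are illustrative, not from the actual code; a minimal sketch assuming a simple x/y/w/h rect type:

```c
/* Hypothetical minimal rect type -- illustrative only. */
typedef struct { int x, y, w, h; } Rect;

static int max_i(int a, int b) { return a > b ? a : b; }
static int min_i(int a, int b) { return a < b ? a : b; }

/* Clip a src->dst blit. On success, fills the final dst iteration rect
 * and the offset (ox, oy) such that dst position + offset = src position.
 * Returns 0 if the visible intersection is empty, 1 otherwise. */
int clip_blit(Rect src_bounds, Rect dst_bounds,
              Rect src, int dst_x, int dst_y,
              Rect *out_dst, int *ox, int *oy)
{
    /* 1. Clip the source rect against its own buffer. */
    int sx0 = max_i(src.x, src_bounds.x);
    int sy0 = max_i(src.y, src_bounds.y);
    int sx1 = min_i(src.x + src.w, src_bounds.x + src_bounds.w);
    int sy1 = min_i(src.y + src.h, src_bounds.y + src_bounds.h);
    if (sx0 >= sx1 || sy0 >= sy1) return 0;

    /* 2. Translate into destination space (plain vector addition). */
    int dx0 = dst_x + (sx0 - src.x);
    int dy0 = dst_y + (sy0 - src.y);

    /* 3. Intersect with the destination buffer. */
    int cx0 = max_i(dx0, dst_bounds.x);
    int cy0 = max_i(dy0, dst_bounds.y);
    int cx1 = min_i(dx0 + (sx1 - sx0), dst_bounds.x + dst_bounds.w);
    int cy1 = min_i(dy0 + (sy1 - sy0), dst_bounds.y + dst_bounds.h);
    if (cx0 >= cx1 || cy0 >= cy1) return 0;

    out_dst->x = cx0; out_dst->y = cy0;
    out_dst->w = cx1 - cx0; out_dst->h = cy1 - cy0;

    /* 4. The translation vector is constant across the whole iteration. */
    *ox = src.x - dst_x;
    *oy = src.y - dst_y;
    return 1;
}
```

The inner blit loop then only iterates `out_dst` and reads from `(x + ox, y + oy)`, with no per-pixel bounds checks left.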
After reworking a few vector macros and adding graphics-specific clipping mechanics, I finally found an acceptably compact solution for n-dimensional rect/box/cube/whatever clipping. I wanted code that's also useful for other, more advanced uses, and I think I found a good base. The idea is similar to how SDL handles clipping: the dst rect and source rect (clip rects and dst position in SDL) undergo some vector operations and out-of-clip checks to yield a final dst iteration range and an offset that can be added to the iteration position to get a matching src position. One may call me a fool for doing this all by hand, but it's neither easy nor totally trivial. Anyone who has tried writing a proper, covering voxel cube clipper will know what I mean. I still believe that's the best possible approach if you want to become good at what you do.
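Since each axis clips independently, the n-dimensional version is just the per-axis logic in a loop. A hedged sketch of that idea (not the actual macro-based code; `DIM` and all names are assumptions, with buffers anchored at the origin for brevity):

```c
#define DIM 3  /* 2 for rects, 3 for voxel cubes, etc. */

/* Clip a src box blitted at dst_pos against buffers [0, size) per axis.
 * Produces the dst iteration start/length per axis plus the per-axis
 * offset with dst + off = src. Returns 0 when nothing remains visible. */
int clip_box(const int src_size[DIM], const int dst_size[DIM],
             const int src_pos[DIM], const int box_size[DIM],
             const int dst_pos[DIM],
             int out_start[DIM], int out_len[DIM], int out_off[DIM])
{
    for (int i = 0; i < DIM; ++i) {
        /* clip the source range against its own buffer */
        int s0 = src_pos[i] < 0 ? 0 : src_pos[i];
        int s1 = src_pos[i] + box_size[i];
        if (s1 > src_size[i]) s1 = src_size[i];
        if (s0 >= s1) return 0;

        /* translate into destination space and clip again */
        int d0 = dst_pos[i] + (s0 - src_pos[i]);
        int d1 = d0 + (s1 - s0);
        if (d0 < 0) d0 = 0;
        if (d1 > dst_size[i]) d1 = dst_size[i];
        if (d0 >= d1) return 0;

        out_start[i] = d0;
        out_len[i]   = d1 - d0;
        out_off[i]   = src_pos[i] - dst_pos[i]; /* constant per axis */
    }
    return 1;
}
```

With `DIM` as a compile-time constant, the loop can be unrolled and the whole thing reads like the SDL-style rect path, just generalized.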
I didn't even remotely have the time to be productive beyond a few hours per day. Anyway, I've thought about solving a few rather complex clipping problems with simplified rectangular set operations, but quickly noticed that every set operation is simple except the complement. In 1D that's no problem, of course, but it still generates either one or two 1D sets. In 2D the problem becomes manifold more complex, as the number of possible areas to generate increases by a factor of three. The same goes for 3D, so you can see what an ugly thing this is. Personally, I like to complete modules in every useful way. But one too-complex function can sort of lock you up if you're a perfectionist like me. Really ugly, really ugly. I could use a quick complement function, but even with simplification, the overall hardcoded complexity is too much to stay productive. It's just not worth it, and the more I think about it, the more stupid the idea becomes. It's really annoying to see that a thoroughly thought-out module for an exact feature set takes so much time while the money-making hack solution has higher priority. Not that I do hacks at work, but you know, good deep thinkage never gets paid unless it reduces development time or cost in less time than necessary.
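To make the 1D case concrete: subtracting one interval from another (the relative complement) can already leave zero, one, or two pieces, and each extra dimension multiplies the number of result boxes, which is exactly the hardcoded explosion described above. A small sketch of just the 1D part, with illustrative names:

```c
/* Half-open interval [lo, hi). Names are illustrative, not from the post. */
typedef struct { int lo, hi; } Span;

/* Writes the pieces of a \ b into out[0..1]; returns their count (0..2). */
int span_subtract(Span a, Span b, Span out[2])
{
    int n = 0;
    /* no overlap at all: a survives whole */
    if (b.hi <= a.lo || b.lo >= a.hi) { out[n++] = a; return n; }
    /* remainder on the left of b */
    if (b.lo > a.lo) { out[n].lo = a.lo; out[n].hi = b.lo; ++n; }
    /* remainder on the right of b */
    if (b.hi < a.hi) { out[n].lo = b.hi; out[n].hi = a.hi; ++n; }
    return n; /* 0 when b fully covers a */
}
```

In 2D the analogous rect subtraction already yields up to four result rects, and in 3D up to six boxes, so a fully general complement quickly becomes a pile of case handling rather than one tidy function.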