12.18.2011

Thinking about (system) object management and acquisition

As the (horribly long) title suggests, I've been thinking about it. In general, handling mutexes, semaphores and condition variables works fine as long as you only have a small number of them. But what about dynamic creation? What if dynamic creation had to support a vast, systematically managed number of synchronization objects? It's easy to guess that this is the case on supercomputers and all those systems where massively parallel processing is going on. What I'm actually wondering is how their synchronization is handled inside the operating system. Is there a pool of objects? Does every process do its own unpredictable, dynamic allocation of objects the system can't know about before they are created? I don't know. I can only guess or drown myself in wikipedic research, and frankly I don't have the time or interest to dig deeper right now. My understanding of computers and processes is a very classic, single-processor one (one reason why I'm taking my time designing multithreaded stuff). So I'm actually interested in whether it is appropriate to simply do everything dynamically, or whether I should rather support both models - will I know what platform my engine might run on? Not really.

I sense it's the same as with memory allocation: there is a pool with n elements and you want to reserve m elements in a row. Ever thought about generalizing this concept? That creating a mutex is just reserving one element instead of m? That there may also be the possibility to combine multiple "layers" of acquisition that form their behaviour from the layers above and below them? I find this thinking intriguing. In the end you end up with something very simple - the question of how to manage the distribution of m out of n. Dynamic memory works like this in every case. The base of it is a distribution where m equals n (a stick of RAM) - the case where no management is necessary because everything is already there. Beyond dynamic memory there are, for example, system objects or objects specific to each program. They are either single or multiple, and multiple objects can once again be treated as a pool of n objects with m to choose from. One can use the same mechanics as for memory management to manage those other objects, because in the end it all comes down to placing things in memory, right? I don't know exactly what happens between my program and the system. But I do know that this generalization lets me see everything as the same, making performance comparable on all systems as long as the object operations themselves do not depend on how the acquisition went.

That would solve a lot of problems around creating and destroying mutexes, condition variables, thread "slots", objects in memory and so on. You'd just say RESERVE and you get it. Of course, in the end that's the same thing all those system calls and techniques already do! I never said it's something new, I'm just reflecting on it and analyzing it from a distance. One day I'll tackle this generalization so that everything can use the same principles everywhere - no special cases anymore, just the same reliable system everywhere.
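
To make that a bit more concrete, here is a minimal sketch in C of what such a generalized "reserve m out of n" pool could look like. The names (pool_init, pool_reserve, pool_release) and the simple first-fit scan are made up purely for illustration - this is not code from my engine, just the concept: a pool of abstract slots that only answers the question "give me m slots in a row", no matter whether a slot later turns out to be a mutex, a thread handle or a block of bytes.

#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* A pool of n abstract slots. What a slot actually is (a mutex, a
   thread handle, a block of bytes) is up to whoever owns the pool;
   the pool only answers "give me m slots in a row". */
typedef struct {
    unsigned char *used;   /* one flag per slot: 0 = free, 1 = taken */
    size_t n;              /* total number of slots */
} pool_t;

static void pool_init(pool_t *p, unsigned char *flags, size_t n)
{
    p->used = flags;
    p->n = n;
    memset(flags, 0, n);
}

/* First-fit scan for m contiguous free slots.
   Returns the index of the first reserved slot, or (size_t)-1 on failure. */
static size_t pool_reserve(pool_t *p, size_t m)
{
    size_t run = 0;
    for (size_t i = 0; i < p->n; ++i) {
        run = p->used[i] ? 0 : run + 1;
        if (run == m) {
            size_t start = i + 1 - m;
            memset(p->used + start, 1, m);
            return start;
        }
    }
    return (size_t)-1;
}

static void pool_release(pool_t *p, size_t start, size_t m)
{
    memset(p->used + start, 0, m);
}

int main(void)
{
    unsigned char flags[16];
    pool_t pool;
    pool_init(&pool, flags, 16);

    size_t mutex_slot = pool_reserve(&pool, 1); /* "create a mutex": m = 1   */
    size_t block      = pool_reserve(&pool, 4); /* "allocate" four elements  */
    printf("mutex slot %zu, block starts at %zu\n", mutex_slot, block);

    pool_release(&pool, block, 4);
    pool_release(&pool, mutex_slot, 1);
    return 0;
}

Creating a "mutex" is then nothing more than pool_reserve(&pool, 1), grabbing a block of four elements is pool_reserve(&pool, 4), and releasing either goes through the same call. The reservation itself never cares what the slots are used for - which is exactly the point.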

I'm looking forward to this day.
