Today I took the time to do no intensive thinking or anything else requiring my attention and/or preventing me from getting any sleep. I think I deserved it, since everything went so well the last few days. Sunday and Monday won't be different - an Easter weekend I usually spend this way. However, while washing the dishes I had a phenomenal idea. I'm still tinkering with my programming language and so on, mostly because I'm a lazy bum and because I can't decide how to design the bytecode in detail. I'm slowly beginning to understand why it's common to simulate a reality-based instruction set instead of a custom one. Not only does it make execution more comparable to native execution and thus compilable, it's also comfortable to think about execution in general: you have resources, there are instructions, and then there's memory. Every program uses its interface to the computer (via the OS or directly), so a virtual machine just provides an interface the bytecode can work with. I keep having ideas involving a bare environment with nothing but the instruction set and resources accessible through special commands. So essentially not even a BIOS, just an abstract machine running in a fixed environment. Taking this as a base, you can do anything you want and don't need to worry about the structures behind it. Almost like an emulator, except that you don't try to mimic actual architectures. One could write one's own OS from the ground up, get an excuse for doing software rendering and so on. So this is the idea: I write my own fictive computer inside a computer and will be able to experiment in every way I want. The interesting thing is that all these "resources" one could use in such a system would be provided by the virtual machine itself. A monitor would be a simple window opened via SDL/OpenGL/whatever, a RAM stick would be a bigger memory block allocated by the VM, and every other resource works the same way.
What they all share is that each is a kind of "hardwired" interface to other programs representing the hardware. So a driver for the VM to control libraries outside the VM (therefore in the OS), which in turn use drivers that talk to the real hardware. Sounds complicated, but it's just a different way of interpreting the processes already going on in existing systems and VMs. The point is that in a VM that represents things like memory and external code as hardware, it's possible to do some really interesting things - especially since you can see those hardware resources (which are actually other pieces of code) as standardized blackbox-like interfaces - with simple abstraction and an almost atomic representation of basic tasks inside a program. Some careful decisions in the VM instruction set might include a "plug and play" resource access system. That means it should be possible to check for newly plugged devices such as additional monitors (= screens or just bitmaps, memory, etc.) or some completely abstract things: a "callback device" for interprocess communication or calls over the network, a new processing unit for parallel processing, and so on. What I'd usually try to write as a set of specialized and barely abstract systems could be written in a simpler and more generic way. I know this might be equally hard to design and imagine as other approaches, but it's a step forward. I believe in a... let's say a future where it's possible to ease resource handling and reduce redundancy in code. It's my idea of a system that's able to handle whatever is coming in, from whatever location, and use it in a uniform way. Goodbye heavily specialized code, goodbye cloud of individual solutions.

Oh man, I'm kind of flowing on another consciousness level today. But you must admit, the idea is pretty cool. Not sure whether I covered the whole bandwidth of it in here, but it's a goal to work toward, and it has almost nothing to do with my current programming language progress except that it should become a part of it. I always tried to figure out a nice solution for calling external functions, but this one is the best so far, I think. If you want 3D graphics, you simply create an interface definition, and then you can access it no matter what hardware, driver, graphics engine or whatever is behind it. It WILL require many small programs with very low latency to make this efficient. But converting the whole bytecode thing into C code would make it possible to include all the handlers and then have no latency when calling their functions. I think that's an almost perfect integration into the language I had in mind. No weird library concepts, just an abstract computer handling data in a highly regular way. Amen.
