I forgot to take my laptop with me and had to kill around four hours waiting for the lecture. During those four hours I figured out how my idea of a virtual computer (in every facet) might lead to a more interesting one: fixed hardware setups, placeable by the user, for testing how software behaves under the new restrictions or with new components. I don't intend to build any serious virtualization technology, rather a virtual (sand)box for testing exotic instruction sets and otherwise impossible hardware functionality. I also figured out that not every cool idea would be efficient enough for instruction-heavy programs. This is mainly because a completely modular system with dynamic interfaces between components wouldn't perform as well as specialized commands working with built-in components (the curse of being dynamic). But that doesn't really matter here, as it serves as a completely customizable test bed and as a bytecode executor for my programming language. It's really a lovely idea to simulate instructions, execute generated code and export it to C code all at once. And the more you test and extend your instruction set with things actually useful for what you want to program with it, the more you're able to separate them. I'm still not sure how to combine two separate instruction sets without two separate virtual CPUs, but I'll find a solution for that too. Since it's a totally virtual and fictive environment, it should be possible to wire all components together and represent their interfaces at once. Still, that's probably not how it will work. Every hardware component/module would need its own code: a normal RAM is a huge block of in/out memory, a CPU takes commands, and so on. Wouldn't I need to wire them independently? A CPU also has connections to the RAM and other components. One could even simulate the controllers and circuits, but that's a bit over the top, I think.
What I want is a set of abstracted components with a simple, address-based interface. So it's probably best to let the CPU initially list all connected components so one can use them from the bytecode. Hm, yeah. That might be a good idea. The bytecode itself will be crucial for the CPU, as one may need to operate without any RAM or other components attached (if the setup doesn't feature them). Some thinking made me realize that it's probably also possible to split the instruction sets by having a central CPU that only sends and receives data from other, more specialized units for integer or float processing, DSP calculations, etc. This way one could expand the system step by step while letting the CPU do only the things it was made for. But wait, it's garbage to call these CPUs. They would be more like ALUs, parts of the instruction set. You see, depending on what logic you put inside each component, there is the possibility to go deeper or keep it more compact. For example, one could even put an entire Game Boy CPU inside a component. And you don't even need to split the RAM from it; simply integrate it and it serves as a dumb, specialized bytecode executor. Depending on the performance of communication between different components, I might create specialized ones (this way they work like in real life: the farther they are from the processor using them, the longer it takes to access them).
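The component idea above could be sketched roughly like this. This is a minimal illustration only; every name in it is hypothetical and nothing here comes from an actual implementation. Each component exposes a simple address-based read/write interface, and the CPU can enumerate what is attached before any bytecode runs, so a program can cope with setups that lack RAM or other parts.

```python
# Hypothetical sketch of the address-based component model; all names
# are invented for illustration, not part of any real design.

class Component:
    """Anything pluggable: exposes read/write at addresses."""
    name = "component"

    def read(self, addr):
        raise NotImplementedError

    def write(self, addr, value):
        raise NotImplementedError


class RAM(Component):
    """A dumb block of in/out memory."""
    name = "ram"

    def __init__(self, size):
        self.cells = [0] * size

    def read(self, addr):
        return self.cells[addr]

    def write(self, addr, value):
        self.cells[addr] = value


class CPU:
    """Holds the attached components and can enumerate them, so the
    bytecode can discover what the current setup actually features."""

    def __init__(self, components):
        self.components = components

    def list_components(self):
        return [c.name for c in self.components]


cpu = CPU([RAM(256)])
print(cpu.list_components())  # a setup with only a RAM block attached
```

A specialized unit (an integer ALU, a DSP, even a whole Game Boy core with its RAM baked in) would just be another `Component` subclass behind the same read/write interface.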
Hm, sounds like a good summary. I don't care if something like this already exists. The cool thing is that this is designed to also work when exporting to C code. I know that the bytecode, and the syntax of my programming language in general, will look quite bizarre compared to C-like languages and others. Then again, it's made the way I always wished programming languages were: working as they do internally without getting ugly, like assembler. I reduced the formal syntax enough to look like Lisp with some angle brackets as parameter notation. There won't be any special symbols and no different bracket types anymore. Just (module) names and in/out parameters. I'll then provide compile-time functionality like marking sequence indices, inserting bytecode at the current line, and so on. I think that's the best way to tackle it. Bytecode generation/inlining will be problematic, as I'd need to execute the generating code with an executor (which I'm writing the code for in the first place). So I may design the API without bytecode generation first and add it later. Though I'm wondering what the execution environment/system should then look like. The point is that it could theoretically be possible to use all kinds of instruction sets inside the bytecode generator and access whatever hardware you may want in order to generate it. But where to define this environment? I think that's another thing I need to add to the bytecode: environment requirements. You state what you need, the generator (or any other executor) checks whether the resources are there and shuts down if not. Hmmm, difficult! But I guess I could also wrap it again and say that you can create a compile environment inside the bytecode using special commands, and let the generators run the programs you defined somewhere in the original bytecode... That's getting complicated to explain, but the idea is to drop all the strange in-between stages and include the possibility of creating environments from the bytecode itself.
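The "environment requirements" part could look something like this. Again a purely hypothetical sketch, not a fixed design: the bytecode carries a header naming the resources it needs, and the executor compares that against the attached components before running anything, refusing to start when something is missing.

```python
# Hypothetical sketch: bytecode declares its environment requirements
# up front, and the executor checks them before running.

def check_requirements(required, attached):
    """Return the list of missing resources; empty means OK to run."""
    return [name for name in required if name not in attached]


program = {
    "requires": ["ram", "float-unit"],  # environment requirements header
    "code": [],                         # the actual bytecode would follow
}

attached = {"ram"}                      # what this setup actually features
missing = check_requirements(program["requires"], attached)
if missing:
    print("shutting down, missing:", missing)
```

The same check would apply to a bytecode generator: it states which hardware it wants to use during generation, and whatever runs it either provides those resources or shuts down.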
So you start the first bytecode with no environment, create another environment using special commands, and feed the latter with functions for bytecode generation plus the final code you want to use once it has been generated. In other words: you start one environment with generic code for generating bytecode, save the temporary result, and run it in the final environment.
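The two-stage idea can be sketched as follows, with a toy instruction set invented just for this example (nothing here is the real bytecode): a first environment runs generator code that emits instructions, the emitted program is saved as the temporary result, and a second environment then executes it.

```python
# Illustrative two-stage bootstrap with a made-up mini instruction set:
# environment one generates bytecode, environment two runs the result.

def run(env_name, program):
    """Toy executor: interprets push/add plus an 'emit' instruction
    that generator-stage code uses to output new bytecode."""
    emitted, stack = [], []
    for op, arg in program:
        if op == "push":
            stack.append(arg)
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
        elif op == "emit":          # generator stage: output an instruction
            emitted.append(arg)
    return emitted, stack


# Stage 1: generic generator code that emits the final bytecode.
generator = [
    ("emit", ("push", 2)),
    ("emit", ("push", 3)),
    ("emit", ("add", None)),
]
generated, _ = run("bootstrap-env", generator)

# Save the temporary result, then run it in the final environment.
_, stack = run("final-env", generated)
print(stack)  # prints [5]: the generated program computed 2 + 3
```

Both stages go through the same executor here; only the program fed to each "environment" differs, which matches the idea of dropping the strange in-between stages.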
A bit like normal programming, only more complicated. In normal programming terms it becomes its own preprocessor, implementing proper code generation for a language that lacks it (like C). So yeah, it's just explained in a complicated, totally blown-up way. But it's also way smarter. Could you simply extend your preprocessor during runtime with more memory, or with a debug interface for later bug hunting? Probably not. So instead of pimping the language, one sets up environments to extend the language with its own possibilities, written in its own code. Whoa, crazy. How do I ever untangle this knot in my head?