At first I thought of trying to embed multiple kernels in the same binary, but that doesn't really work: library functions are coalesced across all kernels at best, and everything needs a unique name, which is a pain to work with. It basically devolved into using overlays, which require a linker script and are generally a bit involved.
Whilst I was out 'doing stuff' I came up with another solution which I think solves the usability problem and which, at this stage, should support everything I want.
- A runtime executive (the exec) provides some global functions which are shared amongst all cores, possibly all of libezecore (e-lib);
- Each kernel is compiled into a standalone binary which includes any ezecore functions it needs that aren't in the exec; it has no startup routine;
- Persistent data structures are placed in special sections such as .data.kernel. Weak symbols are used in the normal way to reference them from other kernels/cores;
- At runtime, the full set of interacting kernels must be supplied for a given workgroup.
This last point breaks the queuing abstraction somewhat, but I don't see any practical alternative because the total persistent data space needs to be known in advance. Well, unless an alternative mechanism is used, such as supplying a separate data object as a base ELF file.
At load-link time there should be enough information to assign unique locations for any shared buffers and then relocate the kernel code to a common location (or multiple) which the exec can DMA in as required.
Hmm, well I guess I'll see if it works when I get around to coding something up.