This appears sufficient to dynamically configure PlatformIO’s SCons environment without hardcoding BSP-specific flags.
My current plan is for builder/frameworks/rtems.py to invoke pkg-config (with configurable PKG_CONFIG_PATH) and inject the returned flags directly into the PlatformIO build environment.
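To illustrate the flag-injection step, here is a rough sketch of how the framework script could query pkg-config and map the output onto the variables a SCons-based build consumes. The function names and the flag categorization below are my own illustration, not the actual builder script:

```python
# Sketch: query a BSP's .pc file via pkg-config (honoring a configurable
# PKG_CONFIG_PATH) and sort the raw output into SCons-style variables.
import os
import shlex
import subprocess

def query_pkg_config(pc_name, args, pkg_config_path=None):
    """Run pkg-config for the given .pc name and return the raw output."""
    env = dict(os.environ)
    if pkg_config_path:
        env["PKG_CONFIG_PATH"] = pkg_config_path
    return subprocess.check_output(
        ["pkg-config"] + args + [pc_name], env=env, text=True)

def categorize_flags(cflags_out, libs_out):
    """Map raw `--cflags` / `--libs` output onto the variable names a
    SCons environment uses (a simplified stand-in for env.ParseFlags)."""
    env = {"CPPPATH": [], "CCFLAGS": [], "LIBPATH": [], "LIBS": [],
           "LINKFLAGS": []}
    for tok in shlex.split(cflags_out):
        if tok.startswith("-I"):
            env["CPPPATH"].append(tok[2:])
        else:
            env["CCFLAGS"].append(tok)
    for tok in shlex.split(libs_out):
        if tok.startswith("-L"):
            env["LIBPATH"].append(tok[2:])
        elif tok.startswith("-l"):
            env["LIBS"].append(tok[2:])
        else:
            env["LINKFLAGS"].append(tok)
    return env
```

The point of the split is that nothing BSP-specific appears in the script itself; everything comes from whatever the BSP's .pc file reports.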
Clarifications Before Proceeding
Would you prefer:
One unified platform-rtems supporting multiple architectures, or
Separate platforms per architecture (e.g., platform-rtems-sparc, platform-rtems-arm)?
Is sparc/erc32 acceptable for the initial prototype, or would another BSP be more representative?
For the prototype phase, is basic pio run compilation sufficient, or should debugging/simulator integration be considered early?
I’ll proceed with a minimal prototype once alignment is clear.
It uses pkg-config to extract all BSP flags dynamically, with zero hardcoding. pio run successfully compiles and links an RTEMS hello world for sparc/erc32, and the built ELF runs correctly on SIS.
Yes, this is a good baseline to start from. You should be able to use this to explore some of the questions you’ve raised already and to shape your proposal. You’ll want to show these capabilities as part of your proposal.
I would suggest you create a comprehensive plan with realistic milestones along the way toward the longer-term vision described in the Issue. Working out the framework with sparc/erc32 is a good first step. I think adding another architecture with a qemu simulator is a good second step. If you can get access to any boards (e.g., Beaglebone Black or RPi family) that run RTEMS, that would be a good third step. This should also help you consider how different architectures come into play in the design, and help you answer your first question regarding how to think about platform-rtems versus platform-rtems-$arch.
I’ve been thinking about the platform structure question. My current inclination is to keep a unified platform-rtems abstraction that derives architecture-specific behavior from board metadata. The prototype already validates this across SPARC and ARM using the same framework script.
I would only consider splitting into platform-rtems-$arch if PlatformIO toolchain packaging or debug integration introduces architecture-specific constraints that cannot be cleanly abstracted.
I’ll document this trade-off clearly in the proposal and include a milestone checkpoint to revisit the decision after additional architectures are validated.
I wanted to share a quick update on the prototype.
Following the earlier SPARC (erc32/SIS) and ARM (realview_pbx_a9_qemu/QEMU) validation, I’ve now added support for:
aarch64/raspberrypi4b (Raspberry Pi 4 Model B)
What was done:
Built rtems-aarch64 toolchain (RTEMS 7)
Built and installed the aarch64/raspberrypi4b BSP
Added boards/raspberrypi4b.json with:
"rtems_arch": "aarch64"
"rtems_bsp": "raspberrypi4b"
Verified pio run builds successfully using aarch64-rtems7-gcc
Importantly, no changes were required in builder/frameworks/rtems.py.
All compiler and linker flags are still derived dynamically via the BSP’s .pc file.
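For context, the hand-written board definition is roughly of this shape. The two rtems_* fields are the ones the framework script actually reads; the exact key nesting and the remaining fields shown here are illustrative, not a copy of the real file:

```json
{
  "build": {
    "rtems_arch": "aarch64",
    "rtems_bsp": "raspberrypi4b"
  },
  "name": "Raspberry Pi 4 Model B",
  "frameworks": ["rtems"]
}
```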
The abstraction now works across:
SPARC (sparc/erc32 via SIS)
ARM (realview_pbx_a9_qemu via QEMU)
AArch64 (raspberrypi4b)
Three architectures, same framework script, zero hardcoded flags, only board metadata changes.
Commit:
Next, I’ll focus on refining auto-detection of installed BSPs and improving simulator/debug integration.
Sounds good, I think you’re on a solid path toward figuring out what to put in a proposal.
In terms of integration with RTEMS, I would suggest that you explore if this would be a good fit for the rtems-tools.git repository, or where we should have something like this live.
I reviewed the rtems/tools group and see two possible candidates: rtems-tools (ecosystem tools project) and rtems-deployment (deploy RTEMS with BSPs and third-party packages). The rtems-deployment description seems closest in spirit: this is a PlatformIO integration that sits on top of an existing RTEMS installation rather than bundling RTEMS itself.
Would you envision this living inside one of these existing repositories (as a subdirectory or module), or as a new standalone repository under rtems/tools? I’d like to align the structure with RTEMS conventions before finalizing it in the proposal.
RTEMS Deployment is a place to collect ways to deploy RTEMS. I used waf in it as a simple way to check dependencies between files and then run commands. There is nothing more to that selection. The RSB is doing all the real work.
What RTEMS Deployment has that is valuable is the config directory. This has buildsets and config.ini files for BSPs you can feed to the RSB to build vertical stacks. I suggest you consider forking the RTEMS Deployment repo and seeing if PlatformIO can be added and supported within it. If this matures and works out, that may become the project's preferred deployment path.
The platformio/ directory contains the platform manifest, board definitions, and builder scripts. I verified that pio run works when pointing to this subdirectory within rtems-deployment, using an RTEMS prefix generated via deployment + RSB.
Currently validated:
SPARC (erc32 / SIS)
ARM (realview_pbx_a9_qemu / QEMU)
AArch64 (raspberrypi4b)
The framework continues to extract all BSP-specific flags dynamically via pkg-config; no BSP-specific flags are hardcoded. Adding support for a new BSP requires only a board JSON file.
Next, I plan to explore whether the existing config/ buildsets could be leveraged to auto-generate PlatformIO board definitions, keeping both layers aligned. I'll also review whether wscript should optionally expose PlatformIO as a deployment target.
I’m happy to adjust the directory placement or structure to better align with repository conventions.
Do all the platformio files need to be under the platformio directory?
Reviewing what you have makes me wonder if the board JSON files could be generated from the deployment's config files. If the answer is "yes", maybe platformio is treated as packaging and JSON generation support is added under pkg. The platformio directory remains as is.
Let me explain. RTEMS Deployment's waf build system generates whatever files a packaging system needs. It does not run any packaging commands. For example, if you are on an RPM Linux distro, RPM spec files are created, one for each configuration in the config directory. You then run rpmbuild to create an RPM. The same could be done for platformio. If ./waf configure detects the platformio CLI commands, could the board files be generated?
If platformio needs configuration data the config tree does not hold, we can look at adding it there.
@gedare
Right now the board JSON files are written by hand. For each BSP, I inspected the installed .pc file to confirm the arch/BSP mapping and created the corresponding PlatformIO board definition manually.
At build time, the framework script only depends on rtems_arch and rtems_bsp; all compiler and linker flags are obtained dynamically via pkg-config.
Yes, the rtems_arch and rtems_bsp fields can be derived directly from the config/*.ini section headers (e.g. [arm/beagleboneblack]). Those are the only build-critical values.
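Extracting those two values is straightforward with stdlib INI parsing. A small sketch (the helper name is mine; the section-header format `[arch/bsp]` is as in the config/ tree):

```python
# Sketch: derive (rtems_arch, rtems_bsp) pairs from deployment config
# section headers such as [arm/beagleboneblack].
import configparser

def arch_bsp_pairs(ini_text):
    """Return (arch, bsp) tuples for every arch/bsp section header."""
    cfg = configparser.ConfigParser()
    cfg.read_string(ini_text)
    return [tuple(s.split("/", 1)) for s in cfg.sections() if "/" in s]
```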
The complication is that PlatformIO board JSONs also require hardware metadata (MCU name, clock frequency, RAM/flash sizes, vendor, etc.), and that information is not currently represented in the deployment config/ tree. So generation cannot be fully automatic without an additional source of that metadata.
A clean approach would be:
waf reads the deployment config/ files and extracts arch/BSP (source of truth)
Hardware metadata is provided either via optional fields added to the .ini files or via a small lookup table under platformio/
waf generates the board JSON files as packaging artifacts, similar to how RPM .spec files are generated
This keeps config/ authoritative while treating PlatformIO as another packaging target.
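The generation step itself could look something like the sketch below: arch/BSP comes from config/, and anything else comes from a separate lookup. The function name, the lookup table, and its contents are placeholders for illustration, not proposed final values:

```python
# Sketch: emit a PlatformIO-style board definition as a packaging
# artifact, analogous to how RPM .spec files are generated.
import json

# Placeholder lookup for hardware metadata not present in config/;
# the entries here are illustrative, not authoritative values.
HW_METADATA = {
    "aarch64/raspberrypi4b": {"mcu": "BCM2711"},
}

def make_board_json(arch, bsp):
    """Build a board JSON string from the config-derived arch/BSP pair."""
    key = "%s/%s" % (arch, bsp)
    board = {
        "build": {"rtems_arch": arch, "rtems_bsp": bsp},
        "name": key,
        "frameworks": ["rtems"],
    }
    board["build"].update(HW_METADATA.get(key, {}))
    return json.dumps(board, indent=2)
```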
Regarding directory structure: PlatformIO requires platform.json at the root of whatever directory it treats as the platform, with boards/ and builder/ relative to that root. The directory name itself is not important, but those components must remain together.
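Concretely, the required layout is (the root directory name is arbitrary; the file names shown are the ones from the prototype):

```
<platform root>/
├── platform.json          # platform manifest, must sit at the root
├── boards/
│   └── raspberrypi4b.json
└── builder/
    └── frameworks/
        └── rtems.py
```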
I’ll prototype JSON generation via waf so we can evaluate the approach before making structural changes.
On a Linux host with rpmbuild, the rpmspec build target can be used to build spec files for all found build sets. As I mentioned before, you need to run rpmbuild yourself. This separation aids integration with CI runners. The Gemini observatory uses GitLab to build RTEMS, tools, and networking into an RPM file they can install in the Docker containers they use.
If this approach is followed, you would add platformio.py to pkg and a new target called platformio that builds the files PlatformIO needs.
The config.py code will find the build sets and manage any INI files.
Maybe platform.json could be copied to the build output directory?
Note: the default deployment builds to a tar file, and rpmspec builds to a single RPM file if that packaging is used. The objective of deployment is to have a repeatable process users can depend on to make a working RTEMS, with tools, kernel, and any other packages like networking, packaged in a way they can install onto a suitable clean machine and have the same known configuration.
I’ve read through the posts you linked, and the deployment flow is much clearer now. The separation between waf generating artifacts and external tools handling the actual packaging step makes a lot of sense.
Treating PlatformIO as another packaging target under pkg/ feels consistent with how rtems-deployment is designed. Instead of keeping static board files in-tree, it would be cleaner to add a platformio target that uses config.py to enumerate the buildsets and generate the board JSON files from the .ini section headers. The platform.json and builder scripts could then be copied into the build output directory alongside the generated board files.
That keeps config/ as the source of truth and avoids duplicating BSP data anywhere else.
For the extra PlatformIO metadata (MCU name, clock, RAM/flash sizes), I’ll start minimal since they’re not build-critical, and we can decide later whether those belong in the .ini files or somewhere else.
If this direction looks right, I’ll prototype the waf platformio target and share what the flow looks like before making any structural changes.
I would suggest that you start to work on the plan for your proposal in parallel to your early exploration. It sounds like you have found a good direction to head in.
Thanks @gedare I’m working on the proposal in parallel and will incorporate the deployment integration direction into it. I’ll share an updated version soon.
The implementation is in pkg/platformio.py, following the same pattern as pkg/linux.py for RPM spec generation. It registers a platformio BuildContext command and uses configs.py for INI parsing.
Right now the generated board JSONs have the arch/BSP fields (which are build-critical) and placeholder values for the display metadata. Those can be filled in as we decide where that data should live.
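For reference, the command registration follows waf's usual pattern of subclassing BuildContext with `cmd` and `fun` attributes. This is a non-runnable sketch with placeholder names, not the final code:

```python
# Sketch of pkg/platformio.py command registration; waf invokes the
# function named by `fun` when the `platformio` command is run.
from waflib.Build import BuildContext

class PlatformIOContext(BuildContext):
    '''generate the board JSON files PlatformIO needs'''
    cmd = 'platformio'
    fun = 'platformio'

def platformio(bld):
    # enumerate buildsets via the existing INI handling, then write
    # one board JSON per arch/BSP into the build output directory
    pass
```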