Hello everyone,
My name is Nisarg Wath. I’m interested in contributing to RTEMS for GSoC 2026.
I’ve started exploring the “Cobra Static Analyzer and RTEMS” project idea. So far I have set up the RTEMS development environment, built the RTEMS toolchain (sparc-rtems7), and installed Cobra (v5.3) on macOS.
As an initial experiment, I ran Cobra on the RTEMS source code, starting with the cpukit/score directory. Cobra parsed 8776 files (~17.8M tokens) from the RTEMS tree. I also tried running the MISRA rule set to see what kind of results Cobra produces.
cpukit/rtems/src is probably a manageable place to start. The score directory is important and should eventually be covered, but it is also quite complex. You should start somewhere simpler to work through the flow of the tools.
The main issue with these static analysis tools is making them seamless to use and able to integrate with the RTEMS developer ecosystem.
Thanks for the suggestion! I’ll start experimenting with cpukit/rtems/src and see what kind of results Cobra produces there. I’ll also try to understand how the analysis workflow could be automated so it integrates better with the RTEMS development process.
I did a bit more experimenting with Cobra and wrote a small script to automate running the analysis on a few RTEMS modules.
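The script itself was not posted, so here is a minimal sketch of what such an automation script might look like. It assumes an RTEMS checkout at `rtems/`, `cobra` on the PATH, and a predefined `basic` rule set invoked via `-f` (all of these are assumptions to adjust for your setup); it only builds and prints the command lines rather than running them.

```python
#!/usr/bin/env python3
"""Dry-run sketch: build Cobra invocations for selected RTEMS modules.

Assumed layout: RTEMS checkout at RTEMS_ROOT, `cobra` on PATH,
and a `basic` predefined rule set selected with `-f`.
"""
from pathlib import Path

RTEMS_ROOT = Path("rtems")  # assumed checkout location
MODULES = ["cpukit/rtems/src", "cpukit/score/src", "cpukit/libcsupport/src"]

def cobra_command(module: str, ruleset: str = "basic") -> list[str]:
    """Collect the module's C files and form a Cobra command line."""
    sources = sorted((RTEMS_ROOT / module).rglob("*.c"))
    return ["cobra", "-f", ruleset] + [str(s) for s in sources]

if __name__ == "__main__":
    for mod in MODULES:
        cmd = cobra_command(mod)
        print(f"{mod}: {len(cmd) - 3} files")
        # subprocess.run(cmd, check=False)  # uncomment to actually run Cobra
```

Keeping file collection separate from execution makes it easy to first inspect which files would be analyzed per module before spending time on full runs.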
From the initial runs, it looks like cpukit/score/src is noticeably larger than cpukit/rtems/src, so static analysis there might be a bit more complex.
Next I’m planning to extend the script to summarize warnings and try filtering directories that may generate a lot of noise (for example BSP headers).
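To make that next step concrete, here is a rough sketch of the kind of warning summarizer and noise filter I have in mind. It assumes Cobra reports warnings one per line as `path:line: message` (that format is an assumption; the split would need adapting if Cobra's output differs), and treats a few directory prefixes as noise.

```python
from collections import Counter
from pathlib import PurePosixPath

# Directory prefixes whose warnings we treat as noise (assumed; adjust).
NOISY_PREFIXES = ("bsps/", "testsuites/")

def summarize(report_lines, top=10):
    """Count warnings per directory, skipping noisy prefixes.

    Assumes each warning line looks like 'path:line: message'.
    """
    per_dir = Counter()
    for line in report_lines:
        path, sep, _rest = line.partition(":")
        if not sep or path.startswith(NOISY_PREFIXES):
            continue
        per_dir[str(PurePosixPath(path).parent)] += 1
    return per_dir.most_common(top)
```

Grouping by directory rather than by file should make it easier to see at a glance whether the noise is concentrated in BSP headers or spread across cpukit.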
Would it make sense to focus the analysis mainly on cpukit/score and cpukit/rtems first and treat BSP directories separately?
I ran Cobra across the cpukit modules (score, rtems, and libcsupport). These processed several hundred files but produced no warnings. Earlier runs showed some warnings in BSP headers, so I’m planning to compare cpukit and BSP directories to see where most static analysis noise originates.
Hi, I’m sharing my proposed Cobra integration design (external tool plus filtering for BSP noise), based on my experiments. I would appreciate feedback on the workflow fit and the filtering approach.
Hi,
Quick update: I’ve been running Cobra on RTEMS modules.
cpukit (rtems, score, libcsupport) looks mostly clean, while most warnings are coming from BSP/architecture-specific code. I’m now grouping these warnings and trying some filtering for BSP noise. I’m planning to focus on cpukit first.
I’ve also drafted a simple architecture diagram for this workflow; would it make sense to include it in the proposal?
Does this direction sound reasonable?
Being able to run on bsps/ and testsuites/ is good to have as a feature, but actually going through them as a human and analysing the issues is low priority. The focus should be on cpukit/, with score, rtems, sapi, posix, and libcsupport. Those have the most required or frequently used capabilities. All of cpukit/ will ultimately need to be analysed and reviewed, though. As a point of reference, the Coverity Scan build is for SPARC/leon3 with tests disabled.
One issue from last summer was that the export to CSV produced a cell with over 32 KB of content. None of the spreadsheet programs I tried could handle it.
Part of doing this is determining which rules we already meet. Some rules correspond to gcc warnings. Then figure out which rules are flagging things. For each rule, we need to decide whether to follow it or ignore it with justification. The Xen project does something similar.
Thanks, this helps a lot.
I’ll focus mainly on cpukit (score, rtems, sapi, posix, libcsupport) and keep bsps/testsuites secondary. I’ll avoid CSV and use other formats for reports.
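One option I’m considering instead of CSV is a line-delimited JSON report, since it has no per-cell size limit and streams well. A minimal sketch, assuming a hypothetical warning schema with file, line, rule, and message fields:

```python
import json

def write_jsonl(warnings, fh):
    """Write one JSON object per warning line.

    `warnings` is an iterable of dicts such as
    {"file": ..., "line": ..., "rule": ..., "message": ...}
    (this schema is an assumption, not Cobra's native output).
    """
    for w in warnings:
        fh.write(json.dumps(w) + "\n")
```

Even a message far larger than a spreadsheet's 32 KB cell limit round-trips cleanly, and the file can still be loaded line by line by downstream tooling.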
I’ll also go through the Cobra rules and see which ones make sense to follow or to ignore with justification, similar to what Xen does.
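As a sketch of what the per-rule triage record could look like (the rule names and rationales below are hypothetical placeholders, not real decisions):

```python
from dataclasses import dataclass

@dataclass
class RuleDecision:
    rule: str      # rule identifier, e.g. a MISRA rule number
    decision: str  # "follow" or "deviate"
    rationale: str # justification, required when deviating

# Hypothetical entries to illustrate the shape of the triage table.
TRIAGE = [
    RuleDecision("Rule-X.Y", "follow", "already enforced by gcc warnings"),
    RuleDecision("Rule-A.B", "deviate", "pattern required by hardware access code"),
]

def deviations(table):
    """List the rules we chose not to follow, with their justifications."""
    return [d for d in table if d.decision == "deviate"]
```

Keeping the decisions in a structured form like this would let the justifications be reviewed and regenerated as documentation, rather than living only in a spreadsheet.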
I’ve updated my proposal based on this.
Thanks again.