Controller Area Network (CAN) Stack Improvement: BeagleBone Black D-CAN Driver Implementation [GSoC 2026]

Hello everyone!

I’m interested in the CAN Stack Improvements project for GSoC 2026. I’d like to focus on implementing a D-CAN driver for the BeagleBone Black (AM335x). Is this a reasonable scope for a GSoC project?

I’m new to the CAN protocol but eager to do my best. For now I will start with the recommended publications, the processor TRM, and the CAN protocol specification.

I’m also opening the topic for discussion.

Thanks for the interest. The D-CAN controller would be useful. It is found on the TI TMS570LC4357 and TMS570LS3137 chips, which are supported by RTEMS (see the TMS570 BSP in the RTEMS User Manual and a related student thesis, unfortunately available in Czech only). There is even complex time-triggered support for it available (again a thesis, fortunately in English). I have some reference sources for that. The controller is found in its HECC (TI High End CAN Controller) variant on many other TI chips such as the BeagleBone Black (AM335x, BSP), and there was an attempt to implement it within GSoC in the past, but the setup of the CAN bus core infrastructure failed at that time. The CAN core developed by Michal Lenc (see thesis) is now mature and integrated, so adding the chip support, based on the examples for other chips, should be feasible within a GSoC.

Do you have any of the above hardware with D-CAN that is supported by RTEMS? I have all three boards (BBB, TMS570LC4357, and TMS570LS3137), but I have not tested RTEMS on them for a long time, so some cooperation, and possibly fixes, could be required during bring-up.

A disadvantage is that D-CAN has no support in QEMU yet. QEMU includes support only for CTU CAN FD, SJA1000, the Xilinx CAN on UltraScale and above, and FlexCAN (in the review/integration stage).

But I agree that the BBB and TMS570 are targets that fit the RTEMS application area well, so it would be worth including.

Michal Lenc and I can mentor this project.


Hi Pavel, thank you for the detailed response and for offering to mentor.

Regarding the hardware, I don’t currently have a BBB or TMS570 board. I’ll obtain one for the project as soon as I can, though that can be challenging here in Bishkek due to shipping costs and delivery times. I expect to receive the board by May, or by June at the latest. Any recommendations on additional hardware to consider?
In the meantime, I’ll study the docs and existing implementations and sketch out some code.

By the way, would occasional remote access to one of your boards be possible during early development, if such a need arises? Even access just for milestone testing before my hardware arrives would be greatly appreciated.

Looking forward to working with you and Michal!

For this project, you would need both the BBB and a CAN transceiver (cape). We would want this hardware available, and a demonstrated ability to use it, within the application period, so it seems that in your case this will not be a suitable project at the moment.

You need direct hardware access to the board for this kind of project, due to the hardware interactions needed for development and testing.

Thank you for the feedback! Well, given the constraint, I’ll likely revisit the idea of porting PortableGL.

Should I do anything with this topic?

Leave it open for others to find, thanks.

Hi Pavel, thanks for the detailed reply! I am interested in the BeagleBone Black D-CAN driver implementation. I can get a BeagleBone Black board next week and keep it for the whole of GSoC.

Looking into the CAN improvements project, if I understand correctly, the task is to add a new D-CAN driver, so it is important to have hardware such as the BeagleBone Black to test on. I realize the driver would be important for D-CAN communication in RTEMS, and that it should preserve the same priority-queue behavior as the rest of the RTEMS CAN stack, which is important for an RTOS.

I will keep studying the theses and papers you provided. I will test the board when I get it and let you know how it goes. I would also really appreciate some advice on the next steps. Thanks! Looking forward to your reply!

It will be important, as part of your proposal, to show that you can run RTEMS on the BBB. It would also be beneficial if you can run the CAN stack on a currently supported board/target, and show how you would be able to debug when you run into problems.

Thanks for your valuable advice! I will follow these instructions when drafting my proposal: showing the results of running RTEMS on the BBB, trying to run the CAN stack, and showing how I debug when I run into problems. Thanks!

Running the current CAN/CAN FD stack core should be completely hardware agnostic; it should run everywhere RTEMS runs. A virtual device can be registered by rtems_can_virtual_initialize and then used. The registration of a virtual CAN device/channel is shown in the example code in rtems_can_test/can_virtual.c. The registration of the set of CAN test commands into the RTEMS shell is shown in the rest of the test code in rtems_can_test. The main goal of this code, implemented by Michal Lenc, was to test the implementation, the priority preemption principles, latency, and more. It would be worth integrating some simplified examples into the mainline RTEMS tests directory, to provide simpler examples of how to start your own applications.
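As an illustration, a minimal loopback check of such a virtual channel from the RTEMS shell could look like this (a sketch, assuming the CAN test commands from rtems_can_test have been registered in the shell):

```shell
# register a virtual CAN channel; it shows up as /dev/can0
can_register -t virtual
# point the test sender and receiver at the same interface
can_set_test_dev /dev/can0 /dev/can0
# run the one-way and two-way frame exchange tests
can_1w
can_2w
```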


I can get a BeagleBone Black board next week and along the whole GSoC

Please also keep in mind that you will need a CAN transceiver to test the communication while you implement D-CAN, as the BeagleBone Black does not have one. These are pretty cheap, just a few dollars at most. You can take a look at this article: Adding CAN to the BeagleBone (Black) – Beyondlogic

It looks like the board has two CAN interfaces, which is convenient: you could get two transceivers and connect them to each other for the initial tests of your implementation. The other option is to test against your computer, for which you also need a CAN-to-USB converter.

And I would really appreciate some advice for the next steps.

You can take a look at the registration of the virtual controller and the examples in the links Pavel Píša sent above; you can run those on your BeagleBone once you get it. You can copy the files into your build system if you prefer.

The repository also contains the steps to compile the examples, register the controller, and run the test in the top-level README. The compile steps are for the Xilinx Zynq, but they should work with the BeagleBone as well, just by adding the RTEMS_MAKEFILE_PATH argument to the make command with the path to your BeagleBone RTEMS build:

$ export PATH=$PATH:/opt/rtems/7/bin
$ make RTEMS_MAKEFILE_PATH=/opt/rtems/7/path_to_beagle_build

For example, for the i386 target it is RTEMS_MAKEFILE_PATH=/opt/rtems/7/i386-rtems7/pc686.


I successfully ran the RTEMS CAN test program on my Mac using QEMU.

  • I first installed the RTEMS tools and required software, then fixed the folder path by creating a symbolic link redirecting /opt/rtems/6 to my RTEMS installation path, so the example scripts could find RTEMS correctly.
  • After that, I built the required RTEMS target (pc686) with a serial-only setup to avoid graphics errors, and compiled the CAN test project.
  • I modified the QEMU run script so it would work on macOS, by switching from the Linux KVM accelerator to TCG and disabling Linux-specific networking features.
  • I then booted RTEMS and entered the RTEMS shell. Inside the system, I registered a virtual CAN device and successfully ran the CAN test applications. Here is the screenshot.

    Although the real CTU CAN FD hardware could not be tested, perhaps because the macOS build of QEMU does not include that device model, the virtual CAN test confirmed that the RTEMS CAN stack and test programs work correctly.
    I will keep studying the example code to understand it. I would appreciate feedback and ideas based on what I have done. Thanks!
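For reference, a QEMU invocation along these lines reproduces that setup (a sketch; the executable name, memory size, and console argument are assumptions based on common pc386/pc686 BSP usage):

```shell
# boot the RTEMS i386/pc686 ELF directly as a multiboot kernel,
# using the TCG software accelerator instead of Linux KVM,
# with a serial-only console (no graphics)
qemu-system-i386 -accel tcg -m 128 -nographic \
    -kernel can_virtual.exe \
    -append "--console=com1"
```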

You should be using the RTEMS 7 toolchain and the main branch for GSoC, and in general for new development work.

PS: as part of your proposal, you should also show that you can run RTEMS on the BBB, and further that you can run some kind of CAN workload on the BBB (e.g., with Linux).

Running CTU CAN FD hardware emulation for the BeagleBone in QEMU is possible but quite hard. On targets that provide PCI/PCIe support in hardware and in RTEMS, it is possible to connect an emulated CTU CAN FD PCIe board to the QEMU-realized system and test it with the CTU CAN FD driver registration for PCI (rtems_can_test/can_ctucanfd_pci.c) found in our test repository.

On platforms without PCI/PCIe, you need a direct mapping of CTU CAN FD into the SoC address space, in an area not occupied by other devices or memory. But support for plain memory mapping of CTU CAN FD is not included in QEMU mainline (it is on the net-can-ctucanfd-platform branch of my fork), and it requires finding and mapping the IRQ signal to the SoC IRQ controller manually, which is quite tricky. Unfortunately, the mainline maintainers do not like my approach (fair), but they refused to discuss alternatives, so the usability of the CTU CAN FD IP emulation, and of all other configurable IPs for FPGA-equipped SoCs, is blocked in mainline. The AMD/Xilinx QEMU fork has a solution for their own IPs.

So testing CTU CAN FD with the QEMU AM335x emulation is too much complication for now. D-CAN emulation in QEMU would be feasible if somebody invested some man-months. But for RTEMS, it is worth focusing on supporting and testing real hardware for now.

Some more hints for your QEMU experiment with the RTEMS i386 pc686 BSP.

You have registered only a single interface with

can_register -t virtual

That is why you see errors when /dev/can1 is accessed. It is possible to repeat the registration, but that way you obtain two disconnected interfaces, which cannot be used to test mutual frame exchange.

But you can specify that the sender and receiver interface are the same, and then the test should pass:

can_set_test_dev /dev/can0 /dev/can0
can_1w
can_2w

There is already a fix for can_latency in our master, so maybe it can be run in QEMU as well. I still need to test it; we have focused on running on real PC and Zynq hardware to push forward the SJA1000 support, which seems stable now, and we are checking how head-of-line blocking between high- and low-priority frames behaves on it. It seems that, due to (mis)features of the OpenCores design, it cannot achieve this functionality. We will test on real SJA1000 hardware on Tuesday to see whether it allows a low-priority message blocking the Tx buffer to be pushed back when a higher-priority one arrives.


As for real hardware emulation in QEMU, mainline QEMU should support CTU CAN FD on PCI/PCIe, SJA1000 on PCI/PCIe, and the Xilinx XCAN on Zynq UltraScale+ and Versal, on all host platforms (GNU/Linux, BSD, macOS, Windows), but the connection to the host system's CAN bus is supported only on GNU/Linux over SocketCAN so far.

This means that a two-controller setup on one bus, and testing of the frames sent between them, should be no problem on any host system. For CTU CAN FD:

$QEMU \
      -object can-bus,id=canbus0-bus \
      -device kvaser_pci,canbus=canbus0-bus \
      -device ctucan_pci,canbus0=canbus0-bus,canbus1=canbus0-bus \
      ...

The only GNU/Linux-specific option is the connection to the bus on the host system:

      -object can-host-socketcan,if=can0,canbus=canbus0-bus,id=canbus0-socketcan \

This is problematic for CI anyway, because you either have to have a real CAN controller configured and connected to the bus, or you need to set up a virtual CAN bus and controller:

modprobe can-raw
modprobe vcan

ip link add dev can0 type vcan
ip link set can0 up

These are privileged operations, so again not available in CI and Docker.

So testing communication between two interfaces from RTEMS against QEMU-provided controller hardware is the best option for CI.

Thank you, mentors, for your guidance! I would like to share my recent progress booting RTEMS on the BeagleBone Black from an SD card, running a simple hello.c application.

So far, I have completed several key steps:

  • Built the RTEMS BSP for ARM
  • Used the RTEMS 7 toolchain and the main branch
  • Compiled hello.c into hello.exe using arm-rtems7-gcc
  • Converted the RTEMS ELF into a U-Boot bootable image:
    • Used arm-rtems7-objcopy to generate raw binary
    • Compressed with gzip
    • Used mkimage to generate rtems-app.img
  • Prepared the SD card for boot:
    • Formatted as FAT32 with MBR partitioning
    • Created uEnv.txt boot script
    • Verified contents:
      • rtems-app.img
      • uEnv.txt
  • Integrated device tree:
    • Identified correct DTB: am335x-boneblack.dtb
    • Located in BBB Debian system:
      • /boot/dtbs/5.10.168-ti-r71/am335x-boneblack.dtb
    • Manually mounted SD card on BBB
    • Copied DTB to SD card
  • Verified final SD card contents:
    • rtems-app.img
    • uEnv.txt
    • am335x-boneblack.dtb
  • Understood BBB boot process:
    • Default boot from eMMC
    • SD boot requires holding BOOT button (S2) during power-up

At this point, the SD card preparation for RTEMS boot is complete.
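The image-preparation steps above can be sketched as a command sequence (the load and entry address 0x80000000 is a typical AM335x RAM base and is an assumption; adjust it to your BSP's link address):

```shell
# strip the ELF to a raw binary and compress it
arm-rtems7-objcopy -O binary hello.exe hello.bin
gzip -9 hello.bin
# wrap it as a U-Boot legacy image (mkimage knows the rtems OS type)
mkimage -A arm -O rtems -T kernel -C gzip \
        -a 0x80000000 -e 0x80000000 \
        -n "RTEMS hello" -d hello.bin.gz rtems-app.img
```

And a minimal uEnv.txt could look roughly like this (again a sketch; the staging addresses and exact commands depend on your U-Boot version):

```
loadaddr=0x82000000
fdtaddr=0x88000000
uenvcmd=fatload mmc 0 ${loadaddr} rtems-app.img; fatload mmc 0 ${fdtaddr} am335x-boneblack.dtb; bootm ${loadaddr} - ${fdtaddr}
```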
But I have a question here: how can I use my computer to check whether the BBB boots from the SD card and runs RTEMS? I tried to read the serial output in my computer terminal; I can connect to the BBB serial console while the BBB is running Linux, but I do not know how to connect after booting from the SD card.

Next steps:

  • Verify runtime behavior on hardware: consider using a Saleae logic analyzer for low-level signal observation, or use the UART (serial console) to observe boot logs and confirm execution.

I am working on setting up the hardware, using one BBB with two transceivers, to test a CAN workload in Linux. I will show more results later.
I am also preparing the proposal and will send the link along with the hardware test results.

You might like to ask generic questions about using the BBB in the General or ARM topics. I guess we don’t have BSP-specific topics; maybe we should.

You ought to be able to still connect the serial console, not over the USB device connection but using the UART TX/RX pins. You can also test this with Linux. You can use a serial-to-USB (FTDI) adapter to plug into a computer for making the connection, and you connect the TX/RX/GND wires. If you get a 4-wire (popular Adafruit/Arduino-style) adapter, do not connect the power! The BBB works at 3.3 V, and anything 5 V will likely brick your board.
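Once the adapter is wired (BBB TX to adapter RX and vice versa, grounds common), opening the console is a one-liner (the device name /dev/ttyUSB0 is an assumption, check with ls /dev/ttyUSB* after plugging in; the BBB console runs at 115200 baud):

```shell
# open the BBB debug UART at 115200 8N1
screen /dev/ttyUSB0 115200
# or, if you prefer picocom:
picocom -b 115200 /dev/ttyUSB0
```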

Thank you for the resources! I would like to share progress on setting up CAN communication on the BeagleBone Black (BBB) under Linux. I have successfully completed both the hardware setup and the initial communication tests using the onboard D-CAN controllers.

  1. Hardware Setup

I configured a minimal CAN network on a single BBB using both DCAN interfaces.

DCAN1 (P9.24 / P9.26) is connected to one CAN transceiver.
DCAN0 (P9.19 / P9.20) is connected to another CAN transceiver.
The CANH and CANL lines from both transceivers are connected together to form a shared CAN bus.
Two 120 ohm termination resistors are placed across CANH and CANL.

This setup allows the BBB to simulate two CAN nodes on the same physical bus.

  2. Linux Configuration

I enabled CAN functionality using pin multiplexing:

config-pin p9.24 can
config-pin p9.26 can
config-pin p9.19 can
config-pin p9.20 can

Both CAN interfaces were brought up using the same bitrate:

sudo ip link set can0 up type can bitrate 125000
sudo ip link set can1 up type can bitrate 125000

I verified that both interfaces are active using:

ip link show

The output shows that both can0 and can1 are in the UP state.

  3. CAN Communication Test

I performed bidirectional communication tests between can0 and can1.

Test 1: can0 to can1

In terminal 1:

candump can1

In terminal 2:

cansend can0 123#11223344

The message was successfully received on can1.

Test 2: can1 to can0

In terminal 1:

candump can0

In terminal 2:

cansend can1 123#1122334455667788

The message was successfully received on can0.
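The two-terminal test above can also be wrapped in a single script for quick regression checks (a sketch, assuming can0 and can1 are up as configured above; candump's -n and -T options from can-utils terminate after a frame count or a timeout):

```shell
# wait in the background for one frame on can1, give up after 2 s
candump -n 1 -T 2000 can1 > received.log &
sleep 0.2
# send a test frame from can0; it should arrive on can1 over the shared bus
cansend can0 123#11223344
wait
cat received.log
```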

  4. Screenshots

I have attached screenshots showing:

CAN interface status (ip link show)
Successful transmission and reception using candump
Hardware wiring on the breadboard

  5. Key Observations

Both DCAN controllers can operate simultaneously on a single BBB.
The SocketCAN framework and c_can driver function correctly.
Matching bitrates across nodes are required for communication. A lower bitrate generally provides more stable CAN communication, especially during initial testing and debugging, so I chose 125 kbit/s rather than 500 kbit/s.
Physical-layer setup, including the transceivers and termination resistors, is essential for stable operation.
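To double-check the configured bitrate and the controller state (error counters, ERROR-ACTIVE vs. BUS-OFF), iproute2 can print the CAN-specific link details:

```shell
# show bitrate, sample point, and controller state for can0
ip -details link show can0
# include RX/TX statistics and error counters
ip -details -statistics link show can0
```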

  6. Next Steps

Next, I plan to study the Linux c_can driver in detail, including mailbox management, interrupt handling, and the transmit/receive paths.
I will also study the RTEMS CAN stack and think about how to add the driver for the BBB.

Please let me know if there are any suggestions or additional tests I should perform.