
Electronic Design recently published 11 myths about µC/OS as part of its "11 Myths" series. As µC/OS, a world-renowned embedded real-time operating system (RTOS), hits its 25th anniversary this year, it’s an ideal time to examine the kernel and address some of the many myths that have proliferated in the embedded market over the years.

1. The µC/OS kernels were community-developed.

Absolutely not. I wrote 100% of the code as well as a series of books describing the internals of the µC/OS kernels: µC/OS (1992), µC/OS-II (1998), and µC/OS-III (2008). The only help I got as an author was editing, but that’s typical for most authors. I did receive a lot of feedback from embedded developers, and of course evolved the code to satisfy some of those requests. But for the past couple of years, Micrium’s developers have been maintaining the code following the strict coding guidelines I established for Micrium.

In some ways, it’s hard for me to believe that the µC/OS kernel is 25 years old; perhaps mostly because I don’t want to believe it! The kernel has an interesting history, largely because it didn’t start as a commercial project, and yet 25 years on, I find myself still hard at work on µC/OS products. Now seems like a good time for a look back at the software’s origins, as many of the challenges that existed then are still experienced by developers today. The difference is that now the proven, certifiable real-time operating system (RTOS) µC/OS-II® and µC/OS-III® kernels are commercially available.

The μC/OS story started in 1989, when I joined Dynalco Controls in Fort Lauderdale, Florida, and began working on the design of a new microprocessor-based ignition control system for a large industrial reciprocating engine. I was convinced that an RTOS kernel would benefit this project. Initially, I wanted to use a kernel I had experience with, but budget requirements drove the selection of a less costly alternative. It quickly became apparent that I was paying for the seemingly cheaper RTOS with my time. I spent the next two months in contact with technical support, trying to determine why even the simplest applications would not run. It turned out I was one of the first customers for the kernel, meaning I ended up as an unintended beta tester.

Micrium's µC/OS-III kernel has a rich set of built-in instrumentation that collects real-time performance data. This data can be used to provide invaluable insight into your kernel-based application, allowing you to have a better understanding of the run-time behavior of your system. Having this information readily available can, in some cases, uncover potential real-time programming errors and allow you to optimize your application.

In Part I of this post, we examined, via µC/Probe, a number of the statistics built into µC/OS-III, including those for stack usage, CPU usage (total and per-task), context-switch counts, and signaling times for task semaphores and queues.

In this post, we'll examine the kernel's built-in ability to measure interrupt-disable and scheduler-lock time on a per-task basis. Once again, we'll use µC/Probe to display these values at run time.

In this two-part series of posts, we will explore the statistics yielded by the kernel's instrumentation, and we'll also consider a unique way of visualizing this information.

In Part 1 of this two-part series, we looked at what stack overflows are and how to determine the size of a task stack. Now we turn to detecting stack overflows, for which a number of techniques can be used. Some rely on hardware, while others are performed entirely in software. The techniques are listed from most preferable to least preferable, based on the likelihood of detecting the overflow. As we will see shortly, hardware support is preferable, since it can detect a stack overflow nearly the instant it happens, which helps avoid strange run-time behaviors and makes them faster to diagnose.

Hardware stack overflow detection mechanisms generally trigger an exception handler. The exception handler typically saves the current program counter (PC), and possibly other CPU registers, onto the current task's stack. Of course, because the exception fires precisely when the task has run out of stack space, pushing this exception frame onto the current stack overwrites whatever lies beyond the stack's base, typically other variables or another task's stack, assuming there is RAM at those addresses.