Critical Regions and the Effect on RTOS Responsiveness

A key design concern for any RTOS is its responsiveness to interrupts. Because an interrupt can occur at any time, the RTOS must be prepared to recognize the interrupt as quickly as possible and then accommodate the interrupt service routine (ISR) that handles it. The issue becomes one of managing the internal data structures of the kernel in such a way that, regardless of what is executing at the time of the interrupt, none of those data structures gets corrupted.

The time-honored way of preventing data corruption in an RTOS is to use one or more critical regions in places where an interruption of processing flow within a kernel service, and possible reentrancy, might lead to damage. A critical region can take several forms, but it is primarily a section of code that executes with interrupts disabled. The critical region starts by disabling interrupts and ends by enabling them. In between, the code can execute without fear of interruption and, hence, corruption. There are various ways to disable and enable interrupts, especially on processors that have multiple interrupt levels, making critical regions a bit more complex. But for now, let's keep it simple for the sake of clarity.

Because critical regions disable interrupts, system responsiveness is greatly affected by the duration of the critical region (if any) that is in effect at the time an interrupt occurs. Simply put, the shorter the critical region, the better the system's responsiveness.

If a task is interrupted in the middle of a kernel service it has invoked, there is a good chance that the kernel's data structures are in a state of flux at the time of the interrupt. Without the kernel executing the code that keeps the data structure pristine, the interrupt and its ISR could invoke a kernel service that totally corrupts the data structure, leading to less than desirable results for both the task and the ISR. Obviously, such situations must be avoided. Thus, when designing an RTOS, one has to make a fundamental decision about how kernel services are to be treated at the interrupt level. There are basically two choices, and both are legitimate:

1. Allow ISRs to make kernel service requests just like a task, with the exception that an ISR cannot block (i.e., wait for a service to reach completion, such as waiting to get data from an empty queue).
2. Allow ISRs to issue kernel service requests only from a limited set of services that restricts what the ISR can do or change in the system.

The first choice naturally leads to a reentrant kernel and great flexibility because tasks and ISRs are treated the same, with that one exception about waiting. That flexibility comes at the price of designing kernel services that require lengthy critical regions in order to protect the underlying data structures in the kernel. An ISR must not be allowed to corrupt an internal data structure in use by a task at the time of the interrupt. Thus, any and all parts of the kernel service code that would be vulnerable to the ISR must be protected by one or more critical regions. Hence, the critical regions may be rather large, leading to reduced interrupt responsiveness. So there is a trade-off: greater coding flexibility for reduced responsiveness.

The second design choice limits the capability of the kernel services that may be invoked from an ISR.
By making such a restriction, the kernel designer can determine precisely how a task's use of a service and its underlying data structures would be vulnerable to an ISR. There are still critical regions to consider, but the design generally results in fewer and smaller critical regions, making this choice more responsive to interrupts. The trade-off for this approach, then, is reduced flexibility for improved responsiveness.

While both of these choices are legitimate, the RTXC Quadros RTOS uses the second choice in the belief that responsiveness is preferable to ISR coding flexibility, provided that there is a reasonable set of kernel services available to the ISR designer. And, of course, the RTXC Quadros RTOS does provide such a set of kernel services, using the same form as the services available to tasks but with name space separation: task services have a prefix of KS_, while the services available to ISRs have a prefix of IS_.

Given these two choices and how they can affect an RTOS design, it should be pointed out that it is also possible to create an RTOS design that reduces the number of critical regions in the entire kernel to only a few, perhaps two or three. The resulting design, while atypical, can mitigate the undesirable elements of the two choices just described, yielding an RTOS that is highly responsive to interrupts and has high ISR coding flexibility through a common set of kernel services for tasks and ISRs. ...
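To make the idea of a critical region concrete, here is a minimal, generic sketch in C. It is not the RTXC Quadros implementation; the interrupt disable/restore primitives and the ready-list structure are placeholders standing in for whatever the processor port actually provides.

```c
#include <stdint.h>

/* Placeholder, platform-specific primitives assumed to exist in a real port. */
extern uint32_t cpu_disable_interrupts(void);         /* returns saved interrupt state */
extern void     cpu_restore_interrupts(uint32_t ps);  /* restores saved interrupt state */

struct tcb { struct tcb *next; int priority; };
static struct tcb *ready_list;

void kernel_make_ready(struct tcb *t)
{
    uint32_t saved = cpu_disable_interrupts();  /* enter critical region */

    /* The ready list is momentarily inconsistent while it is relinked.
     * With interrupts disabled, no ISR can observe or modify it here. */
    t->next = ready_list;
    ready_list = t;

    cpu_restore_interrupts(saved);              /* exit critical region */
}
```

The longer the relinking work between the disable and restore calls, the longer the worst-case interrupt latency, which is exactly the trade-off discussed above.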
RTOS Explained: Understanding Counters and Alarms
One of the valuable features of a real-time operating system is the ability to count and then to take action at certain defined values of an associated counter. Most RTOSes count only a single independent variable, time, denominated in the unit of time each tick represents. But the RTXC Quadros RTOS provides a more flexible approach to counting because time is not the only independent variable one is likely to encounter in a real-time system. The RTXC Quadros RTOS certainly accommodates time, represented by periodic events, but it can also handle aperiodic events that represent independent variables such as angular rotation or flow.

Why is this important? Imagine having to keep track of the rotational speed of an automobile's engine, or the volume of fluid flowing through a positive displacement meter, with only time as your independent variable. All sorts of transformations would be required, each with its own built-in errors. But if there were a tick representing increments of angular displacement, measuring angular velocity or RPM on an engine becomes significantly easier. Likewise, if each tick from a PD meter represented some known volume, flow rate and volume become very straightforward and more accurate.

To allow more degrees of design freedom to system developers, the RTXC Quadros RTOS uses a hierarchy of classes to count and keep track of ticks. At the highest level in the hierarchy there is an Event Source, representing some base count frequency. One or more Counters can be associated with the parent Event Source. The user defines the units of the Counter and the number of Event Source ticks that represent one Counter tick. In this way, it is easy to divide a higher frequency down into a lower one. Using time as an example, an Event Source might provide a fixed rate of 1000 ticks per second (1 kHz), which would allow a Counter representing 1 millisecond if each Event Source tick were treated as a Counter tick. And if another Counter associated with that same Event Source used 1000 Event Source ticks as one Counter tick, its units would be denominated in seconds (1 Hz).

Thus, if a Counter is incremented at a regular rate, it can represent time. Normally the units are milliseconds, but others are possible. And the Counter doesn't have to start at zero. A task can set up a Counter to begin counting from a specified base value, making a time base such as Universal Time very simple to achieve.

Many applications need to know the number of ticks that occurred between two events. Two kernel calls are required: the first to establish the starting value and the second to get the number of ticks (from whatever counter) that have elapsed since the initial event, returning the difference in number of ticks. Some RTOSes must continually poll their counters to check the value.

The RTXC Quadros RTOS uses an Alarm class for generalized counting management as the third member of the counting hierarchy. This allows any Counter to be the parent of an Alarm, with the Alarm's units denominated in the units of the parent Counter. Thus, both periodic and aperiodic alarming is possible. If you need an alarm every 0.5 litres of volume flowing through a given meter, without regard to instantaneous flow rate, having an alarm to tell you when that occurs is a real improvement over some traditional RTOS capabilities. The concept of alarming, however, is rather simple: an Alarm is defined to occur at a particular value of an associated Counter.
When the Counter reaches the value that matches the expiration point of the Alarm, an action can take place, such as resuming a task that was waiting for the Alarm to expire. There are two basic types of Alarm objects in the RTXC Quadros RTOS:

One-shot Alarm: An Alarm that is defined for one and only one point of expiry, upon which the Alarm becomes inactive and can only be re-armed by explicit action via a kernel service before it can be used again.

Cyclic Alarm: An Alarm having an initial period and a recycle period. The first point of expiration is at the end of the initial period, after which the Alarm is automatically reset to the recycle period for each expiration thereafter until the Alarm is cancelled.

At the expiration of an Alarm, whether one-shot or cyclic, it is possible to synchronize with that event by resuming a task waiting for it, signaling a semaphore, etc. The expiration of an Alarm can also cause an RTXC Thread to be scheduled, directly or indirectly. Through these synchronization mechanisms, it is not necessary to spend non-productive processor cycles polling the Alarm to see if it has expired. As a result, a deterministic response to an Alarm expiration is possible.

No special CPU resources are required for Alarms or their associated Counters. In fact, it is possible to define an internal event as the system tick event. The counting hierarchy does not distinguish between an event tick from an internal source and one from an external source. Obviously, most applications make use of some sort of periodic interrupt for the system time tick. It is up to the user to define the source of such a tick. ...
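As a rough illustration of the Event Source, Counter, and Alarm hierarchy described above, here is a minimal sketch in C. All type and field names are invented for this example; they are not the RTXC Quadros API, and a real kernel would do this work inside its own tick-processing path.

```c
#include <stdint.h>

/* A Counter divides an Event Source down: ticks_per_count Event Source
 * ticks produce one Counter tick in the Counter's own units. */
struct counter {
    uint32_t ticks_per_count;   /* Event Source ticks per Counter tick        */
    uint32_t accumulated;       /* Event Source ticks seen since last count   */
    uint32_t value;             /* current Counter value (may start at a base)*/
};

/* An Alarm expires at a particular value of its parent Counter.
 * recycle_period == 0 models a one-shot Alarm; nonzero models a cyclic one. */
struct alarm {
    uint32_t expire_at;
    uint32_t recycle_period;
    int      active;
    void   (*on_expire)(void);  /* e.g., signal a semaphore or resume a task */
};

/* Called once per Event Source tick: once per millisecond for a 1 kHz time
 * source, or once per pulse from a flow meter. */
void event_source_tick(struct counter *c)
{
    if (++c->accumulated >= c->ticks_per_count) {
        c->accumulated = 0;
        c->value++;                                   /* one Counter tick */
    }
}

void counter_check_alarm(const struct counter *c, struct alarm *a)
{
    if (a->active && c->value >= a->expire_at) {
        a->on_expire();
        if (a->recycle_period != 0)
            a->expire_at = c->value + a->recycle_period; /* cyclic: re-arm  */
        else
            a->active = 0;                               /* one-shot: done  */
    }
}
```

With a 1 kHz Event Source, a Counter with ticks_per_count of 1 counts milliseconds and one with 1000 counts seconds, matching the divide-down example above; the same structure works unchanged when the Event Source is a flow-meter pulse rather than a timer.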
RTOS Explained: Understanding Kernel Services
Kernel services (sometimes referred to as system calls or functions) are used to cause the kernel to operate on the data of the kernel objects to achieve the desired behavior of the application code. Knowledge of how these data structures work is fundamental to building real-time application systems around the RTXC Quadros RTOS product.

To function consistently and predictably, a real-time operating system must be based on a set of rules. These rules permit software processes to operate and gain access to system resources in an orderly manner. Kernel services embody and enforce these rules to ensure the necessary order in the application processes that use them.

Time is the most difficult and unforgiving resource managed by the kernel. Kernel services must be designed and coded to require minimal execution time yet remain predictable. The execution speed of the kernel services determines the responsiveness of the system to changes in the physical process.

Here are just a few examples of RTXC Quadros kernel services:

KS_TestSemaT: Test a semaphore and wait for a specified number of ticks on a specified counter if the semaphore is not DONE.
KS_YieldTask: Yield to the next ready task of the same priority.
KS_OpenQueue: Allocate and name a dynamic queue.
KS_ReceiveMsgT: Receive a message from a mailbox. If the mailbox is empty, wait a specified number of ticks for a message.
KS_AllocBlkW: Allocate a block of memory. If the partition is empty, wait for an available block.
KS_UseMutx: Look up a dynamic mutex by name and mark it for use.
XX_AllocSysRAM: Allocate a block of system RAM.

A rich set of kernel services means less code for you to write. In the case of the RTXC Quadros RTOS you have access to more than 330 services. And, like the 88 keys on a piano that provide a seemingly infinite variety of music, you can use these kernel services in an almost infinite number of combinations to meet the specific needs of your embedded applications. In the RTXC Quadros RTOS User Manual each kernel service is described in detail with example code. ...
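To give a feel for how services like these combine in application code, here is a hypothetical sketch of a consumer task. The prototypes, handle types, and return-value conventions shown are assumptions made for illustration only; the RTXC Quadros RTOS User Manual is the authoritative reference for the real signatures.

```c
/* Assumed prototypes for illustration; not taken from the RTXC headers. */
extern void *KS_ReceiveMsgT(int mailbox, long ticks, int counter); /* assumed */
extern void  KS_YieldTask(void);                                   /* assumed */

#define SENSOR_MBOX 1   /* hypothetical mailbox handle      */
#define MS_COUNTER  1   /* hypothetical 1 ms counter handle */

void consumer_task(void)
{
    for (;;) {
        /* Wait up to 100 ms for a message; in this sketch a NULL return is
         * assumed to indicate that the wait timed out. */
        void *msg = KS_ReceiveMsgT(SENSOR_MBOX, 100, MS_COUNTER);
        if (msg != NULL) {
            /* process the message ... */
        } else {
            KS_YieldTask();   /* nothing arrived; let a peer task run */
        }
    }
}
```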
RTOS Explained: Preemptive Scheduling
The RTXC Quadros multistack RTOS supports three scheduling methods that may be used in whatever combination the developer requires:

Preemptive
Round Robin
Time-Sliced

Today we will discuss the topic of preemptive scheduling.

To achieve efficient CPU utilization, a multitasking RTOS uses an orderly transfer of control from one code entity to another. To accomplish this, the RTOS must monitor system resources and the execution state of each code entity, and it must ensure that each entity receives control of the CPU in a timely manner. The key word here is timely. A real-time system that does not perform a required operation at the correct time has failed. That failure can have consequences that range from benign to catastrophic. The response time for a request for kernel services and the execution time of those services must be fast and predictable. With such an RTOS, application program code can be designed to ensure that all needs are detected and processed.

Real-time applications usually consist of several tasks (also called processes or threads) that require control of system resources at varying times due to external or internal events. Each task must compete with all other tasks for control of system resources such as memory, execution time, or peripheral devices. The developer uses the scheduling models in the RTOS to manage this “competition” between tasks.

Program code can be compute-bound (heavily dependent on CPU resources) or I/O-bound (heavily dependent on access to external devices). Program code that is I/O-bound or compute-bound cannot be allowed to monopolize a system resource if a more important task requires the same resource. There must be a way of interrupting the operation of the lesser task and granting the resource to the more important one. One method for achieving this is to assign a priority to each task. The kernel then uses this priority to determine a task's place within the sequence of execution of other tasks.

In a prioritized system the highest priority task that is ready is given control of the processor. Tasks of lower priority must wait and, even when running, may have their execution preempted by a task of higher priority. When a task completes its operation or blocks (waits for something to happen before it can continue), it releases the CPU. The scheduler in the RTOS then looks for the next highest priority code entity that is ready to run and assigns it control of the CPU. In this preemptive scheduling model a task must be in one of four states:

Running: the task is in control of the CPU.
Ready: the task is not blocked and is ready to receive control of the CPU when the scheduling policy indicates it is the highest priority task in the system that is not blocked.
Inactive: the task is blocked and requires initialization in order to become ready.
Blocked: the task is waiting for something to happen or for a resource to become available.
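The four states and the highest-priority-ready rule can be sketched in a few lines of generic C. This is not the RTXC Quadros scheduler; the names, the array-based task table, and the convention that a lower number means higher priority are all invented for the example.

```c
/* Generic illustration of preemptive, priority-based task selection. */
enum task_state { TASK_RUNNING, TASK_READY, TASK_INACTIVE, TASK_BLOCKED };

struct task {
    enum task_state state;
    int             priority;   /* convention here: lower number = higher priority */
};

/* Return the index of the highest priority task that is ready to run,
 * or -1 if no task is ready. */
int pick_next_task(const struct task *tasks, int count)
{
    int best = -1;
    for (int i = 0; i < count; i++) {
        if (tasks[i].state == TASK_READY &&
            (best < 0 || tasks[i].priority < tasks[best].priority))
            best = i;
    }
    return best;
}
```

A real kernel runs a selection like this whenever the current task blocks, yields, or terminates, and whenever an event makes a higher priority task ready, which is what produces preemption.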
Preemptive Scheduling in Summary

Each Task has a priority relative to all other tasks.
The most critical Task is assigned the highest priority.
The highest priority Task that is ready to run gets control of the processor.
A Task runs until it yields, terminates, or blocks.
Each Task has its own memory stack.
Before a Task can run it must load its context from its memory stack (this can take many cycles).
If a Task is preempted it must save its current state/context; this context is restored when the Task is given control of the processor.

Further reading

Read our previous blog post on Run-to-Completion Threads
Download the whitepaper: 10 Questions to Ask Your RTOS Vendor
Visit the RTXC Quadros RTOS information page ...
RTOS Explained: Run-to-Completion (RTC) Threads
Introduction to RTXC Threads

The RTXC Quadros RTOS offers two different execution entities: Tasks and Threads. You may be familiar with various terms like process, task, or thread, depending on your OS experience. In RTXC Quadros, Tasks and Threads have very specific functionality. Whether you use Threads or Tasks (or both) depends on the requirements of your application and on the particular RTXC Quadros configuration you are using.

Tasks are RTXC program code that performs a defined function or set of functions. In the RTXC Quadros world each task has a priority and its own stack for saving context when preempted or rescheduled. Tasks can block or wait on events. Each task is independent of other tasks but can establish relationships with other tasks in many forms, including data structures, input, output, or other constructs.

Threads are lightweight tasks that are designed to run to completion unless preempted by an ISR or a Thread at a higher priority level. The run-to-completion model allows all Threads in an application to use a single stack. If preempted, they save their context to the common stack. The single stack means that Threads cannot block or wait on events; however, Threads can be initiated by events. A Thread has no context upon entry and must perform any required data initialization upon entry. When its operations are complete, the Thread returns to the RTXC/ss scheduler without context. Threads have an inherent priority above that of Tasks. Thus, the lowest priority Thread is of greater importance than the highest priority Task.

Thread Usage

Threads are ideal for applications with event-driven, hard real-time requirements. Because RTXC Threads fit very nicely into fixed priority scheduling methods, Thread priorities can be determined using Rate Monotonic Analysis (RMA) to ensure schedulability of the real-time work so it meets its deadlines. Being lightweight tasks, Threads are adept at handling the processing associated with a high frequency of interrupts. They can be scheduled and complete their operations more quickly and with less system overhead.

Threads are also ideal for event-driven execution using Finite State Machines (FSM) or Hierarchical State Machines (HSM). The advantage for state machine implementations is that our implementation of Threads supports the run-to-completion execution model inherent in state machine design: the system must complete the processing of an event before it can start processing the next event.

Thread Scheduling and Preemption

Depending on how you set up your system, you can operate RTXC/ss as a Cooperative Executive or as a Prioritized, Preemptive Executive.

Cooperative Scheduler: If you implement only a single priority level within RTXC/ss, then Threads will run to completion once they are launched. However, during their operation they may be interrupted. You can establish a priority for Threads within a level, but this only determines the order in which they will be scheduled. A running Thread cannot be preempted by another Thread in the same level.

Prioritized, Preemptive Scheduler: If you implement multiple priority levels within RTXC/ss, then a ready Thread in a higher priority level will preempt the Current Thread. In this case the context of the Current Thread is saved in the common stack and is resumed after all ready Threads in higher levels of priority have run.

For more information on any of these topics, request our user manuals or contact your local sales representative. ...
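As a rough illustration of the run-to-completion model described above, here is a generic C sketch: each "Thread" is a plain function that starts with no saved context, handles one event, and returns to a dispatcher. The event structure, handler names, and dispatcher are invented for this example and are not the RTXC/ss API.

```c
/* Generic run-to-completion sketch; not the RTXC/ss implementation. */
struct event { int type; int data; };

/* A run-to-completion handler: no blocking calls, no retained stack context.
 * Any state that must survive between events lives in static data. */
static void flow_meter_thread(const struct event *ev)
{
    static long total_pulses;   /* persistent state outside the shared stack */
    total_pulses += ev->data;
}

/* Minimal dispatcher: because every handler returns before the next event is
 * processed, all handlers can safely share a single stack. */
void dispatch(const struct event *ev)
{
    switch (ev->type) {
    case 0:
        flow_meter_thread(ev);
        break;
    /* other event types map to other run-to-completion handlers ... */
    }
}
```

This is also the shape that FSM and HSM implementations take: each event is fully processed by one handler invocation before the next event is dispatched.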