Atollic and Quadros Systems Partner to Offer RTOS-Aware Debugging

The Atollic® TrueSTUDIO® C/C++ embedded development suite for ARM® microcontrollers now offers debug visibility for the RTXC™ Quadros™ real-time operating system. Thirteen dockable windows provide deep insight into the status of the RTOS during a debug session. This feature is included in Atollic TrueSTUDIO v4.1, which was released last week.

Atollic® TrueSTUDIO® is a leading C/C++ development tool for ARM® developers, reducing time to market and increasing efficiency in embedded systems projects. Atollic TrueSTUDIO is based on one of the most widely used compilers in the world, providing proven and reliable code generation, compact code, and high performance for ARM7™, ARM9™ and ARM Cortex™ projects. Atollic TrueSTUDIO conforms to open standards, such as the ECLIPSE™ IDE framework and the GNU toolchain, significantly reducing training and porting costs across teams and projects. More information on Atollic TrueSTUDIO can be found here: http://www.atollic.com/index.php/truestudio ...

Why would anyone pay for an RTOS or other embedded software?

Maybe this is a question you have asked. Maybe it’s a question you live by. After all, why pay for software when you can download something similar at no cost?

You’ve heard it before, but you do get what you pay for. That doesn’t mean that a free RTOS is worthless. What it does mean is that you are on your own. Maybe you’re the hero type: a talented embedded engineer who can code yourself out of any situation. Maybe you trust the “community” to help you out if you get into trouble. Maybe you are a newbie and a bit naive about the complexity of embedded development.

And it’s not really free. It may be free to you (to use), but somebody paid for it. Maybe it was a microcontroller supplier trying to lure you into using their platform. User beware: you could be locking yourself into a software platform with limited functionality and support and no upgrade path. Or maybe the license locks you in to a particular processor supplier. What if you want to change?

What about middleware? So you downloaded a free RTOS, but then you realize you need USB or TCP/IP or a file system. What do you do? License a bunch of software pieces from different suppliers and then piece them together?

At Quadros Systems you are purchasing value backed by years of embedded development expertise, from a company that will stand by you and get you through those days when you are stuck on a difficult problem and behind schedule. Developing reliable embedded systems is not easy. The smallest integration detail or incorrect parameter can take weeks to debug. Getting the right advice at the beginning of your project can be of critical importance. At Quadros Systems we take the time to understand your project and the assumptions and requirements behind it, and then we match our software to what you need. If we don’t have it, we won’t try to force-fit a substandard solution to make a sale. Want to work with an RTOS company that cares about results?
Contact us at +1 832-351-2830, use our information request form to tell us about your project, or send an email to info@quadros.com. Get more information about the Quadros Difference here. ...

RTOS Explained: Understanding Interprocess Communications

A key feature of any operating system is the ability to pass data between processes (tasks, threads, and interrupt service routines). The best RTOSes give the application developer as much flexibility as possible in how to do this: a single messaging option may be good for one situation but not for another. The RTXC Quadros RTOS provides three different object classes for passing data. Each class has unique properties that support interprocess communication according to the needs of the application. All three allow both synchronous and asynchronous communication between execution entities through the use of kernel services, and all three support a design that allows multiple producers and consumers for maximum application flexibility.

The Queue class allows user-defined, fixed-size data to be passed in First-In-First-Out (FIFO) order between tasks, preserving its chronological nature. Queues are circular. The kernel copies data from a source buffer into the queue (enqueue) and from the queue into a destination buffer (dequeue) in three variants: simple enqueue/dequeue; enqueue/dequeue waiting on full/empty conditions; and enqueue/dequeue with limited-duration waiting on full/empty conditions. The RTXC kernel maintains the status and ensures proper operation on EMPTY and FULL conditions. Under normal conditions the services associated with the Queue class provide the necessary synchronization. However, for certain transitional conditions when special synchronization is required, queue semaphores are also supported. These queue semaphores are used with RTXC services that operate on a set of semaphores associated with multiple queues or other types of events, eliminating the need to poll for available data.

The Mailbox/Message class allows variable-length data to be passed from a sending task to a receiving task, with or without a priority.
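The Queue class behavior described above (fixed-size entries, circular storage, copy-in on enqueue and copy-out on dequeue) can be sketched with a minimal ring buffer. This is an illustration of the concept only; the names, sizes, and return conventions here are assumptions, not the RTXC API:

```c
#include <stdbool.h>
#include <string.h>

#define Q_DEPTH 8        /* number of entries (illustrative) */
#define Q_WIDTH 16       /* fixed size of each entry, in bytes */

typedef struct {
    unsigned char slot[Q_DEPTH][Q_WIDTH]; /* circular storage */
    int head;   /* next slot to dequeue */
    int tail;   /* next slot to enqueue */
    int count;  /* current number of entries */
} queue_t;

/* Simple enqueue: copy the caller's buffer into the queue; fail if FULL. */
bool q_enqueue(queue_t *q, const void *src)
{
    if (q->count == Q_DEPTH)
        return false;                      /* FULL condition */
    memcpy(q->slot[q->tail], src, Q_WIDTH);
    q->tail = (q->tail + 1) % Q_DEPTH;     /* wrap: queues are circular */
    q->count++;
    return true;
}

/* Simple dequeue: copy the oldest entry into the caller's buffer; fail if EMPTY. */
bool q_dequeue(queue_t *q, void *dst)
{
    if (q->count == 0)
        return false;                      /* EMPTY condition */
    memcpy(dst, q->slot[q->head], Q_WIDTH);
    q->head = (q->head + 1) % Q_DEPTH;
    q->count--;
    return true;
}
```

In a real kernel these operations would run inside a critical region, and the waiting variants would block the caller on the FULL or EMPTY condition (optionally with a time limit) rather than returning false.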
Mailboxes are globally available to all tasks and are exchange points for messages sent by a sending task to a receiving task. The sender and the receiver must agree on the mailbox they will use for passing messages between them. Messages are prepared by the sending task and consist of a message envelope and a message body. As a matter of system safety, the message envelope is allocated by the sender but maintained by the kernel. The kernel has no interest in the message body other than to see to its delivery at the designated mailbox; only the sender and the receiver tasks know the form and content of the message. Because message sizes can vary, message services do not copy the data into a mailbox. Instead, they manipulate pointers to the message. The result is a clean, efficient means for passing data of any size.

The developer can choose to pass data in a FIFO manner or in one of two priorities, Urgent or Normal. The receiver task gets Urgent priority messages before those of Normal priority. A task can send messages synchronously or asynchronously. For synchronous messages, the sending task sends the message and waits for the receiver to acknowledge its receipt before proceeding. (NOTE: it is possible to associate an alarm with the receipt of the acknowledgement so that the duration of any waiting condition can be limited.) For asynchronous message transfers, the sending task sends the message but continues without waiting for an acknowledgement; the task can choose to wait at a later time for one. Normal send/receive operations have the same three variants as Queues: simple send/receive; send/receive waiting for successful completion; and send/receive with limited-duration waiting for success. Like Queues, Mailboxes have related semaphores intended for use with transitional events to eliminate the necessity of polling for available data.
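The pointer-based, two-priority delivery described above can be sketched as a pair of FIFO lists per mailbox, one for Urgent and one for Normal messages. This is a simplified illustration under assumed names; it is not the RTXC envelope layout or service set:

```c
#include <stddef.h>

typedef struct envelope {
    struct envelope *next;  /* link maintained by the "kernel" */
    void *body;             /* sender-owned body; never copied or inspected */
} envelope_t;

typedef struct {
    envelope_t *urgent_head, *urgent_tail; /* Urgent messages, FIFO */
    envelope_t *normal_head, *normal_tail; /* Normal messages, FIFO */
} mailbox_t;

static void fifo_append(envelope_t **head, envelope_t **tail, envelope_t *e)
{
    e->next = NULL;
    if (*tail) (*tail)->next = e; else *head = e;
    *tail = e;
}

/* Send: only the envelope pointer moves; the body is never copied. */
void mb_send(mailbox_t *mb, envelope_t *e, int urgent)
{
    if (urgent) fifo_append(&mb->urgent_head, &mb->urgent_tail, e);
    else        fifo_append(&mb->normal_head, &mb->normal_tail, e);
}

/* Receive: Urgent messages are delivered before Normal ones. */
envelope_t *mb_receive(mailbox_t *mb)
{
    envelope_t **head = mb->urgent_head ? &mb->urgent_head : &mb->normal_head;
    envelope_t **tail = mb->urgent_head ? &mb->urgent_tail : &mb->normal_tail;
    envelope_t *e = *head;
    if (e) {
        *head = e->next;
        if (!*head) *tail = NULL;   /* list now empty */
    }
    return e;                       /* NULL if no message is waiting */
}
```

Because only pointers are queued, sending a one-byte message costs the same as sending a kilobyte one, which is the efficiency argument made above.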
The Pipe class allows data to be passed from a producer task, thread or Interrupt Service Routine (ISR) to a consumer task, thread or ISR. Pipes are data passing mechanisms in the RTXC Quadros RTOS that permit rapid, unidirectional transfers from one or more producers to one or more consumers. Pipes are ideally suited to communications applications where data is acquired and buffered at a high frequency and then must be processed rapidly while the next block is being acquired.

Functionally, pipes are buffers of data that are filled by the producer and processed by the consumer. The RTXC Quadros pipe services perform the necessary functions of managing the buffers in the pipe, while the actual reading and writing of the data in the pipe buffers is the responsibility of the consumer and producer, respectively. In operation, the producer allocates a free buffer from the pipe, fills it with data, and puts it back into the pipe. The consumer gets the full buffer, processes the data in it, and then frees the empty buffer back to the pipe where it can be re-used. Normally, pipe buffers are allocated sequentially and processed in the same order. However, RTXC also permits the producer to put a full buffer at the front of the pipe instead of at the tail, causing the consumer to process it next. Pipes can also be associated with thread gates (in the RTXC/ss configuration) so that certain pipe operations can result in operations on associated thread gates. Such a capability permits close synchronization between pipe producers and consumers and minimizes the amount of RAM required for pipe buffers.

Summary

The three flexible interprocess communication object classes in the RTXC Quadros RTOS eliminate the necessity of trying to force-fit one method of moving data into all the situations a design might require.
The developer can choose the class that best fits the application’s needs and use the appropriate kernel services to implement efficient data transfers, optimizing both time and memory. ...
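The allocate/fill/put and get/process/free cycle of the Pipe class described above can be sketched with a free list of empty buffers and a FIFO of full ones, including the "put at the front" expedited case. Names and sizes are illustrative assumptions, not the RTXC pipe services:

```c
#include <stddef.h>

#define NBUF    4
#define BUFSIZE 64

typedef struct pbuf {
    struct pbuf *next;
    unsigned char data[BUFSIZE];
} pbuf_t;

typedef struct {
    pbuf_t *free_list;             /* empty buffers, LIFO */
    pbuf_t *full_head, *full_tail; /* filled buffers, FIFO */
    pbuf_t pool[NBUF];
} pipe_t;

void pipe_init(pipe_t *p)
{
    p->free_list = NULL;
    p->full_head = p->full_tail = NULL;
    for (int i = 0; i < NBUF; i++) {       /* build the free list */
        p->pool[i].next = p->free_list;
        p->free_list = &p->pool[i];
    }
}

/* Producer: allocate an empty buffer to fill (NULL if none free). */
pbuf_t *pipe_alloc(pipe_t *p)
{
    pbuf_t *b = p->free_list;
    if (b) p->free_list = b->next;
    return b;
}

/* Producer: put a filled buffer at the tail (normal) or the front (expedited). */
void pipe_put(pipe_t *p, pbuf_t *b, int at_front)
{
    if (at_front) {
        b->next = p->full_head;
        p->full_head = b;
        if (!p->full_tail) p->full_tail = b;
    } else {
        b->next = NULL;
        if (p->full_tail) p->full_tail->next = b; else p->full_head = b;
        p->full_tail = b;
    }
}

/* Consumer: get the next full buffer to process (NULL if none). */
pbuf_t *pipe_get(pipe_t *p)
{
    pbuf_t *b = p->full_head;
    if (b) {
        p->full_head = b->next;
        if (!p->full_head) p->full_tail = NULL;
    }
    return b;
}

/* Consumer: return a processed buffer to the free list for reuse. */
void pipe_free(pipe_t *p, pbuf_t *b)
{
    b->next = p->free_list;
    p->free_list = b;
}
```

Note that only pointers circulate between producer and consumer; the data is written and read in place, which is what makes pipes suitable for high-rate acquisition.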

RTXC Quadros RTOS supports CEVA-X DSPs

We have been working closely with CEVA, Inc. to support their versatile CEVA-X family of low power, high performance DSP cores using the CEVA-Toolbox™ suite of development tools. CEVA holds 90% of the market for DSP cores and boasts adoption by many leading SoC developers. Today we announced our support for CEVA-X1622, CEVA-X1641 and CEVA-X1643 DSP cores. Read more about RTXC/CEVA integration here. ...

RTOS Explained: Understanding Event Flags

To fully understand event flags it will be helpful to read the previous blog entry on semaphores, since both semaphores and event flags are techniques used by RTOSes for synchronizing tasks. Both must have the inherent capability to capture and retain information about an event’s occurrence, since the system may be otherwise occupied when the event happens.

Event flags are bits, and the usual way of handling them is to group them into larger data elements. Different efficiencies can be obtained with different sizes, so it is not uncommon to see one RTOS that uses an event flag grouping of a byte and others that use a short or a word. Whatever the grouping, it is usually called an event group. The application developer assigns the association of flags in the group to their related events. At initialization, the common procedure is to set all flags in an event group to a value indicating that the associated events have not occurred. From then on, the content of the event group at any given moment is a joint effort between the tasks that use it and the operating system. The system may set a flag to indicate an event’s occurrence, but it is usually the responsibility of the task to cause the flag to be reset.

This shared responsibility exposes the vulnerability of event flags: if the task doesn’t reset the flag before the next event arrives, an event can be lost, which can have varying degrees of consequence. While there is that glaring vulnerability, event flags still have some very attractive features. For one, it is very simple to perform logical testing of the flags. A task can test on a certain configuration of flags in an event group and wait if that condition is not met. Testing on the logical OR condition of a set of event flags allows a task to wait until any one of the events occurs and to identify which one simply and easily.
In the same way, a task can wait for a logical AND condition in which all of the events in a set must occur before the waiting task can proceed. Event flags can be designed so that they are reset upon a trigger, or left intact so that the testing task can determine which one of them happened. The latter is most likely to be the case when using a set of event flags in a logical OR manner. When the testing task has determined which event triggered the task’s release, it must issue a request to reset the flags in order to prepare for the next occurrences. When the event flags are used in a logical AND condition, the task does not get triggered until all of the events in the set have occurred, leaving no question about which events happened. In this condition, resetting of the event flags can be managed by the system to reset on triggering, which is much less likely to run into the race condition one can easily encounter if the task resets the event flags in an OR condition.

So, within the boundaries of response time for the various events and their frequency of occurrence, event flags can be a very useful object for managing events and synchronizing with them. While event flags are easy to use, one downside is that it can be difficult to process an event deterministically. For system designs where the event groups are available to all tasks, it is possible (and quite likely) that a single event group may hold event flags that are tested by multiple tasks. Such usage demands that each task have a unique testing trigger set up for that event group. When any one of the events in the group occurs, the set of triggering conditions must be examined to see if any of the triggers have occurred. This may be a bounded operation, dependent on the number of triggering conditions in action, but it is not deterministic.

Event flag objects can be included in an RTOS with only a minor increase in code footprint.
For that reason alone, they are worthwhile to include provided that the vulnerability to event loss is not a concern. ...
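The OR and AND tests described above reduce to simple bitwise operations on the event group, which is why event flags cost so little code. A minimal sketch, assuming a 16-bit group and illustrative function names rather than any specific RTOS API:

```c
#include <stdint.h>
#include <stdbool.h>

typedef uint16_t event_group_t;   /* one bit per event */

/* The system (often an ISR) marks an event as having occurred. */
void ev_set(event_group_t *group, uint16_t flags)   { *group |= flags; }

/* The task resets flags it has consumed; forgetting this is how events get lost. */
void ev_reset(event_group_t *group, uint16_t flags) { *group &= (uint16_t)~flags; }

/* OR test: true when ANY of the requested events has occurred. */
bool ev_test_any(event_group_t group, uint16_t flags) { return (group & flags) != 0; }

/* AND test: true only when ALL of the requested events have occurred. */
bool ev_test_all(event_group_t group, uint16_t flags) { return (group & flags) == flags; }
```

A real kernel would additionally let a task block until one of these tests becomes true and, for the AND case, could clear the flags atomically when the waiter is released.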

RTOS Explained: Understanding Priority Inversion

A fundamental maxim in a real-time system is that the highest priority task that is ready to run must be given control of the processor. If this does not happen, deadlines can be missed and the system can become unstable, or worse. Priority inversion simply means that the priority you established for a task in your system is turned upside down by the unintended result of how priorities are managed and how system resources are assigned. It is an artifact of the resource sharing mechanisms that are fundamental to RTOS operation. Mutex (and/or Semaphore) objects are used to ensure that the various tasks can have exclusive access to system resources when they need them.

Consider Task C, which is writing to a block of RAM. To protect against the memory corruption that could occur if another, higher priority task (Task A) preempted Task C and wrote to the same block, Task C uses a Mutex to lock access to the RAM until the write is completed. However, Task A interrupts Task C before the operation can be completed, and Task A needs to access that same RAM to complete its work. The result is a priority inversion: Task C must finish its work so that Task A can get access, but Task C cannot proceed because the higher priority Task A has preempted it. Embedded gridlock! This scenario can be termed “bounded inversion” because Task A’s wait is bounded by the length of Task C’s critical region.

A variation of this is termed “unbounded inversion.” Task C has locked the RAM with a Mutex and is then preempted by Task B, which is attempting to perform an unrelated but higher priority operation. Highest priority Task A then preempts Task B but relinquishes control (blocks) since it cannot access the resource because of the Mutex claimed by Task C. Task B then resumes operation and maintains control of the processor for as long as it needs it. Task A (the highest priority task) is unable to run during this unbounded period.
The simple solution would be to let Task C finish what it was doing, uninterrupted, and then release the Mutex on the RAM so that Task A can do what it needs to do. But the operating system objects are doing exactly what they were programmed to do, and without special logic to deal with these situations, the scheduler must honor the relative priorities of Tasks A, B and C and the inherent rules of the operating system.

So how does the developer avoid these situations, or, once in them, how does the system get out? A great deal of study has gone into this subject and space does not allow a detailed review of the research. Instead, let’s consider the two most common ways that RTOSes deal with priority inversion and how each approach affects the conditions of bounded or unbounded inversion. Both of these techniques are valid and both have their pros and cons.

Priority Inheritance Protocol – Change the priority of Task C to the same as Task A. Task A still has to wait for Task C to complete, but the assumption is that at the higher priority, Task C gets cycles that are not preempted and can thus complete its critical region more quickly in real time. When Task C completes its critical region, it returns to its original priority, and ownership of the shared resource’s Mutex is then assigned to the waiting Task A.

Priority Ceiling Protocol – This method is also called Instant Inheritance Protocol. If there is a set of n tasks that use a shared resource, a Mutex is assigned to the shared resource and given a priority that is at least as high as, if not higher than, the highest priority of the n tasks in the set. When any task attempts to acquire ownership of the shared resource’s Mutex, ownership is granted and the task’s priority is instantly changed to the priority of the Mutex.
This action prevents any possibility of preemption by another member of the set of the shared resource’s users, because all other tasks in the set are, by definition, at a lower priority than that of the current owner. Thus, there can be no tasks waiting to get ownership of the Mutex, because they cannot get scheduled. This method effectively prevents the possibility of a priority inversion. Notice that it also deals with the unbounded inversion issue, because no other task (such as Task B) with a priority between that of the lowest priority task in the set and the highest priority member of the set can get scheduled. When the owner task completes its critical region, the Mutex is released and the task is returned to its original priority.

The Priority Ceiling Protocol avoids the potential problem of dealing with unbounded inversions, but since all tasks that use the same resource are given the same high priority while they own the Mutex, even if they are not essential tasks, it may starve other, more important tasks. The RTXC Quadros RTOS uses the Priority Inheritance Protocol. It addresses bounded inversions and avoids the potential problem of the Priority Ceiling Protocol, which can starve tasks that are not part of the set. ...
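The inheritance step in the Priority Inheritance Protocol can be sketched with a simplified mutex that boosts the owner’s priority on contention. This is an illustration only: it omits the scheduler and wait queue entirely, and it assumes higher numbers mean higher priority; none of the names are the RTXC API:

```c
#include <stddef.h>

typedef struct {
    int priority;       /* current (possibly boosted) priority */
    int base_priority;  /* original priority to restore on release */
} task_t;

typedef struct {
    task_t *owner;      /* NULL when the mutex is free */
} pi_mutex_t;

/* Lock attempt: on contention, the owner inherits the waiter's priority
   so a middle-priority task (Task B) can no longer preempt it. */
int pi_mutex_lock(pi_mutex_t *m, task_t *caller)
{
    if (m->owner == NULL) {
        m->owner = caller;
        return 1;                               /* acquired */
    }
    if (caller->priority > m->owner->priority)
        m->owner->priority = caller->priority;  /* priority inheritance */
    return 0;                                   /* caller must wait */
}

/* Unlock: the owner drops back to its base priority; in a real kernel
   ownership would then pass to the highest priority waiter. */
void pi_mutex_unlock(pi_mutex_t *m)
{
    m->owner->priority = m->owner->base_priority;
    m->owner = NULL;
}
```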

RTOS Explained: Understanding Critical Regions

Critical Regions and the Effect on RTOS Responsiveness

A key design concern for any RTOS is its responsiveness to interrupts. Because an interrupt can occur at any time, the RTOS must be prepared to recognize the interrupt as quickly as possible and then accommodate the interrupt service routine (ISR) that handles it. The issue becomes one of managing the internal data structures of the kernel in such a way that, regardless of what is executing at the time of the interrupt, none of those data structures get corrupted.

The time-honored way of preventing data corruption in an RTOS is to make use of one or more critical regions in places where an interruption of processing flow within a kernel service, and possible reentrancy, might lead to damage. The critical region can assume several forms, but it is primarily a section of code that executes with interrupts disabled. The critical region starts by disabling interrupts and ends by enabling them. In between, the code can execute without fear of interruption and, hence, corruption. There are various ways to disable and enable interrupts, especially on processors that have multiple interrupt levels, making critical regions a bit more complex, but for now let’s keep it simple for the sake of clarity.

Because critical regions disable interrupts, system responsiveness is greatly affected by the duration of the critical region (if any) that is in effect at the time an interrupt occurs. Simply put, the shorter the critical region, the better the system’s responsiveness. If a task is interrupted in the middle of a kernel service it has invoked, there is a good chance that the kernel’s data structures are in a state of flux at the time of the interrupt. Without the kernel executing the code that keeps the data structures pristine, the interrupt and its ISR could invoke a kernel service that totally corrupts a data structure, leading to less than desirable results for both the task and the ISR.
Obviously, such situations must be avoided. Thus, when designing an RTOS, one has to make a fundamental decision as to how kernel services are to be treated at the interrupt level. There are basically two choices, and both are legitimate:

1. Allow ISRs to make kernel service requests just like a task, with the exception that an ISR cannot block (i.e., wait for a service to reach completion, such as waiting to get data from an empty queue).
2. Allow ISRs to issue kernel service requests only from a limited set of services that restricts what the ISR can do or change in the system.

The first choice naturally leads to a reentrant kernel and great flexibility, because tasks and ISRs are treated the same, with that one exception about waiting. That flexibility comes at the price of designing kernel services that require lengthy critical regions in order to protect the underlying data structures in the kernel. An ISR must not be allowed to corrupt an internal data structure in use by a task at the time of the interrupt. Thus, any and all parts of the kernel service code that would be vulnerable to the ISR must be protected by one or more critical regions. Hence, the critical regions may be rather large, leading to reduced interrupt responsiveness. So there is a trade-off: great coding flexibility for reduced responsiveness.

The second design choice limits the capability of the kernel services that may be invoked from an ISR. By making such a restriction, the kernel designer can precisely determine how a task’s use of a service and its underlying data structures would be vulnerable to an ISR. There are still critical regions to consider, but the design generally results in fewer and smaller critical regions, making this choice more responsive to interrupts. The trade-off for this concept is: reduced flexibility for improved responsiveness.
While both of these choices are legitimate, the RTXC Quadros RTOS uses the second choice in the belief that responsiveness is preferable to ISR coding flexibility, provided that there is a reasonable set of kernel services available to the ISR designer. And, of course, RTXC Quadros RTOS does provide such a set of kernel services using the same form as services available to tasks but with name space separation. Task services have a prefix of KS_ while the services available to ISRs have a prefix of IS_. Given these two choices and how they can affect an RTOS design, it should be pointed out that it is also possible to create an RTOS design that reduces the number of critical regions in the entire kernel to only a few, perhaps two or three. The resulting design, while atypical, can mitigate the undesirable elements of the two choices just described yielding an RTOS that is highly responsive to interrupts and has high ISR coding flexibility through a common set of kernel services for tasks and ISRs. ...
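The disable/enable discipline described above is commonly written as a save-and-restore pair wrapped around the shortest possible span of code. The following is a generic, host-side sketch of that pattern; on a real target the two helper functions would be CPU intrinsics (a status-register read plus an interrupt-disable instruction), and the names here are illustrative assumptions:

```c
/* Host-side model of the classic save/disable ... restore pattern. */
static int interrupts_enabled = 1;   /* models the CPU interrupt-enable bit */

static int irq_save_and_disable(void)
{
    int was = interrupts_enabled;
    interrupts_enabled = 0;          /* enter the critical region */
    return was;                      /* remember the previous state */
}

static void irq_restore(int was)
{
    interrupts_enabled = was;        /* leave the critical region */
}

/* Keep the protected span as short as possible: interrupt latency is
   bounded by the longest critical region anywhere in the system. */
static volatile int shared_counter;

void increment_shared(void)
{
    int state = irq_save_and_disable();
    shared_counter++;                /* the only code that must be atomic */
    irq_restore(state);              /* restore, don't blindly enable */
}
```

Restoring the saved state, rather than unconditionally re-enabling interrupts, is what makes the pattern safe to nest: a critical region entered from within another one leaves interrupts disabled until the outermost region exits.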