Biggest problems for embedded systems developers?

A recent thread in the Real Time Embedded Engineering Group on LinkedIn raised some interesting issues when developers were asked about their most difficult problem areas. Do these sound familiar?

- Unrealistic development schedules set by managers who don’t understand development
- Deficient documentation of the processor
- Insufficient errata: struggling with a problem for two years, contacting the manufacturer with no result, and a year later the errata appears
- Finding and resolving timing issues
- Weak or poor tools: debuggers, emulators
- Bosses expecting that if I kneel or spend enough time with the device, it may relent and start working
- Learning NEVER to use a device/processor/controller if there aren’t many “how to, why, when” questions about it on Google; the device is too good to be true, or you end up being the only one using it and fighting the problems
- Low-level debugging and overall performance of the system
- Hardware/software co-development: finding root causes when both the code and the hardware are suspect
- A paucity of available low-level embedded example code
- Hardware debugging, especially during software integration on a new design where all hardware unit tests passed but putting it all together creates a side effect
- A badly written device driver delivered with the hardware that caused havoc and a delay of more than four months
- A software interface far too complex to interpret on an embedded system

Of course, at Quadros Systems we would argue that starting with the right software platform can make a big difference. RTXC Quadros RTOS-based software is delivered fully integrated, with working drivers and interfaces and a binding to your chosen tool environment. Start with the provided sample project and you are good to go. Our Release Notes with tool caveats can reduce the learning curve up front and save valuable time when development crunch time begins. We know our customers save weeks, if not months, over developing a product using free or poorly tested software. Find out more about how Quadros Systems can save you time and aggravation.

Why would anyone pay for an RTOS or other embedded software?

Maybe this is a question you have asked. Maybe it’s a question you live by. After all, why pay for software when you can download something similar at no cost? You’ve heard it before, but you do get what you pay for. That doesn’t mean that a free RTOS is worthless. What it does mean is that you are on your own. Maybe you’re the hero type: a talented embedded engineer who can code yourself out of any situation. Maybe you trust the “community” to help you out if you get into trouble. Maybe you are a newbie and a bit naive about the complexity of embedded development.

It’s not really free. What I mean is that it may be free for you to use, but somebody paid for it. Maybe it was a microcontroller supplier trying to lure you into using their platform. User beware: you could be locking yourself into a software platform with limited functionality and support and no upgrade path. Or maybe the license locks you in to a particular processor supplier. What if you want to change?

What about middleware? So you downloaded a free RTOS, but then you realize you need USB or TCP/IP or a file system. What do you do? License a bunch of software pieces from different suppliers and then piece them together?

At Quadros Systems you are purchasing value backed by years of embedded development expertise, and a company that will stand by you to get you through those days when you are stuck on a difficult problem and behind schedule. Developing reliable embedded systems is not easy. The smallest integration detail or incorrect parameter can take weeks to debug. Getting the right advice at the beginning of your project can be of critical importance. At Quadros Systems we take the time to understand your project and the assumptions and requirements behind it. Then we try to match our software to what you need. If we don’t have it, we won’t try to force-fit a substandard solution to make a sale.

Want to work with an RTOS company that cares about results? Contact us at +1 832-351-2830, use our information request form to tell us about your project, or send an email to info@quadros.com. Get more information about the Quadros Difference here.

RTOS Explained: Understanding Interprocess Communications

A key feature of any operating system is the ability to pass data between processes (tasks, threads, and interrupt service routines). The best RTOSes give the application developer as much flexibility as possible in how to do this; a single messaging option could be good for one situation but not for another. The RTXC Quadros RTOS provides three different object classes for passing data. Each class has unique properties that support interprocess communication according to the needs of the application. All three allow both synchronous and asynchronous communication between execution entities through the use of kernel services, and all three support a design that allows multiple producers and consumers for maximum application flexibility.

The Queue class allows user-defined, fixed-size data to be passed in First-In-First-Out (FIFO) order between tasks, preserving its chronological nature. Queues are circular. The kernel copies data from a source buffer into the queue (enqueue) and from the queue into a destination buffer (dequeue) in three variants: simple enqueue/dequeue; enqueue/dequeue waiting on full/empty conditions; and enqueue/dequeue with limited-duration waiting on full/empty conditions. The RTXC kernel maintains the status and ensures proper operation on EMPTY and FULL conditions. Under normal conditions the services associated with the Queue class provide the necessary synchronization. However, for certain transitional conditions where special synchronization is required, queue semaphores are also supported. These queue semaphores are used with RTXC services that operate on a set of semaphores associated with multiple queues or other types of events in order to eliminate polling for available data. (A generic sketch of this circular FIFO behavior follows.)
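The sketch below is plain, self-contained C rather than the RTXC API. It shows only the "simple" enqueue/dequeue variant; the waiting and limited-duration-waiting variants, and the locking a real kernel performs around these operations, are omitted. The entry size and queue depth are arbitrary values chosen for illustration.

```c
#include <string.h>

/* Illustrative only: a fixed-size circular FIFO, not an RTXC service.
 * A queue_t must be zero-initialized before first use, e.g. queue_t q = {0}; */
#define ENTRY_SIZE 8
#define DEPTH      4

typedef struct {
    unsigned char buf[DEPTH][ENTRY_SIZE];
    int head;   /* next slot to dequeue from */
    int tail;   /* next slot to enqueue into */
    int count;  /* number of entries currently queued */
} queue_t;

enum { Q_OK, Q_FULL, Q_EMPTY };

/* "Simple enqueue": copy one fixed-size entry in, or report FULL. */
int q_enqueue(queue_t *q, const void *src)
{
    if (q->count == DEPTH)
        return Q_FULL;                /* caller decides whether to wait */
    memcpy(q->buf[q->tail], src, ENTRY_SIZE);
    q->tail = (q->tail + 1) % DEPTH;  /* wrap: the queue is circular */
    q->count++;
    return Q_OK;
}

/* "Simple dequeue": copy the oldest entry out, or report EMPTY. */
int q_dequeue(queue_t *q, void *dst)
{
    if (q->count == 0)
        return Q_EMPTY;
    memcpy(dst, q->buf[q->head], ENTRY_SIZE);
    q->head = (q->head + 1) % DEPTH;
    q->count--;
    return Q_OK;
}
```

Because entries are copied in and out in arrival order, the chronological nature of the data is preserved, which is the defining property of the Queue class described above.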
The Mailbox/Message class allows variable-length data to be passed from a sending task to a receiving task, with or without a priority. Mailboxes are globally available to all tasks and are exchange points for messages sent by a sending task to a receiving task. The sender and the receiver must agree on the mailbox they will use for passing messages between them. Messages are prepared by the sending task and consist of a message envelope and a message body. As a matter of system safety, the message envelope is allocated by the sender but maintained by the kernel. The kernel has no interest in the message body other than to see to its delivery at the designated mailbox; only the sender and the receiver tasks know the form and content of the message. Because message sizes can vary, message services do not copy the data into a mailbox. Instead, they manipulate pointers to the message. The result is a clean, efficient means for passing data of any size. The developer can choose to pass data in a FIFO manner or in one of two priorities, Urgent or Normal. The receiver task gets Urgent-priority messages before those of Normal priority.

A task can send messages synchronously or asynchronously. For synchronous messages, the sending task sends the message and waits for the receiver to acknowledge its receipt before proceeding. (Note: it is possible to associate an alarm with the receipt of the acknowledgement so that the duration of any waiting condition can be limited.) For asynchronous message transfers, the sending task sends the message but continues without waiting for an acknowledgement; it can choose to wait for the acknowledgement at a later time. Normal send/receive operations have the same three variants as Queues: simple send/receive; send/receive waiting for successful completion; and send/receive with limited-duration waiting for success. Like Queues, Mailboxes have related semaphores intended for use with transitional events to eliminate the necessity of polling for available data.

The Pipe class allows data to be passed from a producer task, thread, or Interrupt Service Routine (ISR) to a consumer task, thread, or ISR. Pipes are data-passing mechanisms in the RTXC Quadros RTOS that permit rapid, unidirectional transfers from one or more producers to one or more consumers. Pipes are ideally suited to communications applications where data is acquired and buffered at a high frequency and then must be processed rapidly while the next block is being acquired. Functionally, pipes are buffers of data that are filled by the producer and processed by the consumer. The RTXC Quadros pipe services manage the buffers in the pipe, while the actual reading and writing of the data in the pipe buffers is the responsibility of the consumer and producer, respectively. In operation, the producer allocates a free buffer from the pipe, fills it with data, and puts it back into the pipe. The consumer gets the full buffer, processes the data in it, and then frees the empty buffer back to the pipe where it can be reused. Normally, pipe buffers are allocated sequentially and processed in the same order. However, RTXC also permits the producer to put a full buffer at the front of the pipe instead of at the tail, causing the consumer to process it next. Pipes can also be associated with thread gates (in the RTXC/ss configuration) so that certain pipe operations can result in operations on associated thread gates. Such a capability permits close synchronization between pipe producers and consumers and minimizes the amount of RAM required for pipe buffers. (A generic sketch of this buffer cycle follows the summary.)

Summary

The three flexible interprocess communication object classes in the RTXC Quadros RTOS eliminate the need to force-fit one method of moving data into every situation a design might require. The developer can choose the one that best fits the application’s needs and have the appropriate kernel services to implement efficient data transfers for optimal use of time and memory.
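As promised above, here is a plain C sketch of the pipe buffer cycle. It is not the RTXC API: the buffer count and size are arbitrary, the free and full lists are simple arrays, and the locking and blocking behavior a real kernel would provide around these operations are omitted.

```c
#include <stddef.h>

/* Illustrative only: the pipe buffer cycle. A pipe owns a pool of equal-sized
 * buffers; the producer borrows a free buffer, fills it, and posts it as
 * "full"; the consumer takes a full buffer, processes it, and returns it. */
#define NBUF     4
#define BUF_SIZE 256

typedef struct {
    unsigned char  storage[NBUF][BUF_SIZE];
    unsigned char *free_list[NBUF]; int nfree;   /* buffers ready to fill    */
    unsigned char *full_list[NBUF]; int nfull;   /* buffers ready to process */
} pipe_t;

void pipe_init(pipe_t *p)
{
    p->nfree = p->nfull = 0;
    for (int i = 0; i < NBUF; i++)
        p->free_list[p->nfree++] = p->storage[i];
}

/* Producer side: borrow an empty buffer (NULL if none are free). */
unsigned char *pipe_alloc(pipe_t *p)
{
    return p->nfree ? p->free_list[--p->nfree] : NULL;
}

/* Producer side: hand a filled buffer to the consumer. This sketch is FIFO
 * only; RTXC also allows placing a full buffer at the front of the pipe. */
void pipe_put_full(pipe_t *p, unsigned char *buf)
{
    p->full_list[p->nfull++] = buf;
}

/* Consumer side: take the oldest full buffer (NULL if none are pending). */
unsigned char *pipe_get_full(pipe_t *p)
{
    if (p->nfull == 0)
        return NULL;
    unsigned char *buf = p->full_list[0];
    for (int i = 1; i < p->nfull; i++)   /* shift remaining entries forward */
        p->full_list[i - 1] = p->full_list[i];
    p->nfull--;
    return buf;
}

/* Consumer side: return a processed buffer to the free pool for reuse. */
void pipe_free(pipe_t *p, unsigned char *buf)
{
    p->free_list[p->nfree++] = buf;
}
```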

RTOS Explained: Understanding Priority Inversion

A fundamental maxim in a real-time system is that the highest priority task that is ready to run must be given control of the processor. If this does not happen, deadlines can be missed and the system can become unstable, or worse. Priority inversion simply means that the priority you established for a task in your system is turned upside down by the unintended result of how priorities are managed and how system resources are assigned. It is an artifact of the resource sharing that is fundamental to RTOS operation. Mutex (and/or Semaphore) objects are used to ensure that the various tasks can have exclusive access to system resources when they need them.

Consider Task C, which is writing to a block of RAM. To protect against the memory corruption that could occur if another, higher priority task (Task A) preempts Task C and writes to the same block, Task C uses a Mutex to lock access to the RAM until the write is completed. However, Task A preempts Task C before the operation can be completed, and Task A needs access to that same block to complete its work. The result is a priority inversion: Task C must finish its work before Task A can get access, but Task C cannot run because Task A has the higher priority, and Task A cannot proceed because Task C holds the Mutex. Embedded gridlock! This scenario can be termed “bounded inversion.”

A variation of this is termed “unbounded inversion.” Task C has locked the RAM with a Mutex and is then preempted by Task B, which is performing an unrelated but higher priority operation. Highest-priority Task A then preempts Task B but relinquishes control (blocks) because it cannot access the resource locked by the Mutex that Task C holds. Task B then resumes operation and keeps control of the processor for as long as it needs it. Task A, the highest priority task, is unable to run during this unbounded period.

The simple solution is to let Task C finish what it was doing, uninterrupted, and then release the Mutex so that Task A can do what it needs to do. But the operating system objects are doing exactly what they were programmed to do and, without special logic to deal with these situations, the scheduler must honor the relative priorities of Tasks A, B and C and the inherent rules of the operating system. So how does the developer avoid these situations, or, once in them, how does the system get out? A great deal of study has gone into this subject and space does not allow a detailed review of the research. Instead, let’s consider the two most common ways that RTOSes deal with priority inversion and how each approach affects the conditions of bounded or unbounded inversion. Both techniques are valid and both have their pros and cons.

Priority Inheritance Protocol: Change the priority of Task C to the same as Task A. Task A still has to wait for Task C to complete, but the assumption is that at the higher priority Task C gets processor cycles without being preempted and can thus complete its critical region more quickly in real time. When Task C completes its critical region, it returns to its original priority, and ownership of the shared resource’s Mutex is then assigned to the waiting Task A. (A minimal sketch of requesting this behavior follows.)
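The RTXC kernel services that implement priority inheritance are not shown here. As a widely available analogy, the sketch below requests the same protocol through the POSIX threads API, using the PTHREAD_PRIO_INHERIT mutex attribute.

```c
#include <pthread.h>

/* Sketch only: the Priority Inheritance Protocol expressed through POSIX
 * threads, not an RTXC kernel service. Threads must be scheduled with a
 * real-time policy (e.g. SCHED_FIFO) for the priorities to matter. */
int make_inheriting_mutex(pthread_mutex_t *mtx)
{
    pthread_mutexattr_t attr;
    int rc = pthread_mutexattr_init(&attr);
    if (rc != 0)
        return rc;

    /* While a low-priority holder (Task C) owns the mutex and a high-priority
     * thread (Task A) is blocked on it, the holder temporarily runs at
     * Task A's priority, so a medium-priority Task B cannot preempt it and
     * turn a bounded inversion into an unbounded one. */
    rc = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    if (rc == 0)
        rc = pthread_mutex_init(mtx, &attr);

    pthread_mutexattr_destroy(&attr);
    return rc;
}
```

Task C would lock and unlock this mutex around its critical region exactly as it would any other mutex; the priority boost and restoration happen inside the kernel with no further application code.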
Priority Ceiling Protocol: This method is also called the Instant Inheritance Protocol. If a set of n tasks uses a shared resource, a Mutex is assigned to that resource and given a priority at least as high as (if not higher than) the highest priority of the n tasks in the set. When any task acquires ownership of the shared resource’s Mutex, its priority is instantly changed to the priority of the Mutex. This prevents any possibility of preemption by another member of the set of the shared resource’s users, because all other tasks in the set are, by definition, at a lower priority than that of the current owner. Thus there can be no tasks waiting to get ownership of the Mutex, because they cannot get scheduled. This method effectively prevents the possibility of a priority inversion. Notice that it also deals with the unbounded inversion issue, because no other task (such as Task B) with a priority between that of the lowest priority task in the set and the highest priority member of the set can get scheduled. When the owner task completes its critical region, the Mutex is released and the task returns to its original priority. (A companion POSIX-flavored sketch appears at the end of this post.)

The Priority Ceiling Protocol used by other RTOSes avoids the potential problem of dealing with unbounded inversions, but because every task that uses the shared resource is raised to the same high priority while it owns the resource, even if it is not an essential task, this approach may starve other, more important tasks. The RTXC Quadros RTOS uses the Priority Inheritance Protocol. It addresses bounded inversions and avoids the potential problem of the Priority Ceiling Protocol, which can starve tasks that are not part of the set.
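For comparison with the inheritance sketch above, and again only as a POSIX analogy rather than anything RTXC-specific, the ceiling protocol can be requested as shown below. The ceiling_priority argument is an assumption supplied by the caller; it must be at least as high as the highest priority of the tasks sharing the resource.

```c
#include <pthread.h>

/* Sketch only: the Priority Ceiling ("instant inheritance") Protocol via the
 * POSIX threads API. Real-time scheduling (e.g. SCHED_FIFO) is assumed. */
int make_ceiling_mutex(pthread_mutex_t *mtx, int ceiling_priority)
{
    pthread_mutexattr_t attr;
    int rc = pthread_mutexattr_init(&attr);
    if (rc != 0)
        return rc;

    /* Any thread that locks this mutex immediately runs at the ceiling
     * priority, so no other member of the sharing set (nor an in-between
     * task like Task B) can be scheduled while the resource is held. */
    rc = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
    if (rc == 0)
        rc = pthread_mutexattr_setprioceiling(&attr, ceiling_priority);
    if (rc == 0)
        rc = pthread_mutex_init(mtx, &attr);

    pthread_mutexattr_destroy(&attr);
    return rc;
}
```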