
Optimizing the Embedded Software Design Process

Wed, 08/12/2009 - 1:02pm
Many tools and techniques are available to help reduce time and cost during the product development cycle and can be applied regardless of the software methodology used.


There are many different software design methodologies out there: Waterfall, V-Model, Agile, Extreme Programming — the list goes on and on. They all have their benefits and drawbacks, and proponents of each one have spent countless hours arguing their merits. Rather than continuing the great methodology debate, this article will focus on common activities performed during the software lifecycle regardless of the methodology used and recommend some ways to streamline and save time. We all know the benefits of getting through the software lifecycle quickly and efficiently. The bottom line is that shipping high quality products quickly means more profitability for our companies and more job security for us.

For the purposes of this article, we'll define time to market as the time it takes to get from product specification to the product shipping in full volume production. Volume production for device manufacturers is kind of like Black Friday for the retail industry. It's where we finally turn the corner from spending (and losing) money on a product to making money on it. By using this definition, we can put a stop to all those cheaters who ship buggy products just to make a release date. This clearly ends up causing a lot more harm than good. It delays or saps profitability from the project and, even worse, it can destroy your company's reputation. We don't want to sacrifice quality in our search for speed.

Ok, now that we've established the ground rules, let's talk about the major factors that delay embedded software projects.
  • Bad or incomplete requirements
  • Changing requirements (feature creep)
  • Incomplete testing
  • Software defects (bugs)

    While issues related to requirements should not be overlooked, addressing them inevitably leads back to arguments about methodologies, which is outside the scope of this article. The only reason incomplete testing makes the list is that it allows bugs to slip through the process undetected. That leaves us with software defects and the act of removing those defects from the system: debugging. A survey of San Jose ESC attendees in 2006 concluded that debugging is the most time-consuming and costly phase of software development, "with 63 percent of respondents citing debugging as the most significant problem they encounter, almost double any other single task."
    Minimizing Software Defects
    In order to streamline the embedded development process, we must tackle the biggest problem facing software developers: minimizing software defects. At every step in the development process we must look for opportunities to minimize the introduction of defects and to optimize the rate at which they can be removed. Therefore we'll look at things that can be done in the design phase, opportunities to automate in various phases, and advanced tools for debugging. When all of these techniques are used together, significant gains can be achieved in reducing software defects.
    Design
    KISS, KISS, KISS. Keep it simple, stupid! Minimizing complexity is perhaps the most important aspect of designing efficiently. And it's easier said than done. Most of the time when engineers sit down to design and put pencil to paper, the first thing that comes out is complex, convoluted, and confusing. Achieving simple and elegant designs typically takes many iterations and a concerted effort to eliminate anything that is not needed. Spending the extra cycles at this stage to analyze the design and look for ways to simplify will pay huge dividends in the long term. Minimal and elegant design leads to much greater maintainability and fewer software defects in the long run.

    Achieving good componentization is another important aspect of this phase. When the design of a subsystem gets too complex, break the subsystem into easy-to-understand components. A good rule of thumb is that one engineer should completely understand every line of source for a single component and be responsible for it. Often the underlying operating system can lend a hand in managing componentization and enforcing rules about how components interact with one another. A microkernel OS with memory protection and good separation capabilities enables developers to:

  • Utilize processes to enforce barriers between various components and avoid unintended interactions that are extremely difficult to understand and debug.
  • Use message passing for inter-process communication.
  • Avoid using shared memory.
  • Minimize the number of threads in a process. Shoot for one thread per process wherever possible to further minimize complexity.
  • Employ the principle of least privilege. If a component does not need access to a resource, e.g., the file system or TCP/IP stack, do not allow it access to that resource.

    When a system is properly componentized, software defects that pop up later are easy to isolate and contain. Fixes for those defects come faster and don't negatively affect other components in the system. Furthermore, with a well-defined, message-based API for the component, it is easy to develop complete test cases that are replayable for regression testing, as sketched below.
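    To make the message-passing approach concrete, below is a minimal sketch of a component that services requests from a single queue, assuming a POSIX-style message queue API; the queue name, message layout, and command codes are hypothetical, and a microkernel RTOS would substitute its own IPC primitives.

    #include <fcntl.h>      /* O_CREAT, O_RDONLY */
    #include <mqueue.h>     /* POSIX message queues (mq_open, mq_receive) */
    #include <stdio.h>

    #define SENSOR_QUEUE "/sensor_ctrl"   /* hypothetical component endpoint */

    enum { CMD_READ = 1, CMD_SHUTDOWN = 2 };

    typedef struct {
        int cmd;        /* request code, e.g. CMD_READ */
        int channel;    /* which sensor channel to sample */
    } sensor_request_t;

    int main(void)
    {
        /* The component owns its queue; clients see only the message format. */
        struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = sizeof(sensor_request_t) };
        mqd_t q = mq_open(SENSOR_QUEUE, O_CREAT | O_RDONLY, 0600, &attr);
        if (q == (mqd_t)-1) {
            perror("mq_open");
            return 1;
        }

        for (;;) {
            sensor_request_t req;
            /* Block until a client posts a request message. */
            if (mq_receive(q, (char *)&req, sizeof(req), NULL) != (ssize_t)sizeof(req))
                continue;
            if (req.cmd == CMD_SHUTDOWN)
                break;
            if (req.cmd == CMD_READ)
                printf("read request for channel %d\n", req.channel);
        }

        mq_close(q);
        mq_unlink(SENSOR_QUEUE);
        return 0;
    }

    A client component would open the same queue write-only and post sensor_request_t messages. Because every interaction flows through this one typed interface, requests are easy to record and replay as regression tests, and no shared memory is exposed.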
    Automate
    While minimizing complexity provides benefits for long-term maintainability and the overall efficiency of the development process, there are many tasks in the everyday workflow that can be automated to realize immediate gains in productivity and efficiency.

  • Use static source code analysis. These tools automatically find a variety of bugs, including buffer overflows, resource leaks, reads of uninitialized objects, out-of-scope memory references, and a host of other problems that often go undetected during run-time testing and typical field operation. Static source code analysis provides early detection of these defects; left undetected, they lead to very difficult-to-debug issues such as memory corruption. Early detection and elimination of these types of defects provides a huge return on investment for the tools (see the first sketch after this list).
  • Automate coding standards enforcement. Most companies have coding standards, and countless hours are wasted in source code review meetings nit-picking violations. An automated tool that enforces these standards as the developer writes the code saves everyone time.
  • Use dynamic source code analysis. While static source code analysis finds some bugs, dynamic source code analysis finds additional ones that cannot be detected by static analyzers. These tools instrument source code and locate bugs during execution. Any type of automatic bug detection as early in the process as possible is your friend.
  • Automate test harness generation and execution. To achieve good unit test code coverage, developers often write one line of test code for every line of source code in their system. This is often done late in the process and is sometimes omitted when projects are behind schedule. Automated test generation tools analyze source code and automatically create test harnesses that can be maintained and used over the life of a project in a fraction of the time spent on manual methods; the second sketch after this list shows the shape of such a per-function test.
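    To make the static analysis item concrete, the fragment below shows two classic defect patterns these tools report: an unbounded string copy and a read of an uninitialized object. The function names and buffer size are hypothetical; the commented-out line is the defect a static analyzer would flag, and the live code is the corrected version.

    #include <stdio.h>

    static void format_device_id(const char *serial)
    {
        char id[8];
        /* strcpy(id, serial);  <-- overflow: writes past id[] whenever serial is
           longer than 7 characters; flagged without ever running the code */
        snprintf(id, sizeof(id), "%s", serial);   /* bounded copy instead */
        printf("device id: %s\n", id);
    }

    static int read_status(int have_sensor)
    {
        int status = -1;   /* without this initializer, the have_sensor == 0 path
                              would return an uninitialized value */
        if (have_sensor)
            status = 0;
        return status;
    }

    int main(void)
    {
        format_device_id("SN-0123456789");        /* safely truncated to 7 chars */
        printf("status: %d\n", read_status(0));   /* prints -1, not garbage */
        return 0;
    }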
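    And for the test harness item, this is roughly the shape of the per-function, assert-based test that gets replayed on every build, whether generated by a tool or written by hand; the clamp() function and the chosen test values are hypothetical.

    #include <assert.h>
    #include <stdio.h>

    /* Unit under test: keep a sample within [lo, hi]. */
    static int clamp(int value, int lo, int hi)
    {
        if (value < lo) return lo;
        if (value > hi) return hi;
        return value;
    }

    int main(void)
    {
        /* Nominal, boundary, and out-of-range cases. */
        assert(clamp(5, 0, 10) == 5);
        assert(clamp(0, 0, 10) == 0);
        assert(clamp(10, 0, 10) == 10);
        assert(clamp(-3, 0, 10) == 0);
        assert(clamp(42, 0, 10) == 10);
        puts("all clamp() tests passed");
        return 0;
    }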
    Debug like a Pro
    There is no single debugging tool or technique that is a silver bullet for finding and eliminating software defects. While finding bugs automatically with the tools mentioned in the previous section is by far the most efficient method, automatic bug detection cannot identify every software defect. Finding the rest of the bugs efficiently is primarily about system visibility: if you can't see it, you can't find it. A complete set of debugging tools should include:
  • Run mode debug - debug of a single thread or group of threads while the rest of the system continues to run.
  • Stop mode debug - debug through JTAG or other hardware level access to the CPU, where the entire system is stopped when a breakpoint is reached.
  • Kernel awareness - the debugger "knows" about the target operating system. This includes the ability to set task-specific breakpoints and browse kernel objects to ascertain the state of the system at any given time.
  • Post-mortem debugging - ability to capture and debug core dumps.
  • System profiler - shows CPU and memory utilization.
  • Event analyzer - records and displays a timeline of events such as context switches, interrupts, message passing, and user defined events.
  • Memory leak detection
  • Code coverage analyzer
  • System simulators
  • System logging - printf() and friends (a minimal logging sketch follows this list).
  • Trace based debug - requires support in the target CPU such as Nexus or ETM and external hardware to capture the trace. Think of it as a DVR for your software debugging, providing the ability to rewind, execute forwards and backwards, and gain ultimate visibility and run control over the captured trace data.
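    As a small illustration of the system logging item, a project might wrap printf() in a macro like the one below so that every message carries a severity and source location, and verbose output can be compiled out of release builds; the LOG macro, level names, and threshold are hypothetical rather than any particular vendor's API.

    #include <stdio.h>

    enum { LOG_ERROR = 0, LOG_WARN = 1, LOG_INFO = 2 };

    #ifndef LOG_LEVEL
    #define LOG_LEVEL LOG_INFO            /* verbosity threshold for this build */
    #endif

    #define LOG(level, ...)                                   \
        do {                                                  \
            if ((level) <= LOG_LEVEL) {                       \
                printf("[%s:%d] ", __FILE__, __LINE__);       \
                printf(__VA_ARGS__);                          \
                printf("\n");                                 \
            }                                                 \
        } while (0)

    int main(void)
    {
        LOG(LOG_INFO, "system up, heap free = %u bytes", 16384u);
        LOG(LOG_ERROR, "sensor %d timed out", 3);
        return 0;
    }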

    It is also imperative that the developers using these tools are experts with them. It does no good to give someone a circular saw if they are going to use it to bang a nail into a piece of wood. Not only do you have to have the right tools at your disposal, you also have to know which one to use, and how to use it, in a given situation.
    Summary
    Looking back over the recommendations I've provided, I don't think there is anything earth-shattering here, but I'm continually amazed by the number of software organizations I've worked with that don't do some of the simple things outlined above. For example, I've lost track of the number of Linux developers who have told me the only debug tool they use is printf. These are people who work at world-class companies developing applications that run to millions of lines of code, and they are absolutely handicapped by their toolsets, like a carpenter without a hammer! So, while most of what I've put here seems like common sense, actually walking the walk is another matter. How do you and your organization stack up?

    Joe Fabbre is technical solutions manager for Green Hills Software, www.ghs.com, 800-765-4733.

    Managing Time to Market with Early Software Design Verification
    Ken Karnofsky, senior strategist for signal processing applications at The MathWorks
    In today's complex, algorithm-intensive wireless communications systems, verification is a major contributor to project delays and engineering costs. The current algorithm verification process is inefficient and creates opportunities to introduce errors. In a typical flow, designs start with algorithm developers, who pass the design to software and hardware teams using specification documents. Each team typically crafts its own manual test procedures to determine that the implementation is functionally correct according to their interpretation of the specifications.

    Compounding this inefficiency is the use of separate tools and workflows for software, digital, and RF/analog hardware components, which inhibits cross-domain verification. And engineers often discover late in the development process that algorithms don't work as intended in the target environment.

    It doesn't have to be this way. Many designers of algorithm-intensive systems already have the tools they need to get verification under control. By using these same tools for early verification with Model-Based Design, engineers can not only reduce verification time, but also improve the performance of their designs.

    With early verification, the algorithm design and implementation teams use the same executable system model as their design reference and test bench. Automation interfaces between algorithm and system simulation tools and the software and hardware development tools enable each team to reuse the test bench with minimal disruption. The result is a faster, less error-prone verification process that leverages each team's existing tools and workflow.

    The cross-domain verification problem can be solved by pushing verification up to a higher level in the design flow. Tools for Model-Based Design provide multidomain modeling capabilities that enable "virtual integration" by simulating algorithms, software, digital hardware, and analog hardware together in one environment. This aspect of early verification lets system architects and component developers see how design decisions affect system-level behavior. It also helps designers catch integration problems early, while they are still easy to correct.

    Using Model-Based Design, algorithm developers can apply the same tools they use for algorithm development and system simulation to rapidly prototype their designs on hardware, without low-level programming. This early verification technique lets designers quickly prove the viability of new ideas and analyze performance under real-world conditions.

    Leading communications, electronics, and semiconductor companies have used all of these early verification techniques to gain competitive advantages by simultaneously reducing their test and verification costs while strengthening their ability to develop innovative new products faster.

