BEACON Researchers at Work: Evolution and Software Engineering

This week’s BEACON Researchers at Work blog post is by MSU graduate student Erik Fredericks.

I spent several years in the automotive industry as both a software developer and a project manager, developing advanced systems that assist in preventing accidents. These systems usually involved a camera and a radar working together to sense obstacles or problems that a driver may face. Along the way, I was indoctrinated in low-level, embedded C code that was very procedural and very straightforward. As more responsibilities piled onto my day-to-day activities, I was required to perform software testing, endure software quality reviews, and report on how “good” my team’s code actually was. What all of this boils down to is the somewhat mysterious process known as software engineering.

At its core, software engineering is an approach for designing, developing, testing, and maintaining software. What makes it interesting from a research perspective is that it can be a quantifiable process. In essence, we can apply various techniques to optimize the different aspects of the software development cycle. This is where evolution, specifically evolutionary computation, can come into play, and is the focus of my research here at MSU with Dr. Betty Cheng.

Search-based software engineering (SBSE) is a growing field that applies search-based optimization techniques to software engineering problems. My research focuses on how evolutionary computation can be applied to topics within this field. As in other domains, evolution is typically used in SBSE for optimization. For instance, how can we enable software developers to reduce the amount of time spent creating test cases during development, so that they can spend that time on tasks that cannot be automated? Furthermore, how can we ensure that those test cases cover the spectrum of situations that the software may face once it is in the hands of the general public?
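
To make "search" concrete, here is a minimal sketch of evolving test inputs with a simple genetic algorithm. The system under test (a toy braking-distance function), the fitness measure, and the parameter ranges are hypothetical stand-ins for illustration, not the tools used in this research.

```python
# A minimal sketch of search-based test-input generation with a simple
# genetic algorithm. The system under test (braking_distance), the fitness
# measure, and all parameter ranges are hypothetical stand-ins.
import random

def braking_distance(speed_kph, road_friction):
    # Toy system under test: stopping distance in metres for a given speed
    # and road friction coefficient.
    return (speed_kph / 3.6) ** 2 / (2 * 9.81 * max(road_friction, 0.05))

def fitness(candidate):
    # Reward inputs that push the system toward an extreme (a long stopping
    # distance), a stand-in for "interesting" or stressful test cases.
    speed, friction = candidate
    return braking_distance(speed, friction)

def evolve_tests(pop_size=20, generations=50):
    population = [(random.uniform(0, 200), random.uniform(0.05, 1.0))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            speed = (a[0] + b[0]) / 2 + random.gauss(0, 5)        # crossover + mutation
            friction = (a[1] + b[1]) / 2 + random.gauss(0, 0.05)
            children.append((max(speed, 0.0), min(max(friction, 0.05), 1.0)))
        population = parents + children
    return sorted(population, key=fitness, reverse=True)[:5]

if __name__ == "__main__":
    for speed, friction in evolve_tests():
        print(f"test input: speed={speed:.1f} km/h, friction={friction:.2f}")
```

In practice, the fitness function would be tied to a real testing objective, such as code coverage or violations of the software requirements.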

Recently, I have been studying how complicated software systems perform over time, and particularly what conditions could cause them to react unexpectedly. Consider, for example, an autonomous robotic vacuum system, such as the iRobot Roomba. Its goal is to vacuum a specified area while accounting for problems such as obstacles, stairs, and the occasional pet. If the Roomba bumps into a wall or a table, it then attempts to find a new path that is clear of problems so that it can resume cleaning. If it encounters stairs, it turns around to ensure that it does not fall and damage itself. Pets, however, are allowed to ride for free.

The reason I bring up the Roomba is that it can be modeled as a dynamically adaptive system (DAS). A DAS is a fairly complicated software system consisting of different configurations, each of which can be selected at run time to mitigate a set of problems encountered in the environment. For a robotic vacuum, this can mean changing its mode of operation from cleaning in a straight line to slowly encircling a particularly dirty area. Each configuration mode is typically accompanied by a large set of parameters, each with different values to accommodate specific environmental conditions. As you can guess, this represents an explosion of possible states in which the robotic vacuum can operate, providing an ideal problem to explore with evolutionary computation!
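
To make the idea of run-time configurations concrete, here is a small sketch of how a DAS might switch modes based on sensed conditions. The mode names, parameters, and triggers are invented for illustration and are not taken from the actual Smart Vacuum System.

```python
# A hedged sketch of the configuration-switching idea behind a DAS. The mode
# names, parameters, and trigger conditions below are illustrative only.
from dataclasses import dataclass

@dataclass
class Configuration:
    name: str
    speed: float          # forward speed in m/s
    turn_radius: float    # m
    suction_power: float  # normalized, 0.0 - 1.0

CONFIGURATIONS = {
    "straight_line": Configuration("straight_line", speed=0.3, turn_radius=0.00, suction_power=0.5),
    "spot_clean":    Configuration("spot_clean",    speed=0.1, turn_radius=0.15, suction_power=1.0),
    "avoid_drop":    Configuration("avoid_drop",    speed=0.2, turn_radius=0.30, suction_power=0.5),
}

def select_configuration(dirt_level, cliff_detected):
    """Pick a configuration at run time based on sensed conditions."""
    if cliff_detected:
        return CONFIGURATIONS["avoid_drop"]
    if dirt_level > 0.8:
        return CONFIGURATIONS["spot_clean"]
    return CONFIGURATIONS["straight_line"]

# Every additional mode and parameter multiplies the number of reachable
# states, which is why exhaustively testing a DAS quickly becomes infeasible.
print(select_configuration(dirt_level=0.9, cliff_detected=False).name)  # spot_clean
```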

[Figure: Smart Vacuum System]

To understand the different ways in which a robotic vacuum executes over time, I created a 3D simulation using the Open Dynamics Engine and a very nifty visualization engine from fellow BEACONite Jared Moore. Novelty search, an evolutionary technique concerned with finding a diverse representation of the solution space (rather than a single optimal solution), was used to generate sets of system and environmental conditions that cause the vacuum to behave in many different ways. Furthermore, logging statements were inserted throughout the code that controls the robot. The resulting sequence of log records, known as an execution trace, provides a chronological record of the steps taken throughout the simulation.
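
For readers unfamiliar with novelty search, the sketch below shows its core loop in miniature: individuals are scored by how far their behaviors lie from their nearest neighbors in the population and a growing archive, rather than by an objective fitness. The genome encoding, the simulate() placeholder, and the thresholds are assumptions made for this example, not the actual simulation code.

```python
# A minimal novelty-search sketch, not the implementation used in this work.
import math
import random

def simulate(genome):
    # Placeholder for running the 3D vacuum simulation and summarizing its
    # execution trace as a behavior vector (e.g., distance traveled,
    # collisions, area covered). Seeded so each genome maps to one behavior.
    rng = random.Random(hash(tuple(genome)))
    return [rng.random() for _ in range(3)]

def novelty(behavior, archive, population_behaviors, k=5):
    # Novelty = mean distance to the k nearest neighbors in behavior space,
    # measured against the archive and the rest of the current population.
    others = archive + [b for b in population_behaviors if b is not behavior]
    nearest = sorted(math.dist(behavior, o) for o in others)[:k]
    return sum(nearest) / max(len(nearest), 1)

def novelty_search(pop_size=20, generations=30, archive_threshold=0.3):
    population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(pop_size)]
    archive = []
    for _ in range(generations):
        behaviors = [simulate(genome) for genome in population]
        scores = [novelty(b, archive, behaviors) for b in behaviors]
        # Archive behaviors that differ sufficiently from anything seen so far.
        archive.extend(b for b, s in zip(behaviors, scores) if s > archive_threshold)
        # Select the most novel individuals and mutate them for the next generation.
        ranked = [genome for _, genome in sorted(zip(scores, population), reverse=True)]
        parents = ranked[:pop_size // 2]
        offspring = [[gene + random.gauss(0, 0.1) for gene in random.choice(parents)]
                     for _ in range(pop_size - len(parents))]
        population = parents + offspring
    return archive

print(f"archived {len(novelty_search())} distinct behaviors")
```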

So what exactly does this tell us about how a vacuum performs over time, and why does it interest us? To a software engineer, any behavior that deviates from expectations is typically bad. Such a deviation may indicate a flaw in the software requirements, in the system design, or even in the code itself. By having a representative, but more importantly diverse, set of execution traces, we can systematically localize behavioral problems in the software and determine an appropriate way to fix them. Better still, by using evolutionary computation, we save valuable time and effort in testing and validation, and we find solutions that would not be possible with traditional techniques!
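
As a hypothetical example of how an execution trace supports this kind of localization, the sketch below scans a chronological trace for the first step that violates a simple safety expectation. Both the trace format and the property checked are invented for illustration.

```python
# An illustrative sketch of checking an execution trace against an expected
# property to localize unexpected behavior. The trace format and the safety
# property are hypothetical, chosen only to show the idea.
EXPECTED_AFTER_CLIFF = {"reverse", "turn"}

def first_violation(trace):
    """Return the index of the first step breaking the property:
    'after a cliff is detected, the next action must be reverse or turn'."""
    for i, step in enumerate(trace[:-1]):
        if step["event"] == "cliff_detected":
            if trace[i + 1]["event"] not in EXPECTED_AFTER_CLIFF:
                return i + 1
    return None

trace = [
    {"t": 0.0, "event": "move_forward"},
    {"t": 1.2, "event": "cliff_detected"},
    {"t": 1.3, "event": "move_forward"},  # unexpected: should reverse or turn
]
print(first_violation(trace))  # -> 2
```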

[PDF] E. Fredericks, A. Ramirez, and B. Cheng, “Validating code-level behavior of dynamic adaptive systems in the face of uncertainty,” in Search Based Software Engineering, vol. 8084, Springer Berlin Heidelberg, 2013, pp. 81–95.

For more information about Erik’s work, you can contact him at freder99 at msu dot edu.
