
Thursday, April 8, 2010

Boosting software speed by up to 20 percent



For some programs, the arrival of multi-core processors has made little difference. Applications such as word processors and web browsers are unable to split their work across several cores and instead pile everything onto just one. Researchers from North Carolina State University have come up with a way to break such programs into separate threads, resulting in run-speed increases of up to 20 percent.
Chip makers keep squeezing more processing cores onto each chip, which should deliver significant performance improvements. But in applications where one operation can only proceed once the outcome of another is known, such as word processors and web browsers, that flow-chart-like chain of dependencies is notoriously hard to break apart, preventing the work from being spread across multiple cores for parallel processing.
A research team from North Carolina State University has developed a method for separating the memory-management part of a program's operation and running it as a separate thread. Rather than having a single central processing unit cycle repeatedly between performing a computation and then allocating or releasing the storage space needed to hold its result, Dr Yan Solihin and his team have taken the memory-management step and given it a thread of its own.
With this approach, according to the research paper's lead author, Devesh Tiwari: "the computational thread notifies the memory-management thread - effectively telling it to allocate data storage and to notify the computational thread of where the storage space is located. By the same token, when the computational thread no longer needs certain data, it informs the memory-management thread that the relevant storage space can be freed". The upshot is that the two threads can run in parallel on different cores, allowing the program to run up to 20 percent faster.
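To make that division of labour concrete, here is a minimal sketch in C++ of the communication pattern Tiwari describes: a dedicated memory-management thread services allocation and release requests that the computational thread posts to a shared queue, replying with the address of each new block. This is an illustration only, not the authors' implementation; the names (MemRequest, memory_manager, async_alloc, async_free) are invented for the example.

```cpp
#include <condition_variable>
#include <cstdlib>
#include <future>
#include <mutex>
#include <queue>
#include <thread>

// A request posted by the computational thread to the memory-management thread.
struct MemRequest {
    enum class Kind { Alloc, Free, Shutdown };
    Kind        kind = Kind::Shutdown;
    std::size_t size = 0;            // bytes requested (Alloc only)
    void*       ptr  = nullptr;      // block to release (Free only)
    std::promise<void*> result;      // fulfilled with the new block's address
};

std::queue<MemRequest> requests;
std::mutex mu;
std::condition_variable cv;

// The memory-management thread: pops requests and performs the actual
// malloc/free work, off the computational thread's critical path.
void memory_manager() {
    for (;;) {
        std::unique_lock<std::mutex> lock(mu);
        cv.wait(lock, [] { return !requests.empty(); });
        MemRequest req = std::move(requests.front());
        requests.pop();
        lock.unlock();

        if (req.kind == MemRequest::Kind::Shutdown) return;
        if (req.kind == MemRequest::Kind::Alloc)
            req.result.set_value(std::malloc(req.size)); // report where the storage is
        else
            std::free(req.ptr);                          // fire-and-forget release
    }
}

// Helpers the computational thread calls instead of malloc/free.
std::future<void*> async_alloc(std::size_t n) {
    MemRequest req;
    req.kind = MemRequest::Kind::Alloc;
    req.size = n;
    auto fut = req.result.get_future();
    { std::lock_guard<std::mutex> lock(mu); requests.push(std::move(req)); }
    cv.notify_one();
    return fut;                      // caller blocks only when it needs the address
}

void async_free(void* p) {
    MemRequest req;
    req.kind = MemRequest::Kind::Free;
    req.ptr  = p;
    { std::lock_guard<std::mutex> lock(mu); requests.push(std::move(req)); }
    cv.notify_one();
}

int main() {
    std::thread mm(memory_manager);

    auto fut = async_alloc(1024);    // request storage, keep computing...
    void* block = fut.get();         // ...collect the address only when needed
    async_free(block);               // release without waiting

    { std::lock_guard<std::mutex> lock(mu); requests.push(MemRequest{}); }
    cv.notify_one();                 // default request kind is Shutdown
    mm.join();
}
```

In this sketch the computational thread only blocks at fut.get(), the moment it actually needs the address, so the cost of allocation overlaps with useful computation running on another core, while frees never block the computation at all.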
The technique also opens up opportunities for running application integrity or security checks alongside the computation, checks that would otherwise degrade program, and possibly system, performance. The paper, "MMT: Exploiting Fine-Grained Parallelism in Dynamic Memory Management", is to be presented at the IEEE International Parallel and Distributed Processing Symposium in Atlanta on April 21.
