IBM developed a super-fast mechanism for data analysis

At the Supercomputing Conference 2010, which ended this week in the USA, IBM presented details of a new data storage architecture designed for large archives and libraries. The new technology makes it possible to analyze terabytes of data, extract the necessary relationships, and produce reports on them at least twice as fast as previously existing technologies.

The new technology is well suited to cloud environments and data-intensive workloads, such as those of media companies, banks working with terabytes of archives, government agencies, and financial analysis firms. According to IBM, datasets several terabytes in size typically take hours to analyze when various analytical reports are being prepared, and the new technology makes it possible to reduce that time to minutes.

IBM also intends to use the new technology in future generations of its decision-support systems and in systems for preparing reports for regulatory authorities.

The technology was created at IBM Research on the basis of the General Parallel File System Shared Nothing Cluster (GPFS-SNC), previously used in mission-critical applications. The system relies on a clustered, distributed file system that processes data for analysis in a specialized way.

According to IBM, the analysis technology works so that each node of the system is completely self-sufficient and does not have to wait for the results of calculations on another machine. Consider, for example, a financial institution that runs risk-analysis algorithms over an archive measuring up to petabytes of data. Billions of files are distributed across different parts of the network, and the calculations require significant IT resources because of their complexity. With GPFS-SNC, such an institution can use a single dynamic file system operating in parallel across different nodes in a cloud.
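To illustrate the shared-nothing idea described above, here is a minimal Python sketch in which each worker analyzes only its own partition of files and never waits on another worker, with a single reduction step at the end. This is not IBM's GPFS-SNC code; the partitioning scheme, file paths, and the toy "exposure" metric are invented purely for illustration.

```python
# Sketch of shared-nothing parallel analysis: every worker owns a disjoint
# partition of files and computes its result independently of all others.
from multiprocessing import Pool
from pathlib import Path

def analyze_partition(file_paths):
    """Process one node's local partition, independently of other nodes."""
    total, count = 0.0, 0
    for path in file_paths:
        for line in Path(path).read_text().splitlines():
            # Hypothetical record format: one numeric exposure value per line.
            try:
                total += float(line)
                count += 1
            except ValueError:
                continue  # skip malformed records
    return total, count

def shared_nothing_analysis(all_files, nodes=4):
    # Split the file list into disjoint partitions, one per "node".
    partitions = [all_files[i::nodes] for i in range(nodes)]
    with Pool(processes=nodes) as pool:
        # Each partition is analyzed in parallel; no worker blocks on another.
        partials = pool.map(analyze_partition, partitions)
    # Only the final reduction brings the independent results together.
    grand_total = sum(t for t, _ in partials)
    grand_count = sum(c for _, c in partials)
    return grand_total / grand_count if grand_count else 0.0

if __name__ == "__main__":
    files = [f"archive/exposures_{i}.txt" for i in range(16)]  # placeholder paths
    print("average exposure:", shared_nothing_analysis(files))
```

The point of the sketch is the structure, not the metric: because no partition depends on another, adding nodes scales the analysis out horizontally, which is the property IBM attributes to GPFS-SNC.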
