Fujitsu today announced development of the world's first stream aggregation technology able to rapidly process both stored historical data and incoming streams of new data in a big data context. The nature of big data requires that enormous volumes of data be processed at high speed. In aggregation, a longer aggregation window means a larger volume of data to process, so computation times grow and frequent updates become harder to sustain. Supporting high update frequencies over long aggregation windows has therefore been a longstanding challenge.
Fujitsu has therefore developed a technology that returns computation results quickly by managing snapshots of intermediate results, rather than re-reading and re-computing the various types of data that change over time. As a result, even with high-frequency updates and long aggregation windows, data can be processed 100 times faster than before. This technology promises to improve both large-volume batch processing and the processing of streaming data.
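Fujitsu has not disclosed the details of its algorithm, but the general idea of returning results without re-doing computations can be illustrated with a standard incremental sliding-window aggregation: each arriving value adjusts a running aggregate in constant time, instead of re-reading and re-summing the entire window on every update. The class name and window size below are illustrative, not part of Fujitsu's announcement.

```python
from collections import deque

class SlidingWindowSum:
    """Maintain the sum over the most recent `window` values incrementally.

    Each new value updates the aggregate in O(1): the value is added to a
    running total, and the oldest value is subtracted when it leaves the
    window. No re-reading of the whole window is needed per update.
    """

    def __init__(self, window: int):
        self.window = window
        self.values = deque()
        self.total = 0

    def add(self, value: float) -> float:
        self.values.append(value)
        self.total += value
        if len(self.values) > self.window:
            # Evict the oldest value and remove its contribution.
            self.total -= self.values.popleft()
        return self.total

# Aggregate a stream over a window of the 3 most recent values.
stream = SlidingWindowSum(window=3)
results = [stream.add(v) for v in [1, 2, 3, 4, 5]]
print(results)  # [1, 3, 6, 9, 12]
```

This sketch handles only an invertible aggregate (sum); production stream processors use related techniques (e.g. partial pre-aggregation per pane) to get the same effect for non-invertible aggregates such as max.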