Fueling The Discovery of Oil with StorNext

Oil Field Pumps Up Exploration with StorNext

One of the main goals of petroleum research giant Daqing Oil Field Petroleum Exploration and Development Research Institute (EDRI) is to increase the pace of oil and gas exploration at China’s Daqing Oil Field. But to do so requires the ability to process ever-larger amounts of seismic data. EDRI chose Quantum’s StorNext to solve its file system limitations. The result: The removal of performance bottlenecks and the acceleration of data processing. 

File System Limitations

For the past 27 years, EDRI has consistently made important contributions to Daqing Oil Field’s track record of achieving high and stable yields of 50 million tons per year. To help maintain that record and support the Oil Field’s strategy of long-term, sustainable development, EDRI needs a sophisticated IT backbone.

To increase the pace of oil and gas exploration, EDRI’s Geophysics Service Center (GSC) decided to add capacity for seismic data processing. This required an expansion of its clustered system of high-performance computing servers.

However, with exponential growth in seismic data computation and disk space requirements, traditional single-node NFS servers could not keep up with the demands of further data processing. When these servers acted as I/O servers exporting the file system to the many computing nodes, they became a performance bottleneck. In addition, the file system architecture exposed a single point of failure.

The solution to the problem was a combination of Quantum StorNext File System, StorNext Distributed LAN Client (DLC), StorNext Storage Manager archival system, and a Quantum Scalar tape library.

Benefit: Cluster Efficiency

GSC quickly found that the Quantum solution provides better equipment optimization, improved resource utilization, and greater overall cluster efficiency. In particular, the deployment of shared file systems and more efficient use of disk storage space meet the Institute’s needs for shared access to seismic data.

Through the implementation of StorNext, GSC created a large-scale seismic processing system consisting of 698 servers, 1,444 CPUs, a new 10Gbps network, 576 Ethernet ports, a 400TB Fibre Channel disk array, and a 600TB Quantum Scalar tape library. Each server in the cluster runs the StorNext file system software, which allows it to directly access the shared FC disk at speeds of more than 100MBps.

In addition to supporting data processing, StorNext performs seismic data archival, retrieval, data protection, and vaulting through the Scalar tape library. Data can be migrated from the online RAID systems to tape, freeing disk space for other jobs, and archived files can be retrieved automatically from tape back to disk. Final data can be replicated to the tape library for offsite vaulting and disaster recovery.