The Challenge
P/GSI’s VSP studies generate massive amounts of data. Oil field troubleshooting projects can produce from 10 to 25 gigabytes of data, while microseismic studies can easily produce 1 to 2 terabytes. Turning raw seismic data into reliable 3D images of the subsurface can take anywhere from one to three months. With a very large investment often at stake, it is critical to P/GSI’s business to deliver accurate subsurface images to its clients in as short a timeframe as possible. Geophysicists continually refine their models and parameters, working iteratively to generate the most accurate images. These data processing problems lend themselves to compute clusters, where the processing can be broken into pieces and each piece assigned to a cluster node.
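The split-and-distribute pattern described above can be sketched in a few lines. This is a minimal illustration, not P/GSI’s actual software: the trace data, the `process_chunk` work function, and the use of local worker processes to stand in for cluster nodes are all assumptions made for the example.

```python
from multiprocessing import Pool

def process_chunk(traces):
    # Hypothetical per-node work: a simple gain correction applied
    # to every sample, standing in for a real imaging kernel.
    return [[sample * 2.0 for sample in trace] for trace in traces]

def split(data, n_parts):
    # Divide the list of traces into roughly equal pieces,
    # one piece per cluster node.
    k, m = divmod(len(data), n_parts)
    return [data[i * k + min(i, m):(i + 1) * k + min(i + 1, m)]
            for i in range(n_parts)]

if __name__ == "__main__":
    # Toy dataset: 8 traces of 4 samples each.
    traces = [[float(i)] * 4 for i in range(8)]
    with Pool(4) as pool:  # 4 local workers stand in for 4 nodes
        partials = pool.map(process_chunk, split(traces, 4))
    # Reassemble the partial results into one volume.
    merged = [trace for part in partials for trace in part]
    print(len(merged))  # 8
```

In a real deployment the same shape holds, but the pieces are shipped to cluster nodes by a job scheduler rather than to local processes, and the per-piece work is the proprietary imaging code.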
The Solution
To research a replacement for the aging compute cluster, Griesbach and her team talked with several major hardware vendors and system integrators, and obtained test environments from many of them. “We run our own proprietary software on the cluster,” she says, “so the standard benchmarking suites don’t address our concerns very well. We wanted to test the potential solutions with jobs we would actually run.”

The cluster’s processors total 94 cores, with more than 400GB of local disk space as well as multiple terabytes of direct-attached (DAS) storage.