Peter Tilke

Schlumberger Research in Cambridge, MA, United States



The complex interaction of liquids, gases, and solids at the pore scale is of interest in many areas of geoscience, including enhanced oil recovery, hydraulic fracturing, and carbon sequestration. This paper presents a numerical framework capable of simulating multiphase flow at the pore scale, which is used to assess the permeability of reservoir rocks. This robust and expedient digital workflow greatly reduces the time and cost associated with traditional experimental measurements.
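To make the permeability assessment concrete, the following is a minimal sketch of how absolute permeability can be recovered from a simulated steady-state flow field via Darcy's law; all variable names and numerical values are illustrative assumptions, not results from this work:

```python
# Illustrative recovery of absolute permeability from a simulated
# steady-state flow field via Darcy's law, k = <v> * mu * L / dp.
# All numerical values below are assumed, not results from the paper.

def darcy_permeability(mean_velocity, viscosity, length, pressure_drop):
    """Absolute permeability k = <v> * mu * L / dp (Darcy's law)."""
    return mean_velocity * viscosity * length / pressure_drop

# Example: water-like fluid (mu = 1e-3 Pa*s) through a 1 mm sample
k = darcy_permeability(mean_velocity=1e-4,   # superficial velocity, m/s
                       viscosity=1e-3,       # Pa*s
                       length=1e-3,          # m
                       pressure_drop=1e3)    # Pa
k_darcy = k / 9.869e-13                      # 1 darcy ~ 9.869e-13 m^2
```

In a pore-scale workflow, the mean velocity would be averaged from the simulated flow field rather than supplied directly.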

The presented framework combines a suite of numerical methods, including smoothed particle hydrodynamics (SPH) and the lattice Boltzmann method (LBM), with shared-memory, multicore parallel processing to increase the flexibility and scalability of the numerical workflow. One of the strengths of SPH is that it naturally captures the behavior of free surfaces and phase interfaces, which is particularly important when simulating multiphase flow. The LBM, in turn, couples readily to complex structural geometries, which is essential for simulating fluid flow in tortuous pore networks. By incorporating both methods in the numerical framework, their respective strengths can be leveraged as appropriate.
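To illustrate how the LBM couples to a solid geometry, the following is a minimal D2Q9 sketch (not the framework's solver): BGK collision, streaming, and full-way bounce-back at solid voxels, the standard way LBM imposes no-slip walls on complex pore boundaries. The grid size, relaxation time, and body force are assumed values:

```python
import numpy as np

# Minimal D2Q9 lattice Boltzmann sketch (illustrative only): BGK
# collision, streaming, and full-way bounce-back on solid voxels.
# Geometry: a 2D channel with solid top/bottom walls; a small constant
# body force drives Poiseuille-like flow between them.

nx, ny, tau = 32, 17, 0.8                  # grid and relaxation time (assumed)
w = np.array([4/9] + [1/9]*4 + [1/36]*4)   # lattice weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
opp = [0, 3, 4, 1, 2, 7, 8, 5, 6]          # opposite-direction indices
solid = np.zeros((nx, ny), bool)
solid[:, 0] = solid[:, -1] = True          # channel walls
fluid = ~solid
g = np.array([1e-5, 0.0])                  # body force along the channel

f = np.tile(w, (nx, ny, 1))                # start at rest (rho = 1)

def equilibrium(rho, u):
    cu = np.einsum('qa,xya->xyq', c, u)
    usq = np.einsum('xya,xya->xy', u, u)[..., None]
    return w * rho[..., None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

for _ in range(2000):
    rho = f.sum(-1)
    u = np.einsum('xyq,qa->xya', f, c) / rho[..., None]
    u += tau * g / rho[..., None]          # simple body-force shift
    feq = equilibrium(rho, u)
    f[fluid] += (feq[fluid] - f[fluid]) / tau   # BGK collision (fluid only)
    for q in range(9):                          # streaming
        f[..., q] = np.roll(f[..., q], tuple(c[q]), axis=(0, 1))
    f[solid] = f[solid][:, opp]                 # bounce-back at solid voxels
```

The bounce-back rule is what makes the method attractive for tortuous pore networks: any voxel flagged solid in the segmented image becomes a no-slip boundary with no meshing step.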

In the numerical permeability workflow, the model geometry is generated from a segmented X-ray microtomographic image of the rock sample. Depending on the size of the rock sample and the imaging resolution, the full model can contain eight billion voxels or more. For computational tractability, simulations are therefore run on sub-blocks of up to 400^3 voxels, leveraging multicore parallelism.
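The sub-block strategy can be sketched as follows; the random image, the tiny 8^3 stand-in dimensions, and the porosity computation are hypothetical placeholders for the billion-voxel volumes and 400^3 blocks described above:

```python
import numpy as np

# Sketch of carving simulation sub-blocks out of a large segmented
# micro-CT volume. A full image of ~8 billion voxels is intractable in
# one pass, so fixed-size sub-blocks are extracted; here a tiny 8^3
# volume with 4^3 blocks stands in for those dimensions (all assumed).

rng = np.random.default_rng(0)
image = rng.random((8, 8, 8)) < 0.25   # True = pore voxel (segmented)
n = 4                                  # sub-block edge length (400 in practice)

porosities = []
for i in range(0, image.shape[0], n):
    for j in range(0, image.shape[1], n):
        for k in range(0, image.shape[2], n):
            block = image[i:i+n, j:j+n, k:k+n]
            porosities.append(block.mean())  # pore fraction per sub-block
            # ...an SPH/LBM permeability simulation would run on `block`...
```

Each sub-block is an independent simulation, which is also what makes this stage amenable to the parallel task model described below.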

The parallel programming model exploits the large main memory of contemporary multicore servers, as well as the low latency and growing capacity of their processor caches. Cache performance is maximized through fine-grained domain decomposition and by exploiting the spatial locality of data in the solvers. This yields scalable parallel speed-up, while the asynchronous distribution of fine-grained work tasks provides natural load balancing.
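A minimal sketch of such an asynchronous, fine-grained task model, using a Python thread pool as a stand-in for the framework's shared-memory scheduler (the task function and workloads are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Illustrative shared-memory task model: the domain is split into many
# more fine-grained tiles than there are cores, and a thread pool pulls
# tasks asynchronously, so uneven per-tile cost balances out naturally.
# `process_tile` is a hypothetical stand-in for cache-sized solver work.

def process_tile(tile_id):
    # deliberately uneven workload per tile
    return tile_id, sum(i * i for i in range(1000 + 100 * (tile_id % 7)))

tiles = range(64)                          # many small tasks per core
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(process_tile, t) for t in tiles]
    results = dict(f.result() for f in as_completed(futures))
```

Because each task touches a cache-sized tile and idle workers simply pull the next task, no explicit load-balancing or per-core scheduling is needed.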

Absolute permeability results for a range of rock samples are presented, along with scalability and efficiency results for the shared-memory, multicore, parallel processing model.