Seismic Data: Big Gets Bigger

Elastic computing is helping operators work with richer seismic data sets to better manage their reservoirs.

Computing advances have made it possible to interpret huge seismic data volumes, including time lapse, quickly enough that they can help operators better manage their reservoirs. 

Elastic computing is at the core of analysis software provider Sharp Reflections’ approach to churning more easily through prestack seismic data sets, and its use cases have expanded to enable operators to work with richer data sets.

Time lapse data sets contain vast volumes of data that can go stale if not interpreted and acted upon quickly. The company is working to release a new time lapse module that facilitates working with collections of multiple survey vintages, along with modules for amplitude interpretation, seismic inversion and azimuthal interpretation, according to company president and CEO Bill Shea.

“Historically, it’s been a pain to work with prestack data because it’s too big,” Shea said, noting the data is acquired with very high fidelity and gets boiled down and condensed by a huge amount on the way to the interpreter’s desk. “A typical survey in its raw state, while it’s being acquired, might contain a hundred times more data points than what actually winds up on the desk of the person that’s looking at the data.”

That data is processed into what can be called imaged prestack seismic, using advanced algorithms that work on data from field tapes.

These prestack gathers are then further processed after migration to reduce data noise.

Three-step gather conditioning
Three-step gather conditioning improves the amplitude fidelity of seismic data. (Source: Sharp Reflections)

This post-migration processing is critical to get right in order to see fluid effects in the data, Shea added. Typically, there’s less than a 10% signal difference between oil-filled and water-filled reservoirs. 

Because the vast volumes of prestack data were hard to work with, the industry long ago converged on shortcuts to make it possible to work with that data at workstations, he said.

By harnessing today’s greater compute power, those shortcuts are no longer necessary, and interpreters can more easily work with the full data set.

Core computing advances

About 15 years ago, the first multicore processing chips went on the market, following an Intel announcement that the traditional way of improving single central processing unit (CPU) performance had reached its limit and new chips were starting to melt. Multicore processing is now the standard in CPU design.

The new multicore chips made it possible to distribute the computing load onto multiple cores to crank up processing speeds. But, Shea said, the only way to exploit the speed potential of the new chip technology was to use parallel compute coding.
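
To make the point concrete, here is a minimal sketch of that parallel-coding paradigm, not Sharp Reflections’ actual code: the same per-gather computation (a hypothetical moving-average denoise standing in for real gather conditioning) is mapped across all available CPU cores instead of being run serially.

```python
# Minimal sketch: distributing a per-gather computation across CPU cores.
# condition_gather() is a hypothetical stand-in for a real conditioning step.
from multiprocessing import Pool

import numpy as np

def condition_gather(gather: np.ndarray) -> np.ndarray:
    # Hypothetical conditioning step: a 5-sample moving average along each trace.
    kernel = np.ones(5) / 5.0
    return np.apply_along_axis(
        lambda trace: np.convolve(trace, kernel, mode="same"), 1, gather
    )

if __name__ == "__main__":
    # Synthetic stand-ins for prestack gathers: 64 gathers, 30 traces x 1,000 samples.
    gathers = [np.random.randn(30, 1000) for _ in range(64)]

    # Serial baseline: one core does all the work, one gather at a time.
    serial_results = [condition_gather(g) for g in gathers]

    # Parallel version: the identical work spread across every available core.
    with Pool() as pool:
        parallel_results = pool.map(condition_gather, gathers)
```

The computation itself barely changes between the two versions; as Shea notes below, the payoff comes entirely from restructuring the code so the cores can share the work.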

Sharp Reflections was formed around the time Intel announced it would make the new multicore chips, he said, and the company writes software code in a way that exploits the way those chips work.

And the company’s goal is to be able to handle bigger volumes of seismic data with the speed an interpreter needs to make timely decisions.

“If you wrote new code to that paradigm, you could have software that runs 50 times faster than it did 10 years ago, especially if you take advantage of computer clusters. If you didn't rewrite your code, you get basically nothing out of any of those extra cores,” he said.

“That's kind of creating this huge divergence today between the few companies like us that came along and said, ‘Okay, we're going to take the Intel challenge,’ and then the rest who said, ‘Well, we could probably skate by on what we have for an unspecified number of years, just not worrying about whether it gets much faster than it used to.’”

One of the results, aside from speedy processing, is that the size of the cluster needed for processing data is shrinking.

Noise removal during gather-conditioning workflow.
Noise removal during gather-conditioning workflow. (Top) Noise removed by de-noise procedure. (Bottom) Noise removed by the de-multiple procedure. (Source: Sharp Reflections)

“The size of the computer that you need to do the jobs that we needed a big rack of servers to do 10 years ago is constantly shrinking,” he said. “In the future, we probably will wind up in a place where we can be back to a single workstation or server that has so much more compute power on it that it can actually handle these tasks that 10 years ago it couldn't. But we're not there yet.”

A typical prestack survey that Sharp Reflections works with starts at around 10 terabytes of inputs and may result in outputs that are three or four times that size.

“The raw data being collected by the seismic companies is just the beginning of it,” he said. 

And of course computing technology is always evolving. 

“We are in a new era of computing where companies like Amazon and Microsoft are all offering what they call elastic computing, which is basically bigger computers with more power and more memory that happily churn through these prestack data sets” using software like that supplied by Sharp Reflections, Shea said. “The world moves fast when technology’s on the move.”

Shifting into reservoir management 

The company’s own offerings have also moved quickly, evolving from an initial Statoil-sponsored project, which carried out quality assurance on prestack data incoming from seismic contractors and prepared it for detailed quantitative interpretation, into a hybrid product that is part processing and part interpretation. 

Through a series of industry-funded projects, Sharp Reflections was able to keep adding to the functionality of its core offering, creating the Sharp Reflections Big Data platform product. The platform includes toolkits of capabilities: the PRO (Prestack Data Enhancement), QAI (Quantitative Amplitude Interpretation), INV (Inversion) and AZI (Azimuthal Interpretation) toolkits are already available, and a 4D Time-Lapse toolkit is under development. A soft launch of the 4D toolkit started rolling out to users in October 2022, and commercial release is slated for the fourth quarter of 2023. 

Difference images in map view and section view
Difference images in map view (left column), and section view along a random line through two water injection wells and one horizontal producer well (right column) for sequential time-lapse seismic vintages. Both pressure and saturation changes can cause the observed amplitude changes. By analyzing pre-stack seismic data, the effects of pressure and saturation changes on seismic amplitude changes can be separated. (Source: Sharp Reflections)

“Our platform is expanding rapidly, and we decided that the PreStack Pro product name no longer reflects everything we do,” Shea said. “Today the Sharp Reflections Big Data platform offers more to more people. We see a broader user group of quantitative seismic interpreters and reservoir geoscientists now using these toolkits to work on stacked datasets. Even as stacks, 4D and wide or full-azimuth datasets are too big for a standard workstation-based approach.”

QAI sharpens reservoir insight through multistep quantitative amplitude analysis, while INV improves reservoir delineation and net pay estimation. AZI improves understanding of complex geological structures, and 4D will help identify poorly drained or bypassed reservoir sections.

“The more advanced tools are for reservoir management, rather than exploration,” Shea said.

When the company started, it handled only seismic data for exploration activities with little well data involved, he said. 

“We now incorporate well data directly to provide a reservoir management tool for fields that are in development,” he said.

Reservoirs across time

Part of managing a reservoir means knowing how that reservoir is changing over time. 

Time lapse, or 4D, seismic helps with that. But for time-lapse data to be useful, interpreters need to be able to “digest that quickly, make sense of it and use it to plan other wells,” Shea said. “That’s a lot of data to digest and make sense of fast. That is a sweet spot where our big data capability can make a difference.”

And it’s important for companies to act on time-lapse data in a timely fashion.

“Think of time-lapse seismic as fresh grocery goods. The data goes stale really fast if you acquired it and it took you two months to process it. It's only reliably giving you the basis to make drilling decisions for a few months after the day it becomes available,” he said. “After that, there's another survey coming in which you've got to jump on and start to work with.

Cross-plot analysis of pre-stack seismic attributes allows separation of pressure and saturation changes
Cross-plot analysis of pre-stack seismic attributes allows separation of pressure and saturation changes. Area A shows pressure effects in the aquifer around a water injection well when injection is started and stopped. Area B shows the combined effect of water-replacing oil at the producing well, and pressure support from two injection wells. (Source: Sharp Reflections)

“So it's a space where faster analysis is really important because if you're too slow in the interpretation, then you basically wasted the investment in the seismic.”

Original seismic in a field before production begins can provide baseline data about the reservoir, while follow-up surveys shot with the same parameters as the original can identify production-related differences in the reservoir such as depletion and changed pressures. Such information can be used to determine optimal infill drilling patterns in older fields, he said.
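
As a toy illustration of that baseline-versus-monitor differencing (synthetic arrays only; real vintages must be acquired and processed with matching parameters, and every name here is illustrative rather than Sharp Reflections’ workflow):

```python
# Minimal sketch of 4D differencing: subtract a repeat (monitor) survey
# from the pre-production baseline to highlight production-related change.
import numpy as np

rng = np.random.default_rng(0)

baseline = rng.standard_normal((200, 200))   # synthetic pre-production amplitude map
monitor = baseline.copy()
monitor[80:120, 80:120] -= 0.3               # hypothetical depletion signature

difference = monitor - baseline              # the 4D difference map
changed = np.abs(difference) > 0.1           # flag cells with significant change
print(f"{changed.mean():.1%} of the map shows production-related change")  # 4.0%
```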

For example, ConocoPhillips, which is a participant in Sharp Reflections’ current project development consortium, has more than 20 sets of seismic data over the Ekofisk Field in the North Sea.

“That creates a data deluge. How do you keep on top of 20 surveys, or more than that, and look at all the differences?” he said. “They have to basically compare pair-wise each and every survey.”

That means if there are 20 surveys, there are quite a lot of combinations of pairs that could be compared and calculated, he said.
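
The combinatorics behind that point are easy to check; a quick sketch, taking the survey count from the Ekofisk example above:

```python
# With n survey vintages there are n*(n-1)/2 unique pairs to difference.
import math
from itertools import combinations

n_surveys = 20
print(math.comb(n_surveys, 2))                       # 190 unique pairs
print(len(list(combinations(range(n_surveys), 2))))  # same count, enumerated
```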

“For the interpreter, that becomes a mega battle,” Shea said.

Some of the capabilities Sharp Reflections has built into its software to handle prestack seismic are useful for keeping track of “this burgeoning collection of time lapse surveys” and making results available quickly, he added. “Previously, what was a huge sequence of pair-wise calculations is now being done in a fully automated way with a few mouse clicks. You get hundreds of maps at each reservoir level in the time that it used to take to generate two.”