Reservoir characterization

Pretrained Foundation Model Enables Rapid Seismic Interpretation

This paper presents a self-supervised approach for training a seismic foundation model and demonstrates scenarios in which it is used for seismic data conditioning, interpretation, and inversion through six real-use cases.

Source: SPE 227546.

While robust seismic interpretation and modeling are essential to successful subsurface mapping and characterization, they remain challenging because of their strong dependency on data quality, domain knowledge, and expert supervision. Pretraining a large seismic foundation model (FM) can provide rich and reliable representations of the diverse seismic patterns observed on post-stack seismic images, and those representations can then be adopted in multiple downstream workflows. This paper presents a self-supervised approach for training such a seismic FM and demonstrates its applications.

Seismic FM

Model Architecture. The proposed seismic FM consists of an encoder, a decoder, three refiners, and five taskers, which together enable multitask training from a shared embedding space. Specifically, the encoder, a vision-transformer-large architecture initialized from a Dino-v2 checkpoint pretrained on natural images, generates a rich embedding from the input seismic data.
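The multitask layout described above, in which one shared encoder feeds several task-specific heads from a common embedding space, can be sketched minimally as follows. This is an illustrative toy (class names, layer sizes, and the number of taskers here stand in for the paper's actual ViT-L encoder, refiners, and task heads, which are not specified in detail in this synopsis):

```python
import numpy as np

class SeismicFMSketch:
    """Toy multitask model: one shared encoder, several task heads (taskers)."""

    def __init__(self, patch_len=64, embed_dim=16, num_taskers=5, seed=0):
        rng = np.random.default_rng(seed)
        # Stand-in for the pretrained ViT-L encoder: a single linear projection.
        self.encoder_w = rng.standard_normal((patch_len, embed_dim))
        # One lightweight head per downstream task
        # (e.g., data conditioning, interpretation, inversion).
        self.tasker_ws = [rng.standard_normal((embed_dim, 1))
                          for _ in range(num_taskers)]

    def encode(self, patch):
        # Flattened seismic patch -> shared embedding space.
        return patch @ self.encoder_w

    def forward(self, patch):
        z = self.encode(patch)                   # computed once, shared
        return [z @ w for w in self.tasker_ws]   # one output per tasker

fm = SeismicFMSketch()
outputs = fm.forward(np.zeros(64))  # five task outputs from one embedding
```

The design point the sketch illustrates is that the expensive encoder pass happens once, and each tasker consumes the same embedding, which is what makes pretraining the encoder reusable across downstream workflows.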
