Data mining/analysis

Zero-Shot Learning With Large Language Models Enhances Drilling-Information Retrieval

This paper tests several commercial large language models on information-retrieval tasks for drilling data using zero-shot, in-context learning.

Fig. 1—A simplified chart of a RAG-based LLM question/answer process with context application. The green boxes show document retrieval, and the blue boxes show LLM completion. This paper focuses on evaluating the different LLMs in the blue portion while controlling the retrieved context.
Source: SPE 217671.

Finding information across multiple databases, formats, and documents remains a manual job in the drilling industry. Large language models (LLMs) have proven effective in data-aggregation tasks, including answering questions. However, using LLMs for domain-specific factual responses poses a nontrivial challenge. The expert-labor cost of training domain-specific LLMs prevents niche industries from developing custom question-answering bots.
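The RAG flow in Fig. 1 can be sketched in a few lines. The snippet below is a minimal, illustrative mock-up, not the paper's implementation: the "green" retrieval step is stood in for by a toy word-overlap ranker, and the "blue" LLM-completion step is represented only by the zero-shot, in-context prompt that would be handed to a commercial LLM (no API call is made, and all document text is invented for illustration).

```python
def retrieve(question, documents, top_k=1):
    """Green boxes in Fig. 1: rank documents by word overlap with the question.
    A real system would use embedding-based similarity search instead."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_prompt(question, context_chunks):
    """Blue boxes in Fig. 1: assemble the zero-shot, in-context prompt
    that would be sent to an LLM for completion."""
    context = "\n".join(context_chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )


# Hypothetical drilling documents, purely for illustration.
docs = [
    "Well A-12 reached a total depth of 3,450 m in 2021.",
    "The rig crew rotation schedule is 14 days on, 14 days off.",
]

question = "What total depth did well A-12 reach?"
prompt = build_prompt(question, retrieve(question, docs))
print(prompt)
```

Holding the retrieval step fixed, as the paper does, isolates the LLM-completion step: every model under test receives the same context, so differences in answer quality can be attributed to the model rather than to retrieval.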
