This repository declares no open-source license (LICENSE) file; check the project description and its upstream code dependencies before use.
docAsk_huaweidevelopers_失败.py 3.11 KB
ovjust committed on 2023-06-27 18:23 · initial test passed
'''
Author: kun 56216004@qq.com
Date: 2023-06-26 11:56:05
LastEditors: kun 56216004@qq.com
LastEditTime: 2023-06-27 17:13:42
FilePath: \langchain\docAsk.py
Description: default koroFileHeader template; set `customMade` in the koroFileHeader configuration, see: https://github.com/OBKoro1/koro1FileHeader/wiki/%E9%85%8D%E7%BD%AE
'''
#
# (langchain39)
# pip install langchain
# Collecting langchain
# Downloading langchain-0.0.215-py3-none-any.whl (1.1 MB)
# pip install openai
# Collecting openai
# Downloading openai-0.27.8-py3-none-any.whl
# pip install jieba
# Collecting jieba
# Downloading jieba-0.42.1.tar.gz (19.2 MB)
# pip install unstructured
import os
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import TokenTextSplitter
from langchain.llms import OpenAI
from langchain.chains import ChatVectorDBChain
from langchain.document_loaders import DirectoryLoader
import jieba as jb
import openai
from pathlib import Path
my_file = Path("./data/cut/")
if not my_file.is_dir():
    os.makedirs(my_file)
openai.api_base = "https://api.chatanywhere.com.cn/v1"
api_key = "sk-fMVlblKn6OebFV4m5X75KGmyIKZ97WPccCPzgWXDpgWlk482"
openai.api_key = api_key
files=['研发简要流程.txt','产品经理.txt']
import time
start_time = time.time()
for file in files:
    # Read each Chinese document from the data folder
    my_file = f"./data/{file}"
    with open(my_file, "r", encoding='utf-8') as f:
        data = f.read()
    # Word-segment the Chinese text with jieba
    cut_data = " ".join(jb.cut(data))
    # Save the segmented text to the data/cut subfolder
    cut_file = f"./data/cut/cut_{file}"
    with open(cut_file, 'w', encoding='utf-8') as f:
        # utf-8 avoids: 'gbk' codec can't encode character '\uf06c' in position 1814: illegal multibyte sequence
        f.write(cut_data)
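The whitespace-joining above exists because LangChain's default splitters and most embedding tokenizers assume space-delimited words, which Chinese text lacks. As a rough illustration of what a dictionary-based segmenter like jieba does internally, here is a minimal forward-maximum-matching sketch (the `fmm_segment` helper and its `vocab` set are hypothetical, not part of jieba's API):

```python
def fmm_segment(text, vocab, max_len=4):
    """Greedy forward maximum matching: at each position, take the longest
    dictionary word; fall back to a single character."""
    words = []
    i = 0
    while i < len(text):
        for length in range(min(max_len, len(text) - i), 0, -1):
            candidate = text[i:i + length]
            if length == 1 or candidate in vocab:
                words.append(candidate)
                i += length
                break
    return words

# Joining with spaces mirrors the cut_data step above
print(" ".join(fmm_segment("产品经理的职责", {"产品经理", "职责"})))  # → 产品经理 的 职责
```

Real segmenters add statistics (word frequencies, HMMs for unknown words), but the space-joined output consumed downstream has this same shape.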
# Load the segmented documents
loader = DirectoryLoader('./data/cut', glob='**/*.txt')
docs = loader.load()  # needs `pip install unstructured`
# Resource punkt not found.
# Please use the NLTK Downloader to obtain the resource:
# Split the documents into chunks
text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=0)
doc_texts = text_splitter.split_documents(docs)
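`TokenTextSplitter` counts model tokens (via tiktoken) rather than characters, but the chunking arithmetic itself is simple. A minimal sketch in plain Python (the `split_tokens` helper is an illustrative stand-in, not LangChain's implementation):

```python
def split_tokens(tokens, chunk_size, chunk_overlap):
    """Slice a token list into chunks of chunk_size, with chunk_overlap
    tokens shared between consecutive chunks."""
    assert 0 <= chunk_overlap < chunk_size
    step = chunk_size - chunk_overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + chunk_size])
        if start + chunk_size >= len(tokens):
            break  # final chunk reached; stop before trailing slivers
    return chunks

print(split_tokens(list(range(10)), chunk_size=4, chunk_overlap=0))
# → [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

With `chunk_overlap=0`, as in the script above, chunks are disjoint; a nonzero overlap repeats the tail of each chunk at the head of the next so that sentences straddling a boundary stay retrievable.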
# Use OpenAI embeddings (reuse the API key configured above)
os.environ["OPENAI_API_KEY"] = api_key
embeddings = OpenAIEmbeddings(openai_api_key=api_key)
# Embed the chunks and store them in a Chroma vector store
vectordb = Chroma.from_documents(doc_texts, embeddings, persist_directory="./data/cut")  # needs `pip install chromadb`; the install failed on this machine
vectordb.persist()
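Under the hood, similarity search in a vector store boils down to ranking stored embeddings against the query embedding, typically by cosine similarity. A self-contained sketch (pure Python for illustration, not chromadb's actual index):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def top_k(query_vec, doc_vecs, k=1):
    """Return indices of the k stored vectors most similar to the query."""
    order = sorted(range(len(doc_vecs)),
                   key=lambda i: cosine(query_vec, doc_vecs[i]),
                   reverse=True)
    return order[:k]

print(top_k([1.0, 0.0], [[0.0, 1.0], [1.0, 0.1], [1.0, 0.0]], k=2))  # → [2, 1]
```

Production stores replace the exhaustive scan with an approximate nearest-neighbor index, but the ranking criterion is the same.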
# Create the chatbot chain object
chain = ChatVectorDBChain.from_llm(OpenAI(temperature=0, model_name="gpt-3.5-turbo"), vectordb, return_source_documents=True)

def get_answer(question):
    chat_history = []
    result = chain({"question": question, "chat_history": chat_history})
    return result["answer"]
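Note that `get_answer` passes a fresh, empty `chat_history` on every call, so follow-up questions lose all context. A small wrapper that accumulates `(question, answer)` pairs in the shape the chain's `chat_history` input expects could look like this (the `Conversation` class is a hypothetical helper, not part of LangChain):

```python
class Conversation:
    """Accumulates (question, answer) pairs and feeds them back to the
    chain, so later questions can refer to earlier answers."""

    def __init__(self, chain):
        self.chain = chain
        self.chat_history = []

    def ask(self, question):
        result = self.chain({"question": question,
                             "chat_history": self.chat_history})
        self.chat_history.append((question, result["answer"]))
        return result["answer"]
```

Usage would mirror the one-shot call above: `conv = Conversation(chain)` then `conv.ask(...)` repeatedly, with context carried across turns.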
question = "产品经理职位的核心职责是什么?"
print(get_answer(question))
end_time = time.time()  # program end time
run_time = end_time - start_time  # elapsed wall-clock time, in seconds
print(run_time)