
Building Your Own LLM System with Flask

顾翔
Published 2025-10-11 11:25:07

1. Register for LangSmith

LangSmith is a platform for building production-grade LLM applications. It lets you closely monitor and evaluate your application so you can ship quickly and with confidence.

1) Register and log in at LangSmith (https://www.langchain.com/langsmith) and prepare values for the following environment variables:

  • LANGSMITH_TRACING
  • LANGSMITH_ENDPOINT
  • LANGSMITH_API_KEY
  • LANGSMITH_PROJECT
  • OPENAI_API_KEY

2) Create a file named set_env.bat (on Windows):

@echo off
REM Whether to enable trace logging: true or false
setx LANGSMITH_TRACING true
setx LANGSMITH_ENDPOINT https://api.smith.langchain.com
setx LANGSMITH_API_KEY <your_LANGSMITH_API_KEY>
setx LANGSMITH_PROJECT <your_LANGSMITH_PROJECT>
setx OPENAI_API_KEY <your_OPENAI_API_KEY>
echo Environment variables set. Please restart the terminal.

3) Run set_env.bat.

4) Restart the terminal and check that the variables are set:

echo %LANGSMITH_TRACING%
echo %LANGSMITH_ENDPOINT%
echo %LANGSMITH_API_KEY%
echo %LANGSMITH_PROJECT%
echo %OPENAI_API_KEY%

Each command should print the value you set.
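The same check can be done from Python. Keep in mind that variables written with setx are only visible to processes started after set_env.bat has run; a minimal sketch:

import os

# setx writes to the registry, so only newly started terminals/processes see the values.
for name in ("LANGSMITH_TRACING", "LANGSMITH_ENDPOINT", "LANGSMITH_API_KEY",
             "LANGSMITH_PROJECT", "OPENAI_API_KEY"):
    print(name, "=", os.environ.get(name, "<not set>"))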

5) Create the following Python file:

from langgraph.prebuilt import create_react_agent

def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

agent = create_react_agent(
    model="openai:gpt-5-mini",
    tools=[get_weather],
    prompt="You are a helpful assistant.",
)

# Run the agent
result = agent.invoke(
    {"messages": [{"role": "user", "content": "What is the weather in San Francisco?"}]}
)
print(result["messages"][-1].content)

Run it. With tracing enabled, the run should also appear in your LangSmith project.
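If you prefer not to rely on setx, the same variables can also be set from inside the script before the agent code runs, which is the approach app.py takes later with os.environ. A minimal sketch; the angle-bracket values are placeholders:

import os

# Same values as set_env.bat, but scoped to this Python process only.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGSMITH_API_KEY"] = "<your_LANGSMITH_API_KEY>"
os.environ["LANGSMITH_PROJECT"] = "<your_LANGSMITH_PROJECT>"
os.environ["OPENAI_API_KEY"] = "<your_OPENAI_API_KEY>"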

2. Register with iFLYTEK

1) Go to the iFLYTEK console at https://console.xfyun.cn/app/myapp, register and log in, then create an application.

2) Obtain the application's APPID, APISecret, and APIKey.

3) Create a local .env file:

SPARK_APP_ID=<your_APPID>
SPARK_APP_SECRET=<your_APISecret>
SPARK_APP_KEY=<your_APIKey>
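To confirm the .env file is being picked up (spark_middlerware.py below loads it the same way with python-dotenv), a minimal sketch:

import os
from dotenv import load_dotenv, find_dotenv

load_dotenv(find_dotenv())  # searches upward from the current directory for a .env file
for name in ("SPARK_APP_ID", "SPARK_APP_SECRET", "SPARK_APP_KEY"):
    print(name, "OK" if os.getenv(name) else "MISSING")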

3. Create the file iflytek.py:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
from typing import Any, List, Optional, Dict
from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM
import logging


class SparkLLM(LLM):
    """Minimal LangChain LLM wrapper around the iFLYTEK Spark API."""

    domain: str = "general"
    temperature: float = 0.1

    @property
    def _llm_type(self) -> str:
        return "Spark"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        try:
            from spark_middlerware import SparkMiddleware
            smw = SparkMiddleware(domain=self.domain, role='user', content=prompt)
            response = smw.response()
            # If the response is empty, return a default reply
            # (the "chef"/food wording is leftover from an earlier example; ignore it)
            if not response or response == "":
                return f"作为{'中餐' if '中餐' in prompt else '西餐'}厨师,我会这样制作: {prompt}"
            return response
        except Exception as e:
            logging.error(f"Spark middleware error: {e}")
            return f"作为厨师,我会精心制作: {prompt}"

    @property
    def _identifying_params(self) -> Dict[str, Any]:
        return {"domain": self.domain, "temperature": self.temperature}

Ignore the food-related parts of the code; the same applies below.
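Once spark_middlerware.py from the next step is in place, the wrapper can be tested on its own before it is wired into a chain. A minimal sketch, assuming the Spark credentials in .env are valid:

from iflytek import SparkLLM

# SparkLLM is a standard LangChain LLM, so it exposes the Runnable interface.
llm = SparkLLM(domain="generalv3", temperature=0.1)
print(llm.invoke("Introduce yourself in one sentence."))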

4. Create the file spark_middlerware.py (the spelling must match the import in iflytek.py):

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import json
import _thread
import base64
import hashlib
import hmac
from urllib.parse import urlparse, urlencode
from datetime import datetime
from time import mktime
from wsgiref.handlers import format_date_time
import ssl
from dotenv import load_dotenv, find_dotenv

# Load environment variables from .env
_ = load_dotenv(find_dotenv())

# Import websocket dynamically to handle different package layouts
try:
    from websocket import WebSocketApp
    print("✓ using: from websocket import WebSocketApp")
except ImportError:
    try:
        import websocket
        WebSocketApp = websocket.WebSocketApp
        print("✓ using: import websocket")
    except ImportError:
        print("✗ unable to import websocket")
        exit(1)


class SparkMiddleware:
    def __init__(self, domain, role, content):
        self.appid = os.getenv("SPARK_APP_ID")
        self.api_secret = os.getenv("SPARK_APP_SECRET")
        self.api_key = os.getenv("SPARK_APP_KEY")
        self.domain = domain
        self.answer = ""
        # WebSocket endpoint for each model domain
        self.domain_urls = {
            "general": "ws://spark-api.xf-yun.com/v1.1/chat",
            "generalv2": "ws://spark-api.xf-yun.com/v2.1/chat",
            "generalv3": "ws://spark-api.xf-yun.com/v3.1/chat",
        }
        self.text = [{"role": role, "content": content}]
        self._connect_websocket()

    def _create_url(self):
        """Build the authenticated WebSocket URL."""
        url = self.domain_urls[self.domain]
        host = urlparse(url).netloc
        path = urlparse(url).path
        # Generate the request date (RFC 1123 format)
        now = datetime.now()
        date = format_date_time(mktime(now.timetuple()))
        # Assemble the string to be signed
        signature_origin = f"host: {host}\ndate: {date}\nGET {path} HTTP/1.1"
        # Sign it with HMAC-SHA256
        signature_sha = hmac.new(
            self.api_secret.encode('utf-8'),
            signature_origin.encode('utf-8'),
            digestmod=hashlib.sha256
        ).digest()
        signature_sha_base64 = base64.b64encode(signature_sha).decode('utf-8')
        # Build the authorization parameter
        authorization_origin = f'api_key="{self.api_key}", algorithm="hmac-sha256", headers="host date request-line", signature="{signature_sha_base64}"'
        authorization = base64.b64encode(authorization_origin.encode('utf-8')).decode('utf-8')
        # Build the URL query parameters
        params = {
            "authorization": authorization,
            "date": date,
            "host": host
        }
        return f"{url}?{urlencode(params)}"

    def _gen_params(self):
        """Build the request payload."""
        return {
            "header": {
                "app_id": self.appid,
                "uid": "1234"
            },
            "parameter": {
                "chat": {
                    "domain": self.domain,
                    "temperature": 0.5,
                    "max_tokens": 2048,
                    "top_k": 4
                }
            },
            "payload": {
                "message": {
                    "text": self.text
                }
            }
        }

    def _on_message(self, ws, message):
        """Handle incoming WebSocket messages."""
        try:
            data = json.loads(message)
            code = data['header']['code']
            if code != 0:
                print(f"Error code: {code}, message: {data}")
                ws.close()
                return
            choices = data["payload"]["choices"]
            status = choices["status"]
            content = choices["text"][0]["content"]
            self.answer += content
            # print(content, end="", flush=True)
            if status == 2:
                ws.close()
                print("\nConversation finished")
        except Exception as e:
            print(f"Message handling error: {e}")

    def _on_error(self, ws, error):
        """Handle WebSocket errors."""
        print(f"WebSocket error: {error}")

    def _on_close(self, ws, close_status_code, close_msg):
        """Handle WebSocket close."""
        print("Connection closed")

    def _on_open(self, ws):
        """Send the request once the connection is open."""
        def run(*args):
            params = self._gen_params()
            ws.send(json.dumps(params))
        _thread.start_new_thread(run, ())

    def _connect_websocket(self):
        """Open the WebSocket connection and block until it finishes."""
        try:
            ws_url = self._create_url()
            print(f"Connection URL: {ws_url}")
            # Create the WebSocket client
            ws = WebSocketApp(
                ws_url,
                on_message=self._on_message,
                on_error=self._on_error,
                on_close=self._on_close,
                on_open=self._on_open
            )
            # Run the WebSocket loop (blocks until the connection closes)
            print("Starting WebSocket connection...")
            ws.run_forever(sslopt={"cert_reqs": ssl.CERT_NONE})
        except Exception as e:
            print(f"WebSocket connection error: {e}")
            self.answer = f"Connection error: {str(e)}"

    def response(self):
        return self.answer if self.answer else "No response received"
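The middleware itself can be smoke-tested without LangChain. A minimal sketch, assuming valid credentials in .env:

from spark_middlerware import SparkMiddleware

# The constructor opens the WebSocket, sends the prompt, and blocks
# until the full answer has been received.
smw = SparkMiddleware(domain="generalv3", role="user", content="Hello, who are you?")
print(smw.response())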

5. Create the Flask application file app.py:

import os
os.environ["LANGCHAIN_PROJECT"] = "Food"
os.environ["LANGCHAIN_TRACING"] = "true"

import warnings
warnings.filterwarnings('ignore')

from dotenv import load_dotenv, find_dotenv
load_dotenv(find_dotenv())

from iflytek import SparkLLM
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from flask import Flask, render_template, request

# Initialize the language model
llm = SparkLLM(domain="generalv3", temperature=0.1)

# Build the prompt template
template = """
Here is the question you need to answer:
{input}"""

normal_chef_chain = (
    PromptTemplate(template=template, input_variables=["input"])
    | llm
    | StrOutputParser()
)

# Routing function (leftover stub from an earlier food example; its result is not used)
def route_question(input_text):
    question = input_text.lower()
    # Route by keyword
    keywords = ["中餐"]
    return "western"

# Handle a question by running it through the chain
def process_question(input_text):
    route = route_question(input_text)
    return normal_chef_chain.invoke({"input": input_text})

# Flask application
app = Flask(__name__)

@app.route('/', methods=['GET', 'POST'])
def index():
    result_text = ""
    input_text = ""
    if request.method == 'POST':
        input_text = request.form.get('input_text', '')
        if input_text.strip():
            result_text = process_question(input_text)
        else:
            result_text = "Please enter some text"
    return render_template('index.html', result_text=result_text, input_text=input_text)

if __name__ == '__main__':
    app.run()
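Before opening a browser, the route can be exercised in-process with Flask's built-in test client. A minimal sketch (posting the form triggers a real Spark call):

from app import app

# Flask's test client posts the form without starting a server.
with app.test_client() as client:
    resp = client.post("/", data={"input_text": "Introduce yourself in one sentence."})
    print(resp.status_code)                   # expect 200
    print(resp.get_data(as_text=True)[:500])  # rendered index.html containing the answer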

6. Create a templates directory and, inside it, the file index.html:

<!-- templates/index.html -->
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>My LLM</title>
    <style>
        body {
            background-color: #f8f9fa;
            padding: 20px;
        }
        .container {
            max-width: 800px;
            background-color: white;
            border-radius: 10px;
            box-shadow: 0 0 15px rgba(0, 0, 0, 0.1);
            padding: 30px;
            margin-top: 20px;
        }
        h1 {
            color: #4a4a4a;
            margin-bottom: 30px;
            text-align: center;
        }
        .form-label {
            font-weight: 500;
        }
        .btn-primary {
            background-color: #4361ee;
            border-color: #4361ee;
        }
        .btn-primary:hover {
            background-color: #3a56d4;
            border-color: #3a56d4;
        }
        .result-box {
            min-height: 200px;
            background-color: #f8f9fa;
        }
    </style>
</head>
<body>
    <div class="container">
        <h1>My LLM</h1>
        <form method="POST">
            <div class="mb-3">
                <label for="resultText" class="form-label">LLM answer</label>
                <textarea class="form-control result-box" id="resultText" name="result_text"
                          rows="10" cols="100" readonly>{{ result_text }}</textarea>
            </div>
            <div class="mb-3">
                <label for="inputText" class="form-label">Prompt</label>
                <input type="text" class="form-control" id="inputText" name="input_text"
                       placeholder="Enter a prompt" size="100">
            </div>
            <button type="submit" class="btn btn-primary">Submit</button>
        </form>
    </div>
</body>
</html>

7. Run app.py, open a browser, and go to 127.0.0.1:5000.

Type your question into the prompt field and click Submit. After a moment, the answer appears in the LLM answer box.
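The same round trip can also be driven from a script instead of the browser, for example with the requests library (an extra dependency not used elsewhere in this article). A minimal sketch against the running server:

import requests

# POST the form field exactly as the browser form does.
resp = requests.post(
    "http://127.0.0.1:5000/",
    data={"input_text": "Introduce yourself in one sentence."},
    timeout=120,  # the Spark round trip can take a while
)
resp.raise_for_status()
print(resp.text[:500])  # rendered page containing the LLM answer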

顾翔凡言 (author's note): The future bottleneck for artificial intelligence lies in keeping its knowledge up to date. The only constant is change; when knowledge changes, whether AI software can keep pace with that change in time may hold back the use of AI.
