Qwen2.5-1.5B Streamlit Tutorial: Adding Conversation Ratings, Feedback Buttons, and Logging
1. Project Overview
This project builds a fully locally deployed, text-only chat service on top of Alibaba's official Qwen2.5-1.5B-Instruct lightweight large language model. The Streamlit framework provides a visual chat interface, so the service can be deployed and used quickly without complex configuration.
Key advantages:
- Lightweight and efficient: the 1.5B-parameter model fits low-VRAM GPU environments
- Private and secure: all data processing happens locally
- Ready out of the box: a clean interface with no technical barrier to entry
2. Environment Setup and Basic Deployment
2.1 Install Dependencies
Make sure Python 3.8+ is installed, then run the following command to install the required dependencies:
```bash
pip install streamlit torch transformers
```
2.2 Model Preparation
Place the Qwen2.5-1.5B-Instruct model files in a local directory, for example /root/qwen1.5b, and make sure it contains the following files:
- config.json
- model.safetensors
- tokenizer-related files
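Before launching the app, a quick sanity check can confirm the directory is complete. The sketch below is a minimal helper, not part of the tutorial's code; the exact tokenizer filenames vary by model, so `tokenizer_config.json` here is an illustrative stand-in for the tokenizer files:

```python
import os

# Files the app expects in the model directory; tokenizer filenames
# differ between models, so adjust this set to match your checkpoint.
REQUIRED_FILES = {"config.json", "model.safetensors", "tokenizer_config.json"}

def missing_model_files(model_dir, required=REQUIRED_FILES):
    """Return the set of required files not found in model_dir."""
    present = set(os.listdir(model_dir)) if os.path.isdir(model_dir) else set()
    return required - present
```

Calling `missing_model_files("/root/qwen1.5b")` before `load_model()` lets you fail fast with a clear message instead of a mid-load error.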
3. Basic Chat Functionality
3.1 Initialize the Model and Interface
Create an app.py file and add the base code:
```python
import streamlit as st
from transformers import AutoModelForCausalLM, AutoTokenizer

@st.cache_resource
def load_model():
    model = AutoModelForCausalLM.from_pretrained(
        "/root/qwen1.5b",
        device_map="auto",
        torch_dtype="auto"
    )
    tokenizer = AutoTokenizer.from_pretrained("/root/qwen1.5b")
    return model, tokenizer

model, tokenizer = load_model()
```
3.2 Build the Chat Interface
Add the Streamlit interface code:
```python
st.title("Qwen2.5-1.5B Local Chat Assistant")

if "messages" not in st.session_state:
    st.session_state.messages = []

for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

if prompt := st.chat_input("Enter your question..."):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)
    with st.chat_message("assistant"):
        message_placeholder = st.empty()
        inputs = tokenizer.apply_chat_template(
            st.session_state.messages,
            add_generation_prompt=True,
            return_tensors="pt"
        ).to(model.device)
        outputs = model.generate(
            inputs,
            max_new_tokens=1024,
            do_sample=True,  # required for temperature/top_p to take effect
            temperature=0.7,
            top_p=0.9
        )
        # Decode only the newly generated tokens, skipping the prompt
        response = tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True)
        message_placeholder.markdown(response)
    st.session_state.messages.append({"role": "assistant", "content": response})
```
4. Enhanced Features
4.1 Add a Conversation Rating Feature
Add rating buttons to the chat interface:
```python
if st.session_state.messages and st.session_state.messages[-1]["role"] == "assistant":
    cols = st.columns(5)
    with cols[0]:
        if st.button("Helpful"):
            st.session_state.last_rating = "positive"
    with cols[1]:
        if st.button("Not helpful"):
            st.session_state.last_rating = "negative"

if "last_rating" in st.session_state:
    st.write(f"Your rating: {st.session_state.last_rating}")
```
4.2 Collect Feedback
Add a feedback text box:
```python
if st.session_state.messages and st.session_state.messages[-1]["role"] == "assistant":
    with st.expander("Provide detailed feedback"):
        feedback = st.text_area("Your suggestions or comments")
        if st.button("Submit feedback"):
            # Logging logic can be added here
            st.success("Thank you for your feedback!")
```
4.3 Add Logging
Create the logging system:
```python
import logging
from datetime import datetime

def setup_logging():
    logging.basicConfig(
        filename="chat_logs.log",
        level=logging.INFO,
        format="%(asctime)s - %(message)s"
    )

setup_logging()

def log_interaction(user_input, ai_response, rating=None, feedback=None):
    log_entry = {
        "timestamp": datetime.now().isoformat(),
        "user_input": user_input,
        "ai_response": ai_response,
        "rating": rating,
        "feedback": feedback
    }
    logging.info(str(log_entry))
```
Add logging calls to the chat logic:
```python
# After generating a response, add:
log_interaction(prompt, response)

# In the rating-button logic, add:
if "last_rating" in st.session_state:
    log_interaction(
        st.session_state.messages[-2]["content"],
        st.session_state.messages[-1]["content"],
        rating=st.session_state.last_rating
    )
```
5. Full Integration
The complete code with all features integrated:
```python
import streamlit as st
from transformers import AutoModelForCausalLM, AutoTokenizer
import logging
from datetime import datetime

# Initialize logging
def setup_logging():
    logging.basicConfig(
        filename="chat_logs.log",
        level=logging.INFO,
        format="%(asctime)s - %(message)s"
    )

def log_interaction(user_input, ai_response, rating=None, feedback=None):
    log_entry = {
        "timestamp": datetime.now().isoformat(),
        "user_input": user_input,
        "ai_response": ai_response,
        "rating": rating,
        "feedback": feedback
    }
    logging.info(str(log_entry))

# Load the model
@st.cache_resource
def load_model():
    model = AutoModelForCausalLM.from_pretrained(
        "/root/qwen1.5b",
        device_map="auto",
        torch_dtype="auto"
    )
    tokenizer = AutoTokenizer.from_pretrained("/root/qwen1.5b")
    return model, tokenizer

setup_logging()
model, tokenizer = load_model()

# Build the interface
st.title("Qwen2.5-1.5B Local Chat Assistant")

if "messages" not in st.session_state:
    st.session_state.messages = []

for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

if prompt := st.chat_input("Enter your question..."):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)
    with st.chat_message("assistant"):
        message_placeholder = st.empty()
        inputs = tokenizer.apply_chat_template(
            st.session_state.messages,
            add_generation_prompt=True,
            return_tensors="pt"
        ).to(model.device)
        outputs = model.generate(
            inputs,
            max_new_tokens=1024,
            do_sample=True,  # required for temperature/top_p to take effect
            temperature=0.7,
            top_p=0.9
        )
        # Decode only the newly generated tokens, skipping the prompt
        response = tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True)
        message_placeholder.markdown(response)
    st.session_state.messages.append({"role": "assistant", "content": response})
    log_interaction(prompt, response)

# Rating and feedback features
if st.session_state.messages and st.session_state.messages[-1]["role"] == "assistant":
    cols = st.columns(5)
    with cols[0]:
        if st.button("Helpful"):
            st.session_state.last_rating = "positive"
            log_interaction(
                st.session_state.messages[-2]["content"],
                st.session_state.messages[-1]["content"],
                rating="positive"
            )
    with cols[1]:
        if st.button("Not helpful"):
            st.session_state.last_rating = "negative"
            log_interaction(
                st.session_state.messages[-2]["content"],
                st.session_state.messages[-1]["content"],
                rating="negative"
            )

    if "last_rating" in st.session_state:
        st.write(f"Your rating: {st.session_state.last_rating}")

    with st.expander("Provide detailed feedback"):
        feedback_text = st.text_area("Your suggestions or comments")
        if st.button("Submit feedback"):
            log_interaction(
                st.session_state.messages[-2]["content"],
                st.session_state.messages[-1]["content"],
                feedback=feedback_text
            )
            st.success("Thank you for your feedback!")

# Clear-conversation button
st.sidebar.button("Clear conversation", on_click=lambda: st.session_state.clear())
```
6. Summary and Next Steps
This tutorial added three practical features to the Qwen2.5-1.5B Streamlit chat app:
- Conversation rating: lets users quickly rate the quality of AI replies
- Feedback collection: gathers detailed user suggestions about the conversation
- Logging: records every interaction for later analysis
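Since `log_interaction` writes each entry as `str()` of a Python dict after the `asctime` prefix, the log file can be parsed back into dicts for analysis. The sketch below is a minimal helper assuming the `%(asctime)s - %(message)s` format used above; it splits on the first ` - `, so a payload containing that substring would need a more robust parser:

```python
import ast

def parse_log_line(line):
    """Strip the 'asctime - ' prefix and literal-eval the dict payload."""
    _, _, payload = line.partition(" - ")
    return ast.literal_eval(payload.strip())
```

Iterating `parse_log_line` over the lines of chat_logs.log yields the original interaction dicts, ready for counting ratings or filtering feedback.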
Suggestions for further improvement:
- Store logs in a database instead of a flat file
- Add conversation-history export
- Implement multi-user session management
- Add topic classification tags for conversations
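As a starting point for the first suggestion, the same interaction data could be written to SQLite instead of a flat log file. This is a minimal sketch; the `interactions` table name and column layout are illustrative assumptions, not part of the tutorial's code:

```python
import sqlite3
from datetime import datetime

def init_db(path="chat_logs.db"):
    """Open (or create) the SQLite database and ensure the table exists."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS interactions (
               timestamp TEXT, user_input TEXT, ai_response TEXT,
               rating TEXT, feedback TEXT)"""
    )
    return conn

def log_interaction_db(conn, user_input, ai_response, rating=None, feedback=None):
    """Drop-in replacement for log_interaction that inserts a row instead."""
    conn.execute(
        "INSERT INTO interactions VALUES (?, ?, ?, ?, ?)",
        (datetime.now().isoformat(), user_input, ai_response, rating, feedback),
    )
    conn.commit()
```

Structured rows make later analysis (rating counts, per-day volume) a single SQL query instead of a log-parsing script.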
Get More AI Images
Want to explore more AI images and application scenarios? Visit the CSDN 星图镜像广场 (StarMap image marketplace), which offers a rich set of prebuilt images covering LLM inference, image generation, video generation, model fine-tuning, and more, with one-click deployment.