# Python 3.8 or newer
# PyTorch 1.12 or newer; 2.0 or newer is recommended
# CUDA 11.4 or newer is recommended (for GPU users, flash-attention users, etc.)
# based on modelscope docker image
# registry.cn-hangzhou.aliyuncs.com/modelscope-repo/modelscope:ubuntu20.04-cuda11.7.1-py38-torch2.0.1-tf1.15.5-1.8.0
# registry.cn-beijing.aliyuncs.com/modelscope-repo/modelscope:ubuntu20.04-cuda11.7.1-py38-torch2.0.1-tf1.15.5-1.8.0
FROM registry.cn-hangzhou.aliyuncs.com/modelscope-repo/modelscope:ubuntu20.04-cuda11.7.1-py38-torch2.0.1-tf1.15.5-1.8.0
ARG workdir=/var/app
RUN mkdir -p ${workdir}
RUN git lfs install
WORKDIR ${workdir}
COPY requirements.txt requirements_web_demo.txt ./
# Install Qwen dependencies
RUN pip install -r requirements.txt
# Install webUI dependencies
RUN pip install -r requirements_web_demo.txt
# Offline mode, check https://huggingface.co/docs/transformers/v4.15.0/installation#offline-mode
ENV HF_DATASETS_OFFLINE=1
ENV TRANSFORMERS_OFFLINE=1
# Set timezone and create a writable logs dir
ENV TZ=Asia/Shanghai
RUN mkdir -p ${workdir}/logs && chmod 777 ${workdir}/logs
VOLUME /var/app/logs
# Create non-root user (UID 20001, group 0)
RUN useradd -r -m -u 20001 -g 0 appuser
# Fetch model weights at build time (or use the commented COPY below for a local checkout)
RUN git clone https://huggingface.co/Qwen/Qwen-VL-Chat-Int4
# COPY --chown=20001:20001 Qwen-VL-Chat-Int4 ./Qwen-VL-Chat-Int4
# Install AutoGPTQ
RUN pip install optimum
# RUN git clone https://github.com/JustinLin610/AutoGPTQ.git && \
# cd AutoGPTQ && \
# pip install -v .
RUN pip install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu117/
# Install OpenAI API dependencies
COPY requirements_openai_api.txt ./
RUN pip install -r requirements_openai_api.txt
# Copy fonts
ADD --chown=20001:20001 https://github.com/StellarCN/scp_zh/raw/master/fonts/SimSun.ttf ./
# COPY --chown=20001:20001 SimSun.ttf ./
# Copy main app
COPY --chown=20001:20001 openai_api.py ./
EXPOSE 8080
# CMD ["python3", "openai_api.py", "-c", "./Qwen-VL-Chat", "--server-name", "0.0.0.0", "--server-port", "8080"]
CMD ["python3", "openai_api.py", "-c", "./Qwen-VL-Chat-Int4", "--server-name", "0.0.0.0", "--server-port", "8080"]
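Assuming this file is saved as `Dockerfile` at the repository root, the image could be built and run along these lines (the image tag and host paths are illustrative, not part of the project):

```shell
# Build the image (tag name is an example)
docker build -t qwen-vl-chat-int4-api .

# Run the OpenAI-compatible API server, mapping the exposed port and the
# logs volume declared above; --gpus all assumes the NVIDIA Container
# Toolkit is installed on the host
docker run --gpus all \
  -p 8080:8080 \
  -v "$(pwd)/logs:/var/app/logs" \
  qwen-vl-chat-int4-api
```

Because the model weights are cloned into the image during the build, the container can serve requests with `TRANSFORMERS_OFFLINE=1` and no network access to Hugging Face at runtime.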