
[Video Generation Model] THUDM/CogVideoX-2b

  • CogVideoX-2b Model Introduction
    • Release Date
    • Demo Videos Generated by the Model
    • Generation Constraints
  • Environment Setup
  • Running the Model
  • Download
  • License
  • References

CogVideoX-2b Model Introduction

CogVideoX is the open-source video generation model from the same family as 清影 (Qingying).

Basic information:

[Image: basic model information table]

Release Date

August 2024

Demo Videos Generated by the Model

https://github.com/THUDM/CogVideo/raw/main/resources/videos/1.mp4

https://github.com/THUDM/CogVideo/raw/main/resources/videos/2.mp4

Generation Constraints

  • Prompt language: English*
  • Prompt length limit: 226 tokens (a token-count sketch follows this list)
  • Video length: 6 seconds
  • Frame rate: 8 frames per second
  • Video resolution: 720 × 480; other resolutions are not supported (including for fine-tuning)
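A quick way to check a prompt against the 226-token limit is to count tokens with the T5 tokenizer used by the pipeline. A minimal sketch, assuming the tokenizer can be loaded from the model repo's "tokenizer" subfolder:

from transformers import AutoTokenizer

# Load the T5 tokenizer shipped with the model (assumption: it lives in the "tokenizer" subfolder).
tokenizer = AutoTokenizer.from_pretrained("THUDM/CogVideoX-2b", subfolder="tokenizer")

prompt = "A panda plays an acoustic guitar in a serene bamboo forest."
num_tokens = len(tokenizer(prompt).input_ids)
print(f"{num_tokens} tokens (limit: 226)")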

Environment Setup

# diffusers>=0.30.1
# transformers>=4.44.0
# accelerate>=0.33.0 (suggested: install from source)
# imageio-ffmpeg>=0.5.1
pip install --upgrade transformers accelerate diffusers imageio-ffmpeg
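If you want to confirm that the installed packages satisfy these minimums, a small Python check using the pip distribution names from the command above:

from importlib.metadata import version

# Print the installed versions of the packages required above.
for pkg, minimum in [("diffusers", "0.30.1"), ("transformers", "4.44.0"),
                     ("accelerate", "0.33.0"), ("imageio-ffmpeg", "0.5.1")]:
    print(f"{pkg}: installed {version(pkg)}, required >= {minimum}")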

Running the Model

import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

# The prompt must be in English (see the constraints above).
prompt = "A panda, dressed in a small, red jacket and a tiny hat, sits on a wooden stool in a serene bamboo forest. The panda's fluffy paws strum a miniature acoustic guitar, producing soft, melodic tunes. Nearby, a few other pandas gather, watching curiously and some clapping in rhythm. Sunlight filters through the tall bamboo, casting a gentle glow on the scene. The panda's face is expressive, showing concentration and joy as it plays. The background includes a small, flowing stream and vibrant green foliage, enhancing the peaceful and magical atmosphere of this unique musical performance."

pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-2b",
    torch_dtype=torch.float16,
)

# Memory-saving options; sequential CPU offload is the slower, lower-VRAM choice.
pipe.enable_model_cpu_offload()
pipe.enable_sequential_cpu_offload()
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()

video = pipe(
    prompt=prompt,
    num_videos_per_prompt=1,
    num_inference_steps=50,
    num_frames=49,
    guidance_scale=6,
    generator=torch.Generator(device="cuda").manual_seed(42),
).frames[0]

export_to_video(video, "output.mp4", fps=8)
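A short sanity check after the script above, assuming the default output type where `video` is a list of PIL images, confirms the result matches the constraints listed earlier:

# Assumes the default output type: `video` is a list of PIL images.
print(len(video), "frames")                   # expected: 49
print(video[0].size)                          # expected: (720, 480)
print(len(video) / 8.0, "seconds at 8 fps")   # expected: ~6.1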
  • Quantized Inference

PytorchAO and Optimum-quanto can be used to quantize the text encoder, transformer, and VAE modules, reducing CogVideoX's memory requirements. This makes it possible to run the model on a free T4 Colab instance or on GPUs with less VRAM. Notably, TorchAO quantization is fully compatible with torch.compile, which can significantly speed up inference.

# To get started, PytorchAO needs to be installed from the GitHub source and PyTorch Nightly.
# Source and nightly installation is only required until the next release.

import torch
from diffusers import AutoencoderKLCogVideoX, CogVideoXTransformer3DModel, CogVideoXPipeline
from diffusers.utils import export_to_video
from transformers import T5EncoderModel
from torchao.quantization import quantize_, int8_weight_only, int8_dynamic_activation_int8_weight

quantization = int8_weight_only

# Quantize the text encoder, transformer, and VAE individually before building the pipeline.
text_encoder = T5EncoderModel.from_pretrained("THUDM/CogVideoX-2b", subfolder="text_encoder", torch_dtype=torch.bfloat16)
quantize_(text_encoder, quantization())

transformer = CogVideoXTransformer3DModel.from_pretrained("THUDM/CogVideoX-2b", subfolder="transformer", torch_dtype=torch.bfloat16)
quantize_(transformer, quantization())

vae = AutoencoderKLCogVideoX.from_pretrained("THUDM/CogVideoX-2b", subfolder="vae", torch_dtype=torch.bfloat16)
quantize_(vae, quantization())

# Create pipeline and run inference
pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-2b",
    text_encoder=text_encoder,
    transformer=transformer,
    vae=vae,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()
pipe.vae.enable_tiling()

# The prompt must be in English.
prompt = "A panda, dressed in a small, red jacket and a tiny hat, sits on a wooden stool in a serene bamboo forest. The panda's fluffy paws strum a miniature acoustic guitar, producing soft, melodic tunes. Nearby, a few other pandas gather, watching curiously and some clapping in rhythm. Sunlight filters through the tall bamboo, casting a gentle glow on the scene. The panda's face is expressive, showing concentration and joy as it plays. The background includes a small, flowing stream and vibrant green foliage, enhancing the peaceful and magical atmosphere of this unique musical performance."

video = pipe(
    prompt=prompt,
    num_videos_per_prompt=1,
    num_inference_steps=50,
    num_frames=49,
    guidance_scale=6,
    generator=torch.Generator(device="cuda").manual_seed(42),
).frames[0]

export_to_video(video, "output.mp4", fps=8)
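Since TorchAO quantization is compatible with torch.compile (as noted above), the quantized transformer could additionally be compiled before the pipeline call for faster repeated inference. A minimal sketch; the compile step is illustrative and not taken from the model card:

# Optional: compile the (quantized) transformer before calling the pipeline.
# The first pipeline call will be slow while compilation runs; later calls should be faster.
pipe.transformer = torch.compile(pipe.transformer)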

Download

model_id: THUDM/CogVideoX-2b
Download URL: https://hf-mirror.com/THUDM/CogVideoX-2b (a Hugging Face mirror reachable without a proxy or VPN)
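To pre-download the weights through the hf-mirror endpoint instead of fetching them on first run, one option is huggingface_hub with the HF_ENDPOINT environment variable. A sketch, assuming huggingface_hub is installed; the endpoint must be set before the import:

import os
# Point huggingface_hub at the mirror; must be set before importing the library.
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"

from huggingface_hub import snapshot_download

# Download the full CogVideoX-2b repository to the local cache and print its path.
local_dir = snapshot_download("THUDM/CogVideoX-2b")
print("Downloaded to:", local_dir)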

License

License: apache-2.0

References

  • https://hf-mirror.com/THUDM/CogVideoX-2b
  • https://github.com/THUDM/CogVideo