Last edited by ufo8272 on 2025-7-30 13:27
Loading model 'ckpts/wan2.2_text2video_14B_high_quanto_mbf16_int8.safetensors' ...
Loading model 'ckpts/wan2.2_text2video_14B_low_quanto_mbf16_int8.safetensors' ...
Traceback (most recent call last):
  File "D:\Wan2GP-V10\Wan2GP-V10\wgp.py", line 5267, in generate_video_error_handler
    generate_video(task, send_cmd, **params)
  File "D:\Wan2GP-V10\Wan2GP-V10\wgp.py", line 4126, in generate_video
    wan_model, offloadobj = load_models(model_type)
  File "D:\Wan2GP-V10\Wan2GP-V10\wgp.py", line 2762, in load_models
    wan_model, pipe = load_wan_model(model_file_list, model_type, base_model_type, model_def, quantizeTransformer = quantizeTransformer, dtype = transformer_dtype, VAE_dtype = VAE_dtype, mixed_precision_transformer = mixed_precision_transformer, save_quantized = save_quantized)
  File "D:\Wan2GP-V10\Wan2GP-V10\wgp.py", line 2594, in load_wan_model
    wan_model = model_factory(
  File "D:\Wan2GP-V10\Wan2GP-V10\wan\any2video.py", line 120, in __init__
    self.model = offload.fast_load_transformers_model(model_filename, modelClass=WanModel, do_quantize= quantizeTransformer and not save_quantized, writable_tensors= False, defaultConfigPath=base_config_file, forcedConfigPath= forcedConfigPath)
  File "D:\Wan2GP-V10\Wan2GP-V10\deepface\lib\site-packages\mmgp\offload.py", line 1303, in fast_load_transformers_model
    raise Exception("a 'config.json' that describes the model is required in the directory of the model or inside the safetensor file")
Exception: a 'config.json' that describes the model is required in the directory of the model or inside the safetensor file
I'd like to ask the site owner: does this mean a file is missing? The models and files should all be there and complete, though.
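For reference, the exception is raised by mmgp's fast_load_transformers_model, which wants a config.json either next to the checkpoint or embedded in the safetensors header. A quick way to see whether a given .safetensors file actually embeds anything is to parse its header directly. This is only a minimal sketch based on the published safetensors file format (an 8-byte little-endian length followed by a JSON header, with an optional `__metadata__` dict of string key/value pairs); the `config` metadata key used below is an assumption for illustration, not necessarily the exact key Wan2GP/mmgp look for:

```python
import json
import os
import struct
import tempfile

def read_safetensors_metadata(path):
    """Return the '__metadata__' dict from a .safetensors header, or None.

    Per the safetensors format: the first 8 bytes are a little-endian
    uint64 giving the JSON header's length; the header may contain an
    optional '__metadata__' object of string key/value pairs.
    """
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]
        header = json.loads(f.read(header_len))
    return header.get("__metadata__")

# Build a tiny demo file whose header embeds a config, to show what an
# "embedded config" looks like (the 'config' key name is hypothetical).
demo_header = json.dumps(
    {"__metadata__": {"config": json.dumps({"model_type": "wan"})}}
).encode()
demo_path = os.path.join(tempfile.gettempdir(), "demo.safetensors")
with open(demo_path, "wb") as f:
    f.write(struct.pack("<Q", len(demo_header)))
    f.write(demo_header)

meta = read_safetensors_metadata(demo_path)
print("config" in (meta or {}))  # prints True
```

Running the same check against your ckpts/wan2.2_*.safetensors files would tell you whether they carry any metadata at all; if not, the loader needs a config.json sitting in the same directory as the checkpoint.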