cyizb425
Posted on 2025-4-8 11:32:18
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:\AI\software\LatentSync\LatentSync-V6\deepface\lib\site-packages\gradio\queueing.py", line 625, in process_events
response = await route_utils.call_process_api(
File "E:\AI\software\LatentSync\LatentSync-V6\deepface\lib\site-packages\gradio\route_utils.py", line 322, in call_process_api
output = await app.get_blocks().process_api(
File "E:\AI\software\LatentSync\LatentSync-V6\deepface\lib\site-packages\gradio\blocks.py", line 2042, in process_api
result = await self.call_function(
File "E:\AI\software\LatentSync\LatentSync-V6\deepface\lib\site-packages\gradio\blocks.py", line 1589, in call_function
prediction = await anyio.to_thread.run_sync(# type: ignore
File "E:\AI\software\LatentSync\LatentSync-V6\deepface\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "E:\AI\software\LatentSync\LatentSync-V6\deepface\lib\site-packages\anyio\_backends\_asyncio.py", line 2461, in run_sync_in_worker_thread
return await future
File "E:\AI\software\LatentSync\LatentSync-V6\deepface\lib\site-packages\anyio\_backends\_asyncio.py", line 962, in run
result = context.run(func, *args)
File "E:\AI\software\LatentSync\LatentSync-V6\deepface\lib\site-packages\gradio\utils.py", line 883, in wrapper
response = f(*args, **kwargs)
File "<frozen app>", line 96, in process_video
gradio.exceptions.Error: 'Error during processing: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 11.00 GiB of which 7.00 MiB is free. Of the allocated memory 7.68 GiB is allocated by PyTorch, and 2.14 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)'
I got this error during processing, please help!
无言以对
Posted on 2025-4-8 11:34:02
cyizb425 posted on 2025-4-8 11:32
During handling of the above exception, another exception occurred:
Traceback (most recent call las ...
The error means you have run out of VRAM.
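(Not from the thread: a minimal sketch of the workaround the error message itself points to, assuming you launch the Gradio app from your own Python entry script. The allocator setting only reduces fragmentation; it cannot make a model fit that simply needs more than 11 GiB.)

import os
# Must be set before PyTorch initializes CUDA, otherwise it has no effect.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch

def print_vram():
    # Hypothetical helper: show how much of the card is still free.
    free, total = torch.cuda.mem_get_info()
    print(f"free {free / 2**20:.0f} MiB of {total / 2**20:.0f} MiB")

# Between runs, drop cached blocks so the next inference can reuse the memory.
torch.cuda.empty_cache()
print_vram()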
gts250ll
Posted on 2025-4-9 10:31:54
A new version is out again, you have my continued support!
garyzhang
Posted on 2025-4-27 20:10:34
How can I fix this?
无言以对
Posted on 2025-4-27 21:24:21
garyzhang posted on 2025-4-27 20:10
How can I fix this?
Is this a server you are running on?
Do you have another GPU? Some Tesla-series cards don't support newer CUDA versions. I just checked your earlier replies: your CUDA is 11.6, while this package is built on CUDA 12.1, so your CUDA version is too low. If a one-click package's CUDA is newer than your local version, it won't work either.
It's best to use a consumer GPU rather than this kind of workstation card.
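(Not from the thread: a quick generic PyTorch check for the mismatch described above, i.e. which CUDA version the installed wheel was built against versus what the driver and card actually offer.)

import torch

print("torch:", torch.__version__)
print("built against CUDA:", torch.version.cuda)  # the one-click package reportedly targets 12.1
print("CUDA usable:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
# The highest CUDA version the installed driver supports is shown in the top
# right corner of `nvidia-smi`; it must be >= the "built against" version above.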
garyzhang
Posted on 2025-4-27 22:34:44
Is there a one-click package based on CUDA 11.4?
无言以对
Posted on 2025-4-27 22:39:54
garyzhang posted on 2025-4-27 22:34
Is there a one-click package based on CUDA 11.4?
https://www.nvidia.cn/drivers/lookup/
First search there for the driver that matches your GPU and operating system and install it; a driver supporting no lower than CUDA 12.6 is recommended.
https://developer.nvidia.com/cuda-12-4-0-download-archive
Then choose the CUDA toolkit that matches your system.
If anything is unclear, you can refer to this thread: https://deepface.cc/thread-34-1-1.html
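(Not from the thread: after updating the driver, a short smoke test in plain PyTorch to confirm the card is visible and a CUDA allocation actually succeeds before re-running the one-click package.)

import torch

assert torch.cuda.is_available(), "driver missing or too old for this CUDA build"
x = torch.randn(1024, 1024, device="cuda")  # tiny allocation to exercise the runtime
print(torch.version.cuda, torch.cuda.get_device_name(0), float(x.sum()))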
garyzhang
Posted on 2025-4-27 22:57:17
It says the highest CUDA version I can install is 11.4 😂
无言以对
Posted on 2025-4-27 22:58:05
garyzhang posted on 2025-4-27 22:57
It says the highest CUDA version I can install is 11.4 😂
Is this a rented server?
dou1231993
Posted on 2025-5-1 08:57:47
Give it a try, mine is a 2080.