[Detector Support]: No TensorRT (AMD Machine), and no Onnx Support in Frigate 16.* #21861
Replies: 6 comments 15 replies
The error indicates your detector configuration is invalid. Based on your setup with an NVIDIA GPU, you need to use the correct Frigate image and detector configuration(1). For NVIDIA GPUs with ONNX models, use the image `ghcr.io/blakeblackshear/frigate:stable-tensorrt`.

Your detector configuration has issues. The ONNX detector will automatically use TensorRT when the correct image is used(1). Update your detector config:

```yaml
detectors:
  onnx:
    type: onnx

model:
  model_type: yolo-generic
  width: 640
  height: 640
  input_tensor: nchw
  input_dtype: float
  path: /models/yolov9c.onnx
  labelmap_path: /labelmap/coco-80.txt
```

Key points: your hardware acceleration for video decoding looks correct.

Have another question? Just tag @inkeep.
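The image selection the bot describes can be sketched as a docker-compose fragment. Only the image tag comes from the thread; the service name, volume paths, ports, and GPU device reservation below are assumptions using standard Compose syntax, not details from this discussion:

```yaml
# Hypothetical docker-compose sketch for the TensorRT-enabled image mentioned
# above. Paths, ports, and GPU passthrough are assumptions; adjust for your host.
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
    restart: unless-stopped
    shm_size: "256mb"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    volumes:
      - ./config:/config          # Frigate config directory (assumed layout)
      - ./storage:/media/frigate  # recordings/clips storage (assumed layout)
    ports:
      - "8971:8971"
```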
Onnx crashes:
Please advise the 'bot that I can't run TensorRT because this is an AMD machine and Frigate has removed that functionality.
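For the AMD case raised here, a minimal sketch of what an ONNX detector config could look like, assuming a ROCm-enabled image variant (e.g. a `-rocm` tag alongside `stable-tensorrt`) is published for this release; that tag and its ONNX support are assumptions to verify against the docs for your version, and the model settings simply mirror the config quoted earlier in the thread:

```yaml
# Hypothetical sketch for AMD GPUs, assuming a ROCm-enabled image variant
# exists for this release. The model block mirrors the bot's suggested config.
detectors:
  onnx:
    type: onnx

model:
  model_type: yolo-generic
  width: 640
  height: 640
  input_tensor: nchw
  input_dtype: float
  path: /models/yolov9c.onnx
  labelmap_path: /labelmap/coco-80.txt
```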
Describe the problem you are having
It's in the title. Looks like I'm out of luck with Frigate after a week of work. I was assembling a HowTo for this fantastically complex app, but it's a dead end.
What a fussy mess Docker is. But I can't use Podman, because Debian 13.3 is too new for the NVIDIA drivers.
Version
16.3