Object Detection#
fastdeploy.vision.detection.PaddleDetPreprocessor#
- class fastdeploy.vision.detection.PaddleDetPreprocessor(config_file)[source]#
Create a preprocessor for PaddleDetection models from the deployment configuration file
- Parameters
config_file – (str)Path of configuration file, e.g. ppyoloe/infer_cfg.yml
- run(input_ims)#
Process input image
- Param
input_ims: (list of numpy.ndarray) The input images
- Returns
list of FDTensor
- use_cuda(enable_cv_cuda=False, gpu_id=-1)#
Use CUDA processors
- Param
enable_cv_cuda: True to use CV-CUDA, False to use CUDA only
- Param
gpu_id: GPU device id
fastdeploy.vision.detection.PaddleDetPostprocessor#
- class fastdeploy.vision.detection.PaddleDetPostprocessor[source]#
Create a postprocessor for PaddleDetection models
- run(runtime_results)[source]#
Postprocess the runtime results for PaddleDetection Model
- Param
runtime_results: (list of FDTensor)The output FDTensor results from runtime
- Returns
list of DetectionResult (if the runtime_results were predicted from batched samples, the length of this list equals the batch size)
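Example: a minimal sketch of driving the standalone preprocessor/postprocessor pair by hand. The config path and image file are placeholders, and the runtime call in the middle is elided; in typical use these objects are obtained from a loaded model via its preprocessor / postprocessor properties.

```python
import cv2
import fastdeploy as fd

# Build the pair from the deployment config exported by PaddleDetection.
preprocessor = fd.vision.detection.PaddleDetPreprocessor("ppyoloe/infer_cfg.yml")
postprocessor = fd.vision.detection.PaddleDetPostprocessor()

# Optionally run preprocessing on GPU 0 (CV-CUDA disabled in this sketch).
# preprocessor.use_cuda(enable_cv_cuda=False, gpu_id=0)

images = [cv2.imread("test.jpg")]          # BGR, HWC numpy arrays
input_tensors = preprocessor.run(images)   # -> list of FDTensor

# ... feed input_tensors to a fastdeploy.Runtime and collect its output FDTensors ...
# results = postprocessor.run(output_tensors)
# len(results) equals the batch size; each element is a DetectionResult
```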
fastdeploy.vision.detection.PPYOLOE#
- class fastdeploy.vision.detection.PPYOLOE(model_file, params_file, config_file, runtime_option=None, model_format=<ModelFormat.PADDLE: 1>)[source]#
Load a PPYOLOE model exported by PaddleDetection.
- Parameters
model_file – (str)Path of model file, e.g. ppyoloe/model.pdmodel
params_file – (str)Path of parameters file, e.g. ppyoloe/model.pdiparams; if the model_format is ModelFormat.ONNX, this parameter will be ignored and can be set to an empty string
config_file – (str)Path of configuration file for deployment, e.g. ppyoloe/infer_cfg.yml
runtime_option – (fastdeploy.RuntimeOption)RuntimeOption for the inference of this model; if it's None, the default backend on CPU will be used
model_format – (fastdeploy.ModelFormat)Model format of the loaded model
- batch_predict(images)[source]#
Detect a batch of input images
- Parameters
images – (list of numpy.ndarray) The input image list, each element is a 3-D array with layout HWC, BGR format
- Returns
list of DetectionResult
- get_profile_time()#
Get profile time of Runtime after the profile process is done.
- property postprocessor#
Get PaddleDetPostprocessor object of the loaded model
- Returns
PaddleDetPostprocessor
- predict(im)[source]#
Detect an input image
- Parameters
im – (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
- Returns
DetectionResult
- property preprocessor#
Get PaddleDetPreprocessor object of the loaded model
- Returns
PaddleDetPreprocessor
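Example: an illustrative end-to-end sketch for PPYOLOE. The model/config paths and image file are placeholders, and the GPU option assumes a GPU-enabled FastDeploy build (omit runtime_option to stay on the default CPU backend).

```python
import cv2
import fastdeploy as fd

# Optional: pick a device/backend; without this the default CPU backend is used.
option = fd.RuntimeOption()
option.use_gpu(0)

model = fd.vision.detection.PPYOLOE(
    "ppyoloe/model.pdmodel",
    "ppyoloe/model.pdiparams",
    "ppyoloe/infer_cfg.yml",
    runtime_option=option)

im = cv2.imread("test.jpg")   # BGR, HWC
result = model.predict(im)    # -> DetectionResult
print(result)
```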
fastdeploy.vision.detection.PPYOLO#
- class fastdeploy.vision.detection.PPYOLO(model_file, params_file, config_file, runtime_option=None, model_format=<ModelFormat.PADDLE: 1>)[source]#
Load a PPYOLO model exported by PaddleDetection.
- Parameters
model_file – (str)Path of model file, e.g. ppyolo/model.pdmodel
params_file – (str)Path of parameters file, e.g. ppyolo/model.pdiparams; if the model_format is ModelFormat.ONNX, this parameter will be ignored and can be set to an empty string
config_file – (str)Path of configuration file for deployment, e.g. ppyolo/infer_cfg.yml
runtime_option – (fastdeploy.RuntimeOption)RuntimeOption for the inference of this model; if it's None, the default backend on CPU will be used
model_format – (fastdeploy.ModelFormat)Model format of the loaded model
- batch_predict(images)#
Detect a batch of input images
- Parameters
images – (list of numpy.ndarray) The input image list, each element is a 3-D array with layout HWC, BGR format
- Returns
list of DetectionResult
- get_profile_time()#
Get profile time of Runtime after the profile process is done.
- property postprocessor#
Get PaddleDetPostprocessor object of the loaded model
- Returns
PaddleDetPostprocessor
- predict(im)#
Detect an input image
- Parameters
im – (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
- Returns
DetectionResult
- property preprocessor#
Get PaddleDetPreprocessor object of the loaded model
- Returns
PaddleDetPreprocessor
fastdeploy.vision.detection.PicoDet#
- class fastdeploy.vision.detection.PicoDet(model_file, params_file, config_file, runtime_option=None, model_format=<ModelFormat.PADDLE: 1>)[source]#
Load a PicoDet model exported by PaddleDetection.
- Parameters
model_file – (str)Path of model file, e.g. picodet/model.pdmodel
params_file – (str)Path of parameters file, e.g. picodet/model.pdiparams; if the model_format is ModelFormat.ONNX, this parameter will be ignored and can be set to an empty string
config_file – (str)Path of configuration file for deployment, e.g. picodet/infer_cfg.yml
runtime_option – (fastdeploy.RuntimeOption)RuntimeOption for the inference of this model; if it's None, the default backend on CPU will be used
model_format – (fastdeploy.ModelFormat)Model format of the loaded model
- batch_predict(images)#
Detect a batch of input images
- Parameters
images – (list of numpy.ndarray) The input image list, each element is a 3-D array with layout HWC, BGR format
- Returns
list of DetectionResult
- get_profile_time()#
Get profile time of Runtime after the profile process is done.
- property postprocessor#
Get PaddleDetPostprocessor object of the loaded model
- Returns
PaddleDetPostprocessor
- predict(im)#
Detect an input image
- Parameters
im – (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
- Returns
DetectionResult
- property preprocessor#
Get PaddleDetPreprocessor object of the loaded model
- Returns
PaddleDetPreprocessor
fastdeploy.vision.detection.PaddleYOLOX#
- class fastdeploy.vision.detection.PaddleYOLOX(model_file, params_file, config_file, runtime_option=None, model_format=<ModelFormat.PADDLE: 1>)[source]#
Load a YOLOX model exported by PaddleDetection.
- Parameters
model_file – (str)Path of model file, e.g. yolox/model.pdmodel
params_file – (str)Path of parameters file, e.g. yolox/model.pdiparams; if the model_format is ModelFormat.ONNX, this parameter will be ignored and can be set to an empty string
config_file – (str)Path of configuration file for deployment, e.g. yolox/infer_cfg.yml
runtime_option – (fastdeploy.RuntimeOption)RuntimeOption for the inference of this model; if it's None, the default backend on CPU will be used
model_format – (fastdeploy.ModelFormat)Model format of the loaded model
- batch_predict(images)#
Detect a batch of input images
- Parameters
images – (list of numpy.ndarray) The input image list, each element is a 3-D array with layout HWC, BGR format
- Returns
list of DetectionResult
- get_profile_time()#
Get profile time of Runtime after the profile process is done.
- property postprocessor#
Get PaddleDetPostprocessor object of the loaded model
- Returns
PaddleDetPostprocessor
- predict(im)#
Detect an input image
- Parameters
im – (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
- Returns
DetectionResult
- property preprocessor#
Get PaddleDetPreprocessor object of the loaded model
- Returns
PaddleDetPreprocessor
fastdeploy.vision.detection.YOLOv3#
- class fastdeploy.vision.detection.YOLOv3(model_file, params_file, config_file, runtime_option=None, model_format=<ModelFormat.PADDLE: 1>)[source]#
Load a YOLOv3 model exported by PaddleDetection.
- Parameters
model_file – (str)Path of model file, e.g. yolov3/model.pdmodel
params_file – (str)Path of parameters file, e.g. yolov3/model.pdiparams; if the model_format is ModelFormat.ONNX, this parameter will be ignored and can be set to an empty string
config_file – (str)Path of configuration file for deployment, e.g. yolov3/infer_cfg.yml
runtime_option – (fastdeploy.RuntimeOption)RuntimeOption for the inference of this model; if it's None, the default backend on CPU will be used
model_format – (fastdeploy.ModelFormat)Model format of the loaded model
- batch_predict(images)#
Detect a batch of input images
- Parameters
images – (list of numpy.ndarray) The input image list, each element is a 3-D array with layout HWC, BGR format
- Returns
list of DetectionResult
- get_profile_time()#
Get profile time of Runtime after the profile process is done.
- property postprocessor#
Get PaddleDetPostprocessor object of the loaded model
- Returns
PaddleDetPostprocessor
- predict(im)#
Detect an input image
- Parameters
im – (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
- Returns
DetectionResult
- property preprocessor#
Get PaddleDetPreprocessor object of the loaded model
- Returns
PaddleDetPreprocessor
fastdeploy.vision.detection.FasterRCNN#
- class fastdeploy.vision.detection.FasterRCNN(model_file, params_file, config_file, runtime_option=None, model_format=<ModelFormat.PADDLE: 1>)[source]#
Load a FasterRCNN model exported by PaddleDetection.
- Parameters
model_file – (str)Path of model file, e.g. fasterrcnn/model.pdmodel
params_file – (str)Path of parameters file, e.g. fasterrcnn/model.pdiparams; if the model_format is ModelFormat.ONNX, this parameter will be ignored and can be set to an empty string
config_file – (str)Path of configuration file for deployment, e.g. fasterrcnn/infer_cfg.yml
runtime_option – (fastdeploy.RuntimeOption)RuntimeOption for the inference of this model; if it's None, the default backend on CPU will be used
model_format – (fastdeploy.ModelFormat)Model format of the loaded model
- batch_predict(images)#
Detect a batch of input images
- Parameters
images – (list of numpy.ndarray) The input image list, each element is a 3-D array with layout HWC, BGR format
- Returns
list of DetectionResult
- get_profile_time()#
Get profile time of Runtime after the profile process is done.
- property postprocessor#
Get PaddleDetPostprocessor object of the loaded model
- Returns
PaddleDetPostprocessor
- predict(im)#
Detect an input image
- Parameters
im – (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
- Returns
DetectionResult
- property preprocessor#
Get PaddleDetPreprocessor object of the loaded model
- Returns
PaddleDetPreprocessor
fastdeploy.vision.detection.MaskRCNN#
- class fastdeploy.vision.detection.MaskRCNN(model_file, params_file, config_file, runtime_option=None, model_format=<ModelFormat.PADDLE: 1>)[source]#
Load a MaskRCNN model exported by PaddleDetection.
- Parameters
model_file – (str)Path of model file, e.g. maskrcnn/model.pdmodel
params_file – (str)Path of parameters file, e.g. maskrcnn/model.pdiparams; if the model_format is ModelFormat.ONNX, this parameter will be ignored and can be set to an empty string
config_file – (str)Path of configuration file for deployment, e.g. maskrcnn/infer_cfg.yml
runtime_option – (fastdeploy.RuntimeOption)RuntimeOption for the inference of this model; if it's None, the default backend on CPU will be used
model_format – (fastdeploy.ModelFormat)Model format of the loaded model
- batch_predict(images)[source]#
Detect a batch of input images; note that batch_predict is not supported for MaskRCNN yet.
- Parameters
images – (list of numpy.ndarray) The input image list, each element is a 3-D array with layout HWC, BGR format
- Returns
list of DetectionResult
- get_profile_time()#
Get profile time of Runtime after the profile process is done.
- property postprocessor#
Get PaddleDetPostprocessor object of the loaded model
- Returns
PaddleDetPostprocessor
- predict(im)#
Detect an input image
- Parameters
im – (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
- Returns
DetectionResult
- property preprocessor#
Get PaddleDetPreprocessor object of the loaded model
- Returns
PaddleDetPreprocessor
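Example: since batch_predict is not supported for MaskRCNN yet, a plain loop over predict is one way to handle several images. The paths and file names below are placeholders.

```python
import cv2
import fastdeploy as fd

model = fd.vision.detection.MaskRCNN(
    "maskrcnn/model.pdmodel",
    "maskrcnn/model.pdiparams",
    "maskrcnn/infer_cfg.yml")

# batch_predict is unsupported for MaskRCNN, so predict one image at a time.
image_files = ["img_0.jpg", "img_1.jpg"]
results = [model.predict(cv2.imread(path)) for path in image_files]
```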
fastdeploy.vision.detection.NanoDetPlus#
- class fastdeploy.vision.detection.NanoDetPlus(model_file, params_file='', runtime_option=None, model_format=<ModelFormat.ONNX: 2>)[source]#
Load a NanoDetPlus model exported by NanoDet.
- Parameters
model_file – (str)Path of model file, e.g. ./nanodet.onnx
params_file – (str)Path of parameters file; if the model_format is ModelFormat.ONNX, this parameter will be ignored and can be set to an empty string
runtime_option – (fastdeploy.RuntimeOption)RuntimeOption for the inference of this model; if it's None, the default backend on CPU will be used
model_format – (fastdeploy.ModelFormat)Model format of the loaded model
- get_profile_time()#
Get profile time of Runtime after the profile process is done.
- predict(input_image, conf_threshold=0.25, nms_iou_threshold=0.5)[source]#
Detect an input image
- Parameters
input_image – (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
conf_threshold – confidence threshold for postprocessing, default is 0.25
nms_iou_threshold – iou threshold for NMS, default is 0.5
- Returns
DetectionResult
- property reg_max#
reg_max for GFL regression, default 7
- property size#
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default (320, 320)
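Example: an illustrative sketch for NanoDetPlus showing the documented knobs. The model path, image file and threshold values are placeholders; setting the size property assumes the default (320, 320) input shape of the exported model.

```python
import cv2
import fastdeploy as fd

model = fd.vision.detection.NanoDetPlus("./nanodet.onnx")

# Preprocess size is (width, height); NanoDet-Plus defaults to (320, 320).
model.size = (320, 320)

im = cv2.imread("test.jpg")
result = model.predict(im, conf_threshold=0.35, nms_iou_threshold=0.5)
print(result)
```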
fastdeploy.vision.detection.ScaledYOLOv4#
- class fastdeploy.vision.detection.ScaledYOLOv4(model_file, params_file='', runtime_option=None, model_format=<ModelFormat.ONNX: 2>)[source]#
Load a ScaledYOLOv4 model exported by ScaledYOLOv4.
- Parameters
model_file – (str)Path of model file, e.g. ./scaled_yolov4.onnx
params_file – (str)Path of parameters file; if the model_format is ModelFormat.ONNX, this parameter will be ignored and can be set to an empty string
runtime_option – (fastdeploy.RuntimeOption)RuntimeOption for the inference of this model; if it's None, the default backend on CPU will be used
model_format – (fastdeploy.ModelFormat)Model format of the loaded model
- get_profile_time()#
Get profile time of Runtime after the profile process is done.
- predict(input_image, conf_threshold=0.25, nms_iou_threshold=0.5)[source]#
Detect an input image
- Parameters
input_image – (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
conf_threshold – confidence threshold for postprocessing, default is 0.25
nms_iou_threshold – iou threshold for NMS, default is 0.5
- Returns
DetectionResult
- property size#
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
fastdeploy.vision.detection.YOLOR#
- class fastdeploy.vision.detection.YOLOR(model_file, params_file='', runtime_option=None, model_format=<ModelFormat.ONNX: 2>)[source]#
Load a YOLOR model exported by YOLOR
- Parameters
model_file – (str)Path of model file, e.g. ./yolor.onnx
params_file – (str)Path of parameters file; if the model_format is ModelFormat.ONNX, this parameter will be ignored and can be set to an empty string
runtime_option – (fastdeploy.RuntimeOption)RuntimeOption for the inference of this model; if it's None, the default backend on CPU will be used
model_format – (fastdeploy.ModelFormat)Model format of the loaded model
- get_profile_time()#
Get profile time of Runtime after the profile process is done.
- predict(input_image, conf_threshold=0.25, nms_iou_threshold=0.5)[source]#
Detect an input image
- Parameters
input_image – (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
conf_threshold – confidence threshold for postprocessing, default is 0.25
nms_iou_threshold – iou threshold for NMS, default is 0.5
- Returns
DetectionResult
- property size#
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
fastdeploy.vision.detection.YOLOv5Preprocessor#
- class fastdeploy.vision.detection.YOLOv5Preprocessor[source]#
Create a preprocessor for YOLOv5
- property is_mini_pad#
is_mini_pad for preprocessing; pad to the minimum rectangle whose height and width are multiples of the stride, default False
- property is_scale_up#
is_scale_up for preprocessing; if False, the input image can only be scaled down (the maximum resize scale cannot exceed 1.0), default True
- property padding_value#
padding value for preprocessing, default [114.0, 114.0, 114.0]
- run(input_ims)[source]#
Preprocess input images for YOLOv5
- Param
input_ims: (list of numpy.ndarray)The input images
- Returns
list of FDTensor
- property size#
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
- property stride#
stride for preprocessing, only for mini_pad mode, default 32
fastdeploy.vision.detection.YOLOv5Postprocessor#
- class fastdeploy.vision.detection.YOLOv5Postprocessor[source]#
Create a postprocessor for YOLOv5
- property conf_threshold#
confidence threshold for postprocessing, default is 0.25
- property multi_label#
multi_label for postprocessing; set to True for evaluation, default is True
- property nms_threshold#
nms threshold for postprocessing, default is 0.5
- run(runtime_results, ims_info)[source]#
Postprocess the runtime results for YOLOv5
- Param
runtime_results: (list of FDTensor)The output FDTensor results from runtime
- Param
ims_info: (list of dict)Records of the input_shape and output_shape for each image
- Returns
list of DetectionResult (if the runtime_results were predicted from batched samples, the length of this list equals the batch size)
fastdeploy.vision.detection.YOLOv5#
- class fastdeploy.vision.detection.YOLOv5(model_file, params_file='', runtime_option=None, model_format=<ModelFormat.ONNX: 2>)[source]#
Load a YOLOv5 model exported by YOLOv5.
- Parameters
model_file – (str)Path of model file, e.g. ./yolov5.onnx
params_file – (str)Path of parameters file; if the model_format is ModelFormat.ONNX, this parameter will be ignored and can be set to an empty string
runtime_option – (fastdeploy.RuntimeOption)RuntimeOption for the inference of this model; if it's None, the default backend on CPU will be used
model_format – (fastdeploy.ModelFormat)Model format of the loaded model
- batch_predict(images)[source]#
Detect a batch of input images
- Parameters
images – (list of numpy.ndarray) The input image list, each element is a 3-D array with layout HWC, BGR format
- Returns
list of DetectionResult
- get_profile_time()#
Get profile time of Runtime after the profile process is done.
- property postprocessor#
Get YOLOv5Postprocessor object of the loaded model
- Returns
YOLOv5Postprocessor
- predict(input_image, conf_threshold=0.25, nms_iou_threshold=0.5)[source]#
Detect an input image
- Parameters
input_image – (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
conf_threshold – confidence threshold for postprocessing, default is 0.25
nms_iou_threshold – iou threshold for NMS, default is 0.5
- Returns
DetectionResult
- property preprocessor#
Get YOLOv5Preprocessor object of the loaded model
- Returns
YOLOv5Preprocessor
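Example: an illustrative sketch for YOLOv5 that tunes the pre/postprocessing through the properties listed above and then runs a batch. The model path and image files are placeholders.

```python
import cv2
import fastdeploy as fd

model = fd.vision.detection.YOLOv5("./yolov5.onnx")

# Adjust pre/postprocessing through the exposed objects.
model.preprocessor.size = [640, 640]
model.postprocessor.conf_threshold = 0.3
model.postprocessor.nms_threshold = 0.5

images = [cv2.imread("a.jpg"), cv2.imread("b.jpg")]
results = model.batch_predict(images)   # -> list of DetectionResult
```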
fastdeploy.vision.detection.YOLOv5Lite#
- class fastdeploy.vision.detection.YOLOv5Lite(model_file, params_file='', runtime_option=None, model_format=<ModelFormat.ONNX: 2>)[source]#
Load a YOLOv5Lite model exported by YOLOv5Lite.
- Parameters
model_file – (str)Path of model file, e.g. ./yolov5lite.onnx
params_file – (str)Path of parameters file; if the model_format is ModelFormat.ONNX, this parameter will be ignored and can be set to an empty string
runtime_option – (fastdeploy.RuntimeOption)RuntimeOption for the inference of this model; if it's None, the default backend on CPU will be used
model_format – (fastdeploy.ModelFormat)Model format of the loaded model
- property downsample_strides#
downsample strides for YOLOv5Lite to generate anchors; defaults to (8, 16, 32), and some models may also use a stride of 64
- get_profile_time()#
Get profile time of Runtime after the profile process is done.
- property is_decode_exported#
Whether the model_file was exported with the decode module. The official YOLOv5Lite export.py script exports ONNX files without the decode module; set this to True manually if your model file was exported with the decode module. False: ONNX file without the decode module. True: ONNX file with the decode module. Default False.
- predict(input_image, conf_threshold=0.25, nms_iou_threshold=0.5)[source]#
Detect an input image
- Parameters
input_image – (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
conf_threshold – confidence threshold for postprocessing, default is 0.25
nms_iou_threshold – iou threshold for NMS, default is 0.5
- Returns
DetectionResult
- property size#
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
fastdeploy.vision.detection.YOLOv6#
- class fastdeploy.vision.detection.YOLOv6(model_file, params_file='', runtime_option=None, model_format=<ModelFormat.ONNX: 2>)[source]#
Load a YOLOv6 model exported by YOLOv6.
- Parameters
model_file – (str)Path of model file, e.g. ./yolov6.onnx
params_file – (str)Path of parameters file; if the model_format is ModelFormat.ONNX, this parameter will be ignored and can be set to an empty string
runtime_option – (fastdeploy.RuntimeOption)RuntimeOption for the inference of this model; if it's None, the default backend on CPU will be used
model_format – (fastdeploy.ModelFormat)Model format of the loaded model
- get_profile_time()#
Get profile time of Runtime after the profile process is done.
- predict(input_image, conf_threshold=0.25, nms_iou_threshold=0.5)[source]#
Detect an input image
- Parameters
input_image – (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
conf_threshold – confidence threshold for postprocessing, default is 0.25
nms_iou_threshold – iou threshold for NMS, default is 0.5
- Returns
DetectionResult
- property size#
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
fastdeploy.vision.detection.YOLOv7Preprocessor#
- class fastdeploy.vision.detection.YOLOv7Preprocessor[source]#
Create a preprocessor for YOLOv7
- property is_scale_up#
is_scale_up for preprocessing; if False, the input image can only be scaled down (the maximum resize scale cannot exceed 1.0), default True
- property padding_value#
padding value for preprocessing, default [114.0, 114.0, 114.0]
- run(input_ims)[source]#
Preprocess input images for YOLOv7
- Param
input_ims: (list of numpy.ndarray)The input images
- Returns
list of FDTensor
- property size#
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
fastdeploy.vision.detection.YOLOv7Postprocessor#
- class fastdeploy.vision.detection.YOLOv7Postprocessor[source]#
Create a postprocessor for YOLOv7
- property conf_threshold#
confidence threshold for postprocessing, default is 0.25
- property nms_threshold#
nms threshold for postprocessing, default is 0.5
- run(runtime_results, ims_info)[source]#
Postprocess the runtime results for YOLOv7
- Param
runtime_results: (list of FDTensor)The output FDTensor results from runtime
- Param
ims_info: (list of dict)Records of the input_shape and output_shape for each image
- Returns
list of DetectionResult (if the runtime_results were predicted from batched samples, the length of this list equals the batch size)
fastdeploy.vision.detection.YOLOv7#
- class fastdeploy.vision.detection.YOLOv7(model_file, params_file='', runtime_option=None, model_format=<ModelFormat.ONNX: 2>)[source]#
Load a YOLOv7 model exported by YOLOv7.
- Parameters
model_file – (str)Path of model file, e.g. ./yolov7.onnx
params_file – (str)Path of parameters file; if the model_format is ModelFormat.ONNX, this parameter will be ignored and can be set to an empty string
runtime_option – (fastdeploy.RuntimeOption)RuntimeOption for the inference of this model; if it's None, the default backend on CPU will be used
model_format – (fastdeploy.ModelFormat)Model format of the loaded model
- batch_predict(images)[source]#
Detect a batch of input images
- Parameters
images – (list of numpy.ndarray) The input image list, each element is a 3-D array with layout HWC, BGR format
- Returns
list of DetectionResult
- get_profile_time()#
Get profile time of Runtime after the profile process is done.
- property postprocessor#
Get YOLOv7Postprocessor object of the loaded model
- Returns
YOLOv7Postprocessor
- predict(input_image, conf_threshold=0.25, nms_iou_threshold=0.5)[source]#
Detect an input image
- Parameters
input_image – (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
conf_threshold – confidence threshold for postprocessing, default is 0.25
nms_iou_threshold – iou threshold for NMS, default is 0.5
- Returns
DetectionResult
- property preprocessor#
Get YOLOv7Preprocessor object of the loaded model
- Returns
YOLOv7Preprocessor
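Example: an illustrative sketch for YOLOv7 with a GPU RuntimeOption and result visualization. It assumes a GPU-enabled FastDeploy build and uses the fastdeploy.vision.vis_detection helper for drawing; the paths are placeholders.

```python
import cv2
import fastdeploy as fd

# Assumes a GPU build; drop use_gpu() to run on the default CPU backend.
option = fd.RuntimeOption()
option.use_gpu(0)

model = fd.vision.detection.YOLOv7("./yolov7.onnx", runtime_option=option)

im = cv2.imread("test.jpg")
result = model.predict(im, conf_threshold=0.25, nms_iou_threshold=0.5)

# Draw the detections onto a copy of the image.
vis_im = fd.vision.vis_detection(im, result, score_threshold=0.5)
cv2.imwrite("vis_result.jpg", vis_im)
```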
fastdeploy.vision.detection.YOLOv7End2EndORT#
- class fastdeploy.vision.detection.YOLOv7End2EndORT(model_file, params_file='', runtime_option=None, model_format=<ModelFormat.ONNX: 2>)[source]#
Load a YOLOv7End2EndORT model exported by YOLOv7.
- Parameters
model_file – (str)Path of model file, e.g. ./yolov7end2end_ort.onnx
params_file – (str)Path of parameters file; if the model_format is ModelFormat.ONNX, this parameter will be ignored and can be set to an empty string
runtime_option – (fastdeploy.RuntimeOption)RuntimeOption for the inference of this model; if it's None, the default backend on CPU will be used
model_format – (fastdeploy.ModelFormat)Model format of the loaded model
- get_profile_time()#
Get profile time of Runtime after the profile process is done.
- predict(input_image, conf_threshold=0.25)[source]#
Detect an input image
- Parameters
input_image – (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
conf_threshold – confidence threshold for postprocessing, default is 0.25
- Returns
DetectionResult
- property size#
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
fastdeploy.vision.detection.YOLOv7End2EndTRT#
- class fastdeploy.vision.detection.YOLOv7End2EndTRT(model_file, params_file='', runtime_option=None, model_format=<ModelFormat.ONNX: 2>)[source]#
Load a YOLOv7End2EndTRT model exported by YOLOv7.
- Parameters
model_file – (str)Path of model file, e.g. ./yolov7end2end_trt.onnx
params_file – (str)Path of parameters file; if the model_format is ModelFormat.ONNX, this parameter will be ignored and can be set to an empty string
runtime_option – (fastdeploy.RuntimeOption)RuntimeOption for the inference of this model; if it's None, the default backend on CPU will be used
model_format – (fastdeploy.ModelFormat)Model format of the loaded model
- get_profile_time()#
Get profile time of Runtime after the profile process is done.
- predict(input_image, conf_threshold=0.25)[source]#
Detect an input image
- Parameters
input_image – (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
conf_threshold – confidence threshold for postprocessing, default is 0.25
- Returns
DetectionResult
- property size#
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
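Example: because the End2End exports already embed NMS in the graph, predict takes only a confidence threshold. A brief sketch with the ORT variant (the TRT variant is used the same way but is intended for the TensorRT backend); the model path is a placeholder.

```python
import cv2
import fastdeploy as fd

# NMS is inside the exported graph, so there is no nms_iou_threshold argument.
model = fd.vision.detection.YOLOv7End2EndORT("./yolov7end2end_ort.onnx")
result = model.predict(cv2.imread("test.jpg"), conf_threshold=0.25)
print(result)
```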
fastdeploy.vision.detection.YOLOX#
- class fastdeploy.vision.detection.YOLOX(model_file, params_file='', runtime_option=None, model_format=<ModelFormat.ONNX: 2>)[source]#
Load a YOLOX model exported by YOLOX.
- Parameters
model_file – (str)Path of model file, e.g. ./yolox.onnx
params_file – (str)Path of parameters file; if the model_format is ModelFormat.ONNX, this parameter will be ignored and can be set to an empty string
runtime_option – (fastdeploy.RuntimeOption)RuntimeOption for the inference of this model; if it's None, the default backend on CPU will be used
model_format – (fastdeploy.ModelFormat)Model format of the loaded model
- property downsample_strides#
downsample strides for YOLOX to generate anchors; defaults to (8, 16, 32), and some models may also use a stride of 64
- get_profile_time()#
Get profile time of Runtime after the profile process is done.
- property is_decode_exported#
Whether the model_file was exported with the decode module. The official YOLOX/tools/export_onnx.py script exports ONNX files without the decode module; set this to True manually if your model file was exported with the decode module. Default False.
- predict(input_image, conf_threshold=0.25, nms_iou_threshold=0.5)[source]#
Detect an input image
- Parameters
input_image – (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
conf_threshold – confidence threshold for postprocessing, default is 0.25
nms_iou_threshold – iou threshold for NMS, default is 0.5
- Returns
DetectionResult
- property size#
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
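Example: an illustrative sketch for YOLOX covering the is_decode_exported flag described above. The model path is a placeholder; keep the flag at its default False for models exported by the official export_onnx.py, and set it to True only for exports that include the decode module.

```python
import cv2
import fastdeploy as fd

model = fd.vision.detection.YOLOX("./yolox.onnx")

# Default export (no decode module) -> leave False; decode-included export -> True.
model.is_decode_exported = False
model.size = [640, 640]

result = model.predict(cv2.imread("test.jpg"),
                       conf_threshold=0.25, nms_iou_threshold=0.5)
print(result)
```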