Semantic Segmentation#
fastdeploy.vision.segmentation.PaddleSegPreprocessor#
- class fastdeploy.vision.segmentation.PaddleSegPreprocessor(config_file)[source]#
Create a preprocessor for PaddleSegModel from the configuration file
- Parameters
config_file – (str) Path of the configuration file, e.g. ppliteseg/deploy.yaml
- property is_vertical_screen#
Attribute of the PP-HumanSeg model. States whether the input image is a vertical image (height > width); default value is False
- Returns
value of is_vertical_screen (bool)
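A minimal usage sketch for the standalone preprocessor, assuming a PaddleSeg deploy config at ppliteseg/deploy.yaml (a placeholder path) and that is_vertical_screen also exposes a setter:

```python
import fastdeploy as fd

# Build a standalone preprocessor from a PaddleSeg deploy config.
# "ppliteseg/deploy.yaml" is a placeholder path for illustration.
preprocessor = fd.vision.segmentation.PaddleSegPreprocessor("ppliteseg/deploy.yaml")

# PP-HumanSeg-specific attribute, defaults to False.
print(preprocessor.is_vertical_screen)

# Assuming the property is settable, mark the inputs as vertical images.
preprocessor.is_vertical_screen = True
```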
fastdeploy.vision.segmentation.PaddleSegModel#
- class fastdeploy.vision.segmentation.PaddleSegModel(model_file, params_file, config_file, runtime_option=None, model_format=<ModelFormat.PADDLE: 1>)[source]#
Load an image segmentation model exported by PaddleSeg; a construction sketch follows the parameter list.
- Parameters
model_file – (str) Path of the model file, e.g. unet/model.pdmodel
params_file – (str) Path of the parameters file, e.g. unet/model.pdiparams; if model_format is ModelFormat.ONNX, this parameter will be ignored and can be set as an empty string
config_file – (str) Path of the deployment configuration file, e.g. unet/deploy.yml
runtime_option – (fastdeploy.RuntimeOption) RuntimeOption for inferring this model; if it is None, the default backend on CPU will be used
model_format – (fastdeploy.ModelFormat) Model format of the loaded model
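A minimal construction sketch, assuming a PaddleSeg export under unet/ (placeholder paths) and, optionally, a GPU-enabled RuntimeOption:

```python
import fastdeploy as fd

# Optional: pick the device/backend; the model defaults to CPU when this is omitted.
option = fd.RuntimeOption()
option.use_gpu(0)  # assumes a CUDA-capable GPU is available

# Placeholder paths for a model exported by PaddleSeg.
model = fd.vision.segmentation.PaddleSegModel(
    "unet/model.pdmodel",
    "unet/model.pdiparams",
    "unet/deploy.yml",
    runtime_option=option,
    model_format=fd.ModelFormat.PADDLE)
```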
- batch_predict(image_list)[source]#
Predict the segmentation results for a batch of input images
- Parameters
image_list – (list of numpy.ndarray) The input image list; each element is a 3-D array with layout HWC, BGR format
- Returns
list of SegmentationResult
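A short batch-prediction sketch, reusing the model constructed above; street_0.jpg and street_1.jpg are placeholder files:

```python
import cv2

# Each element must be a 3-D HWC array in BGR format (cv2.imread returns BGR).
images = [cv2.imread("street_0.jpg"), cv2.imread("street_1.jpg")]

results = model.batch_predict(images)  # one SegmentationResult per input image
for result in results:
    print(result)
```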
- get_profile_time()#
Get the profile time of the Runtime after profiling is done.
- property postprocessor#
Get PaddleSegPostprocessor object of the loaded model
- Returns
PaddleSegPostprocessor
- predict(image)[source]#
Predict the segmentation result for an input image
- Parameters
image – (numpy.ndarray) The input image data, a 3-D array with layout HWC, BGR format
- Returns
SegmentationResult
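Single-image prediction with the same model; street.jpg is a placeholder file:

```python
import cv2

im = cv2.imread("street.jpg")  # 3-D HWC array, BGR format
result = model.predict(im)     # a single SegmentationResult
print(result)
```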
- property preprocessor#
Get PaddleSegPreprocessor object of the loaded model
- Returns
PaddleSegPreprocessor
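The pre/postprocessing behaviour of a loaded model can be adjusted through these two properties; a sketch assuming the documented attributes also expose setters:

```python
# Tell the PP-HumanSeg preprocessor that inputs are vertical images.
model.preprocessor.is_vertical_screen = True

# Apply softmax and keep the score map in the SegmentationResult
# (both default to False).
model.postprocessor.apply_softmax = True
model.postprocessor.store_score_map = True
```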
fastdeploy.vision.segmentation.PaddleSegPostprocessor#
- class fastdeploy.vision.segmentation.PaddleSegPostprocessor(config_file)[source]#
Create a postprocessor for PaddleSegModel from the configuration file
- Parameters
config_file – (str) Path of the configuration file, e.g. ppliteseg/deploy.yaml
- property apply_softmax#
Attribute of the PaddleSeg model. States whether the softmax operator is applied in postprocessing; default value is False
- Returns
value of apply_softmax (bool)
- run(runtime_results, imgs_info)[source]#
Postprocess the runtime results for PaddleSegModel
- Parameters
runtime_results – (list of FDTensor) The output FDTensor results from the runtime
imgs_info – Shape info map of the original input images; the key is “shape_info” and the value is [[image_height, image_width]]
- Returns
list of SegmentationResult (if runtime_results was predicted from batched samples, the length of this list equals the batch size)
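A sketch of manual postprocessing, assuming runtime_results is a list of FDTensor produced by a separately executed runtime (not constructed here) and that the original input image was 1080x1920:

```python
import fastdeploy as fd

# "ppliteseg/deploy.yaml" is a placeholder path for illustration.
postprocessor = fd.vision.segmentation.PaddleSegPostprocessor("ppliteseg/deploy.yaml")

# Original input shape(s), keyed by "shape_info": one [height, width] per image.
imgs_info = {"shape_info": [[1080, 1920]]}

# runtime_results is assumed to come from running the exported model with a
# FastDeploy runtime; obtaining it is not shown here.
results = postprocessor.run(runtime_results, imgs_info)
print(len(results))  # equals the batch size of runtime_results
```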
- property store_score_map#
Attribute of the PaddleSeg model. States whether the score map is stored in the SegmentationResult; default value is False
- Returns
value of store_score_map (bool)