Runtime (Model Inference Runtime)#
fastdeploy.Runtime#
- class fastdeploy.Runtime(runtime_option)[source]#
FastDeploy Runtime object.
Initialize a FastDeploy Runtime object.
- Parameters
runtime_option – (fastdeploy.RuntimeOption) Options for the FastDeploy Runtime
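A minimal construction sketch. The Paddle model/parameter paths and the set_model_path call are placeholders for whatever configuration your deployment actually needs on the RuntimeOption.

```python
import fastdeploy as fd

# Configure the runtime; the model/params paths below are placeholders.
option = fd.RuntimeOption()
option.set_model_path("model.pdmodel", "model.pdiparams")

# Create the Runtime from the configured option object.
runtime = fd.Runtime(option)
```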
- bind_input_tensor(name, fdtensor)[source]#
Bind an FDTensor by name; no copy is made and the input memory is shared with the runtime.
- Parameters
name – (str) The name of the input data.
fdtensor – (fastdeploy.FDTensor) The input FDTensor.
- bind_output_tensor(name, fdtensor)[source]#
Bind an FDTensor by name; no copy is made and the output memory is shared with the runtime.
- Parameters
name – (str) The name of the output data.
fdtensor – (fastdeploy.FDTensor) The output FDTensor.
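A zero-copy binding sketch for the two methods above. The tensor names ("x", "y") are placeholders, and input_tensor/output_tensor are assumed to be existing fastdeploy.FDTensor objects produced elsewhere (e.g. by a preprocessing step); how they are created is outside this section.

```python
# Bind an existing FDTensor as input: its memory is shared with the
# runtime rather than copied.
runtime.bind_input_tensor("x", input_tensor)

# Bind an existing FDTensor as output so the backend writes results
# directly into the shared buffer.
runtime.bind_output_tensor("y", output_tensor)
```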
- compile(warm_datas)[source]#
[Only for Poros backend] Compile the model with prewarm data.
- Parameters
warm_datas – (list[str : numpy.ndarray]) The prewarm data list
:return: TorchScript model
- forward(*inputs)[source]#
[Only for Poros backend] Run inference with the input data.
- Parameters
inputs – (list[str : numpy.ndarray]) The input data list
:return: list of numpy.ndarray
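A rough sketch of the Poros-only flow above, assuming the Poros backend was selected in the RuntimeOption. The prewarm structure, shape, and dtype are placeholders; the exact layout of warm_datas should follow the (list[str : numpy.ndarray]) note in the signature.

```python
import numpy as np

# Placeholder prewarm sample used by Poros to compile the model.
sample = np.random.rand(1, 3, 224, 224).astype("float32")

# compile() returns a TorchScript model (Poros backend only).
compiled_model = runtime.compile([sample])

# forward() runs inference through the Poros backend.
outputs = runtime.forward(sample)
```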
- get_input_info(index)[source]#
Get input information of the loaded model.
- Parameters
index – (int) Index of the input
:return: fastdeploy.TensorInfo
- get_output_info(index)[source]#
Get output information of the loaded model.
- Parameters
index – (int) Index of the output
:return: fastdeploy.TensorInfo
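A short sketch for inspecting I/O metadata with the two methods above. The TensorInfo attributes used here (name, shape, dtype) are assumptions to be checked against your FastDeploy version.

```python
# Query metadata for the first input and first output of the loaded model.
input_info = runtime.get_input_info(0)
output_info = runtime.get_output_info(0)

# TensorInfo is assumed to expose name/shape/dtype attributes.
print(input_info.name, input_info.shape, input_info.dtype)
print(output_info.name, output_info.shape, output_info.dtype)
```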
- get_output_tensor(name)[source]#
Get an output FDTensor by name; no copy is made and the backend output memory is shared.
- Parameters
name – (str) The name of the output data.
:return: fastdeploy.FDTensor
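A sketch of fetching a shared output tensor after inference. The runtime.infer call, the input/output names, and the input shape are assumptions not documented in this section.

```python
import numpy as np

# Run inference with a name-to-array dict (runtime.infer is assumed here;
# it is not part of this section). Names and shape are placeholders.
runtime.infer({"x": np.random.rand(1, 3, 224, 224).astype("float32")})

# Retrieve the output FDTensor by name; the backend output memory is
# shared, not copied.
out = runtime.get_output_tensor("y")
```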