fastdeploy::FDTensor Struct Reference

FDTensor object used to represent a data matrix.

#include <fd_tensor.h>

Public Member Functions

void SetData (const std::vector< int64_t > &tensor_shape, const FDDataType &data_type, void *data_buffer, bool copy=false, const Device &data_device=Device::CPU, int data_device_id=-1)
 Set the data buffer for an FDTensor, e.g.:

std::vector<float> buffer(1 * 3 * 224 * 224, 0);
FDTensor tensor;
tensor.SetData({1, 3, 224, 224}, FDDataType::FLOAT, buffer.data());

 
void * GetData ()
 Get data pointer of tensor.
 
const void * GetData () const
 Get data pointer of tensor.
 
void ExpandDim (int64_t axis=0)
 Expand the shape of the tensor; it will not change the data memory, only modify its shape attribute.
 
void Squeeze (int64_t axis=0)
 Squeeze the shape of the tensor; it will not change the data memory, only modify its shape attribute.
 
bool Reshape (const std::vector< int64_t > &new_shape)
 Reshape the tensor; it will not change the data memory, only modify its shape attribute (see the sketch after this list).
 
int Nbytes () const
 Total size of tensor memory buffer in bytes.
 
int Numel () const
 Total number of elements in tensor.
 
std::vector< int64_t > Shape () const
 Get shape of tensor.
 
FDDataType Dtype () const
 Get dtype of tensor.
 
void Allocate (const FDDataType &data_type, const std::vector< int64_t > &data_shape)
 Allocate a CPU data buffer for an FDTensor, e.g.:

FDTensor tensor;
tensor.Allocate(FDDataType::FLOAT, {1, 3, 224, 224});

 
void PrintInfo (const std::string &prefix="Debug TensorInfo: ") const
 Debug function; prints the shape, dtype, mean, max, and min of the tensor.
 
bool IsShared ()
 Whether the tensor owns the data buffer or shares it from outside.
 
void StopSharing ()
 If the tensor shares the data buffer from outside, StopSharing will copy the data into its own buffer; otherwise, do nothing.
 

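The following is a minimal sketch of the shape-related members above (ExpandDim, Squeeze, Reshape, Numel, Nbytes); the include path is an assumption, not taken from the official examples.

#include "fastdeploy/core/fd_tensor.h"  // assumed header path
#include <iostream>

int main() {
  fastdeploy::FDTensor tensor;
  tensor.Allocate(fastdeploy::FDDataType::FLOAT, {3, 224, 224});

  // ExpandDim inserts a dimension at the given axis without touching the buffer.
  tensor.ExpandDim(0);                // shape: {1, 3, 224, 224}

  // Reshape only rewrites the shape attribute; the element count must match.
  tensor.Reshape({1, 3, 224 * 224});  // shape: {1, 3, 50176}

  // Squeeze removes the size-1 dimension at the given axis.
  tensor.Squeeze(0);                  // shape: {3, 50176}

  std::cout << "elements: " << tensor.Numel()   // 3 * 50176
            << ", bytes: " << tensor.Nbytes()   // Numel() * sizeof(float)
            << std::endl;
  return 0;
}
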
Public Attributes

std::string name = ""
 Name of the tensor; when the tensor is fed to the runtime, this needs to be defined.
 

Detailed Description

FDTensor object used to represent a data matrix.

Member Function Documentation

◆ Allocate()

void fastdeploy::FDTensor::Allocate (const FDDataType & data_type,
                                     const std::vector< int64_t > & data_shape)  [inline]

Allocate a CPU data buffer for an FDTensor, e.g.:

FDTensor tensor;
tensor.Allocate(FDDataType::FLOAT, {1, 3, 224, 224});

Parameters
[in]  data_type   The data type of the tensor
[in]  data_shape  The shape of the tensor
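
A hedged usage sketch for Allocate(); the include path is an assumption. It allocates a CPU buffer owned by the tensor and fills it through GetData().

#include "fastdeploy/core/fd_tensor.h"  // assumed header path

int main() {
  fastdeploy::FDTensor tensor;
  // Allocate a CPU buffer for a 1x3x224x224 float tensor; the tensor owns this memory.
  tensor.Allocate(fastdeploy::FDDataType::FLOAT, {1, 3, 224, 224});

  // Fill the owned buffer through the raw data pointer.
  float* data = static_cast<float*>(tensor.GetData());
  for (int i = 0; i < tensor.Numel(); ++i) {
    data[i] = 0.5f;
  }

  // Print shape, dtype, mean, max, min for debugging.
  tensor.PrintInfo("allocated tensor: ");
  return 0;
}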

◆ SetData()

void fastdeploy::FDTensor::SetData (const std::vector< int64_t > & tensor_shape,
                                    const FDDataType & data_type,
                                    void * data_buffer,
                                    bool copy = false,
                                    const Device & data_device = Device::CPU,
                                    int data_device_id = -1)  [inline]

Set the data buffer for an FDTensor, e.g.:

std::vector<float> buffer(1 * 3 * 224 * 224, 0);
FDTensor tensor;
tensor.SetData({1, 3, 224, 224}, FDDataType::FLOAT, buffer.data());

Parameters
[in]  tensor_shape    The shape of the tensor
[in]  data_type       The data type of the tensor
[in]  data_buffer     The pointer to the data buffer memory
[in]  copy            Whether to copy memory from data_buffer into the tensor; if false, this tensor will share memory with data_buffer, and the data is managed by the caller
[in]  data_device     The device of data_buffer, e.g. if data_buffer is a pointer to GPU data, the device should be Device::GPU
[in]  data_device_id  The device id of data_buffer
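
A hedged sketch of the copy=false sharing semantics described above, together with IsShared() and StopSharing(); the include path is an assumption.

#include "fastdeploy/core/fd_tensor.h"  // assumed header path
#include <cassert>
#include <vector>

int main() {
  std::vector<float> buffer(1 * 3 * 224 * 224, 0.0f);

  fastdeploy::FDTensor tensor;
  // copy = false (the default): the tensor shares `buffer` rather than owning a copy,
  // so `buffer` must stay alive as long as the tensor uses it.
  tensor.SetData({1, 3, 224, 224}, fastdeploy::FDDataType::FLOAT, buffer.data());
  assert(tensor.IsShared());

  // Copy the shared data into a buffer owned by the tensor; after this,
  // the external `buffer` can be released safely.
  tensor.StopSharing();
  assert(!tensor.IsShared());
  return 0;
}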
