Object Detection Graph (Part 1): Upstream Input Processing in the Pipeline
1 TfLiteConverterCalculator
Converts the incoming data into tensors supported by the TensorFlow APIs (TfLiteTensor) and initializes the corresponding input and output nodes. The calculator's work is driven mainly by the interpreter member (std::unique_ptr<tflite::Interpreter> interpreter_ = nullptr;), which performs inference of the data on the CPU or the GPU.
1.1 TfLiteTensor / Tensor
In TensorFlow Lite, TfLiteTensor and Tensor are different concepts. Tensor is TensorFlow's basic data structure for representing multi-dimensional arrays; in TensorFlow Lite it is used for input and output data and to represent variables and weights inside a model. TfLiteTensor is a data structure specific to TensorFlow Lite: it wraps a tensor and adds attributes and methods needed by TensorFlow Lite-specific features and operations, for example quantization parameters (for quantized models) and dimension information (for resizing inputs/outputs).

// A tensor in the interpreter system which is a wrapper around a buffer of
// data including a dimensionality (or NULL if not currently defined).
#ifndef TF_LITE_STATIC_MEMORY
typedef struct TfLiteTensor {
  // The data type specification for data stored in data. This affects
  // what member of data union should be used.
  TfLiteType type;
  // A union of data pointers. The appropriate type should be used for a typed
  // tensor based on type.
  TfLitePtrUnion data;
  // A pointer to a structure representing the dimensionality interpretation
  // that the buffer should have. NOTE: the product of elements of dims
  // and the element datatype size should be equal to bytes below.
  TfLiteIntArray* dims;
  // Quantization information.
  TfLiteQuantizationParams params;
  // How memory is mapped
  //  kTfLiteMmapRo: Memory mapped read only.
  //    i.e. weights
  //  kTfLiteArenaRw: Arena allocated read write memory
  //    (i.e. temporaries, outputs).
  TfLiteAllocationType allocation_type;
  // The number of bytes required to store the data of this Tensor. I.e.
  // (bytes of each element) * dims[0] * ... * dims[n-1].  For example, if
  // type is kTfLiteFloat32 and dims = {3, 2} then
  // bytes = sizeof(float) * 3 * 2 = 4 * 3 * 2 = 24.
  size_t bytes;
  // An opaque pointer to a tflite::MMapAllocation
  const void* allocation;
  // Null-terminated name of this tensor.
  const char* name;
  // The delegate which knows how to handle buffer_handle.
  // WARNING: This is an experimental interface that is subject to change.
  struct TfLiteDelegate* delegate;
  // An integer buffer handle that can be handled by delegate.
  // The value is valid only when delegate is not null.
  // WARNING: This is an experimental interface that is subject to change.
  TfLiteBufferHandle buffer_handle;
  // If the delegate uses its own buffer (e.g. GPU memory), the delegate is
  // responsible to set data_is_stale to true.
  // delegate->CopyFromBufferHandle can be called to copy the data from
  // delegate buffer.
  // WARNING: This is an experimental interface that is subject to change.
  bool data_is_stale;
  // True if the tensor is a variable.
  bool is_variable;
  // Quantization information. Replaces params field above.
  TfLiteQuantization quantization;
  // Parameters used to encode a sparse tensor.
  // This is optional. The field is NULL if a tensor is dense.
  // WARNING: This is an experimental interface that is subject to change.
  TfLiteSparsity* sparsity;
  // Optional. Encodes shapes with unknown dimensions with -1. This field is
  // only populated when unknown dimensions exist in a read-write tensor (i.e.
  // an input or output tensor). (e.g. dims contains [1, 1, 1, 3] and
  // dims_signature contains [1, -1, -1, 3]). Note that this field only
  // exists when TF_LITE_STATIC_MEMORY is not defined.
  const TfLiteIntArray* dims_signature;
} TfLiteTensor;

In TensorFlow Lite, a TfLiteTensor object is obtained through interpreter_->tensor(index), where index is the index of the input or output tensor.
For example, the following snippet uses the TensorFlow Lite C++ API to get an input tensor:
const int tensor_index = interpreter_->inputs()[0];
TfLiteTensor* tensor = interpreter_->tensor(tensor_index);
The shape of an input tensor can be changed with interpreter_->ResizeInputTensor(index, shape), where index is the index of the input tensor and shape is the new shape. For example:
const int tensor_index = interpreter_->inputs()[0];
interpreter_->ResizeInputTensor(tensor_index, {height, width, channels});
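Putting the two snippets together, a typical CPU-side flow is: resize the input, re-allocate buffers, fill the input, and invoke. Below is a minimal sketch assuming an already built interpreter with a float32 NHWC input; the helper name ResizeAndInvoke and the zero-filled input are illustrative, not MediaPipe code.

#include <algorithm>
#include <vector>

#include "tensorflow/lite/interpreter.h"

// Hypothetical helper: resize the first input of an existing interpreter,
// re-allocate buffers, fill the input, and run inference once.
bool ResizeAndInvoke(tflite::Interpreter& interpreter, int height, int width,
                     int channels) {
  const int input_index = interpreter.inputs()[0];
  // The new shape must be applied before AllocateTensors().
  if (interpreter.ResizeInputTensor(input_index,
                                    {1, height, width, channels}) != kTfLiteOk)
    return false;
  if (interpreter.AllocateTensors() != kTfLiteOk) return false;

  // The TfLiteTensor exposes its buffer through the data union (here: float).
  TfLiteTensor* input = interpreter.tensor(input_index);
  std::fill(input->data.f, input->data.f + height * width * channels, 0.0f);

  return interpreter.Invoke() == kTfLiteOk;
}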
1.2 Tensor
Tensor is TensorFlow's basic data structure for representing multi-dimensional arrays. In TensorFlow Lite, tensors are used for input and output data and to represent variables and weights in a model.
In TensorFlow, tensors can be created and manipulated in many ways, for example from Python lists or NumPy arrays, or through the various operations TensorFlow provides. A tensor can have any number of dimensions (one, two, three, and so on) and is used in all kinds of deep-learning tasks such as image recognition, speech recognition, and natural language processing. TensorFlow also supports many kinds of hardware and operating systems and can run on a wide range of platforms, including CPU, GPU, and TPU.
A Tensor is described by the following aspects:
Shape: the shape defines its size and dimensionality; for example, a 3-D tensor of shape (10, 20, 30) holds 10 x 20 x 30 elements, i.e. 10 matrices of 20 rows and 30 columns each.
Data type: the data type defines what kind of values it stores, e.g. float32 or int32.
Values: the values are stored in contiguous memory and can be read and modified by index.
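To make these three aspects concrete, here is a minimal sketch against the TensorFlow C++ API declared in the header reproduced below; the 3x2 shape and the printed fields are arbitrary choices for illustration.

#include <iostream>

#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/framework/tensor_shape.h"
#include "tensorflow/core/framework/types.h"

// Create a 3x2 float tensor and touch its shape, dtype and values through
// the Eigen-backed accessors provided by tensorflow::Tensor.
void TensorBasics() {
  tensorflow::Tensor t(tensorflow::DT_FLOAT, tensorflow::TensorShape({3, 2}));
  auto mat = t.matrix<float>();  // 2D Eigen::TensorMap view, 3 x 2.
  mat(0, 0) = 1.0f;              // Values live in contiguous memory.
  std::cout << t.dims() << " dims, " << t.NumElements() << " elements, dtype "
            << tensorflow::DataTypeString(t.dtype()) << std::endl;
}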
/* Copyright 2015 The TensorFlow Authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

#ifndef TENSORFLOW_CORE_FRAMEWORK_TENSOR_H_
#define TENSORFLOW_CORE_FRAMEWORK_TENSOR_H_

#include <cstdint>
#include <iosfwd>
#include <string>
#include <type_traits>
#include <utility>

#include "unsupported/Eigen/CXX11/Tensor"  // from @eigen_archive
#include "tensorflow/core/framework/allocator.h"
#include "tensorflow/core/framework/tensor_shape.h"
#include "tensorflow/core/framework/tensor_types.h"
#include "tensorflow/core/framework/types.h"
#include "tensorflow/core/framework/types.pb.h"
#include "tensorflow/core/lib/core/refcount.h"
#include "tensorflow/core/lib/core/status.h"
#include "tensorflow/core/lib/core/stringpiece.h"
#include "tensorflow/core/lib/gtl/inlined_vector.h"
#include "tensorflow/core/platform/mem.h"
#include "tensorflow/core/platform/types.h"

namespace tensorflow {

// Forward declarations. In particular, we forward declare protos so that their
// symbols can be removed from .so exports.
class AllocationDescription;
class OpKernelContext;
class Tensor;
class TensorBuffer;
class TensorCApi;
class TensorInterface;
class TensorCord;
class TensorDescription;
class TensorProto;
class Var;

namespace batch_util {
Status CopyElementToSlice(Tensor element, Tensor* parent, int64_t index);
Status CopySliceToElement(const Tensor& parent, Tensor* element, int64_t index);
Status MaybeMoveSliceToElement(Tensor* parent, Tensor* element, int64_t index);
Status CopyContiguousSlices(const Tensor& src, int64_t src_offset,
                            int64_t dst_offset, int64_t num_slices,
                            Tensor* dst);
}  // namespace batch_util

/// @ingroup core
/// Interface to access the raw ref-counted data buffer.
class TensorBuffer : public core::RefCounted {public:explicit TensorBuffer(void* data_ptr) : data_(data_ptr) {}~TensorBuffer() override {}/// \brief data() points to a memory region of size() bytes.////// NOTE(mrry): The data() method is not virtual for performance reasons./// It can be called multiple times when the contents of a Tensor are/// accessed, and so making it non-virtual allows the body to be inlined.void* data() const { return data_; }/// \brief Size (in bytes) of the buffer.virtual size_t size() const 0;/// \brief If this TensorBuffer is sub-buffer of another TensorBuffer,/// returns that TensorBuffer. Otherwise, returns this.virtual TensorBuffer* root_buffer() 0;/// \brief Fills metadata about the allocation into the proto.virtual void FillAllocationDescription(AllocationDescription* proto) const 0;virtual bool GetAllocatedBytes(size_t* out_bytes) const;/// \brief Helper method to reinterpret the buffer as an array of T.template typename TT* base() const {return reinterpret_castT*(data());}/// \brief Whether this TensorBuffer owns the underlying memory.virtual bool OwnsMemory() const { return true; }/// \brief The type of the underlying memory.virtual AllocatorMemoryType GetMemoryType() const {return AllocatorMemoryType::kUnknown;}private:void* const data_;
};/// Represents an n-dimensional array of values.
class Tensor {public:/// \brief Creates a 1-dimensional, 0-element float tensor.////// The returned Tensor is not a scalar (shape {}), but is instead/// an empty one-dimensional Tensor (shape {0}, NumElements() /// 0). Since it has no elements, it does not need to be assigned a/// value and is initialized by default (IsInitialized() is/// true). If this is undesirable, consider creating a one-element/// scalar which does require initialization:////// c////// Tensor(DT_FLOAT, TensorShape({}))////// Tensor();/// \brief Creates a Tensor of the given type and shape. If/// LogMemory::IsEnabled() the allocation is logged as coming from/// an unknown kernel and step. Calling the Tensor constructor/// directly from within an Op is deprecated: use the/// OpKernelConstruction/OpKernelContext allocate_* methods to/// allocate a new tensor, which record the kernel and step.////// The underlying buffer is allocated using a CPUAllocator.Tensor(DataType type, const TensorShape shape);/// \brief Creates a tensor with the input type and shape, using/// the allocator a to allocate the underlying buffer. If/// LogMemory::IsEnabled() the allocation is logged as coming from/// an unknown kernel and step. Calling the Tensor constructor/// directly from within an Op is deprecated: use the/// OpKernelConstruction/OpKernelContext allocate_* methods to/// allocate a new tensor, which record the kernel and step.////// a must outlive the lifetime of this Tensor.Tensor(Allocator* a, DataType type, const TensorShape shape);/// \brief Creates a tensor with the input type and shape, using/// the allocator a and the specified allocation_attr to/// allocate the underlying buffer. If the kernel and step are known/// allocation_attr.allocation_will_be_logged should be set to true/// and LogMemory::RecordTensorAllocation should be called after the/// tensor is constructed. Calling the Tensor constructor directly/// from within an Op is deprecated: use the/// OpKernelConstruction/OpKernelContext allocate_* methods to/// allocate a new tensor, which record the kernel and step.////// a must outlive the lifetime of this Tensor.Tensor(Allocator* a, DataType type, const TensorShape shape,const AllocationAttributes allocation_attr);/// \brief Creates a tensor with the input datatype, shape and buf.////// Acquires a ref on buf that belongs to this Tensor.Tensor(DataType type, const TensorShape shape, TensorBuffer* buf);/// \brief Creates a tensor with the input datatype, shape and buf.////// Takes an ownership of the bufffer from the reference counted pointer.Tensor(DataType type, TensorShape shape, core::RefCountPtrTensorBuffer buf);/// \brief Creates an empty Tensor of the given data type.////// Like Tensor(), returns a 1-dimensional, 0-element Tensor with/// IsInitialized() returning True. See the Tensor() documentation/// for details.explicit Tensor(DataType type);/// \brief Initializes a tensor with the input type and shape, or returns/// an error and leaves out_tensor unmodified. 
This factory method should be/// used instead of the corresponding constructor if calling code cannot/// validate that the DataType is valid and supported.////// The underlying buffer is allocated using a CPUAllocator.static Status BuildTensor(DataType type, const TensorShape shape,Tensor* out_tensor);private:// A tag type for selecting the Tensor constructor overload that creates a// scalar tensor in host memory.struct host_scalar_tag {};class HostScalarTensorBufferBase;template typename Tstruct ValueAndTensorBuffer;// Creates a tensor with the given scalar value in CPU memory.template typename TTensor(T value, host_scalar_tag tag);public:// A series of specialized constructors for scalar tensors in host memory.//// NOTE: The Variant host-scalar constructor is not defined, because Variant// is implicitly constructible from many different types, and this causes// ambiguities with some compilers.explicit Tensor(float scalar_value): Tensor(scalar_value, host_scalar_tag{}) {}explicit Tensor(double scalar_value): Tensor(scalar_value, host_scalar_tag{}) {}explicit Tensor(int32_t scalar_value): Tensor(scalar_value, host_scalar_tag{}) {}explicit Tensor(uint32 scalar_value): Tensor(scalar_value, host_scalar_tag{}) {}explicit Tensor(uint16 scalar_value): Tensor(scalar_value, host_scalar_tag{}) {}explicit Tensor(uint8 scalar_value): Tensor(scalar_value, host_scalar_tag{}) {}explicit Tensor(int16_t scalar_value): Tensor(scalar_value, host_scalar_tag{}) {}explicit Tensor(int8_t scalar_value): Tensor(scalar_value, host_scalar_tag{}) {}explicit Tensor(tstring scalar_value): Tensor(std::move(scalar_value), host_scalar_tag{}) {}explicit Tensor(complex64 scalar_value): Tensor(scalar_value, host_scalar_tag{}) {}explicit Tensor(complex128 scalar_value): Tensor(scalar_value, host_scalar_tag{}) {}explicit Tensor(int64_t scalar_value): Tensor(scalar_value, host_scalar_tag{}) {}explicit Tensor(uint64 scalar_value): Tensor(scalar_value, host_scalar_tag{}) {}explicit Tensor(bool scalar_value): Tensor(scalar_value, host_scalar_tag{}) {}explicit Tensor(qint8 scalar_value): Tensor(scalar_value, host_scalar_tag{}) {}explicit Tensor(quint8 scalar_value): Tensor(scalar_value, host_scalar_tag{}) {}explicit Tensor(qint16 scalar_value): Tensor(scalar_value, host_scalar_tag{}) {}explicit Tensor(quint16 scalar_value): Tensor(scalar_value, host_scalar_tag{}) {}explicit Tensor(qint32 scalar_value): Tensor(scalar_value, host_scalar_tag{}) {}explicit Tensor(bfloat16 scalar_value): Tensor(scalar_value, host_scalar_tag{}) {}explicit Tensor(Eigen::half scalar_value): Tensor(scalar_value, host_scalar_tag{}) {}explicit Tensor(ResourceHandle scalar_value): Tensor(std::move(scalar_value), host_scalar_tag{}) {}// NOTE: The const char* host-scalar constructor is provided as a// convenience because otherwise passing a string literal would surprisingly// construct a DT_BOOL tensor.explicit Tensor(const char* scalar_value): Tensor(tstring(scalar_value), host_scalar_tag{}) {}/// Copy constructor.Tensor(const Tensor other);/// \brief Move constructor. After this call, other is safely destructible/// can be assigned to, and IsInitialized() can be called and will return/// false. Other calls on other (e.g. 
shape manipulation) are not valid.Tensor(Tensor other);// Explicitly delete constructor that take a pointer (except char*)// so that the pointer doesnt get implicitly cast to bool.template typename T, typename std::enable_if!std::is_sameT, char::value,T::type* nullptrexplicit Tensor(T* t) delete;~Tensor();// I/O operators.friend std::ostream // NOLINT: iosfwdoperator(std::ostream out, const Tensor tensor);/// Returns the data type.DataType dtype() const { return shape_.data_type(); }/// Returns the shape of the tensor.const TensorShape shape() const { return shape_; }/// \brief Convenience accessor for the tensor shape.////// For all shape accessors, see comments for relevant methods of/// TensorShape in tensor_shape.h.int dims() const { return shape().dims(); }/// Convenience accessor for the tensor shape.int64_t dim_size(int d) const { return shape().dim_size(d); }/// Convenience accessor for the tensor shape.int64_t NumElements() const { return shape().num_elements(); }bool IsSameSize(const Tensor b) const {return shape().IsSameSize(b.shape());}// True iff the two tensors use the same underlying refcounted storagebool SharesBufferWith(const Tensor b) const;/// \brief If necessary, has this Tensor been initialized?////// Zero-element Tensors are always considered initialized, even if they/// have never been assigned to and do not have any memory allocated.bool IsInitialized() const;/// Returns the estimated memory usage of this tensor.size_t TotalBytes() const;// Returns the size of allocated memory for this tensor.size_t AllocatedBytes() const;/// Returns true iff this tensor is aligned.bool IsAligned() const {
#if EIGEN_MAX_ALIGN_BYTES 0return true;
#elsevoid* ptr basevoid();return dtype() DT_STRING || NumElements() 0 ||(reinterpret_castintptr_t(ptr) % EIGEN_MAX_ALIGN_BYTES 0);
#endif}/// Assign operator. This tensor shares others underlying storage.Tensor operator(const Tensor other) {CopyFromInternal(other, other.shape());return *this;}/// Move operator. See move constructor for details.Tensor operator(Tensor other);/// \brief Copy the other tensor into this tensor and reshape it.////// This tensor shares others underlying storage. Returns true/// iff other.shape() has the same number of elements of the given/// shape.bool CopyFrom(const Tensor other,const TensorShape shape) TF_MUST_USE_RESULT {if (other.NumElements() ! shape.num_elements()) return false;CopyFromInternal(other, shape);return true;}/// \brief Slice this tensor along the 1st dimension./// I.e., the returned tensor satisfies/// returned[i, ...] this[dim0_start i, ...]./// The returned tensor shares the underlying tensor buffer with this/// tensor.////// NOTE: The returned tensor may not satisfy the same alignment/// requirement as this tensor depending on the shape. The caller/// must check the returned tensors alignment before calling certain/// methods that have alignment requirement (e.g., flat(), tensor()).////// NOTE: When fed with an N-dimensional tensor, this method returns a tensor/// also with N dimensions. If you want to select a sub tensor, see SubSlice.////// REQUIRES: dims() 1/// REQUIRES: 0 dim0_start dim0_limit dim_size(0)Tensor Slice(int64_t dim0_start, int64_t dim0_limit) const;/// \brief Select a subslice from this tensor along the 1st dimension.////// When fed with an N-dimensional tensor, this method returns a tensor with/// N-1 dimensions, where the returned tensor is a subslice of the input/// tensor along the first dimension. The N-1 dimensions of the returned/// tensor are the last N-1 dimensions of the input tensor.////// NOTE: The returned tensor may not satisfy the same alignment/// requirement as this tensor depending on the shape. The caller/// must check the returned tensors alignment before calling certain/// methods that have alignment requirement (e.g., flat(), tensor()).////// REQUIRES: dims() 1/// REQUIRES: 0 index dim_size(0)Tensor SubSlice(int64_t index) const;/// \brief Parse other and construct the tensor./// Returns true iff the parsing succeeds. If the parsing fails,/// the state of *this is unchanged.bool FromProto(const TensorProto other) TF_MUST_USE_RESULT;bool FromProto(Allocator* a, const TensorProto other) TF_MUST_USE_RESULT;/// \brief Fills in proto with *this tensors content.////// AsProtoField() fills in the repeated field for proto.dtype(), while/// AsProtoTensorContent() encodes the content in proto.tensor_content()/// in a compact form.void AsProtoField(TensorProto* proto) const;void AsProtoTensorContent(TensorProto* proto) const;/// \brief Return the tensor data as an Eigen::Tensor with the type and/// sizes of this Tensor.////// Use these methods when you know the data type and the number of/// dimensions of the Tensor and you want an Eigen::Tensor/// automatically sized to the Tensor sizes. 
The implementation check/// fails if either type or sizes mismatch.////// Example:////// c////// typedef float T;/// Tensor my_mat(...built with Shape{rows: 3, cols: 5}...);/// auto mat my_mat.matrixT(); // 2D Eigen::Tensor, 3 x 5./// auto mat my_mat.tensorT, 2(); // 2D Eigen::Tensor, 3 x 5./// auto vec my_mat.vecT(); // CHECK fails as my_mat is 2D./// auto vec my_mat.tensorT, 3(); // CHECK fails as my_mat is 2D./// auto mat my_mat.matrixint32();// CHECK fails as type mismatch.////// template typename Ttypename TTypesT::Vec vec() {return tensorT, 1();}template typename Ttypename TTypesT::Matrix matrix() {return tensorT, 2();}template typename T, size_t NDIMStypename TTypesT, NDIMS::Tensor tensor() TF_ATTRIBUTE_NOINLINE;/// \brief Return the tensor data to an Eigen::Tensor with the/// same size but a bitwise cast to the specified dtype T.////// Using a bitcast is useful for move and copy operations./// NOTE: this is the same as tensor() except a bitcast is allowed.template typename T, size_t NDIMStypename TTypesT, NDIMS::Tensor bit_casted_tensor();/// \brief Return the tensor data to an Eigen::Tensor with the/// last dimension elements converted into single elements of a larger type.////// For example, this is useful for kernels that can treat NCHW_VECT_C int8/// tensors as NCHW int32 tensors. The sizeof(T) should equal the size of/// the original element type * num elements in the original last dimension./// NDIMS should be 1 less than the original number of dimensions.template typename T, size_t NDIMStypename TTypesT, NDIMS::Tensor reinterpret_last_dimension();/// \brief Return the tensor data as an Eigen::Tensor of the data type and a/// specified shape.////// These methods allow you to access the data with the dimensions/// and sizes of your choice. You do not need to know the number of/// dimensions of the Tensor to call them. However, they CHECK that/// the type matches and the dimensions requested creates an/// Eigen::Tensor with the same number of elements as the tensor.////// Example:////// c////// typedef float T;/// Tensor my_ten(...built with Shape{planes: 4, rows: 3, cols: 5}...);/// // 1D Eigen::Tensor, size 60:/// auto flat my_ten.flatT();/// // 2D Eigen::Tensor 12 x 5:/// auto inner my_ten.flat_inner_dimsT();/// // 2D Eigen::Tensor 4 x 15:/// auto outer my_ten.shapedT, 2({4, 15});/// // CHECK fails, bad num elements:/// auto outer my_ten.shapedT, 2({4, 8});/// // 3D Eigen::Tensor 6 x 5 x 2:/// auto weird my_ten.shapedT, 3({6, 5, 2});/// // CHECK fails, type mismatch:/// auto bad my_ten.flatint32();////// template typename Ttypename TTypesT::Flat flat();template typename Ttypename TTypesT::UnalignedFlat unaligned_flat() {return unaligned_shapedT, 1({NumElements()});}/// Returns the data as an Eigen::Tensor with NDIMS dimensions, collapsing all/// Tensor dimensions but the last NDIMS-1 into the first dimension of the/// result. If NDIMS dims() then leading dimensions of size 1 will be/// added to make the output rank NDIMS.template typename T, size_t NDIMS 2typename TTypesT, NDIMS::Tensor flat_inner_dims();/// Returns the data as an Eigen::Tensor with NDIMS dimensions, collapsing all/// Tensor dimensions but the first NDIMS-1 into the last dimension of the/// result. 
If NDIMS dims() then trailing dimensions of size 1 will be/// added to make the output rank NDIMS.template typename T, size_t NDIMS 2typename TTypesT, NDIMS::Tensor flat_outer_dims();/// Returns the data as an Eigen::Tensor with NDIMS dimensions, collapsing the/// first begin Tensor dimensions into the first dimension of the result and/// the Tensor dimensions of the last dims() - begin - NDIMS into the last/// dimension of the result. If begin 0 then the |begin| leading/// dimensions of size 1 will be added. If begin NDIMS dims() then/// begin NDIMS - dims() trailing dimensions of size 1 will be added.template typename T, size_t NDIMS 3typename TTypesT, NDIMS::Tensor flat_inner_outer_dims(int64_t begin);template typename T, size_t NDIMStypename TTypesT, NDIMS::Tensor shaped(gtl::ArraySliceint64_t new_sizes);/// \brief Return the tensor data to an Eigen::Tensor with the new/// shape specified in new_sizes and cast to a new dtype T.////// Using a bitcast is useful for move and copy operations./// The allowed bitcast is the only difference from shaped().template typename T, size_t NDIMStypename TTypesT, NDIMS::Tensor bit_casted_shaped(gtl::ArraySliceint64_t new_sizes);template typename T, size_t NDIMStypename TTypesT, NDIMS::UnalignedTensor unaligned_shaped(gtl::ArraySliceint64_t new_sizes);/// \brief Return the Tensor data as a TensorMap of fixed size 1:/// TensorMapTensorFixedSizeT, 1./// Using scalar() allows the compiler to perform optimizations as/// the size of the tensor is known at compile time.template typename Ttypename TTypesT::Scalar scalar();/// Const versions of all the methods above.template typename Ttypename TTypesT::ConstVec vec() const {return tensorT, 1();}template typename Ttypename TTypesT::ConstMatrix matrix() const {return tensorT, 2();}template typename T, size_t NDIMStypename TTypesT, NDIMS::ConstTensor tensor() const TF_ATTRIBUTE_NOINLINE;/// \brief Return the tensor data to an Eigen::Tensor with the/// same size but a bitwise cast to the specified dtype T.////// Using a bitcast is useful for move and copy operations./// NOTE: this is the same as tensor() except a bitcast is allowed.template typename T, size_t NDIMStypename TTypesT, NDIMS::ConstTensor bit_casted_tensor() const;/// \brief Return the tensor data to an Eigen::Tensor with the/// last dimension elements converted into single elements of a larger type.////// For example, this is useful for kernels that can treat NCHW_VECT_C int8/// tensors as NCHW int32 tensors. 
The sizeof(T) should equal the size of/// the original element type * num elements in the original last dimension./// NDIMS should be 1 less than the original number of dimensions.template typename T, size_t NDIMStypename TTypesT, NDIMS::ConstTensor reinterpret_last_dimension() const;template typename Ttypename TTypesT::ConstFlat flat() const;template typename Ttypename TTypesT::UnalignedConstFlat unaligned_flat() const {return unaligned_shapedT, 1({NumElements()});}template typename T, size_t NDIMStypename TTypesT, NDIMS::ConstTensor shaped(gtl::ArraySliceint64_t new_sizes) const;/// \brief Return the tensor data to an Eigen::Tensor with the new/// shape specified in new_sizes and cast to a new dtype T.////// Using a bitcast is useful for move and copy operations./// The allowed bitcast is the only difference from shaped().template typename T, size_t NDIMStypename TTypesT, NDIMS::ConstTensor bit_casted_shaped(gtl::ArraySliceint64_t new_sizes) const;template typename T, size_t NDIMStypename TTypesT, NDIMS::UnalignedConstTensor unaligned_shaped(gtl::ArraySliceint64_t new_sizes) const;template typename Ttypename TTypesT::ConstScalar scalar() const;template typename T, size_t NDIMS 2typename TTypesT, NDIMS::ConstTensor flat_inner_dims() const;template typename T, size_t NDIMS 2typename TTypesT, NDIMS::ConstTensor flat_outer_dims() const;template typename T, size_t NDIMS 3typename TTypesT, NDIMS::ConstTensor flat_inner_outer_dims(int64_t begin) const;/// Render the first max_entries values in *this into a string.std::string SummarizeValue(int64_t max_entries, bool print_v2 false) const;/// A human-readable summary of the tensor suitable for debugging.// num_values is the number of actual data values in the tensor// included in the message. If the tensor might be resident in// GPU/TPU memory use DeviceSafeDebugString instead.std::string DebugString(int num_values) const;std::string DebugString() const { return DebugString(3); }// Variant of DebugString() that should be used for possibly non-CPU tensors.// If the tensor is not resident on CPU, we cant read its values as// DebugString() does.std::string DeviceSafeDebugString() const;/// Fill in the TensorDescription proto with metadata about the/// tensor that is useful for monitoring and debugging.void FillDescription(TensorDescription* description) const;/// \brief Returns a StringPiece mapping the current tensors buffer.////// The returned StringPiece may point to memory location on devices/// that the CPU cannot address directly.////// NOTE: The underlying tensor buffer is refcounted, so the lifetime/// of the contents mapped by the StringPiece matches the lifetime of/// the buffer; callers should arrange to make sure the buffer does/// not get destroyed while the StringPiece is still used.////// REQUIRES: DataTypeCanUseMemcpy(dtype()).StringPiece tensor_data() const;void* data() const;/// Copy the other tensor into this tensor, reshape it and reinterpret the/// buffers datatype. If an ok Status is returned, the two tensors now share/// the same underlying storage.////// This call requires that the other tensor and the given type and shape/// are compatible (i.e. they occupy the same number of bytes).////// Specifically:////// shape.num_elements() * DataTypeSize(type)////// must equal////// other.num_elements() * DataTypeSize(other.dtype())////// In addition, this function requires:/// * DataTypeSize(other.dtype()) ! 0/// * DataTypeSize(type) ! 
0////// If any of the requirements are not met, errors::InvalidArgument is/// returned.Status BitcastFrom(const Tensor other, DataType dtype,const TensorShape shape);/// Like BitcastFrom, but CHECK fails if any preconditions are not met.////// Deprecated. Use BitcastFrom instead and check the returned Status.void UnsafeCopyFromInternal(const Tensor other, DataType dtype,const TensorShape shape) {TF_CHECK_OK(BitcastFrom(other, dtype, shape));}// Returns true if the refcount on buf_ and any possible underlying root// buffer is one.bool RefCountIsOne() const;// Experimental. Returns the refcount on buf_ if it points to a regular// TensorBuffer. If buf_ points to a SubBuffer, returns -1.int RefCount() const;// Returns the type of the underlying memory.AllocatorMemoryType GetMemoryType() const { return buf_-GetMemoryType(); }private:void CheckType(DataType expected_dtype) const;void CheckTypeAndIsAligned(DataType expected_dtype) const;void CheckIsAlignedAndSingleElement() const;void set_dtype(DataType t) { shape_.set_data_type(t); }// TensorShapes InlineVector.static gtl::InlinedVectorint64_t, 4 ComputeFlatInnerDims(gtl::ArraySliceint64_t orig, int64_t num_out_dims);static gtl::InlinedVectorint64_t, 4 ComputeFlatOuterDims(gtl::ArraySliceint64_t orig, int64_t num_out_dims);TensorShape shape_;TensorBuffer* buf_;friend class DMAHelper; // For access to buf_.friend class TensorCApi; // For access to buf_.friend class TensorCord; // For access to buf_.friend class TensorReference; // For access to buf_.friend class VariableOp; // For access to set_shape.friend class AutoReloadVariableOp; // For access to set_shape.friend class TensorTestHelper; // For access to set_shape.friend class TensorInterface; // For access to set_shape.friend class CastOpBase; // For access to set_dtype.friend class ScopedAllocator; // For access to buf_.friend class PjRtTensorBufferUtil; // For access to buf_.friend Status batch_util::CopyElementToSlice(Tensor element, Tensor* parent,int64_t index); // For access to baseT().friend Status batch_util::CopySliceToElement(const Tensor parent, Tensor* element,int64_t index); // For access to baseT().friend Status batch_util::MaybeMoveSliceToElement(Tensor* parent, Tensor* element,int64_t index); // For access to baseT().friend Status batch_util::CopyContiguousSlices(const Tensor src, int64_t src_offset, int64_t dst_offset,int64_t num_slices,Tensor* dst); // For access to baseT().bool CanUseDMA() const;// Only needed by variable op to set the shape of an uninitialized// Tensor.// TODO: Remove this when we have a better story for detecting// uninitialized tensors.void set_shape(const TensorShape shape) {DataType dt dtype();shape_ shape;set_dtype(dt);}inline void CopyFromInternal(const Tensor other, const TensorShape shape) {DCHECK_EQ(shape.num_elements(), other.NumElements());// Data type will be overwritten if this other, since dtype is part of// shape.DataType other_dtype other.dtype();shape_ shape;set_dtype(other_dtype);if (buf_ ! other.buf_) {if (buf_) buf_-Unref();buf_ other.buf_;if (buf_) buf_-Ref();}}template typename TT* base() const;template size_t NDIMSvoid FillDimsAndValidateCompatibleShape(gtl::ArraySliceint64_t new_sizes,Eigen::arrayEigen::DenseIndex, NDIMS* dims) const;template typename T, size_t NDIMSvoid FillDimsAndValidateCompatibleShape(gtl::ArraySliceint64_t new_sizes,Eigen::arrayEigen::DenseIndex, NDIMS* dims) const;
};// Implementation details// START_SKIP_DOXYGENtemplate typename T
T* Tensor::base() const {return buf_ nullptr ? nullptr : buf_-baseT();
}// This routine is defined out of line for code-space savings
template typename T, size_t NDIMS
typename TTypesT, NDIMS::Tensor Tensor::tensor() {CheckTypeAndIsAligned(DataTypeToEnumT::v());return typename TTypesT, NDIMS::Tensor(baseT(),shape().AsEigenDSizesNDIMS());
}// This routine is defined out of line for code-space savings
template typename T, size_t NDIMS
typename TTypesT, NDIMS::ConstTensor Tensor::tensor() const {CheckTypeAndIsAligned(DataTypeToEnumT::v());return typename TTypesT, NDIMS::ConstTensor(baseconst T(),shape().AsEigenDSizesNDIMS());
}template typename T, size_t NDIMS
typename TTypesT, NDIMS::Tensor Tensor::bit_casted_tensor() {CHECK(IsAligned());return typename TTypesT, NDIMS::Tensor(baseT(),shape().AsEigenDSizesNDIMS());
}template typename T, size_t NDIMS
typename TTypesT, NDIMS::ConstTensor Tensor::bit_casted_tensor() const {CHECK(IsAligned());return typename TTypesT, NDIMS::ConstTensor(baseconst T(),shape().AsEigenDSizesNDIMS());
}template typename T, size_t NDIMS
typename TTypesT, NDIMS::Tensor Tensor::reinterpret_last_dimension() {if (NDIMS dims()) {return tensorT, NDIMS();}CHECK(IsAligned());CHECK_EQ(static_castint(NDIMS), dims() - 1);CHECK_EQ(static_castint64_t(sizeof(T)),shape_.dim_sizes()[NDIMS] * DataTypeSize(dtype()));Eigen::arrayEigen::DenseIndex, NDIMS dims;for (int d 0; d NDIMS; d) {dims[d] shape_.dim_sizes()[d];}return typename TTypesT, NDIMS::Tensor(baseT(), dims);
}template typename T, size_t NDIMS
typename TTypesT, NDIMS::ConstTensor Tensor::reinterpret_last_dimension()const {if (NDIMS dims()) {return tensorT, NDIMS();}CHECK(IsAligned());CHECK_EQ(static_castint(NDIMS), dims() - 1);CHECK_EQ(static_castint64_t(sizeof(T)),shape_.dim_sizes()[NDIMS] * DataTypeSize(dtype()));Eigen::arrayEigen::DenseIndex, NDIMS dims;for (int d 0; d NDIMS; d) {dims[d] shape_.dim_sizes()[d];}return typename TTypesT, NDIMS::ConstTensor(baseconst T(), dims);
}template size_t NDIMS
void Tensor::FillDimsAndValidateCompatibleShape(gtl::ArraySliceint64_t new_sizes,Eigen::arrayEigen::DenseIndex, NDIMS* dims) const {CHECK_EQ(NDIMS, new_sizes.size());int64_t new_num_elements 1;for (size_t d 0; d NDIMS; d) {new_num_elements * new_sizes[d];(*dims)[d] new_sizes[d];}CHECK_EQ(new_num_elements, NumElements());
}template typename T, size_t NDIMS
void Tensor::FillDimsAndValidateCompatibleShape(gtl::ArraySliceint64_t new_sizes,Eigen::arrayEigen::DenseIndex, NDIMS* dims) const {CHECK_EQ(NDIMS, new_sizes.size());int64_t new_num_elements 1;for (size_t d 0; d NDIMS; d) {new_num_elements * new_sizes[d];(*dims)[d] new_sizes[d];}const int element_size DataTypeSize(BaseType(dtype()));if (element_size 0) {CHECK_EQ(new_num_elements * static_castint64_t(sizeof(T)),NumElements() * element_size);} else {// DataTypeSize() returns 0 for some data types. In this case, assume that T// has the same size as the buffer type.// NOTE: If we can be sure that DataTypeSize() does not return 0 for all POD// types, then we should check DataTypeToEnumT::v() dtype(). Or simply// check if element_size 0 to err when bit cast is attempted on Tensor// of unknown data type size.CHECK_EQ(new_num_elements, NumElements());}
}template typename T
typename TTypesT::Flat Tensor::flat() {// Equivalent to return shapedT, 1({NumElements()});CheckTypeAndIsAligned(DataTypeToEnumT::v());Eigen::arrayEigen::DenseIndex, 1 dims;dims[0] NumElements();return typename TTypesT, 1::Tensor(baseT(), dims);
}template typename T
typename TTypesT::ConstFlat Tensor::flat() const {// Equuivalent to return shapedT, 1({NumElements()});CheckTypeAndIsAligned(DataTypeToEnumT::v());Eigen::arrayEigen::DenseIndex, 1 dims;dims[0] NumElements();return typename TTypesT, 1::ConstTensor(baseT(), dims);
}template typename T, size_t NDIMS
typename TTypesT, NDIMS::Tensor Tensor::shaped(gtl::ArraySliceint64_t new_sizes) {CheckTypeAndIsAligned(DataTypeToEnumT::v());Eigen::arrayEigen::DenseIndex, NDIMS dims;FillDimsAndValidateCompatibleShape(new_sizes, dims);return typename TTypesT, NDIMS::Tensor(baseT(), dims);
}template typename T, size_t NDIMS
typename TTypesT, NDIMS::Tensor Tensor::bit_casted_shaped(gtl::ArraySliceint64_t new_sizes) {CHECK(IsAligned());Eigen::arrayEigen::DenseIndex, NDIMS dims;FillDimsAndValidateCompatibleShapeT(new_sizes, dims);return typename TTypesT, NDIMS::Tensor(baseT(), dims);
}template typename T, size_t NDIMS
typename TTypesT, NDIMS::UnalignedTensor Tensor::unaligned_shaped(gtl::ArraySliceint64_t new_sizes) {CheckType(DataTypeToEnumT::v());Eigen::arrayEigen::DenseIndex, NDIMS dims;FillDimsAndValidateCompatibleShape(new_sizes, dims);return typename TTypesT, NDIMS::UnalignedTensor(baseT(), dims);
}template typename T, size_t NDIMS
typename TTypesT, NDIMS::ConstTensor Tensor::shaped(gtl::ArraySliceint64_t new_sizes) const {CheckType(DataTypeToEnumT::v());CHECK(IsAligned()) ptr basevoid();Eigen::arrayEigen::DenseIndex, NDIMS dims;FillDimsAndValidateCompatibleShape(new_sizes, dims);return typename TTypesT, NDIMS::ConstTensor(baseT(), dims);
}template typename T, size_t NDIMS
typename TTypesT, NDIMS::ConstTensor Tensor::bit_casted_shaped(gtl::ArraySliceint64_t new_sizes) const {CHECK(IsAligned());Eigen::arrayEigen::DenseIndex, NDIMS dims;FillDimsAndValidateCompatibleShapeT(new_sizes, dims);return typename TTypesT, NDIMS::ConstTensor(baseT(), dims);
}template typename T, size_t NDIMS
typename TTypesT, NDIMS::UnalignedConstTensor Tensor::unaligned_shaped(gtl::ArraySliceint64_t new_sizes) const {CheckType(DataTypeToEnumT::v());Eigen::arrayEigen::DenseIndex, NDIMS dims;FillDimsAndValidateCompatibleShape(new_sizes, dims);return typename TTypesT, NDIMS::UnalignedConstTensor(baseT(), dims);
}template typename T
typename TTypesT::Scalar Tensor::scalar() {static_assert(!std::is_sameT, std::string::value,std::string is no longer a scalar type, use tensorflow::tstring);CheckIsAlignedAndSingleElement();return typename TTypesT::Scalar(baseT());
}template typename T
typename TTypesT::ConstScalar Tensor::scalar() const {static_assert(!std::is_sameT, std::string::value,std::string is no longer a scalar type, use tensorflow::tstring);CheckIsAlignedAndSingleElement();return typename TTypesT::ConstScalar(baseT());
}template typename T, size_t NDIMS
typename TTypesT, NDIMS::Tensor Tensor::flat_inner_dims() {return shapedT, NDIMS(ComputeFlatInnerDims(shape_.dim_sizes(), NDIMS));
}template typename T, size_t NDIMS
typename TTypesT, NDIMS::Tensor Tensor::flat_outer_dims() {return shapedT, NDIMS(ComputeFlatOuterDims(shape_.dim_sizes(), NDIMS));
}template typename T, size_t NDIMS
typename TTypesT, NDIMS::Tensor Tensor::flat_inner_outer_dims(int64_t begin) {gtl::InlinedVectorint64_t, 4 flat_outer ComputeFlatOuterDims(shape_.dim_sizes(), begin NDIMS);return shapedT, NDIMS(ComputeFlatInnerDims(flat_outer, NDIMS));
}template typename T, size_t NDIMS
typename TTypesT, NDIMS::ConstTensor Tensor::flat_inner_dims() const {return shapedT, NDIMS(ComputeFlatInnerDims(shape_.dim_sizes(), NDIMS));
}template typename T, size_t NDIMS
typename TTypesT, NDIMS::ConstTensor Tensor::flat_outer_dims() const {return shapedT, NDIMS(ComputeFlatOuterDims(shape_.dim_sizes(), NDIMS));
}template typename T, size_t NDIMS
typename TTypesT, NDIMS::ConstTensor Tensor::flat_inner_outer_dims(int64_t begin) const {gtl::InlinedVectorint64_t, 4 flat_outer ComputeFlatOuterDims(shape_.dim_sizes(), begin NDIMS);return shapedT, NDIMS(ComputeFlatInnerDims(flat_outer, NDIMS));
}inline Tensor::Tensor(const Tensor other): shape_(other.shape()), buf_(other.buf_) {if (buf_) buf_-Ref();
}inline Tensor::Tensor(Tensor other): shape_(std::move(other.shape_)), buf_(other.buf_) {other.buf_ nullptr;
}class Tensor::HostScalarTensorBufferBase : public TensorBuffer {public:using TensorBuffer::TensorBuffer;bool GetAllocatedBytes(size_t* out_bytes) const final;void FillAllocationDescription(AllocationDescription* proto) const final;
};// A packed representation for a single scalar value of type T, and a
// TensorBuffer implementation that describes (and manages the lifetime of)
// that value.
template typename T
struct Tensor::ValueAndTensorBuffer {class HostScalarTensorBuffer : public Tensor::HostScalarTensorBufferBase {public:explicit HostScalarTensorBuffer(void* data): HostScalarTensorBufferBase(data) {}size_t size() const final { return sizeof(T); }TensorBuffer* root_buffer() final { return this; }// Override operator delete so that calling delete this in// core::Refcounted::Unref() for an object of this type will free// the enclosing ValueAndTensorBuffer for the tensor buffer.//// NOTE(mrry): The definition of this method must be outside the class// definition in order to satisfy some compilers.static void operator delete(void* ptr);static void operator delete(void*, void*) {// Some compilers require an overridden class-specific deallocation// function, which will be called if placement new throws an// exception.}private:~HostScalarTensorBuffer() override { static_castT*(data())-~T(); }};T value;HostScalarTensorBuffer tensor_buffer;
};/* static */
template typename T
void Tensor::ValueAndTensorBufferT::HostScalarTensorBuffer::operator delete(void* ptr) {// Use a dummy object to compute to offset of// ValueAndTensorBuffer::tensor_buffer, because offsetof() is not// necessarily defined on this non-POD type (until C17).//// NOTE(mrry): Using sizeof(Tensor::ValueAndTensorBufferT) here requires// us to define this method outside the class definition, so that it is not// considered an incomplete type.typename std::aligned_storagesizeof(Tensor::ValueAndTensorBufferT),alignof(Tensor::ValueAndTensorBufferT)::typedummy_storage_;Tensor::ValueAndTensorBufferT* dummy_object reinterpret_castTensor::ValueAndTensorBufferT*(dummy_storage_);intptr_t offset reinterpret_castintptr_t(dummy_object-tensor_buffer) -reinterpret_castintptr_t(dummy_object);port::AlignedFree(static_castchar*(ptr) - offset);
}template typename T
Tensor::Tensor(T value, host_scalar_tag tag) {auto* value_and_buf static_castTensor::ValueAndTensorBufferT*(port::AlignedMalloc(sizeof(typename Tensor::ValueAndTensorBufferT),EIGEN_MAX_ALIGN_BYTES));new (value_and_buf-value) T(std::move(value));new (value_and_buf-tensor_buffer)typename Tensor::ValueAndTensorBufferT::HostScalarTensorBuffer(value_and_buf);buf_ value_and_buf-tensor_buffer;set_dtype(DataTypeToEnumT::value);
}inline Tensor Tensor::operator(Tensor other) {// Avoid self-assignment, since we might destroy our underlying buffer.if (other ! this) {shape_ std::move(other.shape_);if (buf_) buf_-Unref();buf_ other.buf_;other.buf_ nullptr;}return *this;
}// END_SKIP_DOXYGEN} // namespace tensorflow#endif // TENSORFLOW_CORE_FRAMEWORK_TENSOR_H_
1.3 TFLite Models
A TensorFlow Lite model is stored in an efficient, portable format called FlatBuffer, identified by the ".tflite" file extension. Compared with TensorFlow's protocol-buffer model format it has several advantages: smaller size (small code footprint) and faster inference (the data is accessed directly, without an extra parsing/unpacking step), which lets TensorFlow Lite run efficiently on devices with limited compute and memory resources.
A TensorFlow Lite model can optionally include metadata containing a human-readable model description and machine-readable data, which allows pre- and post-processing pipelines to be generated automatically during on-device inference. See "Adding metadata" for details.
You can obtain a TensorFlow Lite model in the following ways:
Use an existing TensorFlow Lite model: to pick an existing model, see the TensorFlow Lite examples. The model may or may not contain metadata.
Create a TensorFlow Lite model: use TensorFlow Lite Model Maker to build a model from your own custom dataset. By default, these models all contain metadata.
Convert a TensorFlow model to a TensorFlow Lite model: use the TensorFlow Lite Converter. During conversion you can apply optimizations such as quantization to reduce model size and latency with little or no loss in accuracy. By default, converted models contain no metadata.
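However the .tflite file is produced, the calculators below ultimately load it through the TensorFlow Lite C++ API. A minimal, self-contained sketch of that loading step; the file name "detector.tflite" is a placeholder, and the FlatBufferModel must outlive the interpreter built from it.

#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

// Load a .tflite FlatBuffer from disk, build an interpreter for it,
// and allocate its tensors.
bool LoadAndPrepare() {
  auto model = tflite::FlatBufferModel::BuildFromFile("detector.tflite");
  if (!model) return false;
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  if (tflite::InterpreterBuilder(*model, resolver)(&interpreter) != kTfLiteOk)
    return false;
  return interpreter->AllocateTensors() == kTfLiteOk;
}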
2 Pipeline
The sections below analyze the core of these calculators: the Process method.
2.1 TfLiteConverterCalculator
Depending on the tag of the input, it dispatches to either the CPU or the GPU path and ultimately converts the input data into standard TfLiteTensors.
absl::Status TfLiteConverterCalculator::Process(CalculatorContext* cc) {
  if (use_gpu_) {
    if (cc->Inputs().Tag(kGpuBufferTag).IsEmpty()) {
      return absl::OkStatus();
    }
    if (!initialized_) {
      MP_RETURN_IF_ERROR(InitGpu(cc));
      initialized_ = true;
    }
    // Convert to GPU tensors type.
    MP_RETURN_IF_ERROR(ProcessGPU(cc));
  } else {
    // Convert to CPU tensors or Matrix type.
    MP_RETURN_IF_ERROR(ProcessCPU(cc));
  }
  return absl::OkStatus();
}

2.1.1 GPU
Before ProcessGPU does any work, InitGpu has initialized the GPU context and the shader program.
InitGpu (shaders written in MSL for Metal or GLSL for OpenGL)
The shader samples the input 2D texture's RGBA channels as a vec4 for every pixel in the region.

absl::Status TfLiteConverterCalculator::InitGpu(CalculatorContext* cc) {
#if MEDIAPIPE_TFLITE_GPU_SUPPORTED
  // Get input image sizes.
  const auto& input =
      cc->Inputs().Tag(kGpuBufferTag).Get<mediapipe::GpuBuffer>();
  mediapipe::ImageFormat::Format format =
      mediapipe::ImageFormatForGpuBufferFormat(input.format());
  gpu_data_out_ = absl::make_unique<GPUData>();
  gpu_data_out_->elements = input.height() * input.width() * max_num_channels_;
  const bool include_alpha = (max_num_channels_ == 4);
  const bool single_channel = (max_num_channels_ == 1);
  if (!(format == mediapipe::ImageFormat::GRAY8 ||
        format == mediapipe::ImageFormat::SRGB ||
        format == mediapipe::ImageFormat::SRGBA))
    RET_CHECK_FAIL() << "Unsupported GPU input format.";
  if (include_alpha && (format != mediapipe::ImageFormat::SRGBA))
    RET_CHECK_FAIL() << "Num input channels is less than desired output.";
#endif // MEDIAPIPE_TFLITE_GPU_SUPPORTED#if MEDIAPIPE_TFLITE_GL_INFERENCEMP_RETURN_IF_ERROR(gpu_helper_.RunInGlContext([this, include_alpha, input, single_channel]() - absl::Status {// Device memory.MP_RETURN_IF_ERROR(::tflite::gpu::gl::CreateReadWriteShaderStorageBufferfloat(gpu_data_out_-elements, gpu_data_out_-buffer));// Shader to convert GL Texture to Shader Storage Buffer Object (SSBO),// with normalization to either: [0,1] or [-1,1].const std::string shader_source absl::Substitute(R( #version 310 eslayout(local_size_x $0, local_size_y $0) in;layout(binding 0) uniform sampler2D input_texture;layout(std430, binding 1) buffer Output {float elements[];} output_data;ivec2 width_height ivec2($1, $2);void main() {ivec2 gid ivec2(gl_GlobalInvocationID.xy);if (gid.x width_height.x || gid.y width_height.y) return;vec4 pixel texelFetch(input_texture, gid, 0);$3 // normalize [-1,1]int linear_index $7 * ($4 * width_height.x gid.x);output_data.elements[linear_index 0] pixel.x; // r channel$5 // g b channels$6 // alpha channel}),/*$0*/kWorkgroupSize, /*$1*/input.width(), /*$2*/input.height(),/*$3*/output_range_.has_value()? absl::Substitute(pixel pixel * float($0) float($1);,(output_range_-second - output_range_-first),output_range_-first): ,/*$4*/flip_vertically_ ? (width_height.y - 1 - gid.y) : gid.y,/*$5*/single_channel? : R(output_data.elements[linear_index 1] pixel.y;output_data.elements[linear_index 2] pixel.z;),/*$6*/include_alpha ? output_data.elements[linear_index 3] pixel.w;: ,/*$7*/max_num_channels_);MP_RETURN_IF_ERROR(GlShader::CompileShader(GL_COMPUTE_SHADER, shader_source, gpu_data_out_-shader));MP_RETURN_IF_ERROR(GlProgram::CreateWithShader(gpu_data_out_-shader, gpu_data_out_-program));return absl::OkStatus();}));#elif MEDIAPIPE_TFLITE_METAL_INFERENCERET_CHECK(include_alpha) iOS GPU inference currently accepts only RGBA input.;// Device memory.idMTLDevice device gpu_helper_.mtlDevice;gpu_data_out_-buffer [device newBufferWithLength:gpu_data_out_-elements * sizeof(float)options:MTLResourceStorageModeShared];// Shader to convert GL Texture to Metal Buffer,// with normalization to either: [0,1] or [-1,1].const std::string shader_source absl::Substitute(R(#include metal_stdlibusing namespace metal;kernel void convertKernel(texture2dhalf, access::sample in_tex [[ texture(0) ]],device float* out_buf [[ buffer(1) ]],uint2 gid [[ thread_position_in_grid ]]) {if (gid.x in_tex.get_width() || gid.y in_tex.get_height()) return;constexpr sampler texture_sampler(coord::pixel, address::clamp_to_edge);const float2 coord float2(gid.x, gid.y);$0 pixel $0(in_tex.sample(texture_sampler, coord).$1);$2 // normalize [-1,1]const int linear_index $4 * ($3 * in_tex.get_width() gid.x);out_buf[linear_index 0] pixel.x;out_buf[linear_index 1] pixel.y;out_buf[linear_index 2] pixel.z;$5 // alpha channel}),/*$0*/include_alpha ? float4 : float3,/*$1*/include_alpha ? rgba : rgb,/*$2*/output_range_.has_value()? absl::Substitute(pixel pixel * float($0) float($1);,(output_range_-second - output_range_-first),output_range_-first): ,/*$3*/flip_vertically_ ? (in_tex.get_height() - 1 - gid.y) : gid.y,/*$4*/include_alpha ? 4 : 3,/*$5*/include_alpha ? out_buf[linear_index 3] pixel.w; : );NSString* library_source [NSString stringWithUTF8String:shader_source.c_str()];NSError* error nil;idMTLLibrary library [device newLibraryWithSource:library_source options:nullptr error:error];RET_CHECK(library ! 
nil) Couldnt create shader library [[error localizedDescription] UTF8String];idMTLFunction kernel_func nil;kernel_func [library newFunctionWithName:convertKernel];RET_CHECK(kernel_func ! nil) Couldnt create kernel function.;gpu_data_out_-pipeline_state [device newComputePipelineStateWithFunction:kernel_func error:error];RET_CHECK(gpu_data_out_-pipeline_state ! nil) Couldnt create pipeline state [[error localizedDescription] UTF8String];
#endif // MEDIAPIPE_TFLITE_GL_INFERENCEreturn absl::OkStatus();
}
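The $3 substitution above is the normalization step: when an output range is configured, the shader maps the sampled value from [0,1] to [min,max] (for example [-1,1]) and writes it at linear_index = channels * (y * width + x). Below is a small C++ sketch of that per-pixel math; the names are illustrative, not MediaPipe API.

#include <array>

// Per-pixel math mirrored from the shader: map a sampled RGB value in [0,1]
// to a configured output range, e.g. [-1,1].
std::array<float, 3> NormalizePixel(std::array<float, 3> rgb, float range_min,
                                    float range_max) {
  for (float& v : rgb) v = v * (range_max - range_min) + range_min;
  return rgb;
}

// Row-major offset of pixel (x, y) in the output buffer, matching
// linear_index = channels * (y * width + x) in the shader.
int LinearIndex(int x, int y, int width, int channels) {
  return channels * (y * width + x);
}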
CreateSourceTexture
It can also convert an ImageFrame to a GpuBuffer.
In MediaPipe, GpuBuffer is a class for holding data in GPU memory; it can store images, video frames, audio, and other data.
GpuBuffer provides a cross-platform abstraction over GPU memory, allowing data to be transferred and operated on across different GPUs and platforms.
GpuBuffer(GpuBuffer&& other) = default;
auto src = gpu_helper_.CreateSourceTexture(input);
GlTexture GlCalculatorHelperImpl::MapGpuBuffer(const GpuBuffer& gpu_buffer,
                                               GlTextureView view) {
  if (gpu_buffer.format() != GpuBufferFormat::kUnknown) {
    // TODO: do the params need to be reset here??
    glBindTexture(view.target(), view.name());
    GlTextureInfo info = GlTextureInfoForGpuBufferFormat(
        gpu_buffer.format(), view.plane(), GetGlVersion());
    gl_context_->SetStandardTextureParams(view.target(),
                                          info.gl_internal_format);
    glBindTexture(view.target(), 0);
  }

This is the MapGpuBuffer function of GlCalculatorHelperImpl; it maps a GpuBuffer onto a GlTexture. The code first checks whether the GpuBuffer's format is GpuBufferFormat::kUnknown; if it is not, a GlTextureInfo object is built from the GpuBuffer format, the texture view's plane, and the OpenGL version, and that object is used to set the standard texture parameters.
Concretely, glBindTexture first binds the view's texture object to its target, GlTextureInfoForGpuBufferFormat produces the GlTextureInfo object, gl_context_->SetStandardTextureParams applies the standard texture parameters, and finally glBindTexture unbinds the texture again.
ProcessGPU
absl::Status TfLiteConverterCalculator::ProcessGPU(CalculatorContext* cc) {
#if MEDIAPIPE_TFLITE_GL_INFERENCE// GpuBuffer to tflite::gpu::GlBuffer conversion.const auto input cc-Inputs().Tag(kGpuBufferTag).Getmediapipe::GpuBuffer();MP_RETURN_IF_ERROR(gpu_helper_.RunInGlContext([this, input]() - absl::Status {// Convert GL texture into TfLite GlBuffer (SSBO).auto src gpu_helper_.CreateSourceTexture(input);glActiveTexture(GL_TEXTURE0 0);glBindTexture(GL_TEXTURE_2D, src.name());MP_RETURN_IF_ERROR(gpu_data_out_-buffer.BindToIndex(1));const tflite::gpu::uint3 workgroups {NumGroups(input.width(), kWorkgroupSize),NumGroups(input.height(), kWorkgroupSize), 1};MP_RETURN_IF_ERROR(gpu_data_out_-program.Dispatch(workgroups));glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);glBindTexture(GL_TEXTURE_2D, 0);src.Release();return absl::OkStatus();}));// Copy into outputs.auto output_tensors absl::make_uniquestd::vectorGpuTensor();MP_RETURN_IF_ERROR(gpu_helper_.RunInGlContext([this, output_tensors]() - absl::Status {output_tensors-resize(1);{GpuTensor tensor output_tensors-at(0);MP_RETURN_IF_ERROR(CreateReadWriteShaderStorageBufferfloat(gpu_data_out_-elements, tensor));MP_RETURN_IF_ERROR(CopyBuffer(gpu_data_out_-buffer, tensor));}return absl::OkStatus();}));cc-Outputs().Tag(kTensorsGpuTag).Add(output_tensors.release(), cc-InputTimestamp());
#elif MEDIAPIPE_TFLITE_METAL_INFERENCE// GpuBuffer to idMTLBuffer conversion.const auto input cc-Inputs().Tag(kGpuBufferTag).Getmediapipe::GpuBuffer();idMTLCommandBuffer command_buffer [gpu_helper_ commandBuffer];idMTLTexture src_texture [gpu_helper_ metalTextureWithGpuBuffer:input];command_buffer.label TfLiteConverterCalculatorConvertAndBlit;idMTLComputeCommandEncoder compute_encoder [command_buffer computeCommandEncoder];[compute_encoder setComputePipelineState:gpu_data_out_-pipeline_state];[compute_encoder setTexture:src_texture atIndex:0];[compute_encoder setBuffer:gpu_data_out_-buffer offset:0 atIndex:1];MTLSize threads_per_group MTLSizeMake(kWorkgroupSize, kWorkgroupSize, 1);MTLSize threadgroups MTLSizeMake(NumGroups(input.width(), kWorkgroupSize),NumGroups(input.height(), kWorkgroupSize), 1);[compute_encoder dispatchThreadgroups:threadgroupsthreadsPerThreadgroup:threads_per_group];[compute_encoder endEncoding];// Copy into outputs.// TODO Avoid this copy.auto output_tensors absl::make_uniquestd::vectorGpuTensor();output_tensors-resize(1);idMTLDevice device gpu_helper_.mtlDevice;output_tensors-at(0) [device newBufferWithLength:gpu_data_out_-elements * sizeof(float)options:MTLResourceStorageModeShared];[MPPMetalUtil blitMetalBufferTo:output_tensors-at(0)from:gpu_data_out_-bufferblocking:falsecommandBuffer:command_buffer];cc-Outputs().Tag(kTensorsGpuTag).Add(output_tensors.release(), cc-InputTimestamp());
#else
  RET_CHECK_FAIL() << "GPU processing is not enabled.";
#endif  // MEDIAPIPE_TFLITE_GL_INFERENCE
  return absl::OkStatus();
}
2.1.2 CPU
…
2.2 Model inference (TENSORS_GPU / TENSORS_CPU inference)
This calculator assumes that tensors are correctly ordered (i.e. added to the model in order). Input tensors are assumed to have the correct size and to be already normalized. All output TfLiteTensors are destroyed when the graph is closed (i.e. after graph.WaitUntilDone() returns). GPU tensor support requires OpenGL ES 3.1. By default this calculator uses a FixedSizeInputStreamHandler.
2.2.1 TfLiteInferenceCalculator
All CPU-only code paths are omitted below.
Inference supports GPU acceleration (OpenGL) as well as other hardware acceleration such as OpenCL, selected by loading a different backend into tflite_gpu_runner_; TENSORS_CPU data can instead be accelerated through the optimized instruction sets of the various platforms.
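For context, here is a minimal sketch of how this calculator is usually selected from a graph definition, with the node parsed from a text proto in C++. The stream names, model_path, and the delegate { gpu {} } option block are illustrative assumptions based on the Delegate::Gpu enums referenced in InitTFLiteGPURunner below, not values taken from this post:

#include "mediapipe/framework/calculator_framework.h"
#include "mediapipe/framework/port/parse_text_proto.h"

// Sketch only: builds a one-node graph config that routes GPU tensors through
// TfLiteInferenceCalculator. The option field names are assumptions.
mediapipe::CalculatorGraphConfig MakeGpuInferenceNode() {
  return mediapipe::ParseTextProtoOrDie<mediapipe::CalculatorGraphConfig>(R"pb(
    input_stream: "image_tensor"
    output_stream: "detection_tensors"
    node {
      calculator: "TfLiteInferenceCalculator"
      input_stream: "TENSORS_GPU:image_tensor"
      output_stream: "TENSORS_GPU:detection_tensors"
      options {
        [mediapipe.TfLiteInferenceCalculatorOptions.ext] {
          model_path: "mediapipe/models/ssdlite_object_detection.tflite"
          delegate { gpu {} }  # GPU delegate; API/usage knobs also live here.
        }
      }
    }
  )pb");
}

InitTFLiteGPURunner below then maps those options (usage, API, precision) onto the TFLite GPU delegate.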
absl::Status TfLiteInferenceCalculator::InitTFLiteGPURunner(
    CalculatorContext* cc) {
#if MEDIAPIPE_TFLITE_GL_INFERENCE
  ASSIGN_OR_RETURN(model_packet_, GetModelAsPacket(*cc));
  const auto& model = *model_packet_.Get<TfLiteModelPtr>();
  tflite::ops::builtin::BuiltinOpResolverWithoutDefaultDelegates
      default_op_resolver;
  auto op_resolver_ptr =
      static_cast<const tflite::ops::builtin::BuiltinOpResolver*>(
          &default_op_resolver);
  if (cc->InputSidePackets().HasTag(kCustomOpResolverTag)) {
    op_resolver_ptr = &(cc->InputSidePackets()
                            .Tag(kCustomOpResolverTag)
                            .Get<tflite::ops::builtin::BuiltinOpResolver>());
  }

  // Create runner
  tflite::gpu::InferenceOptions options;
  options.priority1 = allow_precision_loss_
                          ? tflite::gpu::InferencePriority::MIN_LATENCY
                          : tflite::gpu::InferencePriority::MAX_PRECISION;
  options.priority2 = tflite::gpu::InferencePriority::AUTO;
  options.priority3 = tflite::gpu::InferencePriority::AUTO;
  switch (tflite_gpu_runner_usage_) {
    case mediapipe::TfLiteInferenceCalculatorOptions::Delegate::Gpu::
        FAST_SINGLE_ANSWER: {
      options.usage = tflite::gpu::InferenceUsage::FAST_SINGLE_ANSWER;
      break;
    }
    case mediapipe::TfLiteInferenceCalculatorOptions::Delegate::Gpu::
        SUSTAINED_SPEED: {
      options.usage = tflite::gpu::InferenceUsage::SUSTAINED_SPEED;
      break;
    }
    case mediapipe::TfLiteInferenceCalculatorOptions::Delegate::Gpu::
        UNSPECIFIED: {
      return absl::InternalError("inference usage need to be specified.");
    }
  }
  tflite_gpu_runner_ = std::make_unique<tflite::gpu::TFLiteGPURunner>(options);
  switch (tflite_gpu_runner_api_) {
    case mediapipe::TfLiteInferenceCalculatorOptions::Delegate::Gpu::OPENGL: {
      tflite_gpu_runner_->ForceOpenGL();
      break;
    }
    case mediapipe::TfLiteInferenceCalculatorOptions::Delegate::Gpu::OPENCL: {
      tflite_gpu_runner_->ForceOpenCL();
      break;
    }
    case mediapipe::TfLiteInferenceCalculatorOptions::Delegate::Gpu::ANY: {
      // Do not need to force any specific API.
      break;
    }
  }
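The excerpt stops after the backend API is forced. In the upstream calculator the runner is then handed the model and built; the following is a hedged reconstruction of that tail (the method names InitializeWithModel and Build come from mediapipe/util/tflite/tflite_gpu_runner.h, but treat the exact signatures and the output-size bookkeeping as assumptions, not verbatim source):

  // Hedged sketch of the truncated remainder of InitTFLiteGPURunner():
  // hand the FlatBuffer model and op resolver to the runner, record output
  // tensor sizes for the later SSBO allocations, then compile the GPU program.
  MP_RETURN_IF_ERROR(
      tflite_gpu_runner_->InitializeWithModel(model, *op_resolver_ptr));
  // gpu_data_out_[i]->elements would be filled in here from the runner's
  // reported output shapes (the exact accessor is not shown in this excerpt).
  MP_RETURN_IF_ERROR(tflite_gpu_runner_->Build());
  return absl::OkStatus();
#endif  // MEDIAPIPE_TFLITE_GL_INFERENCE
}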
ProcessGpu
interpreter_->Invoke()
absl::Status TfLiteInferenceCalculator::Process(CalculatorContext* cc) {
  return RunInContextIfNeeded([this, cc]() -> absl::Status {
    // 0. Declare outputs
    auto output_tensors_gpu = absl::make_unique<std::vector<GpuTensor>>();
    auto output_tensors_cpu = absl::make_unique<std::vector<TfLiteTensor>>();

    // 1. Receive pre-processed tensor inputs.
    if (gpu_input_) {
      MP_RETURN_IF_ERROR(ProcessInputsGpu(cc, output_tensors_gpu.get()));
    } else {
      MP_RETURN_IF_ERROR(ProcessInputsCpu(cc, output_tensors_cpu.get()));
    }

    // 2. Run inference.
#if MEDIAPIPE_TFLITE_GL_INFERENCE
    if (gpu_inference_ && use_advanced_gpu_api_) {
      RET_CHECK(tflite_gpu_runner_->Invoke().ok());
    } else {
      RET_CHECK_EQ(interpreter_->Invoke(), kTfLiteOk);
    }
#elif MEDIAPIPE_TFLITE_METAL_INFERENCE
    // Metal delegate supports external command buffer only if all input and
    // output buffers are on GPU.
    if (gpu_inference_ && gpu_input_ && gpu_output_) {
      id<MTLCommandBuffer> command_buffer = [gpu_helper_ commandBuffer];
      command_buffer.label = @"TfLiteInferenceCalculator";
      RET_CHECK(
          TFLGpuDelegateSetCommandBuffer(delegate_.get(), command_buffer));
      RET_CHECK_EQ(interpreter_->Invoke(), kTfLiteOk);
      [command_buffer commit];
    } else {
      RET_CHECK_EQ(interpreter_->Invoke(), kTfLiteOk);
    }
#else   // MEDIAPIPE_TFLITE_GL_INFERENCE
    RET_CHECK_EQ(interpreter_->Invoke(), kTfLiteOk);
#endif  // MEDIAPIPE_TFLITE_GL_INFERENCE

    // 3. Output processed tensors.
    if (gpu_output_ || use_advanced_gpu_api_) {
      MP_RETURN_IF_ERROR(ProcessOutputsGpu(cc, std::move(output_tensors_cpu),
                                           std::move(output_tensors_gpu)));
    } else {
      MP_RETURN_IF_ERROR(ProcessOutputsCpu(cc, std::move(output_tensors_cpu)));
    }

    return absl::OkStatus();
  });
}
ProcessInputsGpu
absl::Status TfLiteInferenceCalculator::ProcessInputsGpu(
    CalculatorContext* cc, std::vector<GpuTensor>* output_tensors_gpu) {
  if (cc->Inputs().Tag(kTensorsGpuTag).IsEmpty()) {
    return absl::OkStatus();
  }
  if (use_advanced_gpu_api_) {
#if MEDIAPIPE_TFLITE_GL_INFERENCE
    const auto& input_tensors =
        cc->Inputs().Tag(kTensorsGpuTag).Get<std::vector<GpuTensor>>();
    RET_CHECK(!input_tensors.empty());
    for (int i = 0; i < input_tensors.size(); ++i) {
      MP_RETURN_IF_ERROR(
          tflite_gpu_runner_->BindSSBOToInputTensor(input_tensors[i].id(), i));
    }
    if (gpu_output_) {
      // Allocate new output tensor.
      output_tensors_gpu->resize(gpu_data_out_.size());
      for (int i = 0; i < gpu_data_out_.size(); ++i) {
        GpuTensor& tensor = output_tensors_gpu->at(i);
        MP_RETURN_IF_ERROR(CreateReadWriteShaderStorageBuffer<float>(
            gpu_data_out_[i]->elements, &tensor));
        MP_RETURN_IF_ERROR(
            tflite_gpu_runner_->BindSSBOToOutputTensor(tensor.id(), i));
      }
    } else {
      // Re-use internal output tensor.
      for (int i = 0; i < gpu_data_out_.size(); ++i) {
        MP_RETURN_IF_ERROR(tflite_gpu_runner_->BindSSBOToOutputTensor(
            gpu_data_out_[i]->buffer.id(), i));
      }
    }
#endif  // MEDIAPIPE_TFLITE_GL_INFERENCE
  } else if (gpu_input_) {
    // Read GPU input into SSBO.
#if MEDIAPIPE_TFLITE_GL_INFERENCE
    const auto& input_tensors =
        cc->Inputs().Tag(kTensorsGpuTag).Get<std::vector<GpuTensor>>();
    RET_CHECK_GT(input_tensors.size(), 0);
    // Explicit copy input.
    gpu_data_in_.resize(input_tensors.size());
    for (int i = 0; i < input_tensors.size(); ++i) {
      MP_RETURN_IF_ERROR(
          CopyBuffer(input_tensors[i], gpu_data_in_[i]->buffer));
    }
#elif MEDIAPIPE_TFLITE_METAL_INFERENCE
    const auto& input_tensors =
        cc->Inputs().Tag(kTensorsGpuTag).Get<std::vector<GpuTensor>>();
    RET_CHECK_GT(input_tensors.size(), 0);
    // Explicit copy input with conversion float 32 bits to 16 bits.
    gpu_data_in_.resize(input_tensors.size());
    id<MTLCommandBuffer> command_buffer = [gpu_helper_ commandBuffer];
    command_buffer.label = @"TfLiteInferenceCalculatorConvert";
    id<MTLComputeCommandEncoder> compute_encoder =
        [command_buffer computeCommandEncoder];
    [compute_encoder setComputePipelineState:fp32_to_fp16_program_];
    for (int i = 0; i < input_tensors.size(); ++i) {
      [compute_encoder setBuffer:input_tensors[i] offset:0 atIndex:0];
      [compute_encoder setBuffer:gpu_data_in_[i]->buffer offset:0 atIndex:1];
      constexpr int kWorkgroupSize = 64;  // Block size for GPU shader.
      MTLSize threads_per_group = MTLSizeMake(kWorkgroupSize, 1, 1);
      const int threadgroups =
          NumGroups(gpu_data_in_[i]->elements, kWorkgroupSize);
      [compute_encoder dispatchThreadgroups:MTLSizeMake(threadgroups, 1, 1)
                      threadsPerThreadgroup:threads_per_group];
    }
    [compute_encoder endEncoding];
    [command_buffer commit];
#endif  // MEDIAPIPE_TFLITE_GL_INFERENCE
  }

  return absl::OkStatus();
}
ProcessOutputsGpu
absl::Status TfLiteInferenceCalculator::ProcessOutputsGpu(
    CalculatorContext* cc,
    std::unique_ptr<std::vector<TfLiteTensor>> output_tensors_cpu,
    std::unique_ptr<std::vector<GpuTensor>> output_tensors_gpu) {
  if (use_advanced_gpu_api_) {
#if MEDIAPIPE_TFLITE_GL_INFERENCE
    if (gpu_output_) {
      // Send out pre-allocated tensors.
      cc->Outputs()
          .Tag(kTensorsGpuTag)
          .Add(output_tensors_gpu.release(), cc->InputTimestamp());
    } else {
      // Download to CPU for output.
      const auto& tensor_indexes = interpreter_->inputs();
      for (int i = 0; i < tensor_indexes.size(); ++i) {
        TfLiteTensor* tensor = interpreter_->tensor(tensor_indexes[i]);
        std::vector<float> gpu_data(tensor->bytes / sizeof(float));
        MP_RETURN_IF_ERROR(gpu_data_out_[i]->buffer.Read(
            absl::MakeSpan(tensor->data.f, tensor->bytes)));
        output_tensors_cpu->emplace_back(*tensor);
      }
      // Output result tensors (CPU).
      cc->Outputs()
          .Tag(kTensorsTag)
          .Add(output_tensors_cpu.release(), cc->InputTimestamp());
    }
#endif  // MEDIAPIPE_TFLITE_GL_INFERENCE
  } else if (gpu_output_) {
#if MEDIAPIPE_TFLITE_GL_INFERENCE
    // Output result tensors (GPU).
    output_tensors_gpu->resize(gpu_data_out_.size());
    for (int i = 0; i < gpu_data_out_.size(); ++i) {
      GpuTensor& tensor = output_tensors_gpu->at(i);
      // Allocate output tensor.
      MP_RETURN_IF_ERROR(CreateReadWriteShaderStorageBuffer<float>(
          gpu_data_out_[i]->elements, &tensor));
      MP_RETURN_IF_ERROR(CopyBuffer(gpu_data_out_[i]->buffer, tensor));
    }
    cc->Outputs()
        .Tag(kTensorsGpuTag)
        .Add(output_tensors_gpu.release(), cc->InputTimestamp());
#elif MEDIAPIPE_TFLITE_METAL_INFERENCE
    // Output result tensors (GPU).
    output_tensors_gpu->resize(gpu_data_out_.size());
    id<MTLDevice> device = gpu_helper_.mtlDevice;
    id<MTLCommandBuffer> command_buffer = [gpu_helper_ commandBuffer];
    command_buffer.label = @"TfLiteInferenceBPHWC4Convert";
    id<MTLComputeCommandEncoder> convert_command =
        [command_buffer computeCommandEncoder];
    for (int i = 0; i < gpu_data_out_.size(); ++i) {
      // Allocate output tensor.
      output_tensors_gpu->at(i) =
          [device newBufferWithLength:gpu_data_out_[i]->elements * sizeof(float)
                              options:MTLResourceStorageModeShared];
      // Reshape tensor.
      [converter_from_BPHWC4_ convertWithEncoder:convert_command
                                           shape:gpu_data_out_[i]->shape
                                    sourceBuffer:gpu_data_out_[i]->buffer
                                 convertedBuffer:output_tensors_gpu->at(i)];
    }
    [convert_command endEncoding];
    [command_buffer commit];
    cc->Outputs()
        .Tag(kTensorsGpuTag)
        .Add(output_tensors_gpu.release(), cc->InputTimestamp());
#endif  // MEDIAPIPE_TFLITE_GL_INFERENCE
  }
  return absl::OkStatus();
}
Invoke
tflite_gpu_runner.cc
absl::Status TFLiteGPURunner::Invoke() { return runner_->Run(); }
Build TFLiteGPURunner
(The opencl_runner path is skipped here; only the OpenGL initialization is shown.)
absl::Status TFLiteGPURunner::InitializeOpenGL(
    std::unique_ptr<InferenceBuilder>* builder) {
  gl::InferenceEnvironmentOptions env_options;
  gl::InferenceEnvironmentProperties properties;
  gl::InferenceOptions gl_options;
  gl_options.priority1 = options_.priority1;
  gl_options.priority2 = options_.priority2;
  gl_options.priority3 = options_.priority3;
  gl_options.usage = options_.usage;
  MP_RETURN_IF_ERROR(
      NewInferenceEnvironment(env_options, &gl_environment_, &properties));
  MP_RETURN_IF_ERROR(gl_environment_->NewInferenceBuilder(
      std::move(*graph_gl_), gl_options, builder));
  return absl::OkStatus();
}