
A Deep Dive into PaddlePaddle Functions: paddle.to_tensor

Category: A Deep Dive into PaddlePaddle Functions (series index)
Related articles:
· A Deep Dive into PaddlePaddle Functions: paddle.Tensor
· A Deep Dive into PaddlePaddle Functions: paddle.to_tensor


Creates a Tensor from existing data; the result is of type paddle.Tensor. data can be a scalar, tuple, list, numpy.ndarray, or paddle.Tensor. If data is already a Tensor and neither dtype nor place changes, no copy is made and the original Tensor is returned; otherwise a new Tensor is created, and the original computation graph is not preserved.
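The copy rule stated above can be illustrated with a small pure-Python sketch. FakeTensor and to_tensor_like are hypothetical stand-ins, not Paddle APIs; the point is only the decision logic: reuse the input object when neither dtype nor place changes, otherwise build a new one.

```python
# Illustrative sketch of the copy rule (not Paddle code).
class FakeTensor:
    def __init__(self, data, dtype, place):
        self.data, self.dtype, self.place = data, dtype, place

def to_tensor_like(data, dtype=None, place=None):
    """Return `data` itself if it is already a FakeTensor with unchanged
    dtype and place; otherwise construct a new FakeTensor."""
    if isinstance(data, FakeTensor):
        same_dtype = dtype is None or dtype == data.dtype
        same_place = place is None or place == data.place
        if same_dtype and same_place:
            return data  # no copy: the very same object comes back
        return FakeTensor(data.data, dtype or data.dtype, place or data.place)
    return FakeTensor(data, dtype or "float32", place or "cpu")

t = FakeTensor([1, 2], "int64", "cpu")
assert to_tensor_like(t) is t                       # unchanged -> original returned
assert to_tensor_like(t, dtype="float32") is not t  # dtype changed -> new object
```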

Syntax

paddle.to_tensor(data, dtype=None, place=None, stop_gradient=True)

Parameters

  • data: [scalar/tuple/list/ndarray/Tensor] The data used to initialize the Tensor; can be a scalar, tuple, list, numpy.ndarray, or paddle.Tensor.
  • dtype: [optional, str] The data type of the created Tensor; one of bool, float16, float32, float64, int8, int16, int32, int64, uint8, complex64, complex128. Defaults to None: if data is a Python float, the type is taken from get_default_dtype; for other types the dtype is inferred automatically.
  • place: [optional, CPUPlace/CUDAPinnedPlace/CUDAPlace] The device on which to create the Tensor; one of CPUPlace, CUDAPinnedPlace, CUDAPlace. Defaults to None, which uses the global place.
  • stop_gradient: [optional, bool] Whether to block Autograd gradient propagation. Defaults to True, i.e. gradients are not propagated.
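The scalar dtype-inference rule described above can be sketched in plain Python. infer_dtype and DEFAULT_FLOAT_DTYPE are illustrative names, not Paddle APIs; DEFAULT_FLOAT_DTYPE plays the role of paddle.get_default_dtype(), which is 'float32' unless changed.

```python
DEFAULT_FLOAT_DTYPE = "float32"  # stand-in for paddle.get_default_dtype()

def infer_dtype(value, dtype=None):
    """Return the explicit dtype if given; otherwise infer it from the value,
    mirroring the rule in the parameter description above."""
    if dtype is not None:
        return dtype               # an explicit dtype always wins
    if isinstance(value, bool):    # bool before int: bool subclasses int
        return "bool"
    if isinstance(value, int):
        return "int64"             # Python ints map to int64
    if isinstance(value, float):
        return DEFAULT_FLOAT_DTYPE # floats follow the global default dtype
    raise TypeError(f"unsupported scalar type: {type(value)!r}")

print(infer_dtype(1))                 # int64
print(infer_dtype(0.5))               # float32
print(infer_dtype(2, dtype="int8"))   # int8
```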

Returns

A Tensor created from data.

Examples

import paddle

type(paddle.to_tensor(1))
# <class 'paddle.Tensor'>

paddle.to_tensor(1)
# Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=True,
#        [1])

x = paddle.to_tensor(1, stop_gradient=False)
print(x)
# Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=False,
#        [1])

paddle.to_tensor(x)  # A new tensor will be created with default stop_gradient=True
# Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=True,
#        [1])

paddle.to_tensor([[0.1, 0.2], [0.3, 0.4]], place=paddle.CPUPlace(), stop_gradient=False)
# Tensor(shape=[2, 2], dtype=float32, place=CPUPlace, stop_gradient=False,
#        [[0.10000000, 0.20000000],
#         [0.30000001, 0.40000001]])

type(paddle.to_tensor([[1+1j, 2], [3+2j, 4]], dtype='complex64'))
# <class 'paddle.Tensor'>

paddle.to_tensor([[1+1j, 2], [3+2j, 4]], dtype='complex64')
# Tensor(shape=[2, 2], dtype=complex64, place=CPUPlace, stop_gradient=True,
#        [[(1+1j), (2+0j)],
#         [(3+2j), (4+0j)]])

Implementation

def to_tensor(data, dtype=None, place=None, stop_gradient=True):
    r"""
    Constructs a ``paddle.Tensor`` from ``data``,
    which can be scalar, tuple, list, numpy.ndarray, paddle.Tensor.

    If the ``data`` is already a Tensor, copy will be performed and return a new tensor.
    If you only want to change stop_gradient property, please call
    ``Tensor.stop_gradient = stop_gradient`` directly.

    Args:
        data(scalar|tuple|list|ndarray|Tensor): Initial data for the tensor.
            Can be a scalar, list, tuple, numpy.ndarray, paddle.Tensor.
        dtype(str|np.dtype, optional): The desired data type of returned tensor. Can be 'bool',
            'float16', 'float32', 'float64', 'int8', 'int16', 'int32', 'int64', 'uint8',
            'complex64', 'complex128'. Default: None, infers dtype from ``data``
            except for python float number which gets dtype from ``get_default_type``.
        place(CPUPlace|CUDAPinnedPlace|CUDAPlace|str, optional): The place to allocate Tensor. Can be
            CPUPlace, CUDAPinnedPlace, CUDAPlace. Default: None, means global place. If ``place`` is
            string, it can be ``cpu``, ``gpu:x`` and ``gpu_pinned``, where ``x`` is the index of the GPUs.
        stop_gradient(bool, optional): Whether to block the gradient propagation of Autograd. Default: True.

    Returns:
        Tensor: A Tensor constructed from ``data``.

    Examples:
        .. code-block:: python

            import paddle

            type(paddle.to_tensor(1))
            # <class 'paddle.Tensor'>

            paddle.to_tensor(1)
            # Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=True,
            #        [1])

            x = paddle.to_tensor(1, stop_gradient=False)
            print(x)
            # Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=False,
            #        [1])

            paddle.to_tensor(x)  # A new tensor will be created with default stop_gradient=True
            # Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=True,
            #        [1])

            paddle.to_tensor([[0.1, 0.2], [0.3, 0.4]], place=paddle.CPUPlace(), stop_gradient=False)
            # Tensor(shape=[2, 2], dtype=float32, place=CPUPlace, stop_gradient=False,
            #        [[0.10000000, 0.20000000],
            #         [0.30000001, 0.40000001]])

            type(paddle.to_tensor([[1+1j, 2], [3+2j, 4]], dtype='complex64'))
            # <class 'paddle.Tensor'>

            paddle.to_tensor([[1+1j, 2], [3+2j, 4]], dtype='complex64')
            # Tensor(shape=[2, 2], dtype=complex64, place=CPUPlace, stop_gradient=True,
            #        [[(1+1j), (2+0j)],
            #         [(3+2j), (4+0j)]])
    """
    place = _get_paddle_place(place)
    if place is None:
        place = _current_expected_place()
    if _non_static_mode():
        return _to_tensor_non_static(data, dtype, place, stop_gradient)
    # call assign for static graph
    else:
        re_exp = re.compile(r'[(](.+?)[)]', re.S)
        place_str = re.findall(re_exp, str(place))[0]
        with paddle.static.device_guard(place_str):
            return _to_tensor_static(data, dtype, stop_gradient)


def full_like(x, fill_value, dtype=None, name=None):
    """
    This function creates a tensor filled with ``fill_value`` which has identical shape of ``x`` and ``dtype``.
    If the ``dtype`` is None, the data type of Tensor is same with ``x``.

    Args:
        x(Tensor): The input tensor which specifies shape and data type. The data type can be bool, float16, float32, float64, int32, int64.
        fill_value(bool|float|int): The value to fill the tensor with. Note: this value shouldn't exceed the range of the output data type.
        dtype(np.dtype|str, optional): The data type of output. The data type can be one
            of bool, float16, float32, float64, int32, int64. The default value is None, which means the output
            data type is the same as input.
        name(str, optional): For details, please refer to :ref:`api_guide_Name`. Generally, no setting is required. Default: None.

    Returns:
        Tensor: Tensor which is created according to ``x``, ``fill_value`` and ``dtype``.

    Examples:
        .. code-block:: python

            import paddle

            input = paddle.full(shape=[2, 3], fill_value=0.0, dtype='float32', name='input')
            output = paddle.full_like(input, 2.0)
            # [[2. 2. 2.]
            #  [2. 2. 2.]]
    """
    if dtype is None:
        dtype = x.dtype
    else:
        if not isinstance(dtype, core.VarDesc.VarType):
            dtype = convert_np_dtype_to_dtype_(dtype)
    if in_dygraph_mode():
        return _C_ops.full_like(x, fill_value, dtype, x.place)
    if _in_legacy_dygraph():
        return _legacy_C_ops.fill_any_like(x, 'value', fill_value, 'dtype', dtype)
    helper = LayerHelper("full_like", **locals())
    check_variable_and_dtype(
        x,
        'x',
        ['bool', 'float16', 'float32', 'float64', 'int16', 'int32', 'int64'],
        'full_like',
    )
    check_dtype(
        dtype,
        'dtype',
        ['bool', 'float16', 'float32', 'float64', 'int16', 'int32', 'int64'],
        'full_like/zeros_like/ones_like',
    )
    out = helper.create_variable_for_type_inference(dtype=dtype)
    helper.append_op(
        type='fill_any_like',
        inputs={'X': [x]},
        attrs={'value': fill_value, "dtype": dtype},
        outputs={'Out': [out]},
    )
    out.stop_gradient = True
    return out
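In the static-graph branch of to_tensor above, the device string passed to paddle.static.device_guard is extracted from the textual form of the place object with a regex. That step can be reproduced standalone; the "Place(cpu)" / "Place(gpu:0)" strings below are example reprs, used only for illustration.

```python
import re

# The same pattern to_tensor applies to str(place): grab the text between the
# first pair of parentheses, e.g. "Place(cpu)" -> "cpu".
re_exp = re.compile(r'[(](.+?)[)]', re.S)

print(re.findall(re_exp, "Place(cpu)")[0])    # cpu
print(re.findall(re_exp, "Place(gpu:0)")[0])  # gpu:0
```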