2.7. Documentation

Due to the length limitations of this book, we cannot possibly introduce every single function and class. The API documentation, along with additional tutorials and examples, provides plenty of documentation beyond the book. This section offers some guidance for exploring the APIs of the deep learning frameworks used in this book (MXNet, PyTorch, TensorFlow, and PaddlePaddle).

2.7.1. Finding All the Functions and Classes in a Module

To know which functions and classes can be called in a module, we can invoke the dir function. For instance, we can query all the properties in the module for generating random numbers:

from mxnet import np

print(dir(np.random))
['__all__', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', '_mx_nd_np', 'beta', 'chisquare', 'choice', 'exponential', 'gamma', 'gumbel', 'logistic', 'lognormal', 'multinomial', 'multivariate_normal', 'normal', 'pareto', 'power', 'rand', 'randint', 'randn', 'rayleigh', 'shuffle', 'uniform', 'weibull']
import torch

print(dir(torch.distributions))
['AbsTransform', 'AffineTransform', 'Bernoulli', 'Beta', 'Binomial', 'CatTransform', 'Categorical', 'Cauchy', 'Chi2', 'ComposeTransform', 'ContinuousBernoulli', 'CorrCholeskyTransform', 'CumulativeDistributionTransform', 'Dirichlet', 'Distribution', 'ExpTransform', 'Exponential', 'ExponentialFamily', 'FisherSnedecor', 'Gamma', 'Geometric', 'Gumbel', 'HalfCauchy', 'HalfNormal', 'Independent', 'IndependentTransform', 'Kumaraswamy', 'LKJCholesky', 'Laplace', 'LogNormal', 'LogisticNormal', 'LowRankMultivariateNormal', 'LowerCholeskyTransform', 'MixtureSameFamily', 'Multinomial', 'MultivariateNormal', 'NegativeBinomial', 'Normal', 'OneHotCategorical', 'OneHotCategoricalStraightThrough', 'Pareto', 'Poisson', 'PowerTransform', 'RelaxedBernoulli', 'RelaxedOneHotCategorical', 'ReshapeTransform', 'SigmoidTransform', 'SoftmaxTransform', 'SoftplusTransform', 'StackTransform', 'StickBreakingTransform', 'StudentT', 'TanhTransform', 'Transform', 'TransformedDistribution', 'Uniform', 'VonMises', 'Weibull', 'Wishart', '__all__', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', 'bernoulli', 'beta', 'biject_to', 'binomial', 'categorical', 'cauchy', 'chi2', 'constraint_registry', 'constraints', 'continuous_bernoulli', 'dirichlet', 'distribution', 'exp_family', 'exponential', 'fishersnedecor', 'gamma', 'geometric', 'gumbel', 'half_cauchy', 'half_normal', 'identity_transform', 'independent', 'kl', 'kl_divergence', 'kumaraswamy', 'laplace', 'lkj_cholesky', 'log_normal', 'logistic_normal', 'lowrank_multivariate_normal', 'mixture_same_family', 'multinomial', 'multivariate_normal', 'negative_binomial', 'normal', 'one_hot_categorical', 'pareto', 'poisson', 'register_kl', 'relaxed_bernoulli', 'relaxed_categorical', 'studentT', 'transform_to', 'transformed_distribution', 'transforms', 'uniform', 'utils', 'von_mises', 'weibull', 'wishart']
import tensorflow as tf

print(dir(tf.random))
['Algorithm', 'Generator', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '_sys', 'all_candidate_sampler', 'categorical', 'create_rng_state', 'experimental', 'fixed_unigram_candidate_sampler', 'gamma', 'get_global_generator', 'learned_unigram_candidate_sampler', 'log_uniform_candidate_sampler', 'normal', 'poisson', 'set_global_generator', 'set_seed', 'shuffle', 'stateless_binomial', 'stateless_categorical', 'stateless_gamma', 'stateless_normal', 'stateless_parameterized_truncated_normal', 'stateless_poisson', 'stateless_truncated_normal', 'stateless_uniform', 'truncated_normal', 'uniform', 'uniform_candidate_sampler']
import warnings

warnings.filterwarnings(action='ignore')
import paddle

print(dir(paddle.distribution))
['AbsTransform', 'AffineTransform', 'Beta', 'Categorical', 'ChainTransform', 'Dirichlet', 'Distribution', 'ExpTransform', 'ExponentialFamily', 'Independent', 'IndependentTransform', 'Multinomial', 'Normal', 'PowerTransform', 'ReshapeTransform', 'SigmoidTransform', 'SoftmaxTransform', 'StackTransform', 'StickBreakingTransform', 'TanhTransform', 'Transform', 'TransformedDistribution', 'Uniform', '__all__', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', 'beta', 'categorical', 'constraint', 'dirichlet', 'distribution', 'exponential_family', 'independent', 'kl', 'kl_divergence', 'multinomial', 'normal', 'register_kl', 'transform', 'transformed_distribution', 'uniform', 'variable']

Generally, we can ignore functions that start and end with "__" (double underscores), which are special objects in Python, as well as functions that start with a single "_" (single underscore), which are usually internal functions. Based on the remaining function or attribute names, we might hazard a guess that this module offers various methods for generating random numbers, including sampling from the uniform distribution (uniform), the normal distribution (normal), and the multinomial distribution (multinomial).
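
As a quick illustration, here is a minimal sketch (using the torch.distributions module queried above; the same pattern applies to the other frameworks) that filters out these special and internal names so that only the public API remains:

import torch

# Drop names that start with an underscore (special or internal objects),
# keeping only the module's public attributes
public_names = [name for name in dir(torch.distributions) if not name.startswith('_')]
print(public_names[:10])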

2.7.2. Finding the Usage of Specific Functions and Classes

For more specific instructions on how to use a given function or class, we can invoke the help function. As an example, let's explore the usage instructions for the tensor ones function.

help(np.ones)
Help on function ones in module mxnet.numpy:

ones(shape, dtype=<class 'numpy.float32'>, order='C', ctx=None)
    Return a new array of given shape and type, filled with ones.
    This function currently only supports storing multi-dimensional data
    in row-major (C-style).

    Parameters
    ----------
    shape : int or tuple of int
        The shape of the empty array.
    dtype : str or numpy.dtype, optional
        An optional value type. Default is numpy.float32. Note that this
        behavior is different from NumPy's ones function where float64
        is the default value, because float32 is considered as the default
        data type in deep learning.
    order : {'C'}, optional, default: 'C'
        How to store multi-dimensional data in memory, currently only row-major
        (C-style) is supported.
    ctx : Context, optional
        An optional device context (default is the current default context).

    Returns
    -------
    out : ndarray
        Array of ones with the given shape, dtype, and ctx.

    Examples
    --------
    >>> np.ones(5)
    array([1., 1., 1., 1., 1.])

    >>> np.ones((5,), dtype=int)
    array([1, 1, 1, 1, 1], dtype=int64)

    >>> np.ones((2, 1))
    array([[1.],
           [1.]])

    >>> s = (2,2)
    >>> np.ones(s)
    array([[1., 1.],
           [1., 1.]])
help(torch.ones)
Help on built-in function ones in module torch:

ones(...)
    ones(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor

    Returns a tensor filled with the scalar value 1, with the shape defined
    by the variable argument size.

    Args:
        size (int...): a sequence of integers defining the shape of the output tensor.
            Can be a variable number of arguments or a collection like a list or tuple.

    Keyword arguments:
        out (Tensor, optional): the output tensor.
        dtype (torch.dtype, optional): the desired data type of returned tensor.
            Default: if None, uses a global default (see torch.set_default_tensor_type()).
        layout (torch.layout, optional): the desired layout of returned Tensor.
            Default: torch.strided.
        device (torch.device, optional): the desired device of returned tensor.
            Default: if None, uses the current device for the default tensor type
            (see torch.set_default_tensor_type()). device will be the CPU
            for CPU tensor types and the current CUDA device for CUDA tensor types.
        requires_grad (bool, optional): If autograd should record operations on the
            returned tensor. Default: False.

    Example::

        >>> torch.ones(2, 3)
        tensor([[ 1.,  1.,  1.],
                [ 1.,  1.,  1.]])

        >>> torch.ones(5)
        tensor([ 1.,  1.,  1.,  1.,  1.])
help(tf.ones)
Help on function ones in module tensorflow.python.ops.array_ops:

ones(shape, dtype=tf.float32, name=None)
    Creates a tensor with all elements set to one (1).

    See also tf.ones_like, tf.zeros, tf.fill, tf.eye.

    This operation returns a tensor of type dtype with shape shape and
    all elements set to one.

    >>> tf.ones([3, 4], tf.int32)
    <tf.Tensor: shape=(3, 4), dtype=int32, numpy=
    array([[1, 1, 1, 1],
           [1, 1, 1, 1],
           [1, 1, 1, 1]], dtype=int32)>

    Args:
      shape: A list of integers, a tuple of integers, or
        a 1-D Tensor of type int32.
      dtype: Optional DType of an element in the resulting Tensor. Default is
        tf.float32.
      name: Optional string. A name for the operation.

    Returns:
      A Tensor with all elements set to one (1).
help(paddle.ones)
Help on function ones in module paddle.tensor.creation:

ones(shape, dtype=None, name=None)
    The OP creates a tensor of specified shape and dtype, and fills it with 1.

    Args:
        shape(tuple|list|Tensor): Shape of the Tensor to be created, the data type of shape is int32 or int64.
        dtype(np.dtype|str, optional): Data type of output Tensor, it supports
            bool, float16, float32, float64, int32 and int64. Default: if None, the data type is 'float32'.
        name(str, optional): The default value is None. Normally there is no need for user to set this property. For more information, please refer to api_guide_Name

    Returns:
        Tensor: A tensor of data type dtype with shape shape and all elements set to 1.

    Examples:
        .. code-block:: python

          import paddle

          # default dtype for ones OP
          data1 = paddle.ones(shape=[3, 2])
          # [[1. 1.]
          #  [1. 1.]
          #  [1. 1.]]

          data2 = paddle.ones(shape=[2, 2], dtype='int32')
          # [[1 1]
          #  [1 1]]

          # shape is a Tensor
          shape = paddle.full(shape=[2], dtype='int32', fill_value=2)
          data3 = paddle.ones(shape=shape, dtype='int32')
          # [[1 1]
          #  [1 1]]

From the documentation, we can see that the ones function creates a new tensor with the specified shape and sets all of its elements to the value 1. Let's run a quick test to confirm this interpretation:

np.ones(4)
[07:15:36] ../src/storage/storage.cc:196: Using Pooled (Naive) StorageManager for CPU
array([1., 1., 1., 1.])
torch.ones(4)
tensor([1., 1., 1., 1.])
tf.ones(4)
<tf.Tensor: shape=(4,), dtype=float32, numpy=array([1., 1., 1., 1.], dtype=float32)>
paddle.ones([4], dtype='float32')
Tensor(shape=[4], dtype=float32, place=Place(cpu), stop_gradient=True,
       [1., 1., 1., 1.])

In a Jupyter notebook, we can use ? to display the documentation in another window. For example, list? will produce content that is almost identical to help(list) and display it in a new browser window. In addition, if we use two question marks, such as list??, the Python code implementing the function will also be displayed.
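
For instance, the following commands could be typed into a Jupyter code cell (this ? syntax is provided by IPython and does not work in a plain Python interpreter):

# In a Jupyter/IPython cell:
list?    # roughly equivalent to help(list), shown in a separate pane
list??   # additionally shows the implementing source code, when it is available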

2.7.3. Summary

  • The official documentation provides plenty of descriptions and examples that are beyond this book.

  • We can look up the usage documentation for the API by calling the dir and help functions, or by using ? and ?? in Jupyter notebooks.

2.7.4. Exercises

  1. Look up the documentation for any function or class in the deep learning framework. Try to find the same documentation on the framework's official website.