plaidml.op

Description

The TILE standard operation library.

These operations have been shown to be useful across a variety of frameworks. (Frameworks are, of course, free to define their own operations in addition to these; custom operations interoperate with the standard set most easily when they are defined using the standard plaidml.tile base classes.)

Each operation is defined as a tile.Operation subclass, so it can be used in pattern matching. Each operation is also provided via a top-level function that wraps the class, so composite operations can be built up in a functional style.
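
For example, composite operations can be built by chaining the functional wrappers, while the class form supports pattern matching on the operation that produced a value. The sketch below uses only operations listed on this page; log_mean and produced_by_argmax are hypothetical helpers, arguments are assumed to be plaidml.tile.Value objects, and it assumes a Value's source exposes its producing Operation (placeholders and constants have no source):

    import plaidml.op as op

    def log_mean(x, axis=-1):
        # Functional style: mean over `axis`, then elementwise log.
        return op.log(op.mean(x, axes=axis, keepdims=True))

    def produced_by_argmax(value):
        # Pattern matching: inspect the Operation that produced a Value.
        return value.source is not None and isinstance(value.source.op, op.ArgMax)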

See the PlaidML Op Tutorial for information about writing your own custom operations.
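
As a brief sketch of the pattern the tutorial covers (and that the classes below follow): subclass tile.Operation, supply Tile code, and expose a functional wrapper. SumOverAxis is a hypothetical example, not part of this library, and the constructor arguments shown (Tile code, named inputs, named output shapes) are assumed to match the convention used by the standard operations:

    import plaidml.tile as tile

    class SumOverAxis(tile.Operation):
        """Hypothetical custom op: sums a 2-D tensor over its first axis."""

        def __init__(self, x):
            code = """function (I[M, N]) -> (O) {
                          O[n: N] = +(I[m, n]);
                      }"""
            out_shape = tile.Shape(x.shape.dtype, (x.shape.dims[1],))
            super(SumOverAxis, self).__init__(code, [('I', x)], [('O', out_shape)])

    # Expose a functional wrapper, following the library's own convention.
    sum_over_axis = SumOverAxis.function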

Classes

ArgMax(value[, axis]) Computes the indices of the maximum elements along an axis.
AutoPadding An enumeration of automatic padding modes.
AveragePool(data, kernel_shape, pads, strides) A standard ML average pooling operator.
BinaryCrossentropy(target, output, epsilon) Computes the binary crossentropy of a value relative to a target.
Cast(x, dtype) Casts a tensor to the specified datatype.
ClipMax(value, max_val) Clips a Value to a maximum bound.
ClipMin(value, min_val) Clips a Value to a minimum bound.
Concatenate(tensors[, axis]) Concatenates tensors to make a single larger tensor.
Convolution(data, kernel[, strides, …]) A standard ML convolution operator.
ConvolutionDataFormat An enumeration of convolution data layouts.
ConvolutionTranspose(x, kernel, …) A transposed convolution operator.
CumulativeSum(x[, axis]) Cumulative sum of a tensor.
Dot(x, y) Dot-product of two tensors.
Elu(x[, alpha]) Exponential linear unit.
Equal(lhs, rhs) Elementwise tensor equality.
Equal_ArgMax(lhs, rhs) Elementwise equality between the results of two ArgMax operations.
Flatten(data) Flattens a tensor to a one-dimensional value.
Gather(value, indicies) Gathers elements of a tensor.
Gemm(a, b, c[, alpha, beta, broadcast, …]) Implements a general matrix multiplication.
Gradients(loss, variables) Computes the gradients of a loss with respect to a set of variables.
Hardmax(data) Implements a standard ML hardmax.
Identity(x) A simple identity operation.
IsMax(value, axes) True iff an input’s value is the maximum along some set of axes.
LogSoftmax(data) Implements the log() of a standard ML softmax.
MatMul(a, b) A matrix multiplication, using numpy semantics.
MaxPool(data, padding, kernel_shape, pads, …) A standard ML max pooling operator.
MaxReduce(x[, axes, keepdims]) Computes the maximum value along some set of axes.
Mean(x[, axes, keepdims, floatx]) Computes the mean value along some set of axes.
MinReduce(x[, axes, keepdims]) Computes the minimum value along some set of axes.
NotEqual(lhs, rhs) Elementwise tensor inequality.
Pow(x, p) An elementwise pow() function.
Prod(value[, axes, keepdims, floatx]) Computes the product of elements along some set of axes.
Relu(x[, alpha, max_value]) A Rectified Linear Unit.
Reshape(x, dims) Reshapes a tensor, without changing the type or number of elements.
SliceTensor(data[, axes, ends, starts]) Implements tensor slicing.
Softmax(data) Implements a standard ML softmax.
Sqrt(x) Computes the elementwise square root of a value.
Summation(value[, axes, keepdims, floatx]) Sums an input value along some set of axes.
Variance(x[, axes, keepdims, floatx]) Computes the variance along some set of axes.
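
Each class above can also be invoked directly; its single output is retrieved with sole_output(), while the corresponding lower-case function below wraps the same operation. A minimal sketch, assuming `x` is a plaidml.tile.Value and that the wrapper follows the library's ClassName.function convention:

    import plaidml.op as op

    def max_over_first_axis(x):
        # Class form: instantiate the Operation and take its single output ...
        y_class = op.MaxReduce(x, axes=0).sole_output()
        # ... or call the functional wrapper, which computes the same thing.
        y_func = op.max_reduce(x, axes=0)
        return y_class, y_func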

Functions

ceiling(data) Elementwise ceiling.
clip(value, min_val, max_val) Clips a value to minimum and maximum bounds.
cos(data) Elementwise cosine.
equal(lhs, rhs) Elementwise tensor equality.
exp(data) Elementwise exponential.
floor(data) Elementwise floor.
gradients(loss, variables) Computes the gradients of a loss with respect to a set of variables.
hardmax(x[, axis]) Implements a standard ML hardmax.
log(data) Elementwise logarithm.
log_softmax(x[, axis]) Implements the log() of a standard ML softmax.
max_reduce(x[, axes, keepdims]) Computes the maximum value along some set of axes.
mean(x[, axes, keepdims, floatx]) Computes the mean value along some set of axes.
min_reduce(x[, axes, keepdims]) Computes the minimum value along some set of axes.
pad_compute(sym, input_size, filter_size, …) Computes padding and output-size information for one axis of a padded filter.
prod(value[, axes, keepdims, floatx]) Computes the product of elements along some set of axes.
sigmoid(data) Elementwise sigmoid.
sin(data) Elementwise sine.
softmax(x[, axis]) Implements a standard ML softmax.
squeeze(x, axes) Removes length-1 dimensions from a tensor.
summation(value[, axes, keepdims, floatx]) Sums an input value along some set of axes.
tanh(data) Elementwise hyperbolic tangent.
unsqueeze(x, axes) Inserts length-1 dimensions into a tensor.
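
The wrappers compose naturally; for instance, gradients can differentiate a value built from other standard operations. A minimal sketch, assuming `w` is a plaidml.tile.Value, that summation reduces over all axes by default, and that gradients returns one tensor per requested variable:

    import plaidml.op as op

    def loss_and_grad(w):
        # Build a scalar loss from composed standard ops, then differentiate.
        loss = op.summation(op.exp(w))
        (dw,) = op.gradients(loss, [w])
        return loss, dw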