PlaidML¶
A framework for making deep learning work everywhere.
PlaidML is a multi-language acceleration framework that:
- Enables practitioners to deploy high-performance neural nets on any device
- Allows hardware developers to quickly integrate with high-level frameworks
- Allows framework developers to easily add support for many kinds of hardware
For more information, see the PlaidML Announcement and the PlaidML GitHub Repository.
About this module¶
This module provides the low-level PlaidML Python API.
Using this API directly requires either knowledge of the Tile language (used to describe the computations that make up a neural network), or a pre-built serialized network (which encapsulates the Tile operations that define the shape of the network, and the intra-network connection weights found by training the network).
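As a concrete illustration of the Tile language, a matrix multiplication is written as an aggregation over index variables. This is the canonical matmul example from the PlaidML documentation; the names A, B, and C are illustrative:

```
function (A[M, L], B[L, N]) -> (C) {
    C[i, j : M, N] = +(A[i, k] * B[k, j]);
}
```

The `+(...)` aggregation sums the product over every valid value of the index `k`, and the output dimensions `M, N` are stated explicitly after the output indices.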
Higher-level APIs¶
plaidml.keras - Integration with the Keras machine learning framework. This is useful for easily describing and training neural networks.
plaidml.tile - Utilities for building up composite TILE functions from high-level operation semantics.
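The plaidml.keras integration is typically enabled by patching the backend in before Keras is imported. A minimal sketch, assuming PlaidML is installed; the surrounding guard is ours so the snippet degrades gracefully when it is not:

```python
# Enable the PlaidML backend for Keras. install_backend() must run
# before `import keras`, since Keras binds its backend at import time.
try:
    import plaidml.keras
    plaidml.keras.install_backend()
    backend = "plaidml"
except Exception:
    # PlaidML (or Keras) is absent or not loadable on this machine.
    backend = "unavailable"
print(backend)
```

The PlaidML documentation also describes setting the `KERAS_BACKEND=plaidml.keras.backend` environment variable to the same effect.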
Modules¶
keras | Patches in a PlaidML backend for Keras.
op | The TILE standard operation library.
tile | TILE program construction utilities.
exceptions |
Classes¶
Applier (ctx, f)
Composer ()
context.Context (lib)
DType | Describes the type of a tensor element.
Device (ctx, device)
Dimension (size, stride)
Function (code[, backtrace])
Integer (value)
Invocation (ctx, invoker)
Invoker (ctx, f[, inputs, outputs])
library.Library (lib[, logger]) | A loaded PlaidML implementation library.
Placeholder (dims)
Real (value)
Shape (ctx, dtype, *args)
Tensor (dev, shape[, copy_buffer])
Var (v) | An abstract variable.
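To show how the classes above fit together, here is a hedged sketch of invoking a Tile matrix multiplication through the low-level API. The constructor signatures follow the class list above; the device-opening helper and the Invoker call pattern (set_input/set_output/invoke) are assumptions that may differ between PlaidML releases, so the whole sketch runs inside a guard:

```python
# Hedged sketch: wiring the low-level classes together.
# Constructor signatures follow the class list above; the invocation
# flow is assumed, so everything is guarded for illustration.
try:
    import plaidml

    ctx = plaidml.Context()
    # Assumed convenience helper for opening the first available device.
    with plaidml.open_first_device(ctx) as dev:
        matmul = plaidml.Function(
            "function (A[M, L], B[L, N]) -> (C) {"
            "  C[i, j : M, N] = +(A[i, k] * B[k, j]);"
            "}")
        shape = plaidml.Shape(ctx, plaidml.DType.FLOAT32, 3, 3)
        a = plaidml.Tensor(dev, shape)
        b = plaidml.Tensor(dev, shape)
        c = plaidml.Tensor(dev, shape)
        # Assumed invocation flow; consult the Invoker docs for your version.
        invoker = plaidml.Invoker(ctx, matmul)
        invoker.set_input('A', a)
        invoker.set_input('B', b)
        invoker.set_output('C', c)
        invoker.invoke()
    status = "invoked"
except Exception:
    # PlaidML absent or the assumed API does not match this release.
    status = "unavailable"
print(status)
```

In this flow the Context owns library state, the Device owns buffers, Shape pairs a DType with dimensions, and the Invoker binds concrete Tensors to a Function's named inputs and outputs before execution.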