Comparison of AI Frameworks

Frameworks

PyTorch & Torch

A Python version of Torch, known as PyTorch, was open-sourced by Facebook in January 2017. PyTorch offers dynamic computation graphs, which let you process variable-length inputs and outputs; that is useful when working with RNNs, for example. In September 2017, Jeremy Howard and Rachel Thomas’s well-known deep-learning course fast.ai adopted PyTorch. Since its introduction, PyTorch has quickly become a favorite among machine-learning researchers, because it allows certain complex architectures to be built easily. Other frameworks that support dynamic computation graphs are CMU’s DyNet and PFN’s Chainer.
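The define-by-run idea behind these frameworks can be sketched in a few lines of plain Python. Nothing below is PyTorch’s actual API; it is a toy trace-recording graph, invented here only to illustrate why variable-length input is natural in this style:

```python
import math

# Toy define-by-run graph: every operation records itself as it executes,
# so the "graph" is simply the trace of whatever code actually ran.
class Node:
    def __init__(self, value, parents=(), op="input"):
        self.value = value      # the computed scalar
        self.parents = parents  # nodes this one was built from
        self.op = op            # which operation produced it

def add(a, b):
    return Node(a.value + b.value, (a, b), "add")

def tanh(a):
    return Node(math.tanh(a.value), (a,), "tanh")

def rnn_like(sequence):
    # One loop iteration per element: the recorded graph automatically
    # matches the sequence length, with no padding or recompilation.
    h = Node(0.0)
    for x in sequence:
        h = tanh(add(h, Node(x)))
    return h

def depth(node):
    # Longest path back to an input, i.e. how deep the traced graph is.
    if not node.parents:
        return 0
    return 1 + max(depth(p) for p in node.parents)

h_short = rnn_like([0.5, -0.2])           # traces 2 ops per step: depth 4
h_long = rnn_like([0.5, -0.2, 0.1, 0.9])  # same code, deeper trace: depth 8
```

A static-graph framework would need the graph’s shape fixed before seeing the data; here the two calls simply produce graphs of different depth.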

Torch is a computational framework with an API written in Lua that supports machine-learning algorithms. Some version of it is used by large tech companies such as Facebook and Twitter, which devote in-house teams to customizing their deep learning platforms. Lua is a multi-paradigm scripting language that was developed in Brazil in the early 1990s.

Pros and Cons:

  • (+) Lots of modular pieces that are easy to combine
  • (+) Easy to write your own layer types and run on GPU
  • (+) Lots of pretrained models
  • (-) You usually write your own training code (Less plug and play)
  • (-) No commercial support
  • (-) Spotty documentation

TensorFlow

  • Google created TensorFlow to replace Theano, and the two libraries are in fact quite similar. Some of the creators of Theano, such as Ian Goodfellow, went on to create TensorFlow at Google before leaving for OpenAI.
  • For the moment, TensorFlow does not support so-called “in-place” matrix operations, but instead forces you to copy a matrix in order to perform an operation on it. Copying very large matrices is costly in every sense; TensorFlow can take four times as long as state-of-the-art deep-learning tools. Google says it’s working on the problem.
  • Like most deep-learning frameworks, TensorFlow is written with a Python API over a C/C++ engine that makes it run faster. Although there is experimental support for a Java API, it is not currently considered stable, so we do not regard it as a solution for the Java and Scala communities.
  • TensorFlow runs dramatically slower than other frameworks such as CNTK and MxNet.
  • TensorFlow is about more than deep learning; it also has tools to support reinforcement learning and other algorithms.
  • Google’s acknowledged goals with TensorFlow seem to be recruiting, making its researchers’ code shareable, standardizing how software engineers approach deep learning, and creating an additional draw to Google Cloud services, on which TensorFlow is optimized.
  • TensorFlow is not commercially supported, and it’s unlikely that Google will go into the business of supporting open-source enterprise software. It’s giving a new tool to researchers.
  • Like Theano, TensorFlow generates a computational graph (e.g. a series of matrix operations such as z = sigmoid(x), where x and z are matrices) and performs automatic differentiation. Automatic differentiation is important because you don’t want to have to hand-code a new variation of backpropagation every time you experiment with a new arrangement of neural networks. In Google’s ecosystem, the computational graph is then used by Google Brain for the heavy lifting, but Google hasn’t open-sourced those tools yet. TensorFlow is one half of Google’s in-house DL solution.
  • Google introduced Eager, a dynamic computation graph module for TensorFlow, in October 2017.
  • From an enterprise perspective, the question some companies will need to answer is whether they want to depend upon Google for these tools, given how Google developed services on top of Android, and the general lack of enterprise support.
  • Caveat: Not all operations in TensorFlow work as they do in NumPy.
  • A Critique of Tensorflow
  • Keras shoot-out: TensorFlow vs MXNet
  • PyTorch vs. TensorFlow
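The graph-plus-automatic-differentiation workflow described in the bullets above can be sketched in plain Python. This is a toy single-input pipeline, not TensorFlow’s API: the graph is declared up front, a forward pass records a tape, and the backward pass applies the chain rule, which for sigmoid gives dz/dx = z(1 - z):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Static graph declared ahead of time: each op carries a forward function
# and the local derivative used by the chain rule during backprop.
graph = [
    ("sigmoid", sigmoid, lambda v, out: out * (1.0 - out)),  # d/dx sigmoid = z(1-z)
]

def forward(x):
    v, tape = x, []
    for name, fn, grad_fn in graph:
        out = fn(v)
        tape.append((grad_fn, v, out))  # record values needed for backprop
        v = out
    return v, tape

def backward(tape):
    # Walk the recorded tape in reverse, multiplying local derivatives.
    g = 1.0  # dz/dz
    for grad_fn, v, out in reversed(tape):
        g *= grad_fn(v, out)
    return g

z, tape = forward(0.3)
dz_dx = backward(tape)

# Sanity check against a finite-difference estimate of the derivative.
eps = 1e-6
numeric = (sigmoid(0.3 + eps) - sigmoid(0.3 - eps)) / (2 * eps)
```

This is why automatic differentiation matters: adding a new op to the graph only requires its local derivative, never a hand-coded backpropagation variant.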

Pros and Cons

  • (+) Python + Numpy
  • (+) Computational graph abstraction, like Theano
  • (+) Faster compile times than Theano
  • (+) TensorBoard for visualization
  • (+) Data and model parallelism
  • (-) Slower than other frameworks
  • (-) Much “fatter” than Torch; more magic
  • (-) Not many pretrained models
  • (-) Computational graph is pure Python, therefore slow
  • (-) No commercial support
  • (-) Drops out to Python to load each new training batch
  • (-) Not very toolable
  • (-) Dynamic typing is error-prone on large software projects

Caffe

Caffe is a well-known and widely used machine-vision library that ported Matlab’s implementation of fast convolutional nets to C and C++ (see Steve Yegge’s rant about porting C++ from chip to chip if you want to consider the tradeoffs between speed and this particular form of technical debt). Caffe is not intended for other deep-learning applications such as text, sound or time series data. Like other frameworks mentioned here, Caffe has chosen Python for its API.

Pros and Cons:

  • (+) Good for feedforward networks and image processing
  • (+) Good for finetuning existing networks
  • (+) Train models without writing any code
  • (+) Python interface is pretty useful
  • (-) Need to write C++ / CUDA for new GPU layers
  • (-) Not good for recurrent networks
  • (-) Cumbersome for big networks (GoogLeNet, ResNet)
  • (-) Not extensible, bit of a hairball
  • (-) No commercial support
  • (-) Probably dying; slow development
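The “train models without writing any code” point refers to Caffe’s protobuf configuration files: a network is declared layer by layer in a .prototxt file and handed to the caffe command-line binary. A minimal, illustrative sketch (the layer names and shapes here are invented for this example):

```
name: "TinyNet"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 1 dim: 28 dim: 28 } }
}
layer {
  name: "fc1"
  type: "InnerProduct"
  bottom: "data"
  top: "fc1"
  inner_product_param { num_output: 10 }
}
layer {
  name: "prob"
  type: "Softmax"
  bottom: "fc1"
  top: "prob"
}
```

Training then typically runs from the shell with something like `caffe train -solver solver.prototxt`, with no Python or C++ written by the user.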

RIP: Theano and Ecosystem

Yoshua Bengio announced on Sept. 28, 2017, that development on Theano would cease. Theano is effectively dead.

Many academic researchers in the field of deep learning have relied on Theano, the grand-daddy of deep-learning frameworks, which is written in Python. Theano is a library that handles multidimensional arrays, like NumPy. Used with other libraries, it is well suited to data exploration and intended for research.

Numerous open-source deep-learning libraries have been built on top of Theano, including Keras, Lasagne and Blocks. These libraries attempt to layer an easier-to-use API on top of Theano’s occasionally non-intuitive interface. (As of March 2016, another Theano-related library, Pylearn2, appears to be dead.)

Pros and Cons

  • (+) Python + Numpy
  • (+) Computational graph is nice abstraction
  • (+) RNNs fit nicely in computational graph
  • (-) Raw Theano is somewhat low-level
  • (+) High level wrappers (Keras, Lasagne) ease the pain
  • (-) Error messages can be unhelpful
  • (-) Large models can have long compile times
  • (-) Much “fatter” than Torch
  • (-) Patchy support for pretrained models
  • (-) Buggy on AWS
  • (-) Single GPU

Caffe2

Caffe2 is the long-awaited successor to the original Caffe, whose creator Yangqing Jia now works at Facebook. Caffe2 is the second deep-learning framework to be backed by Facebook after Torch/PyTorch. The main difference seems to be the claim that Caffe2 is more scalable and light-weight. It purports to be deep learning for production environments. Like Caffe and PyTorch, Caffe2 offers a Python API running on a C++ engine.

Pros and Cons:

  • (+) BSD License
  • (-) No commercial support

CNTK

CNTK is Microsoft’s open-source deep-learning framework. The acronym stands for “Computational Network Toolkit.” The library includes feed-forward DNNs, convolutional nets and recurrent networks. CNTK offers a Python API over C++ code. While CNTK appears to have a permissive license, it has not adopted one of the more conventional licenses, such as ASF 2.0, BSD or MIT. That license also does not cover 1-bit SGD, the method by which CNTK makes distributed training efficient, which is not licensed for commercial use.

Chainer

Chainer is an open-source neural network framework with a Python API, whose core team of developers work at Preferred Networks, a machine-learning startup based in Tokyo drawing its engineers largely from the University of Tokyo. Until the advent of DyNet at CMU, and PyTorch at Facebook, Chainer was the leading neural network framework for dynamic computation graphs, or nets that allowed for input of varying length, a popular feature for NLP tasks. By its own benchmarks, Chainer is notably faster than other Python-oriented frameworks, with TensorFlow the slowest of a test group that includes MxNet and CNTK.

DSSTNE

Amazon’s Deep Scalable Sparse Tensor Network Engine, or DSSTNE, is a library for building machine-learning and deep-learning models. It is one of the more recent of many open-source deep-learning libraries, released after TensorFlow and CNTK, and Amazon has since backed MxNet on AWS, so DSSTNE’s future is unclear. Written largely in C++, DSSTNE appears to be fast, although it has not attracted as large a community as the other libraries.

DyNet

DyNet, the Dynamic Neural Network Toolkit, came out of Carnegie Mellon University and used to be called cnn. Its notable feature is the dynamic computation graph, which allows for inputs of varying length, which is great for NLP. PyTorch and Chainer offer the same.

  • (+) Dynamic computation graph
  • (-) Small user community

Gensim

Gensim is a fast, Python-based implementation of word2vec. While Gensim is not a general-purpose ML platform, for word2vec it is at least an order of magnitude faster than TensorFlow. It is supported by the NLP consulting firm RaRe Technologies.
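The output of word2vec is just a table of dense vectors, and the similarity queries Gensim answers reduce to cosine similarity over that table. A stdlib-only sketch of that final step (the toy 3-dimensional vectors below are invented, not trained; real word2vec vectors typically have 100-300 dimensions):

```python
import math

# Invented toy "embeddings" for illustration only.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.8, 0.9, 0.1],
    "apple": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    # Cosine similarity: dot product of the vectors over their norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def most_similar(word):
    # Rank every other word by cosine similarity, as Gensim's
    # most_similar() does over real trained vectors.
    others = [w for w in vectors if w != word]
    return max(others, key=lambda w: cosine(vectors[word], vectors[w]))
```

With trained vectors the same ranking step is what surfaces relationships like king/queen; the training itself is the expensive part that Gensim optimizes.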

Gluon

Named after a subatomic particle, Gluon is an API over Amazon’s MxNet that was introduced by Amazon and Microsoft in October 2017. It will also integrate with Microsoft’s CNTK. While it is similar to Keras in its intent and place in the stack, it is distinguished by its dynamic computation graph, similar to PyTorch and Chainer, and unlike TensorFlow or Caffe. On a business level, Gluon is an attempt by Amazon and Microsoft to carve out a user base separate from TensorFlow and Keras, as both camps seek to control the API that mediates UX and neural net training.

Keras

Keras is a deep-learning library that sits atop TensorFlow and Theano, providing an intuitive API inspired by Torch; it is perhaps the best Python API in existence. It was created by Francois Chollet, a software engineer at Google.

  • (+) Intuitive API inspired by Torch
  • (+) Works with Theano, TensorFlow and Deeplearning4j backends (CNTK backend to come)
  • (+) Fast growing framework
  • (+) Likely to become standard Python API for NNs

MxNet

MxNet is a machine-learning framework with APIs in languages such as R, Python and Julia, and it has been adopted by Amazon Web Services. Parts of Apple are also rumored to use it after the company’s acquisition of GraphLab/Dato/Turi in 2016. A fast and flexible library, MxNet involves Pedro Domingos and a team of researchers at the University of Washington.

Paddle

Paddle is a deep-learning framework created and supported by Baidu. Its name stands for PArallel Distributed Deep LEarning. Paddle is the most recent major framework to be released, and like most others, it offers a Python API.

BigDL

BigDL is a new deep-learning framework that runs on Apache Spark, with a focus on Scala.

Machine-learning frameworks

The deep-learning frameworks listed above are more specialized than general machine-learning frameworks, of which there are many. We’ll list the major ones here:

  • scikit-learn - the default open-source machine-learning framework for Python.
  • Apache Mahout - The flagship machine-learning framework on Apache. Mahout does classifications, clustering and recommendations.
  • SystemML - IBM’s machine-learning framework, which performs Descriptive Statistics, Classification, Clustering, Regression, Matrix Factorization and Survival Analysis, and includes support-vector machines.
  • Microsoft DMTK - Microsoft’s distributed machine-learning toolkit. Distributed word embeddings and LDA.
  • A curated list of Python data science tools

Chris Nicholson

Chris Nicholson is the CEO of Skymind. He previously led communications and recruiting at the Sequoia-backed robo-advisor, FutureAdvisor, which was acquired by BlackRock. In a prior life, Chris spent a decade reporting on tech and finance for The New York Times, Businessweek and Bloomberg, among others.
