Apache Spark for Deep Learning Workloads

Apache Spark is a fast, in-memory data processing engine with elegant and expressive development APIs that let data workers efficiently execute streaming, machine learning, or SQL workloads requiring fast, iterative access to datasets. Spark is a distributed computing framework that can run across anywhere from a handful of machines to hundreds.
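
A minimal, self-contained Spark job in Scala gives a feel for the API. This is a sketch: the `SparkQuickstart` name and toy data are made up, and `local[*]` runs on all local cores, whereas on a cluster the master is normally supplied by `spark-submit`.

```scala
import org.apache.spark.sql.SparkSession

object SparkQuickstart {
  def main(args: Array[String]): Unit = {
    // Start a local session; on a cluster, the master is set externally.
    val spark = SparkSession.builder()
      .appName("SparkQuickstart")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // A tiny DataFrame; on a real cluster it would be partitioned across machines.
    val people = Seq(("alice", 34), ("bob", 45), ("carol", 29)).toDF("name", "age")

    // A SQL-style aggregation executed by Spark's distributed engine.
    people.groupBy().avg("age").show()

    spark.stop()
  }
}
```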

Spark is designed with data science in mind, and its abstractions make data science easier. Data scientists commonly use machine learning – a set of techniques and algorithms that learn from data. These algorithms are often iterative, and Spark’s ability to cache a dataset in memory greatly speeds up repeated passes over the same data, making it an ideal engine for implementing them.
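
A rough sketch of why that matters: `cache()` keeps the dataset in memory after the first pass, so each later iteration reads from RAM instead of recomputing the lineage. The data and the toy update rule below are illustrative.

```scala
import org.apache.spark.sql.SparkSession

object CachingSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("CachingSketch")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // A dataset that an iterative algorithm will scan repeatedly.
    val points = sc.parallelize(1 to 1000000).map(_.toDouble)

    // Without cache(), every iteration below would rebuild this RDD from scratch.
    points.cache()

    var estimate = 0.0
    for (_ <- 1 to 10) {
      // Each iteration is a full pass over the (now in-memory) data.
      estimate = points.map(x => x * 0.001).sum() / points.count()
    }
    println(s"estimate after 10 iterations: $estimate")

    spark.stop()
  }
}
```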

Spark also includes MLlib, a library that provides a growing set of machine learning algorithms for common data science techniques: classification, regression, collaborative filtering, clustering, and dimensionality reduction.
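
As an example, fitting one of those classifiers with the DataFrame-based `spark.ml` API might look like the sketch below; the four-row training set is made up for illustration.

```scala
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession

object MLlibClassificationSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("MLlibClassificationSketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Tiny labeled dataset: (label, feature vector).
    val training = Seq(
      (1.0, Vectors.dense(0.0, 1.1, 0.1)),
      (0.0, Vectors.dense(2.0, 1.0, -1.0)),
      (0.0, Vectors.dense(2.0, 1.3, 1.0)),
      (1.0, Vectors.dense(0.0, 1.2, -0.5))
    ).toDF("label", "features")

    // Fit a regularized logistic regression classifier.
    val lr = new LogisticRegression().setMaxIter(10).setRegParam(0.01)
    val model = lr.fit(training)

    println(s"coefficients: ${model.coefficients}, intercept: ${model.intercept}")
    spark.stop()
  }
}
```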

Spark’s ML Pipeline API is a high-level abstraction for modeling an entire data science workflow. The ML pipeline package in Spark models a typical machine learning workflow and provides abstractions such as Transformer, Estimator, Pipeline, and Parameter. This abstraction layer makes data scientists more productive.
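
A minimal sketch of those abstractions working together on a toy text-classification task: `Tokenizer` and `HashingTF` are Transformers, `LogisticRegression` is an Estimator, and the Pipeline chains them so that a single `fit()` call yields one reusable model.

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}
import org.apache.spark.sql.SparkSession

object PipelineSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("PipelineSketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Made-up training documents with binary labels.
    val training = Seq(
      (0L, "spark is great", 1.0),
      (1L, "hadoop mapreduce", 0.0),
      (2L, "spark streaming works", 1.0),
      (3L, "batch only jobs", 0.0)
    ).toDF("id", "text", "label")

    // Transformers map one DataFrame to another; an Estimator's fit()
    // produces a Transformer (here, a fitted LogisticRegressionModel).
    val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
    val hashingTF = new HashingTF().setInputCol("words").setOutputCol("features")
    val lr = new LogisticRegression().setMaxIter(10).setRegParam(0.001)

    // The Pipeline chains all stages into a single Estimator.
    val pipeline = new Pipeline().setStages(Array(tokenizer, hashingTF, lr))
    val model = pipeline.fit(training)

    // The fitted PipelineModel applies every stage to new data.
    model.transform(Seq((4L, "spark pipelines")).toDF("id", "text"))
      .select("id", "probability", "prediction")
      .show()

    spark.stop()
  }
}
```

Because the fitted PipelineModel encapsulates every stage, the same preprocessing is applied identically at training and prediction time.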

For deep learning workloads specifically, these guides cover running Deeplearning4j and Keras models on Spark:

  • [Distributed Deep Learning with DL4J and Keras on Apache Spark](https://deeplearning4j.org/docs/latest/deeplearning4j-scaleout-intro)
  • [Deeplearning4j and Keras on Spark: How-to Guides](https://deeplearning4j.org/docs/latest/deeplearning4j-scaleout-howto)
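
For orientation, below is a rough sketch of distributed training with DL4J on Spark using the parameter-averaging approach described in the guides above. The network layout, synthetic data, and hyperparameters are illustrative only, not a recommended configuration; treat the linked docs as authoritative for the API.

```scala
import scala.collection.JavaConverters._

import org.apache.spark.SparkConf
import org.apache.spark.api.java.{JavaRDD, JavaSparkContext}
import org.deeplearning4j.nn.conf.NeuralNetConfiguration
import org.deeplearning4j.nn.conf.layers.{DenseLayer, OutputLayer}
import org.deeplearning4j.spark.impl.multilayer.SparkDl4jMultiLayer
import org.deeplearning4j.spark.impl.paramavg.ParameterAveragingTrainingMaster
import org.nd4j.linalg.activations.Activation
import org.nd4j.linalg.dataset.DataSet
import org.nd4j.linalg.factory.Nd4j
import org.nd4j.linalg.learning.config.Adam
import org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction

object SparkDl4jSketch {
  def main(args: Array[String]): Unit = {
    val sc = new JavaSparkContext(
      new SparkConf().setAppName("dl4j-spark-sketch").setMaster("local[*]"))

    // Synthetic stand-in data: 100 random 784-feature examples with
    // one-hot labels over 10 classes; a real job would build this RDD
    // from preprocessed data.
    val examples = (1 to 100).map { _ =>
      val labels = Nd4j.zeros(1, 10)
      labels.putScalar(scala.util.Random.nextInt(10), 1.0)
      new DataSet(Nd4j.rand(1, 784), labels)
    }
    val trainingData: JavaRDD[DataSet] = sc.parallelize(examples.asJava)

    // A small feed-forward network; layer sizes are illustrative.
    val conf = new NeuralNetConfiguration.Builder()
      .updater(new Adam(1e-3))
      .list()
      .layer(0, new DenseLayer.Builder().nIn(784).nOut(256)
        .activation(Activation.RELU).build())
      .layer(1, new OutputLayer.Builder(LossFunction.NEGATIVELOGLIKELIHOOD)
        .nIn(256).nOut(10).activation(Activation.SOFTMAX).build())
      .build()

    // Parameter averaging: each worker trains on its partition of the RDD,
    // and parameters are averaged across workers every few minibatches.
    val trainingMaster = new ParameterAveragingTrainingMaster.Builder(1)
      .batchSizePerWorker(32)
      .averagingFrequency(5)
      .build()

    val sparkNet = new SparkDl4jMultiLayer(sc, conf, trainingMaster)

    // Each fit() call is one epoch over the distributed dataset.
    for (_ <- 1 to 3) sparkNet.fit(trainingData)

    sc.stop()
  }
}
```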
