Skymind Intelligence Layer

Productionizing Machine Learning Workflows

What is SKIL?

The Skymind Intelligence Layer (SKIL) launches data science projects into production, quickly and easily. SKIL bridges the gap between the Python ecosystem and the JVM with a cross-team platform for Data Scientists, Data Engineers, and DevOps/IT.

Major Libraries Supported


Supports TensorFlow, Keras, Deeplearning4j, PMML, and ONNX models (see the feature matrix below), along with the Python, Java, and Scala languages.

Why SKIL?

Data Scientists

Interoperability with widely used data science libraries.


Executives

SKIL helps innovation teams accelerate time to value.


Architects

Designed to be embedded into existing IT environments.


DevOps/SRE

Scalable, distributed, fault-tolerant machine learning.


End-To-End Workflow

SKIL enables interoperability between data science and big data frameworks by standardizing and orchestrating AI workflows within a single, consolidated platform.

Command Line Interface

Set up internal services and manage data science workflows from the command line; the same workflows can also be driven programmatically through the SKIL Python client, as in the example below.

    import base64
    import skil_client
    from skil_client import DeployModel, INDArray, Prediction

    client = skil_client.DefaultApi()  # assumed: an authenticated SKIL API client, configured elsewhere

    # Upload a pre-trained TensorFlow model and deploy it behind a named endpoint.
    uploads = client.upload("tensorflow_rnn.pb")
    new_model = DeployModel(name="recommender_rnn", scale=30, file_location=uploads[0].path)
    model = client.deploy_model(deployment_id, new_model)  # deployment_id: an existing SKIL deployment

    # Send a base64-encoded input array to the deployed model for inference.
    ndarray = INDArray(array=base64.b64encode(x_in))  # x_in: raw input features, defined earlier
    request = Prediction(id=1234, prediction=ndarray, needsPreProcessing=False)
    result = client.predict(request, "production", "recommender_rnn")

Features

SKIL is offered in Community and Enterprise editions; differences between editions are noted where they apply.

Interoperability
  Deeplearning4j: Deep learning for the JVM on Hadoop and Spark
  TensorFlow Protobuf: Import pre-trained models from TensorFlow
  Keras H5: Import pre-trained models from Keras (see the sketch after this table)
  PMML: Import traditional machine learning models
  ONNX: Import models from Caffe2, PyTorch, Apache MXNet, and other frameworks
  DataVec Transforms: Data ETL, normalization, and vectorization pipelines

SKIL Platform
  Model Serving: Embeddable model hosting, management, and version control (limited in the Community edition)
  Multi-Node Support: Distribute training and inference across clusters of servers
  Scale: Fault tolerance, load balancing, and leader election
  Installation: Deployable via Docker or on bare metal, on cloud, on-premise, or hybrid systems
  Model Import: Import models from widely used machine learning libraries
  Hardware Acceleration: Managed CUDA for GPUs and MKL for CPUs
  Integrations: Native integration with big data tools such as Hadoop and Spark

Application
  Robotic Process Automation: Add an AI layer on top of existing RPA applications
  AI Infrastructure: SKIL comes pre-packaged on Cisco and Huawei servers

Support
  Online Community: Access to the community forum, videos, and documentation
  Development Support: General feature engineering and model tuning advice
  SLA: Guaranteed uptime and response times

Cost: Free in the Community edition; contact us for Enterprise pricing.
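
The Keras H5 row above uses the same upload-and-deploy flow as the earlier client snippet. Here is a minimal sketch, assuming the skil_client calls shown in that snippet; the file name, endpoint name, and scale value are illustrative, and deployment_id refers to an existing deployment.

    import skil_client
    from skil_client import DeployModel

    client = skil_client.DefaultApi()  # assumed: an authenticated SKIL API client, configured elsewhere

    # Upload a Keras model saved in HDF5 (.h5) format, then serve it under its own endpoint name.
    uploads = client.upload("sentiment_lstm.h5")  # hypothetical file name
    keras_model = DeployModel(name="sentiment_lstm", scale=2, file_location=uploads[0].path)
    model = client.deploy_model(deployment_id, keras_model)  # deployment_id: an existing SKIL deployment

The TensorFlow Protobuf import in the earlier snippet follows this same pattern, with a .pb file uploaded in place of the .h5.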

Free Consultation

Schedule a 30-minute Q&A with our AI experts.

TALK TO A SKYMIND EXPERT