
Single-layer Network

A single-layer neural network in deep learning is a net composed of an input layer, which is the visible layer, and an output layer, which is the hidden layer.

The single-layer network’s goal, or objective function, is to learn features by minimizing reconstruction entropy.

This allows it to automatically learn features of the input, which leads to finding strong correlations and higher accuracy in identifying discriminative features. A multilayer network then builds on these learned features to classify the data accurately. This is the pretraining step.

Each single-layer network has the following attributes:

  • Hidden bias: the bias for the output (hidden) layer
  • Visible bias: the bias for the input (visible) layer
  • Weight matrix: the connection weights between the two layers

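As a sketch, these three attributes can be grouped in a small parameter container (the class, names, and shapes here are illustrative, not from any particular library):

```python
import numpy as np

class SingleLayerNet:
    """Parameters of a single-layer network: a weight matrix plus two bias vectors."""
    def __init__(self, n_visible, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        # Weight matrix: connections between visible and hidden units
        self.weights = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        # Visible bias: one bias term per input unit
        self.visible_bias = np.zeros(n_visible)
        # Hidden bias: one bias term per output (hidden) unit
        self.hidden_bias = np.zeros(n_hidden)

net = SingleLayerNet(n_visible=784, n_hidden=256)
```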
Training a single-layer network

Train a network by feeding the input vector to the input layer. Corrupt the input with some Gaussian noise; the exact noise function varies by network. Then minimize reconstruction entropy through pretraining until the network learns the features that best reconstruct the input data.
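One way to sketch this loop is as a denoising autoencoder with tied weights, Gaussian corruption, and squared reconstruction error standing in for reconstruction entropy (all hyperparameters and names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 20, 8, 0.1

W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
b_visible = np.zeros(n_visible)
b_hidden = np.zeros(n_hidden)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.random(n_visible)  # one input vector, values in [0, 1)
losses = []
for _ in range(200):
    noisy = x + rng.normal(0.0, 0.1, size=n_visible)  # distort the input
    h = sigmoid(noisy @ W + b_hidden)                 # encode
    recon = sigmoid(h @ W.T + b_visible)              # reconstruct the input
    losses.append(0.5 * np.sum((recon - x) ** 2))
    # Backpropagate the squared reconstruction error (tied weights)
    delta_v = (recon - x) * recon * (1.0 - recon)
    delta_h = (delta_v @ W) * h * (1.0 - h)
    W -= lr * (np.outer(delta_v, h) + np.outer(noisy, delta_h))
    b_visible -= lr * delta_v
    b_hidden -= lr * delta_h
```

Each pass corrupts the input, reconstructs it from the hidden layer, and nudges the weights and biases to reduce the reconstruction error.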

Learning rate

A typical learning-rate value falls between 0.001 and 0.1. The learning rate, or step rate, is the rate at which a function steps through the search space. A smaller learning rate means longer training time, but may lead to more precise results.
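As a toy illustration of step size (the function and values are made up for this example), a smaller rate covers the same ground more slowly:

```python
def sgd_step(w, grad, lr):
    # Each update moves w by lr times the gradient
    return w - lr * grad

# Minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3)
w_small, w_large = 0.0, 0.0
for _ in range(200):
    w_small = sgd_step(w_small, 2 * (w_small - 3), lr=0.001)  # small steps
    w_large = sgd_step(w_large, 2 * (w_large - 3), lr=0.1)    # large steps
```

After 200 iterations the larger rate has effectively converged to the minimum at 3, while the smaller rate is still approaching it.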


Momentum

Momentum is an additional factor in determining how fast an optimization algorithm converges.
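A minimal sketch of classical momentum (the coefficients are typical defaults, chosen for illustration): the update accumulates a velocity that blends the previous step with the current gradient.

```python
def momentum_step(w, grad, velocity, lr=0.01, mu=0.9):
    # Velocity carries over a fraction mu of the last update,
    # then adds the current gradient step
    velocity = mu * velocity - lr * grad
    return w + velocity, velocity

# Minimize f(w) = w^2 (gradient 2w), starting from w = 5.0
w, v = 5.0, 0.0
for _ in range(200):
    w, v = momentum_step(w, 2 * w, v)
```

Because consecutive gradients point the same way for much of the descent, the accumulated velocity lets the optimizer cover ground faster than plain gradient steps of the same size.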

L2 regularization constant

L2 is the lambda coefficient on the regularization term added to the loss function.
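The equation itself isn't reproduced in this excerpt; in its standard form, L2 regularization adds a penalty scaled by lambda to the loss, where theta denotes the network's weights:

```latex
L_{reg}(\theta) = L(\theta) + \lambda \sum_i \theta_i^{2}
```

Larger lambda values push the weights toward zero more strongly, trading some fit to the training data for smoother, less overfit models.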

Chris Nicholson

Chris Nicholson is the CEO of Skymind. He previously led communications and recruiting at the Sequoia-backed robo-advisor, FutureAdvisor, which was acquired by BlackRock. In a prior life, Chris spent a decade reporting on tech and finance for The New York Times, Businessweek and Bloomberg, among others.
