Semi-Stochastic Gradient Descent Methods


BASP Frontiers Workshop
January 28, 2014
Semi-Stochastic
Gradient Descent Methods
Jakub Konečný
University of Edinburgh
Based on

Basic method: S2GD
Konečný and Richtárik. Semi-Stochastic Gradient Descent Methods, December 2013

Mini-batching (& proximal setting): mS2GD
Konečný, Liu, Richtárik and Takáč. mS2GD: Minibatch semi-stochastic gradient descent in the proximal setting, October 2014

Coordinate descent variant: S2CD
Konečný, Qu and Richtárik. Semi-stochastic coordinate descent, December 2014
Introduction
Large scale problem setting

Problems are often structured
Structure – a sum of functions:
$$\min_{x \in \mathbb{R}^d} \; f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x)$$
where n is BIG
Frequently arising in machine learning
Examples

Linear regression (least squares):
$$f_i(x) = \tfrac{1}{2}\left(a_i^\top x - b_i\right)^2$$

Logistic regression (classification):
$$f_i(x) = \log\left(1 + \exp\left(-b_i \, a_i^\top x\right)\right)$$
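For concreteness, here is a minimal Python sketch of these two losses and their component gradients (the function names and the data layout a_i, b_i are my own illustration of the sum-of-functions setting above, not code from the talk):

import numpy as np

# Linear regression (least squares): f_i(x) = 0.5 * (a_i^T x - b_i)^2
def lsq_fi(x, a_i, b_i):
    return 0.5 * (a_i @ x - b_i) ** 2

def lsq_grad_fi(x, a_i, b_i):
    return (a_i @ x - b_i) * a_i

# Logistic regression with labels b_i in {-1, +1}:
# f_i(x) = log(1 + exp(-b_i * a_i^T x))
def logreg_fi(x, a_i, b_i):
    return np.log1p(np.exp(-b_i * (a_i @ x)))

def logreg_grad_fi(x, a_i, b_i):
    return -b_i * a_i / (1.0 + np.exp(b_i * (a_i @ x)))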
Assumptions

Lipschitz continuity of the gradient of each $f_i$, with constant $L$:
$$\|\nabla f_i(x) - \nabla f_i(y)\| \le L \|x - y\|$$
Strong convexity of $f$, with constant $\mu$:
$$f(y) \ge f(x) + \nabla f(x)^\top (y - x) + \tfrac{\mu}{2}\|y - x\|^2$$
Gradient Descent (GD)

Update rule:
$$x_{k+1} = x_k - h \, \nabla f(x_k)$$

Fast (linear) convergence rate:
$$f(x_k) - f(x_*) \le \left(1 - \tfrac{\mu}{L}\right)^k \left(f(x_0) - f(x_*)\right)$$
Alternatively, for accuracy $\varepsilon$ we need $O\!\left(\tfrac{L}{\mu}\log\tfrac{1}{\varepsilon}\right)$ iterations

Complexity of a single iteration – n
(measured in gradient evaluations)
Stochastic Gradient Descent (SGD)

Update rule (with $i$ chosen uniformly at random; $h_k$ is a step-size parameter):
$$x_{k+1} = x_k - h_k \, \nabla f_i(x_k)$$

Why it works: the stochastic gradient is an unbiased estimate of the full gradient,
$$\mathbb{E}_i\left[\nabla f_i(x)\right] = \nabla f(x)$$

Slow (sublinear) convergence – $O(1/\varepsilon)$ iterations for accuracy $\varepsilon$

Complexity of a single iteration – 1
(measured in gradient evaluations)
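Side by side, the two update rules look like this in a minimal Python sketch (one GD step and one SGD step; grad_fi is any per-example gradient such as the ones defined earlier, and all names are my own):

import numpy as np

def gd_step(x, grad_fi, data, h):
    """One GD step: average ALL n component gradients, then move once."""
    full_grad = np.mean([grad_fi(x, a_i, b_i) for a_i, b_i in data], axis=0)
    return x - h * full_grad                    # costs n gradient evaluations

def sgd_step(x, grad_fi, data, h, rng):
    """One SGD step: follow a single randomly chosen component gradient."""
    a_i, b_i = data[rng.integers(len(data))]
    return x - h * grad_fi(x, a_i, b_i)         # costs 1 gradient evaluation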
Goal

GD: fast convergence, but n gradient evaluations in each iteration
SGD: slow convergence, but complexity of an iteration independent of n

Combine the strengths of both in a single algorithm
Semi-Stochastic Gradient Descent
S2GD
Intuition


The gradient does not change drastically between nearby iterates
We could reuse the information from an “old” gradient
Modifying the “old” gradient

Imagine someone gives us a “good” point $y$ together with the full gradient $\nabla f(y)$
The gradient at a point $x$, near $y$, can be expressed as
$$\nabla f(x) = \underbrace{\nabla f(y)}_{\text{already computed gradient}} + \underbrace{\left(\nabla f(x) - \nabla f(y)\right)}_{\text{gradient change}}$$
We can try to estimate the gradient change by $\nabla f_i(x) - \nabla f_i(y)$ for a randomly chosen index $i$
Approximation of the gradient:
$$g = \nabla f(y) + \nabla f_i(x) - \nabla f_i(y)$$
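A one-line check (assuming $i$ is sampled uniformly from $\{1,\dots,n\}$) that this approximation is unbiased:
$$\mathbb{E}_i[g] = \nabla f(y) + \mathbb{E}_i[\nabla f_i(x)] - \mathbb{E}_i[\nabla f_i(y)] = \nabla f(y) + \nabla f(x) - \nabla f(y) = \nabla f(x)$$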
The S2GD Algorithm

Outer loop (one epoch): compute the full gradient $\nabla f(y)$ at the current point $y$
Inner loop: take cheap stochastic steps using the direction $g = \nabla f(y) + \nabla f_i(x) - \nabla f_i(y)$
Simplification: the size of the inner loop is random, following a geometric rule
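A minimal Python sketch of this scheme for the least-squares example above (the variable names and the particular default values of the stepsize, number of epochs and geometric parameter are mine; the paper prescribes specific choices and a specific truncated geometric law for the inner-loop length):

import numpy as np

def s2gd(A, b, stepsize=0.01, epochs=30, beta=0.95, seed=0):
    """Sketch of S2GD for f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    grad_i = lambda z, i: (A[i] @ z - b[i]) * A[i]   # gradient of f_i
    y = np.zeros(d)
    for _ in range(epochs):
        full_grad = A.T @ (A @ y - b) / n            # one full gradient per epoch
        t = min(rng.geometric(1 - beta), n)          # random inner-loop size (geometric rule)
        x = y.copy()
        for _ in range(t):
            i = rng.integers(n)
            # semi-stochastic direction: g = grad f(y) + grad f_i(x) - grad f_i(y)
            g = full_grad + grad_i(x, i) - grad_i(y, i)
            x -= stepsize * g
        y = x                                        # next epoch starts from the last inner iterate
    return y

Each epoch costs one pass over the data (the full gradient) plus t cheap stochastic steps, which is exactly the accounting used in the complexity slides below.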
Theorem

Convergence rate: the error decreases linearly in the number of epochs $j$,
$$\mathbb{E}\left[f(y_j) - f(x_*)\right] \le c^{\,j} \left(f(y_0) - f(x_*)\right)$$
where the constant $c = c(m, h)$ is a sum of two terms:
for any fixed stepsize $h$, the first term can be made arbitrarily small by increasing the inner-loop size $m$;
the second term can be made arbitrarily small by decreasing $h$.

How to set the parameters $m$, $h$ and $j$?
Setting the parameters

Fix a target accuracy $\varepsilon$

The accuracy is achieved by setting
# of epochs: $j = O(\log(1/\varepsilon))$
stepsize: $h = O(1/L)$
# of inner iterations: $m = O(\kappa)$, where $\kappa = L/\mu$ is the condition number

Total complexity (in gradient evaluations):
# of epochs $\times$ (one full gradient evaluation + cheap inner iterations)
$$j \times (n + 2m)$$
(each cheap iteration needs two component gradients, $\nabla f_i(x)$ and $\nabla f_i(y)$)
Complexity

S2GD complexity:
$$O\big((n + \kappa)\log(1/\varepsilon)\big) \text{ gradient evaluations}$$

GD complexity:
$O(\kappa \log(1/\varepsilon))$ iterations
$n$ = complexity of a single iteration

Total: $O\big(n \kappa \log(1/\varepsilon)\big)$ gradient evaluations
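A quick illustration with made-up sizes (say n = 10^6 examples and condition number κ = 10^4, both hypothetical):
$$\text{GD: } n\kappa\log\tfrac{1}{\varepsilon} = 10^{10}\log\tfrac{1}{\varepsilon}, \qquad \text{S2GD: } (n+\kappa)\log\tfrac{1}{\varepsilon} \approx 10^{6}\log\tfrac{1}{\varepsilon},$$
i.e. roughly a $10^4$-fold reduction in gradient evaluations for these hypothetical numbers.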
Related Methods

SAG – Stochastic Average Gradient
(Mark Schmidt, Nicolas Le Roux, Francis Bach, 2013)
Refreshes a single stochastic gradient in each iteration
Needs to store n gradients
Similar convergence rate
Cumbersome analysis

SAGA (Aaron Defazio, Francis Bach, Simon Lacoste-Julien, 2014)
Refined analysis

MISO – Minimization by Incremental Surrogate Optimization (Julien Mairal, 2014)
Similar to SAG, more general
Elegant analysis, not really applicable for sparse data
Julien is here
Related Methods

SVRG – Stochastic Variance Reduced Gradient
(Rie Johnson, Tong Zhang, 2013)
Arises as a special case of S2GD

Prox-SVRG
(Tong Zhang, Lin Xiao, 2014)
Extended to the proximal setting

EMGD – Epoch Mixed Gradient Descent
(Lijun Zhang, Mehrdad Mahdavi, Rong Jin, 2013)
Handles simple constraints
Worse convergence rate
Experiment (logistic regression on: ijcnn, rcv, real-sim, url)
Extensions

Sparse data

For linear/logistic regression, the gradient $\nabla f_i(x)$ copies the sparsity pattern of the example $a_i$.
But the update direction $g = \nabla f(y) + \nabla f_i(x) - \nabla f_i(y)$ is fully dense:
$\nabla f_i(x) - \nabla f_i(y)$ is SPARSE
$\nabla f(y)$ is DENSE
Can we do something about it?

Sparse data

Yes we can!
To compute $\nabla f_i(x)$, we only need the coordinates of $x$ corresponding to nonzero elements of $a_i$
For each coordinate $s$, remember when it was last updated – $\chi_s$
Before computing $\nabla f_i(x)$ in inner iteration number $t$, bring the required coordinates up to date, the step being
$$x^{(s)} \leftarrow x^{(s)} - (t - \chi_s)\, h\, \nabla^{(s)} f(y)$$
where $t - \chi_s$ is the number of iterations in which coordinate $s$ was not updated, and $\nabla f(y)$ is the “old gradient”
Then compute the direction and make a single sparse update
Sparse data implementation
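A compact Python sketch of this lazy-update idea for the least-squares loss (my own illustration of the scheme above, not the MLOSS code; rows[i] and vals[i] hold the indices and values of the nonzeros of example a_i, and the off-by-one bookkeeping follows the convention explained in the comments):

import numpy as np

def s2gd_epoch_sparse(rows, vals, b, y, full_grad, h, t_inner, rng):
    """One S2GD epoch with lazy updates: dense 'old gradient' steps are
    applied to a coordinate only when that coordinate is actually needed."""
    x = y.copy()
    last = np.zeros(x.size, dtype=int)   # last[j] = # of inner steps already reflected in x[j]
    for t in range(1, t_inner + 1):
        i = rng.integers(len(rows))
        idx, v = rows[i], vals[i]
        # catch up: apply the missed dense steps -h*grad f(y) from iterations last[j]+1 .. t-1
        x[idx] -= (t - 1 - last[idx]) * h * full_grad[idx]
        # sparse part of the semi-stochastic direction: grad f_i(x) - grad f_i(y)
        resid_x = v @ x[idx] - b[i]
        resid_y = v @ y[idx] - b[i]
        # apply iteration t's full update on the support of a_i, then mark it as done
        x[idx] -= h * (full_grad[idx] + (resid_x - resid_y) * v)
        last[idx] = t
    # flush the remaining dense steps so every coordinate reflects all t_inner iterations
    x -= (t_inner - last) * h * full_grad
    return x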
High Probability Result

The convergence result above holds only in expectation
Can we say anything about the concentration of the result in practice?
Yes: for any failure probability $\rho \in (0, 1)$, a guarantee that holds with probability at least $1 - \rho$ can be obtained
The price is just a logarithm of the probability, $\log(1/\rho)$
Independent of the other parameters
Code

Efficient implementation for logistic regression available at MLOSS
http://mloss.org/software/view/556/
mS2GD (mini-batch S2GD)

How does mini-batching influence the algorithm?
Replace the single stochastic gradient difference $\nabla f_i(x) - \nabla f_i(y)$ by its average over a random mini-batch $B$:
$$\frac{1}{|B|} \sum_{i \in B} \left(\nabla f_i(x) - \nabla f_i(y)\right)$$

Provides a two-fold speedup:
Provably fewer gradient evaluations are needed (up to a certain number of mini-batches)
Easy possibility of parallelism – see the sketch below
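A hedged sketch of just this inner-step change (the batch-size parameter, the sampling routine and the function names are my own; the proximal part of mS2GD is omitted):

import numpy as np

def minibatch_direction(grad_fi, full_grad, x, y, n, batch_size, rng):
    """Semi-stochastic direction with a mini-batch B instead of a single index i:
    g = grad f(y) + (1/|B|) * sum_{i in B} (grad f_i(x) - grad f_i(y))."""
    batch = rng.choice(n, size=batch_size, replace=False)
    diff = np.mean([grad_fi(x, i) - grad_fi(y, i) for i in batch], axis=0)
    return full_grad + diff

The |B| gradient differences in the batch can be computed independently of each other, which is where the easy parallelism comes from.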
S2CD (Semi-Stochastic Coordinate Descent)

SGD type methods: sampling rows (training examples) of the data matrix
Coordinate Descent type methods: sampling columns (features) of the data matrix

Question: Can we do both?
Yes – sample both columns and rows
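One way to make this concrete (a uniform-sampling illustration of the idea only; the S2CD paper uses non-uniform, importance-weighted sampling of both rows and columns): pick a row $i$ and a coordinate $j$ uniformly at random and update a single coordinate using a single partial derivative,
$$g = \nabla f(y) + d\left(\nabla_j f_i(x) - \nabla_j f_i(y)\right) e_j, \qquad \mathbb{E}_{i,j}[g] = \nabla f(x),$$
so the estimate stays unbiased while each inner step touches only one coordinate of one example's gradient.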
S2CD (Semi-Stochastic Coordinate Descent)

Complexity (compared with S2GD)