Adaptive Stochastic Control for the Smart Grid

Adaptive Stochastic Control for the Smart Grid
Qinghua Shen
Smart grid meeting
Outline
• Introduction to the control of the smart grid
adaptive stochastic control, smart grid
• Adaptive Stochastic Control
basics of stochastic systems, policy search and approximation, convergence
• Example: distributed generation dispatch with storage
ADP for resource allocation, value function approximations
• Challenges
1
Introduction to the control of the smart grid
• Control of the smart grid
 goals for control: act instantly, correctively, and dynamically
 Self-healing: automatic repair or removal of potentially faulty equipment
 Flexible: rapid and safe interconnection of distributed generation and storage
 Predictive: statistics, machine learning, and predictive models
 Interactive: appropriate information is provided transparently in near real time
 Optimal: operators and customers operate efficiently and economically
 Secure: cyber- and physical security
2
Introduction to the control of the smart grid
• Major Components
3
Outline
• Introduction to the control of the smart grid
adaptive stochastic control, smart grid
• Adaptive Stochastic Control
basics of stochastic systems, policy search and approximation, convergence
• Example: distributed generation dispatch with storage
ADP for resource allocation, value function approximations
• Challenges
4
Adaptive Stochastic Control
• Stochastic system
 State variables
 physical state: energy amount, status of a generator
 information state: current and historical demand, price and weather
 belief state: probability distributions
 The decisions
whether to charge or discharge storage, and whether to use backup generation
 The exogenous information
all the dimensions of uncertainty
 The Transition Function
given the state, decision, and exogenous information, determines the next state
 The Objective Function
the metric that governs how we make decisions and evaluate the performance of the controller's policies (see the sketch below)
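As a compact summary of these five components, a minimal sketch in generic stochastic-control notation (the symbols St, xt, Wt+1, S^M, and Ct are assumptions made here for illustration; the slide fixes no notation):

\[
S_{t+1} = S^M\left(S_t, x_t, W_{t+1}\right)
\]

where St is the state, xt the decision, Wt+1 the exogenous information that arrives after the decision, S^M the transition function, and Ct(St, xt) the cost (or contribution) evaluated by the objective function.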
5
Adaptive Stochastic Control
• Policies
 The policy
A policy X^π(St) maps the information in the state St to a decision xt, where St = (Rt, pt, Kt) is the state variable, capturing energy resources Rt, exogenous information pt, and belief state Kt.
 The problem
The objective is known variously as the value of a policy or the cost-to-go function; it can be a cost function if we minimize, or a contribution function if we maximize (see the sketch after this list).
Costs include generating electricity, purchasing fuel, losses due to energy conversion, the cost of repairs, and penalties for curtailing loads.
 Policy for what
Decisions include whether to charge or discharge storage, when to run a distributed generator, and how much energy to draw from the grid, for every customer in every network and for the utility.
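A minimal sketch of the value of a policy in the notation assumed above (illustrative, not the slide's own formula):

\[
F^{\pi} = \mathbb{E}\left\{ \sum_{t=0}^{T} C_t\left(S_t, X^{\pi}(S_t)\right) \right\},
\qquad S_t = (R_t, p_t, K_t)
\]

which the controller minimizes when Ct is a cost function and maximizes when Ct is a contribution function.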
6
Adaptive Stochastic Control
• Design a robust policy: four classes
 Myopic Policies
minimize the next-period cost without accounting for the impact on future decisions (works well when the problem has special structure)
 Look-ahead Policies
Optimize over some time horizon using a forecast of the possible variability of exogenous events such as weather; the forecasts can be either deterministic or stochastic.
 Policy Function Approximations
Functions that return an action given a state, without solving any form of optimization, including: rule-based lookup tables; parameterized rules (e.g., thresholds); statistical functions
 Policy based on Value Function Approximations
The optimal policy is obtained from the HJB equation; to avoid the curses of dimensionality (sketched below): a) approximate to eliminate the expectation; b) replace the value function with a computationally tractable approximation; c) solve the resulting deterministic maximization problem using a commercial solver
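A minimal sketch of steps a)-c) in the assumed notation, using the discrete-time Bellman (HJB) equation and writing the problem as a maximization to match the slide:

\[
V_t(S_t) = \max_{x}\left( C_t(S_t, x) + \mathbb{E}\left\{ V_{t+1}(S_{t+1}) \mid S_t, x \right\} \right),
\qquad
X^{\pi}(S_t) = \arg\max_{x}\left( C_t(S_t, x) + \bar{V}_t\left(S^x_t\right) \right)
\]

Here the expectation is eliminated by conditioning on the post-decision state S^x_t (introduced on the next slide), \bar{V}_t is a computationally tractable approximation of the value function, and the resulting deterministic maximization can be handed to a commercial solver.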
7
Adaptive Stochastic Control
• ADP and the Post-Decision State
 Value function approximation
Used when the structure of a policy is not obvious; estimates the value of being in a state. When x is a vector, solving the maximization problem is problematic because the expectation is hard to compute exactly, which leads to stochastic search.
 Post-decision state
The post-decision state is determined by the current state and the action, before any new exogenous information arrives (sketched below).
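A minimal sketch in the assumed notation: the transition is split around the decision, and the value function is approximated at the post-decision state so that no expectation sits inside the maximization (the labels S^{M,x} and S^{M,W} for the two halves of the transition are assumptions):

\[
S^x_t = S^{M,x}(S_t, x_t), \qquad
S_{t+1} = S^{M,W}\left(S^x_t, W_{t+1}\right), \qquad
X^{\pi}(S_t) = \arg\max_{x}\left( C_t(S_t, x) + \bar{V}^x_t\left(S^x_t\right) \right)
\]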
8
Adaptive Stochastic Control
• Design policy
 Lookup tables
 Parametric models
With this strategy, we face the challenge of first identifying the basis functions and then tuning the parameters (see the sketch after this list).
 Nonparametric models
handle high-dimensional problems and are asymptotically unbiased; examples include:
• Kernel regression;
• Support Vector regression;
• Neural networks;
• Dirichlet process mixtures.
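For the parametric strategy above, a minimal sketch of the usual linear-in-the-parameters architecture (the basis functions \phi_f and weights \theta_f are assumed notation):

\[
\bar{V}(S \mid \theta) = \sum_{f \in \mathcal{F}} \theta_f \, \phi_f(S)
\]

Choosing the basis functions \phi_f is the modeling challenge; tuning the weights \theta_f is the statistical one.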
9
Adaptive Stochastic Control
• Policy search
 Direct policy search
Depends on Monte Carlo sampling (stochastic search); methods include sequential kriging and the knowledge gradient. Applied when the structure of the policy is apparent.
 Bellman residual minimization for value function approximations
This is the most widely used strategy for optimizing policies, and encompasses
a variety of algorithmic approaches that include approximate value iteration
(including temporal difference learning) and approximate policy iteration
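A minimal sketch of the two strategies in the assumed notation. Direct policy search tunes a parameter vector \theta of a fixed policy class; approximate value iteration smooths a sampled value \hat{v}^n_t into the approximation around the previous post-decision state with a stepsize \alpha_n (the indexing follows the common post-decision convention and is an assumption here):

\[
\max_{\theta} \; \mathbb{E}\left\{ \sum_{t} C_t\left(S_t, X^{\pi}(S_t \mid \theta)\right) \right\}
\]

\[
\hat{v}^n_t = \max_{x}\left( C_t(S^n_t, x) + \bar{V}^{n-1}_t\left(S^{x}_t\right) \right),
\qquad
\bar{V}^n_{t-1}\left(S^{x,n}_{t-1}\right) = (1 - \alpha_n)\, \bar{V}^{n-1}_{t-1}\left(S^{x,n}_{t-1}\right) + \alpha_n \, \hat{v}^n_t
\]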
10
ASC for distributed generation dispatch
• Approximate Dynamic Programming for resource allocation
 Resource allocation
how much energy to store in a battery, whether a diesel generator should be
turned on, and whether a mobile storage device (and/or generator) should be
moved to a congested location.
 A general model
Rta is the number of resources with attribute vector a; xtad is the number of resources we act on with a decision of type d.
A decision d can be (-1, 0, 1) to discharge, hold, or recharge a battery, or (0, 1) to turn a distributed generator off or on (see the sketch below).
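A minimal sketch of the flow-conservation structure implied by this model (illustrative, assumed notation):

\[
\sum_{d \in \mathcal{D}_a} x_{tad} = R_{ta}, \qquad x_{tad} \ge 0 \ \text{and integer},
\]

so every resource with attribute a is assigned exactly one decision d; acting on a resource with decision d produces a resource with a modified attribute vector a' = a^M(a, d) (the attribute transition a^M is also an assumed label).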
11
Adaptive Stochastic Control
• Value Function Approximations for resource allocation
 Approximate the value function
The resource allocation utility function has a concavity property.
Approximate the value function of the post-decision resource vector by a separable piecewise-linear function.
Estimate the piecewise-linear concave functions by iteratively stepping forward through time and updating the value functions (see the sketch below).
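A minimal, illustrative Python sketch of this estimation idea for a single separable piece: the slopes (marginal values of each additional unit of the resource) are smoothed toward a sampled marginal value and then projected back to a non-increasing sequence to preserve concavity. The function name update_pwl_concave, the clamping projection, and the numbers are assumptions, not the authors' code.

import numpy as np

def update_pwl_concave(slopes, r, vhat_slope, stepsize):
    """One iteration of estimating a piecewise-linear concave value function.

    slopes[i] estimates the marginal value of the (i+1)-th unit of the
    resource (e.g., MWh held in storage). Concavity of the value function
    is equivalent to these slopes being non-increasing in i.

    r          : resource level visited at this iteration (0 <= r < len(slopes))
    vhat_slope : sampled marginal value observed at level r
    stepsize   : smoothing stepsize alpha_n in (0, 1]
    """
    slopes = slopes.copy()
    # Stochastic-approximation (smoothing) update at the visited level.
    slopes[r] = (1 - stepsize) * slopes[r] + stepsize * vhat_slope
    # Projection: restore monotonicity by clamping any slope that now
    # violates concavity to the newly updated value.
    for i in range(r - 1, -1, -1):          # levels left of r must stay >= slopes[r]
        if slopes[i] < slopes[r]:
            slopes[i] = slopes[r]
    for i in range(r + 1, len(slopes)):     # levels right of r must stay <= slopes[r]
        if slopes[i] > slopes[r]:
            slopes[i] = slopes[r]
    return slopes

# Illustrative usage: marginal value of each stored MWh, updated after one
# forward pass observed a marginal value of 9.0 at level r = 2.
values = np.array([10.0, 8.0, 6.0, 4.0, 2.0])
values = update_pwl_concave(values, r=2, vhat_slope=9.0, stepsize=0.5)
print(values)   # slopes remain non-increasing, e.g. 10, 8, 7.5, 4, 2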
12
Adaptive Stochastic Control
• Experimental work
 Evaluating the results
Determining the quality of the resulting policy is a major challenge.
One approach: fit the value functions for a deterministic problem and compare the resulting solution to the optimal solution of that deterministic problem, obtained using a commercial solver.
 Limited by the size of problems the solver can handle
13
Challenges
 Convergence
Convergence with approximation can be proven only for certain problem structures; concavity is an important category.
 For the smart grid
The control must be beneficial to both the utility and end users, so that there is enough incentive
Tracking of key performance metrics
14