Motivation for Interacting Heterogeneous Agents


CEF 2009 Pre-Conference Tutorial on “Heterogeneous and Multi-Agent Modelling”

15th International Conference on Computing in Economics and Finance, University of Technology, Sydney, Australia, July 14, 2009

Shu-Heng Chen, [email protected]

http://www.aiecon.org/

Department of Economics National Chengchi University Taipei, Taiwan

Time Table

 9:00-10:30 Session 1  10:30-11:00 Coffee Break  11:00-12:30 Session 2  12:30-14:00 Lunch  14:00-15:30 Session 3  15:30-16:00 Coffee Break  16:00-17:00 Session 4

Plan of the Tutorial

• There are two ways to see what will be covered in this 5.5-hour tutorial.

• First, what kinds of questions have been raised and addressed?

• Second, what specific models have, therefore, been motivated to substantiate the study of the questions above? Of course, the tutorial will be more self-contained if one can also see how the two complement each other.

Plan of the Tutorial

• This plan naturally leads us to three summary pages of the tutorial.

(Feature Page)

The first page is a summary of views, perceptions, insights, etc., regarding the nature of agent-based modeling in economics.

(Illustration Page)

Then the second one is a summary of the agent-based models covered in this tutorial.

(Extension Page)

Third, there is a list of topics that have not been discussed much, or at all, in this tutorial, but which are by no means the least important.

Plan of the Tutorial

 A plan should also include time allocation.

 We have a total of 5.5 hours, separated into four sessions, three 90-minute sessions and one 60-minute session.  This time budget will be allocated within the feature page as well as the illustration page.

Feature Page

• What is agent-based computational economics (ACE)?
• Why ACE, in light of the development of other disciplines?
• What is the relation between ACE and experimental economics (EE)?
• What is the relation between software agents and human agents?
• What is the relation between ACE and behavioral economics or psychological economics?
  • From Homo Economicus to Homo Sapiens (Thaler, 2000)
• ACE and evolutionary economics?
  • From Homogeneity to Heterogeneity

Feature Page

• Why ACE (Cont'd)?
  • ACE and Econometrics?
  • ACE and Social Networks?
• Concluding Remarks

Illustration Page: Models

• Thomas Schelling's Segregation Model
• Agent-Based Cobweb Model
• Agent-Based OLG Model
• Agent-Based Double Auction Market
• Agent-Based Lottery Market
• Agent-Based Financial Market

Illustration Page: Tools

• Reinforcement Learning
• Classifier System
• Fuzzy Logic
• Fuzzy Classifier System
• Genetic Algorithms
• Genetic Programming
• Self-Organizing Maps

Computational Social Sciences: What and Why?

• The tutorial tries to answer two questions which we consider quite fundamental to the study of agent-based economic models, namely, what and why.
  • What is agent-based computational economics?
  • Why do we need agent-based modeling of the economy?

These two questions are generally shared by other social scientists who are also interested in agent-based modeling. Therefore, they are better addressed against a broader background, i.e., agent-based computational social sciences.

What

• To answer the first question, it would be nice if we could start with some very simple agent-based social or economic models which nevertheless have all the essence of agent-based models.

 In fact, some of the earliest agent-based economic and social models satisfy this need.

• Cellular automata, first introduced by von Neumann and later used by Thomas Schelling, provide such an illustration.

Schelling's Segregation Model

 http://ccl.northwestern.edu/netlogo/

Computational Social Sciences: What are they?

Over the last decade, we have witnessed agent-based modeling and simulation (ABMS) being extensively used across different disciplines of the social sciences. This tendency enables agent-based social scientists to find a common language, which facilitates interdisciplinary communication and collaboration and in turn defines a common interest among social scientists.

This gathering has also led to the emergence of a new discipline across the social sciences, known as computational social sciences (CSS).

CSS is also known as…

• agent-based social sciences (Trajkovski and Collins, 2009)
• bottom-up social sciences (Epstein and Axtell, 1996)
• algorithmic (behavioral) social sciences
• generative social sciences (Epstein, 2006)

Software Agents

Schelling's segregation model provides a lucid illustration of the constituents of an agent-based model. First, it answers why CSS is called algorithmic social sciences: each agent (actor) is represented by an algorithm or a computational program. This algorithm (program) corresponds to the decision rules, behavioral models, or even preferences that characterize the agent. In a sense, it is a simple model of man.

Borrowing the term from computer science, one may also call such agents software agents or autonomous agents.

Embeddedness

• In Schelling's model, the embeddedness is a two-dimensional cellular automaton which defines the geography of the space in which agents live.

• The geography (topology) of the city further defines a social network for each agent.
• In addition to geographies and social networks, other forms of embeddedness include institutions, cultures, histories, etc.

Aggregation (Emergence)

Third, it also answers why CSS is called bottom-up social sciences. The segregation phenomenon, as an aggregate phenomenon, is a sum of the interactions of fairly tolerant people.

Obviously, this is not a linear scaling-up. ``From the bottom up'' normally refers to the surprising phenomena that would not be predicted from the model itself, which focuses on the actions of individual agents rather than overarching top-down principles.

Thomas Schelling’s Segregation

Agents: mobile agents with a preference for identity
Embeddedness: a two-dimensional, chessboard-like network
Interactions: local interaction

Distinguishing Features

 micro-meso-macroscopic structure of social phenomena  micro-macro relations: aggregations, emergence

Thomas Schelling was one of the pioneers in the field of agent-based social modeling. He emphasized the value of starting with rules of behavior for individuals and using simulation to discover the implications for large-scale outcomes. He called this ``micromotives and macrobehavior.’’

Segregation Model

The space is a checkerboard with 64 squares representing places where people can live.

There are two types of agents, represented by pennies and nickels. (One can imagine the agents as Greens and Reds.) The coins are placed at random among the squares, with no more than one per square.

An agent is content if more than one-half of its immediate neighbors are of the same type as itself.

The immediate neighbors are the occupants of the adjacent squares. For example, if all eight adjacent squares were occupied, then the agent is content if at least five of them are of the same type as itself.

If an agent is content, it stays put. If it is not content, it moves.

In Schelling's original model, it would move to one of the nearest squares where it would be content. A minimal sketch of this update rule follows below.
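To make the rule concrete, here is a minimal Python sketch of one round of Schelling's update on a small grid. The grid size, the treatment of empty neighbors, and the random relocation to any empty cell are illustrative assumptions; Schelling's original model moved a discontent agent to the nearest square where it would be content.

    import random

    SIZE = 8                       # 8x8 checkerboard, as on the slide
    EMPTY, GREEN, RED = 0, 1, 2

    def neighbors(grid, r, c):
        """Return the occupants of the (up to 8) adjacent squares."""
        out = []
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                rr, cc = r + dr, c + dc
                if 0 <= rr < SIZE and 0 <= cc < SIZE and grid[rr][cc] != EMPTY:
                    out.append(grid[rr][cc])
        return out

    def content(grid, r, c):
        """Content if more than half of the occupied neighbors share the agent's type."""
        nbrs = neighbors(grid, r, c)
        if not nbrs:
            return True
        same = sum(1 for n in nbrs if n == grid[r][c])
        return same > len(nbrs) / 2

    def one_round(grid):
        """Move every discontent agent to a randomly chosen empty square (illustrative)."""
        empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] == EMPTY]
        for r in range(SIZE):
            for c in range(SIZE):
                if grid[r][c] != EMPTY and not content(grid, r, c) and empties:
                    rr, cc = empties.pop(random.randrange(len(empties)))
                    grid[rr][cc] = grid[r][c]
                    grid[r][c] = EMPTY
                    empties.append((r, c))

Iterating one_round on a randomly initialized grid is enough to reproduce the familiar clustering outcome, even though every individual agent is fairly tolerant.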

Cellular Automata

Schelling’s Segregation Model (2-dimensional Cellular Automata)

Schelling’s Segregation Model

(Replicated from Pancs and Vriend, 2005)

Schelling's Segregation Model

(Replicated from Pancs and Vriend, 2005)

Agent's Preference (Strict preference for integration)

Schelling's Segregation Model

(Replicated from Pancs and Vriend, 2005)

Why?

 The Usual Defense

 Scalable Extensions of Human-Subject Experiments

The Usual Defense

• The usual defense for agent-based modeling is its superiority to its alternatives, mainly the top-down, system-dynamics or equation-based approaches.
• The superiority can mean
  • better understanding (explanation) of social phenomena,
  • better forecasting of the future,
  • others.

Epstein’s Long List (Epstein, 2008)

1. Explain (very distinct from predict)
2. Guide data collection
3. Illuminate core dynamics
4. Suggest dynamical analogies
5. Discover new questions
6. Promote a scientific habit of mind
7. Bound (bracket) outcomes to plausible ranges
8. Illuminate core uncertainties
9. Offer crisis options in near-real time
10. Demonstrate tradeoffs / suggest efficiencies
11. Challenge the robustness of prevailing theory through perturbations
12. Expose prevailing wisdom as incompatible with available data
13. Train practitioners
14. Discipline the policy dialogue
15. Educate the general public
16. Reveal the apparently simple (complex) to be complex (simple)

Scalable Extensions of Human Subject Experiments

• An easy answer for using agent-based modeling is that it is an extension of the experimental social sciences, including experimental economics, experimental political science, etc.

• Experimental social sciences have difficulties scaling up.
  • The most obvious limitation is money: a limited number of subjects, a limited number of scenarios, a limited number of repetitions.
  • Human fatigue.
  • Physical constraints of the on-site lab.
• Hence, replacing human agents with software agents seems to be an attractive alternative when the above-mentioned constraints are stringent.

Is this a real defense?

• However, if this defense were true, then what we would really expect to see from the development of the literature is the following research pattern.

• An experimental economist conducts an experiment with 20 human subjects lasting for 2 hours.

• An agent-based computational economist scales up this experiment to 2,000 software agents lasting for 2 days.

• Unfortunately, if we search the literature, we will be disappointed to find that there are not many ACE papers motivated in this vein. In fact, the second wave of ACE models, appearing in the mid-1990s, pursued a much more modest goal: understanding human behavior observed in experiments.

• So, what is wrong?

The answer is that human agents are not that easily replaceable. This leads to our next subject.

ACE and Experimental Economics

• Historical Background

• Three-Stage Development
  • Replication (John Duffy, 2006)
  • Competition (Tesfatsion, 2009)
  • Cooperation and Coordination (Chen, 2009)
• A Wrap-Up Example

Example 4: Agent-Based Double Auction Markets

Historical Background

In the mid-1990s, after the cellular-automata tradition, another class of agent-based economic models appeared. This series of ACE models has some distinguishing characteristics:
• They are strongly motivated by human-subject experiments.
• They illustrate how a neo-classical (homogeneous rational expectations) model can be rewritten as an agent-based economic model using heterogeneous, interacting, learning agents.
• They show that the rational expectations equilibrium can be approached via these agent-based models. Or, in the case where there are multiple equilibria, ACE can help us make a selection.

Example 2: Agent-Based Cobweb Models

Example 3: Agent-Based Overlapping Generations Models

Cobweb Model

• This model plays an important role in macroeconomics, because it is the place where the concept of rational expectations originated (Muth, 1961).

 It is also the first neo-classical macroeconomic prototype to which an agent-based computational approach was applied (Arifovic, 1994).

Cobweb Model

• A competitive market composed of n firms.

• Cost function of each firm:

  c_{i,t} = x q_{i,t} + \frac{1}{2} y n q_{i,t}^2, \quad i = 1, 2, \ldots, n

• Expected profit of firm i:

  \Pi^e_{i,t} = P^e_{i,t} q_{i,t} - c_{i,t}

Cobweb Model

• Expected profit maximization:

  q_{i,t} = \frac{1}{yn}\,(P^e_{i,t} - x)

• Market equilibrium:

  P_t = A - B \sum_{i=1}^{n} q_{i,t} = A - B \sum_{i=1}^{n} \frac{1}{yn}\,(P^e_{i,t} - x)

Cobweb Model

• Homogeneous expectations:

  P^e_{i,t} = P^e_t \quad \forall i \quad \Rightarrow \quad P_t = A - B \sum_{i=1}^{n} \frac{1}{yn}\,(P^e_t - x) = A - \frac{B}{y}\,(P^e_t - x)

• Homogeneous rational expectations equilibrium:

  P^e_t = P_t \quad \Rightarrow \quad P^*_t = \frac{Ay + Bx}{y + B}, \qquad Q^*_t = \frac{A - x}{y + B}

Dynamics of the Cobweb Model

• The question typically asked in the 1990s was: would the actual price converge to the homogeneous rational expectations equilibrium (HREE) price, even though agents are boundedly rational and do not have rational expectations?

• Earlier studies show that in general the market will not converge to the HREE (Ezekiel, 1938; Bray, 1982; Marcet and Sargent, 1989). These studies indicate that, depending on the so-called cobweb ratio B/y, the market dynamics can be separated into a stable case and an unstable case:

  \frac{B}{y} < 1: \text{ stable case}, \qquad \frac{B}{y} > 1: \text{ unstable case}
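To see where the cobweb ratio comes from, consider the classic case of homogeneous naive expectations, P^e_t = P_{t-1}. Then P_t = A - (B/y)(P_{t-1} - x), a linear recursion whose deviation from P* shrinks or grows by the factor B/y each period. A minimal Python check of the two regimes; all parameter values below are illustrative only:

    def cobweb_path(A, B, x, y, P0, T=30):
        """Price path under homogeneous naive expectations P^e_t = P_{t-1}."""
        path = [P0]
        for _ in range(T):
            path.append(A - (B / y) * (path[-1] - x))
        return path

    def hree_price(A, B, x, y):
        """Homogeneous rational expectations equilibrium price P* = (Ay + Bx)/(y + B)."""
        return (A * y + B * x) / (y + B)

    # Illustrative parameters: B/y = 0.5 (stable) converges to P*; B/y = 1.5 (unstable) oscillates explosively
    print(cobweb_path(10, 1, 1, 2, P0=8)[-1], hree_price(10, 1, 1, 2))
    print(cobweb_path(10, 3, 1, 2, P0=8)[-3:])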

Cobweb Experiments

• Experimental evidence, however, shows that even in the unstable case, the cobweb market is still stable (Wellford, 1989). The following two figures are from Arifovic (1994).

Replication of the Cobweb Experiments

• Can we replicate the cobweb experimental results with boundedly rational agents, at least qualitatively?

• Yes. Arifovic (1994) gave the first positive result, and two years later Chen and Yeh (1996) also replicated this result, albeit with a different setup.

• What is commonly shared by these two studies is a market composed of agents (firms) who start with heterogeneous beliefs (in either quantities or prices).

• They then learn and adapt under a social or individual learning process driven by evolutionary algorithms, such as genetic algorithms or genetic programming.

Chen and Yeh (1996)

 Take Chen and Yeh as an example.

• Each agent is initially given an arbitrary forecasting function of the price:

  P^e_{i,t} = f_{i,t}(P_{t-1}, P_{t-2}, \ldots), \quad i = 1, 2, \ldots, n

 As time goes on, each agent will change her own forecasting by learning from herself (individual learning) or others (social learning).

Market Price Bottom-Up

  \mathbf{P}^e_t = (P^e_{1,t}, P^e_{2,t}, \ldots, P^e_{n,t})' \;\longrightarrow\; P_t

  \mathbf{P}^e_{t+1} = (P^e_{1,t+1}, P^e_{2,t+1}, \ldots, P^e_{n,t+1})' \;\longrightarrow\; P_{t+1}

  \mathbf{P}^e_{t+2} = (P^e_{1,t+2}, P^e_{2,t+2}, \ldots, P^e_{n,t+2})' \;\longrightarrow\; P_{t+2}

  \ldots

In each period, the whole vector of individual price expectations determines the market price from the bottom up.

Social Learning

 In Chen and Yeh (1996), the learning of the population of the agents is driven by an evolutionary algorithm known as genetic programming.

  \mathbf{P}^e_t \;\xrightarrow{\text{GP}}\; \mathbf{P}^e_{t+1} \;\xrightarrow{\text{GP}}\; \mathbf{P}^e_{t+2} \;\xrightarrow{\text{GP}}\; \cdots

Cobweb Model: Parameters

Agent-Based Simulation of the Cobweb Model
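The original slide shows simulation figures that are not reproduced here. As a substitute, here is a minimal Python sketch of the bottom-up price formation with heterogeneous forecasts and a simple imitate-and-mutate social-learning step. This is a deliberately simplified stand-in for the genetic programming actually used by Chen and Yeh (1996), and all parameter values are illustrative.

    import random

    A, B, x, y, n = 10.0, 1.0, 1.0, 2.0, 20        # illustrative market parameters
    P_star = (A * y + B * x) / (y + B)             # HREE price

    forecasts = [random.uniform(1.0, 10.0) for _ in range(n)]   # heterogeneous P^e_{i,t}
    price_history = []

    for t in range(200):
        # Each firm supplies q_{i,t} = (P^e_{i,t} - x) / (y n); the demand side sets P = A - B * total supply
        supply = sum((pe - x) / (y * n) for pe in forecasts)
        price = A - B * supply
        price_history.append(price)

        # Social learning (simplified): every agent copies the forecast closest to the realized price, then mutates
        errors = [abs(pe - price) for pe in forecasts]
        best = forecasts[errors.index(min(errors))]
        forecasts = [best + random.gauss(0, 0.1) for _ in range(n)]

    print(price_history[-1], P_star)

With these illustrative parameters the last simulated price settles near the HREE price, which is the qualitative result the agent-based cobweb studies report.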

Chen and Yeh (2000): Firms Adapting to Structural Change

Overlapping Generations (OLG) Models

 The OLG model has been extensively applied to studies of savings, bequests, demand for assets, prices of assets, inflation, business cycles, economic growth, and the effects of taxes, social security, and budget deficits.

2-Period OLG Model

 It consists of overlapping generations of two period-lived agents.

• At time t, N young agents are born.

• Each of them lives for two periods (t, t+1).

• Each agent is endowed with e^1 units of a perishable consumption good at time t and e^2 units at time t+1, with e^1 > e^2 \geq 0.

• An agent born at time t consumes in both periods: (c^1_t, c^2_t).

• All agents have identical preferences given by

  U(c^1_t, c^2_t) = \ln(c^1_t) + \ln(c^2_t)

2-Period OLG Model

• In addition to the perishable consumption good, there is an asset called money, which is held distributively by the old generation.

  H_t: the nominal money supply at time t, \qquad h_t = \frac{H_t}{N}

Optimization

\max_{c^1_{i,t},\, c^2_{i,t}} \; \ln(c^1_{i,t}) + \ln(c^2_{i,t})

\text{s.t.} \quad c^1_{i,t} + \frac{m_{i,t}}{P_t} \leq e^1, \qquad c^2_{i,t} \leq e^2 + \frac{m_{i,t}}{P_{t+1}}

F.O.C.: \quad c^1_{i,t} = \frac{1}{2}\left(e^1 + e^2\, \pi^e_{i,t+1}\right)

m_{i,t}: the nominal money balances that agent i acquires in period t and spends in period t+1.
P_t: the nominal price level at time t.
\pi_{t+1}: the gross inflation rate, \pi_{t+1} = \frac{P_{t+1}}{P_t}.
s_{i,t}: the saving of agent i at time t, s_{i,t} = e^1 - c^1_{i,t}.
G: government spending.

Price Dynamics

m_{i,t} = s_{i,t} P_t, \qquad \sum_{i=1}^{N} m_{i,t} = H_t, \qquad H_t = H_{t-1} + G P_t

\Rightarrow \quad \sum_{i=1}^{N} s_{i,t} P_t = \sum_{i=1}^{N} s_{i,t-1} P_{t-1} + G P_t

\Rightarrow \quad \pi_t = \frac{P_t}{P_{t-1}} = \frac{\sum_{i=1}^{N} s_{i,t-1}}{\sum_{i=1}^{N} s_{i,t} - G}

Steady State Equilibria

\pi^e_{i,t} = \pi_t \quad \forall i \quad \text{(perfect foresight)}

\pi_{t+1} = \pi_t = \pi \quad \text{(steady state)}, \qquad g = \frac{G}{N}

With savings s(\pi) = \frac{1}{2}(e^1 - \pi e^2), the steady-state condition \pi = \frac{s(\pi)}{s(\pi) - g} reduces to

\pi^2 - \left(1 + \frac{e^1}{e^2} - \frac{2g}{e^2}\right)\pi + \frac{e^1}{e^2} = 0,

which has a low-inflation root \pi^*_L and a high-inflation root \pi^*_H.

Multiple Equilibria

\pi^*_L = \frac{1}{2}\left[\left(1 + \frac{e^1}{e^2} - \frac{2g}{e^2}\right) - \sqrt{\left(1 + \frac{e^1}{e^2} - \frac{2g}{e^2}\right)^2 - 4\,\frac{e^1}{e^2}}\,\right]

\pi^*_H = \frac{1}{2}\left[\left(1 + \frac{e^1}{e^2} - \frac{2g}{e^2}\right) + \sqrt{\left(1 + \frac{e^1}{e^2} - \frac{2g}{e^2}\right)^2 - 4\,\frac{e^1}{e^2}}\,\right]

OLG Experiments

• The two equilibria of the 2-period OLG model raise the issue of equilibrium selection.

• This is particularly interesting because the two equilibria, high inflation vs. low inflation, do have different welfare implications.

• To address this problem, experiments with human subjects have been run to see which equilibrium is more likely to appear (Marimon and Sunder, 1993, 1994; Bernasconi and Kirchkamp, 2000). The results consistently show that the low-inflation equilibrium is selected.

OLG Experiments

The figure is borrowed from Arifovic (1995).

Agent-Based OLG Model

• Can agent-based models replicate these experimental results?

• The answer, to some extent, is yes.

• The general idea is very similar to the one in the agent-based cobweb model.

• The homogeneous rational expectations (perfect foresight) assumption is replaced with boundedly rational agents holding heterogeneous beliefs in terms of consumption (Arifovic, 1995) or price expectations (Bullard and Duffy, 1999; Chen and Yeh, 1999).

• This population of heterogeneous beliefs is then revised either via individual learning or social learning.

Chen and Yeh (1999)

 Take Chen and Yeh (1999) as an example.

• Agents are initially endowed with arbitrary inflation expectation functions:

  \pi^e_{i,t} = f_{i,t}(\pi_{t-1}, \pi_{t-2}, \ldots)

Parameters of the OLG Models

Inflation Bottom Up

  (\pi^e_{1,t+1}, \pi^e_{2,t+1}, \ldots, \pi^e_{N,t+1}) \;\longrightarrow\; (s_{1,t}, s_{2,t}, \ldots, s_{N,t}) \;\longrightarrow\; P_t,\; \pi_t

  (\pi^e_{1,t+2}, \pi^e_{2,t+2}, \ldots, \pi^e_{N,t+2}) \;\longrightarrow\; (s_{1,t+1}, s_{2,t+1}, \ldots, s_{N,t+1}) \;\longrightarrow\; P_{t+1},\; \pi_{t+1}

  \ldots

Individual inflation expectations determine individual savings, which in turn determine the price level and the realized inflation rate from the bottom up. Forecasting errors then serve as feedback that triggers further review and revision of the expectation functions.

Inflation Expectations Dynamics

  \Pi^e_{t+1} = (\pi^e_{1,t+1}, \pi^e_{2,t+1}, \ldots, \pi^e_{N,t+1})'

  \Pi^e_{t+1} \;\xrightarrow{\text{GP}}\; \Pi^e_{t+2} \;\xrightarrow{\text{GP}}\; \Pi^e_{t+3} \;\xrightarrow{\text{GP}}\; \cdots

After each period, the realized inflation rate \pi_t is compared with the expectations \Pi^e_t, and genetic programming revises the population of expectation functions accordingly.

Equilibrium Selection

Three-Stage Development

   Duffy J (2006) Agent-based models and human subject experiments. In: Tesfatsion L, Judd K (eds), Handbook of computational economics: Agent-based computational economics, Vol. 2. Elsevier, Oxford, UK, 949-1011.

 Tesfatsion, L (2009) From Human-Subject Experiments to Computational-Agent Experiments (And Everything In Between), Keynote Speech at 2009 International Meetings of the Economic Science Association,

www.econ.iastate.edu/tesfatsi/ESA2009.LT.pdf

Chen, S.-H. (2009), "Collaborative Computational Intelligence in Economics," in C.L. Mumford and L.C. Jain (eds.), Computational Intelligence: Collaboration, Fusion and Emergence, Intelligent Systems Reference Library, Vol. 1, Chapter 8, Springer.

Mirroring: A Statistical Criterion or an AI Criterion?

• Presumably, software agents are naturally related to human agents, since the former are frequently designed to replicate or mirror the latter to different degrees of precision.

• Arthur's "calibrating artificial agent," or his version of the Turing Test, probably provides the earliest guideline for this mirroring function.

• The statistical criterion has by now been taken into account by agent-based economic model builders, whereas not many follow the AI criterion.

Arthur (1993): Calibrating Artificial Agents

 An important question then for economics is how to construct economic models that are based on an actual human rationality that is bounded or limited. As an ideal, we would want to build our economic models around theoretical agents whose rationality is bounded in exactly the same way human rationality is bounded, and whose decision-making behavior matches or replicates real human decision-making behavior. (Arthur, 1993, p. 2.)

Calibrating Artificial Agents

 To build the software agents upon empirical grounds, he further suggested a statistical approach to design software agents, i.e.,  first, to parameterize software agents in terms of their decision algorithms, and,  second, to calibrate them.

Universal Decision Algorithms?

• The next important question is the choice of the parametric software agents, or the choice of parametric decision algorithms. Although Arthur did suggest using those learning algorithms already widely used in economics, he did not believe in the existence of a universal economic agent, characterized by a universal decision algorithm applicable to all economic problems.

• Instead, decision behavior is context-dependent; it can differ considerably from one decision problem to another.

Example: Reinforcement Learning in the N-Armed Bandit Problem (Arthur, 1993)

N-Armed Bandit Problem

• In the N-armed bandit problem, the agent is provided with a set of N alternatives from which he has to choose one.

• The consequence of choosing the n-th alternative is a random payoff r_n, n = 1, 2, \ldots, N.

• The stochastic structure of r_n is unknown to the agent.

N-Armed Bandit Problem

• The N-armed bandit problem is formally introduced as follows:

  \max_{\{d_1, d_2, \ldots, d_H\}} \; E\left[\sum_{i=1}^{H} r_{d_i}\right], \qquad d_i \in \{1, 2, \ldots, N\}

  r_n \overset{iid}{\sim} f(r_n), \qquad E(r_n) = u_n, \qquad n = 1, 2, \ldots, N

Robillard’s Experiment

 Conducted by Laval Robillard at Harvard in 1952-53 (Bush and Mosteller, 1955)  2-armed bandit problem

• Conducted by Laval Robillard at Harvard in 1952-53 (Bush and Mosteller, 1955)
• A 2-armed bandit problem:

  Option A: pays 1 with probability f_A, 0 with probability 1 - f_A
  Option B: pays 1 with probability f_B, 0 with probability 1 - f_B

  Experimental design: (f_A, f_B)

Robillard’s Results

Table is from Arthur (1993)

Reinforcement Learning

• (Law of Effect) The essence of reinforcement learning is very simple: choices that have led to good outcomes in the past are more likely to be repeated in the future.

• It is a psychologically motivated learning algorithm (from behavioral psychology, behaviorism, and animal experiments), and it is very popular in agent-based modeling, roughly as popular as evolutionary algorithms.

• There are several different versions of reinforcement learning (RL).

• Arthur's 2-parameter version mainly focuses on the speed of learning (Arthur, 1993).

• Roth and Erev's 3-, 4- or 5-parameter versions (Roth and Erev, 1995; Erev and Roth, 1998) further extend it to cover several different psychological or cognitive considerations, such as memory, attention, and aspiration (reference point).

Arthur’s 2-Parameter RL

Stochastic choice:

  p_n(t) = \frac{q_n(t)}{Q(t)}, \quad n = 1, 2, \ldots, N, \qquad Q(t) = \sum_{n=1}^{N} q_n(t)

  q_n(t): the strength of alternative n at time t

Updating (reinforcement of the chosen alternative by its payoff \beta(t)):

  q_n(t+1) = q_n(t) + \beta(t) \text{ if } n \text{ is activated at time } t; \qquad q_n(t+1) = q_n(t) \text{ otherwise}

Normalization:

  Q(t) = C t^{\nu} \quad (\text{if } \nu = 0, \; Q(t) = C, \text{ a constant})

  q_n^{norm}(t) = q_n(t)\,\frac{C t^{\nu}}{Q(t)}

  (Arthur's calibration: C = 31, \; \nu = 0)

Fitting Performance

Arthur (1993): Turing Test

 What would it mean to calibrate a behavioral algorithm? In designing an algorithm to represent human behavior in a particular context, we would be interested not only in reproducing statistically the characteristics of human choice , but also in reproducing the "style" in which humans choose, possibly even the ways in which they might depart from perfect rationality. The ideal would be algorithmic behavior that could pass the Turing test of being indistinguishable from human behavior with its foibles, departures and errors, to an observer who was not informed whether the behavior was algorithm generated or human-generated (Turing 1956). Calibration ought not to be merely a matter of fitting parameters, but also one of building human-like qualitative behavior into the algorithm specification itself. (Arthur, 1993, p. 3)

Turing Test

• Of course, the AI criterion is much broader than the statistical one, and its implementation may be harder; hence, it has drawn much less attention from ACE economists.

• Ecemis, Bonabeau, and Ashburn (2005) and Arifovic, McKelvey, and Pevnitskaya (2006) are the only examples known to us.

Market Frenzy

• Ecemis, Bonabeau, and Ashburn (2005) used a technique called interactive evolutionary computation to mimic a market frenzy that occurred on the London Stock Exchange in September 2002.

• The event began at 10:10 am, and within 5 minutes the FTSE 100 index rose from 3,860 to 4,060. Within another few minutes, the index fell to 3,755, before returning to a value slightly above its original level at the end of the 20 minutes.

• A "qualitative match" includes matching the amplitude, period, phase, and damping rate of the approximate wave, and of course the size, shape, and location of the price history.

Competing Interaction

 In addition to mirroring the behavior of human agents, software agents are also used directly to interact with human agents (Chen and Tai, 2005).

U-Mart

• The agent-based financial system U-MART provides one illustration (Shiozawa, Nakajima, Matsui, Koyama, Taniguchi, and Hashimoto, 2006).

• U-MART stands for Unreal Market as an Artificial Research Test bed.

• U-MART enables us to address two basic questions in this kind of integrated system:
  • Can human agents compete with the software agents when they are placed together in the market?
  • Can the participation of software agents lead to different dynamics, such as price convergence, market efficiency, etc.?

Agent-Based Electronic Market

   Grossklags and Schmidt (2006) studied whether market efficiency can be enhanced when software agents are introduced to the markets which were originally composed solely of human agents.

They designed a continuous double auction market in the style of the Iowa electronic market, and introduced software agents with a passive arbitrage seeking strategy to the market experiment with human agents.

They then went further to distinguish the case where human agents are informed of the presence of software agents from the case where human agents are not informed of this presence.

Agent-Based Electronic Market

   Whether or not the human agents are well informed of the presence of the software agents can have significant impacts upon market efficiency (in the form of price deviations from the fundamental price). They found that if human agents are well informed, then the presence of software agents triggers more efficient market prices when compared to the baseline treatment without software agents. Otherwise, the introduction of software agents results in lower market efficiency.

Agent-Based Double Auction Markets

   

• The double auction (DA) market is probably the most illuminating illustration of the connection between agent-based computational economics and experimental economics.

• Having said that, we notice that the DA is the context in which various versions of agents, crossing both the realm of EE and that of ACE, have been proposed.

• It further serves as a backbone for the more sophisticated development of agent-based models involving the elements of trading, such as the agent-based fish market (Kirman and Vriend, 2001), the agent-based power market (Weidlich, 2008), and agent-based financial markets.

• So, it is a kind of connection from the past to the future, from the basic to the advanced. Therefore, it will be nice to have a quick review of the development of the agent-based double auction market.

Agent-Based Double Auction Markets

• Experiments with Human Subjects

• Gode and Sunder (1993)

• Rust, Miller and Palmer (1993, 1994)

• Andrews and Prager (1994)

• Chen, Chie and Tai (2002, 2003)

Double Auction Market

• Both sides of the market, buyers and sellers, are able to submit prices (bids or asks).

• The bids and asks are then matched under different trading rules.

• Most rules will match the most competitive bids (the highest few bids) with the most competitive asks (the lowest few asks).

• (AURORA rule) In an extreme case (one unit per transaction), only the holders of the highest bid (the current bid) and of the lowest ask (the current ask) can be matched, of course under the condition that the former is greater than the latter.

• The transaction price will then be somewhere between the current bid and the current ask, say, the midpoint of the two. A minimal sketch of this matching step follows below.
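A minimal Python sketch of one AURORA-style trading step: only the current bid and the current ask can trade, and here they trade at the midpoint when they cross. The quotes used in the example are illustrative.

    def da_step(bids, asks):
        """One double-auction step: match the current bid and current ask if they cross."""
        if not bids or not asks:
            return None
        current_bid, current_ask = max(bids), min(asks)
        if current_bid >= current_ask:
            price = (current_bid + current_ask) / 2      # transaction at the midpoint
            bids.remove(current_bid)
            asks.remove(current_ask)
            return price
        return None                                      # no trade this step

    print(da_step([3.1, 2.8, 3.5], [3.2, 3.4, 3.9]))     # highest bid 3.5 meets lowest ask 3.2 -> 3.35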

Four Kinds of Agents

• Zero-Intelligence Agents (Gode and Sunder, 1993)
• Programmed Agents (Rust et al., 1993, 1994)
• Calibrated Agents (Arthur, 1991, 1993; Chen and Hsieh, 2009)
• Autonomous Agents (Andrews and Prager, 1994; Dawid, 1999; Chen, 2000)

Double Auction Experiments with Human Subjects

[Diagram: buyers 1..N1 and sellers 1..N2 trade in a DA market. A token-value generation process produces the token-value table, which defines the demand and supply curves (market structure, curves' shapes), the competitive equilibria, and the total surplus (consumers' plus producers' surplus). Each trading period consists of an S-step loop of bid-and-ask (AURORA) buy-and-sell trading steps, yielding the actual prices and the actual surplus.]

Double Auction Experiments

• To place a real double auction environment into a laboratory, one needs to create the "incentive" for market participants.

• This is normally done through a token-value generating mechanism.

Token-Value Generation

Gode and Sunder (1993)

• There are two research questions in agent-based double auction markets.

• The first one is: to achieve the degree of market efficiency observed in market experiments with human subjects, what is the minimum degree of intelligence required of our artificial agents? In other words, if we want to replace the human agents in the market experiment with software agents, how smart do these software agents have to be?

• Gode and Sunder (1993)'s "zero-intelligence agent" is mainly an answer to this question.

Gode and Sunder (1993)

[Same DA market diagram as above, with the human buyers and sellers replaced by random (zero-intelligence) bidders.]

Intelligence-Irrelevance Hypothesis

• The zero-intelligence (ZI) agent is a randomly behaving agent, who is not purposive and is unable to learn.

• To trade, ZI agents simply bid (ask) randomly, but are constrained by their true reservation prices (zero-profit prices); a minimal sketch follows below.

• Gode and Sunder showed that the market efficiency generated by a group of ZI agents can match, and sometimes even exceed, what is observed in human-subject experiments.

• Therefore, their work, to some extent, verified the long-held "intelligence-irrelevance hypothesis" in double auction market experiments.
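A minimal Python sketch of how a budget-constrained zero-intelligence (ZI-C) trader quotes: a buyer bids uniformly below its token value and a seller asks uniformly above its unit cost, so neither can trade at a loss. The price floor and ceiling are illustrative assumptions.

    import random

    def zi_bid(token_value, price_floor=0.0):
        """ZI-C buyer: a random bid no higher than the buyer's reservation (token) value."""
        return random.uniform(price_floor, token_value)

    def zi_ask(unit_cost, price_ceiling=10.0):
        """ZI-C seller: a random ask no lower than the seller's cost."""
        return random.uniform(unit_cost, price_ceiling)

    print(zi_bid(token_value=4.0), zi_ask(unit_cost=2.5))

Feeding such random but constrained quotes into the matching step sketched earlier is all that is needed to reproduce the high allocative efficiency that Gode and Sunder report.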

ZI Agents Replicate Human-Subject Experiments

Extensions

• Cliff (1997) showed that Gode and Sunder's ZI agents work only for symmetric markets, not for asymmetric markets.

• Cliff (1997) and Cliff and Bruten (1997) then argued that software agents need to be smarter to match human-subject experiments.

• They therefore added a little learning capability to the ZI agent, producing what they called the ZI-Plus, or ZIP, agent.

• Nevertheless, the ZI agent has now been extensively used in agent-based economic and financial models (see Ladley, 2009 for a survey).

• The virtue of the ZI agent is its simplicity and, therefore, its analytical tractability. Hence, it serves as a benchmark for ACE.

Santa Fe Double Auction Tournament

• The weakness of the ZI agents is that they are not purposive, but human agents are.

• Hence, they help us little in seeing the market dynamics from a game-theoretic viewpoint, be it static or evolutionary.

• The second research question of the agent-based double auction market is exactly about how to be a winner.

• An inquiry into an effective characterization of the ``optimal'' trading strategies used in the double auction market has led to a series of tournaments, known as the Santa Fe Double Auction Tournament.

• This tournament, organized by the Santa Fe Institute, invited participants to submit trading strategies (programs) and tested their performance relative to the other submitted programs in the Santa Fe Token Exchange, an artificial market operated by the double auction mechanism.

• They received 25 submissions, and the best-performing strategy is called the Kaplan strategy (a background-player strategy).

Rust, Miller and Palmer (1993, 1994)

[Same DA market diagram as above, with the buyers and sellers replaced by the submitted trading programs.]

Tournament vs. Experiment: Off Line vs On-Line

• The idea of using a tournament as a form of agent-based modeling originated from Axelrod (1984)'s Iterated Prisoner's Dilemma tournament.

• Software agents in the SFI-DA model are programmed agents, but they are hand-written by humans, and hence they can also be considered stand-ins for human agents. However, the off-line setup of this tournament meant that the participants were unable to revise their programs once submitted, and hence their incarnations were unable to learn as their authors might have.

Behavioral Economics

• Behavioral economists care about what people actually do, and why, instead of what people ought to do in light of pure logic.

• Behavioral economists are not satisfied with the act-as-if methodology, because it hides the real process by which the actual decision is made.

• This tendency drives them to learn more about the "hardware" in which the agent's decision is made, such as cognitive capacity, personality, cultural background, and even neurophysiological details. Inevitably, this leads to an interdisciplinary study overarching economics, psychology, and neuroscience.

ACE with Homo Sapiens

• Obviously, over the last decade, we have seen a fast development of so-called behavioral experiments, which take cognitive capacity, intelligence, personality, emotion, risk attitude, and culture as control attributes of experiments with human subjects. Needless to say, this development will soon be further rooted down to the brain and the underlying DNA. While ACE is very sympathetic to the idea of bounded rationality, most ACE models developed so far have not taken this development into account, although they have the potential to do so.

• Given the relation between ACE and EE discussed earlier, it is to be expected that ACE will develop various autonomous agents or software agents such that these cognitive, personality, and cultural backgrounds can be incorporated. This is certainly an important step toward successful models of Homo Sapiens.

Example 5: Agent-Based Lottery Markets (Chen and Chie, 2008)

Agent-Based Lottery Markets

• The lottery market is an area where gambling psychology plays quite an active role.

• Chen and Chie (2008) develop an agent-based lottery market whose autonomous agents are grounded largely in this gambling psychology.

• Their artificial agents have the potential to develop three psychological characteristics:
  • Halo effect (lottomania)
  • Conscious selection (neglect of probability)
  • Regret aversion (interdependent preference)

Agent-Based Lottery Markets

• What distinguishes their model from typical behavioral models is that these three characteristics are not imposed exogenously, but only have the possibility of emerging as an evolutionary outcome.

• This exemplifies how ACE can work with behavioral economics, make the implicit selection process explicit, and provide a stability test for these behavioral patterns.

• This model is also used to answer the question of the optimal lottery tax rate.

Fuzzy Inference System

• Fuzzy rules (Sugeno style):

  If J^r_t is A_i, then a_i.

  A_i (i = 1, \ldots, k): fuzzy sets
  a_i (i = 1, \ldots, k): proportion of income to bet
  J^r_t: the jackpot size updated on day r of the t-th issue
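A minimal Python sketch of a zero-order Sugeno-style inference for such a betting rule: the memberships of the jackpot in each fuzzy set weight the rule outputs a_i, and the bet proportion is their weighted average. The membership functions and outputs below are illustrative assumptions, not the ones used by Chen and Chie (2008).

    def tri(x, a, b, c):
        """Triangular membership function with peak at b and support [a, c]."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # Illustrative rule base: if the jackpot J is SMALL/MEDIUM/LARGE, bet a_i of income
    rules = [
        (lambda J: tri(J, 0, 0, 50),     0.01),   # SMALL  -> bet 1% of income
        (lambda J: tri(J, 25, 50, 75),   0.05),   # MEDIUM -> bet 5%
        (lambda J: tri(J, 50, 100, 100), 0.15),   # LARGE  -> bet 15%
    ]

    def bet_proportion(J):
        """Sugeno inference: weighted average of the rule outputs by membership degrees."""
        weights = [mu(J) for mu, _ in rules]
        total = sum(weights)
        return sum(w * a for w, (_, a) in zip(weights, rules)) / total if total else 0.0

    print(bet_proportion(60.0))   # partly MEDIUM, partly LARGE -> an intermediate bet share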

Autonomous Agents with Three Psychological Characteristics

Evidence of Weaker Conscious Selection

Where is the optimal?

Evolutionary Economics

• Economics is about change, and that has been very clearly stated in Alfred Marshall's famous quotation:

• "Economics, like biology, deals with a matter, of which the inner nature and constitution, as well as outer form, are constantly changing." (Marshall, 1924, p. 772)

• ACE has been considered the ``modern,'' computerized evolutionary economics, taking up this legacy of Alfred Marshall (Tesfatsion, 2001).

ACE and the Legacy of Marshall

• In terms of the legacy of Marshall, one unique feature of the ACE model is its capability of modeling intrinsic, constant change.

• (Co-evolution) One essential ingredient for triggering constant change is to equip agents with a novelty-discovering or chance-discovering capability so that they may constantly exploit the surrounding environment, which causes the surrounding environment to act or react and hence to change constantly.

• If economics is about constant change, and that happens because autonomous agents keep searching for chances and novelties, then the change in each individual and the change in the microstructure must accompany the holistic picture of constant change (see the N-type agent-based financial models for an example).

• We have already encountered novelty-discovering agents in the agent-based double auction markets; we shall now focus more on microstructure dynamics, which concerns heterogeneities and their dynamics.

Kampouridis, Chen, and Tsang (2009)

Microstructure Dynamics

Heterogeneity

• Heterogeneity is another feature of ACE.

• Due to its computational flexibility, ACE can accommodate different kinds of heterogeneity to different degrees.

• Some kinds of heterogeneity are given, such as genetic material, DNA, intelligence, personality traits, preferences, endowments, and cultures, while some are endogenously determined, such as income, wealth, firm size (Robert Axtell), market share, life expectancy, expectations, and beliefs. Some are partially exogenous, but still may have the potential to change over time, such as personality, culture, etc.

• The degree of heterogeneity (its distribution) can change with the evolution, and new species which did not exist before can appear.

• For exogenously given heterogeneities, ACE can study their long-term effects (survival analysis); for endogenously determined heterogeneities, ACE can study their causes and predict their future.

Approaches to Microstructure Dynamics and Heterogeneity

• Microstructure dynamics can be studied in light of the statistical-mechanics approach, also called the mesoscopic approach (Aoki, 1996, 2002, 2006).

• This approach actually connects ACE to physics, or sociophysics, or econophysics.

• With this approach, the set of behaviors or strategies used to characterize agents is finite or bounded. A finite set does allow us to study the microstructure dynamics on a solid ground, but it inevitably implies the absence of novelties and of their discovery processes.

• Hence, alternative approaches exist to extend the analysis of the microstructure dynamics to an infinite set, so that rich microstructure dynamics are embedded within novelty-discovering processes.

Example 6: Agent-Based Financial Markets (Santa Fe)

Santa Fe Artificial Stock Markets: Origin

• Origin: Brian Arthur at the Santa Fe Institute (SFI)

• Arthur, B. (1992), "On Learning and Adaptation in the Economy," Santa Fe Institute Working Paper 92-07-038.

• Palmer, R. G., W. B. Arthur, J. H. Holland, B. LeBaron, and P. Tayler (1994), "Artificial Economic Life: A Simple Model of a Stockmarket," Physica D, 75, pp. 264-274.

• Tayler, P. (1995), "Modelling Artificial Stock Markets Using Genetic Algorithms," in S. Goonatilake and P. Treleaven (eds.), Intelligent Systems for Finance and Business, pp. 271-288.

Artificial Stock Markets: Further Development

• Arthur, W. B., J. Holland, B. LeBaron, R. Palmer, and P. Tayler (1997), "Asset Pricing under Endogenous Expectations in an Artificial Stock Market," in W. B. Arthur, S. Durlauf, and D. Lane (eds.), The Economy as an Evolving Complex System II, Addison-Wesley, pp. 15-44.

• LeBaron, B., W. B. Arthur, and R. Palmer (1999), "Time Series Properties of an Artificial Stock Market," Journal of Economic Dynamics and Control.

• LeBaron, B. (1999), "Building Financial Markets with Artificial Agents: Desired Goals and Present Techniques," in G. Karakoulas (ed.), Computational Markets, MIT Press.

• LeBaron, B. (2001), "Evolution and Time Horizons in an Agent-Based Stock Market," Macroeconomic Dynamics, 5, pp. 225-254.

Santa Fe Institute Artificial Stock Markets

• Two assets: one risky and one riskless.
• Dividends and interest are exogenously given. The dividend follows a stochastic process, and the interest rate is exogenously fixed.

• Agents share a common CARA utility function and myopically maximize the expected utility of their next-period wealth by finding the optimal portfolio.

Utility Function

\max \; U(W_{i,t}) = -\exp(-\lambda W_{i,t})

\lambda: degree of risk aversion

W_{i,t} = M_{i,t} + P_t h_{i,t}

M_{i,t}: money held by agent i at time t
h_{i,t}: shares held by agent i at time t

W_{i,t+1} = (1 + r) M_{i,t} + h_{i,t} (P_{t+1} + D_{t+1})

Optimal Investment

\max \; E_{i,t}\big(U(W_{i,t+1})\big) = E\big(-\exp(-\lambda W_{i,t+1}) \,\big|\, I_{i,t}\big)

\text{s.t.} \quad W_{i,t+1} = (1 + r) M_{i,t} + h_{i,t} (P_{t+1} + D_{t+1})

D_t \sim \text{Gaussian}(\bar{d}, \sigma_d^2)

h^*_{i,t} = \frac{E_{i,t}(P_{t+1} + D_{t+1}) - (1 + r) P_t}{\lambda \sigma^2_{i,t}}

Homogeneous Rational Expectations Equilibrium

Market equilibrium:

  \sum_{i=1}^{N} h^*_{i,t}(P_t) = \sum_{i=1}^{N} \frac{E_{i,t}(P_{t+1} + D_{t+1}) - (1 + r) P_t}{\lambda \sigma^2_{i,t}} = H

Homogeneous expectations:

  \sum_{i=1}^{N} h^*_{i,t}(P_t) = N\,\frac{E_t(P_{t+1} + D_{t+1}) - (1 + r) P_t}{\lambda \sigma^2_t} = H

Homogeneous rational expectations equilibrium:

  P_t = \frac{1}{r}\left[\bar{d} - \lambda \sigma_d^2 \left(\frac{H}{N}\right)\right] = \frac{1}{r}\left[\bar{d} - \lambda \sigma_d^2\, \bar{h}\right]
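A minimal numerical sketch of the demand and pricing formulas above; all parameter values (risk aversion, dividend process, interest rate, shares per capita) are illustrative.

    lam, r = 0.5, 0.05            # risk aversion and risk-free rate (illustrative)
    d_bar, var_d = 1.0, 0.2       # dividend mean and variance
    h_bar = 1.0                   # shares per capita, H/N

    # Homogeneous rational expectations equilibrium price: P = (d_bar - lam*var_d*h_bar)/r
    P_ree = (d_bar - lam * var_d * h_bar) / r
    print("HREE price:", P_ree)

    def optimal_demand(expected_payoff, var_forecast, price):
        """CARA-Gaussian demand: h* = [E(P_{t+1}+D_{t+1}) - (1+r)P_t] / (lam * sigma^2)."""
        return (expected_payoff - (1 + r) * price) / (lam * var_forecast)

    # At the HREE price with correct beliefs, each agent demands exactly h_bar shares
    expected_payoff = P_ree + d_bar                  # E(P_{t+1} + D_{t+1}) at the REE
    print(optimal_demand(expected_payoff, var_d, P_ree))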

Heterogeneous Expectations

• The main departure of the SFI agent-based financial model is to assume that information is imperfect, e.g., the stochastic nature of dividends is unknown to agents, which makes homogeneous expectations hard to realize.

• They then build models of heterogeneous expectations in the spirit of bounded rationality.
• The focus is on how agents form their expectations:

  E_{i,t}(P_{t+1} + D_{t+1})

Classifier System + Genetic Algorithms

  E_{i,t}(P_{t+1}) = \alpha_{i,t} + \beta_{i,t} P_t

  E_{i,t}(P_{t+1}) =
    \alpha^{(1)}_{i,t} + \beta^{(1)}_{i,t} P_t, \;\text{ if } S_t \in A^{(1)}_{i,t}
    \alpha^{(2)}_{i,t} + \beta^{(2)}_{i,t} P_t, \;\text{ if } S_t \in A^{(2)}_{i,t}
    \alpha^{(3)}_{i,t} + \beta^{(3)}_{i,t} P_t, \;\text{ if } S_t \in A^{(3)}_{i,t}
    \quad \vdots
    \alpha^{(k)}_{i,t} + \beta^{(k)}_{i,t} P_t, \;\text{ if } S_t \in A^{(k)}_{i,t}
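A minimal Python sketch of such a condition-dependent linear forecast: each rule carries a condition on the market state, and the matching rule supplies the (alpha, beta) pair used to forecast. The conditions and coefficients here are illustrative assumptions, and the genetic-algorithm evolution of the rules (as well as the SFI market's fitness-based rule selection) is omitted.

    from typing import Callable, List, Tuple

    # Each rule: (condition on the market state, alpha, beta)
    Rule = Tuple[Callable[[dict], bool], float, float]

    rules: List[Rule] = [
        (lambda s: s["price"] / s["fundamental"] > 1.1, 0.0, 1.02),   # overpriced -> extrapolate
        (lambda s: s["price"] / s["fundamental"] < 0.9, 2.0, 0.90),   # underpriced -> revert
        (lambda s: True,                                 0.5, 1.00),  # default rule
    ]

    def forecast(state: dict) -> float:
        """E_{i,t}(P_{t+1}) = alpha + beta * P_t, using the first rule whose condition matches."""
        for cond, alpha, beta in rules:
            if cond(state):
                return alpha + beta * state["price"]
        raise ValueError("no matching rule")

    print(forecast({"price": 110.0, "fundamental": 95.0}))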

SFI’s Artificial Adaptive Agents

• They model each adaptive agent with a classifier system, which is evolved over time using genetic algorithms.
• Each constituent of the classifier system is a linear regression; given the condition it has to meet before activation, it is more like a local linear regression.
• The entire classifier system is, therefore, a non-linear forecasting function composed of many local linear forecasts.
• The set of conditions (thresholds) and the regression coefficients are all open to change as agents learn from their experiences.

SFI agent-based financial markets

• The SFI artificial stock market has been used to understand when the market will behave close to what the fundamental equilibrium predicts, and when it will deviate from it, by manipulating various parameters of the model, such as the speed of learning, the time horizon of learning, etc.

• It has also been used to explain some financial stylized facts (we will say more on this later).

• A comprehensive review of the SFI stock market can be found in Ehrentreich (2007).

• In terms of modeling autonomous agents, there are a number of further variations. The two which are most in the vein of the SFI model are Tay and Linn (2001) and Chen and Yeh (2001).

SFI Agent-Based Financial Markets

• One essential advantage of the ACE model is that it can implement survival analysis in an evolutionary context (Axtell, 2004; LeBaron, 2006).

• Within this context, one can test various behavioral hypotheses proposed by either neoclassical economists or behavioral economists.

• There are two test forms: a weak one and a strong one. For the former, the hypothesis is directly planted into the model, and we watch its survivability; for the latter, the hypothesis is tested as an emergent property.

• Some examples are:
  • Chen and Yeh (2002) on the martingale believers
  • Chen and Huang (2008) on the risk-preference-irrelevance hypothesis
  • Chen and Chie (2008) on the neo-classical gamblers

Variations of SFI

• Tay and Linn (2001) apply a fuzzy classifier system instead of the original crisp classifier system proposed by the SFI.

• Using fuzzy logic, Tay and Linn (2001) are able to model linguistic decision rules such as:

  E_{i,t}(P_{t+1}) = \alpha_{i,t} + \beta_{i,t} P_t

  "If the price/fundamental value is high, then both \alpha and \beta should be high."

Variations of SFI

• Chen and Yeh (2001) apply genetic programming to give a general non-linear and nonparametric expectation formation, which leaves the size and shape of the forecasting function to be determined through evolution:

  E_{i,t}(P_{t+1}) = f(P_t, P_{t-1}, \ldots, V_t, V_{t-1}, \ldots)

  V_t: trading volume at time t

• Using genetic programming, they are able to measure the complexity of the rules used by the agents, and watch the dynamics of that complexity.

• Using genetic programming, they can also observe what kinds of variables agents use in forecasting the future price, through which they provide an alternative test of the sunspot equilibrium (Chen, Liao and Chou, 2008).

• Furthermore, in the vein of evolution, their model can also be used to illustrate the dinosaur hypothesis originally proposed by Arthur (1992).

Chen and Yeh (2001): Dynamics of Complexity

Chen, Liao and Chou (2008)

Dinosaur Hypothesis (Chen and Yeh, 2001)

Survival of Martingale Believers

Neo-Classical Gamblers

ACE and Econometrics

• Earlier, we mentioned the relation between ACE and EE with specific reference to the mirroring function (recall the calibration work done by Arthur on the reinforcement learning model).

• This goal generally applies to other ACE models which deal directly with field data instead of experimental data.

• While there are still many ACE researchers who consider their models mainly as thought experiments, there is an increasing interest in building ``empirically based, agent-based models'' (Janssen and Ostrom, 2006).

• This requires ACE researchers to validate their models with real data, and has further developed ACE into econometric models which may be estimated by standard econometrics or by other, less standard, estimation approaches.

• Perhaps the most mature area in which to see the connection between ACE and econometrics is, once again, agent-based financial models.

Example 6:

Agent-Based Financial Markets

(Chen, Chang and Du, 2009)

ACE and Econometrics

 Chen, Chang and Du (2009) present the development of agent-based computational economics in light of its relation to econometrics.

 They propose a three-stage development and illustrate the development using the literature of agent-based financial modeling.  The three-stage development is

 Presenting ACE with Econometrics

 Building ACE with Econometrics

 Emerging Econometrics with ACE

ACE and Econometrics

[Diagram: in the 1st and 2nd stages, econometrics is brought into agent-based computational economics; in the 3rd stage, econometrics emerges from agent-based computational economics.]

ACE and Econometrics

• The agent-based financial market has made itself a promising example for the agent-based social sciences.

• To an extent, it has successfully replicated some familiar stylized facts and pointed to their possible causes, so it enriches the theory of financial economics.

• However, based on the progress achieved so far, 2-type or 3-type models seem to be good enough. In this manner, finance looks more like the complexity science of the 1980s.

• Econometric estimation of agent-based financial models enables us to learn further from the data, in particular about the behavioral aspects of financial agents.

• When applied to different markets, it may also shed light on the heterogeneity of financial agents across markets.

ACE and Econometrics

• Nevertheless, a few questions are also observed. First, is the observed persistent heterogeneity of financial agents across different markets an empirical fact, or just a spurious outcome of these simple agent-based models?

• Second, introducing an additional type of financial agent to the market can, in some cases, result in a significant change in the estimated parameters. This also requires careful treatment.

• Complex agent-based financial models are not unemployed. In fact, it is expected that, when we move to other, less exploited stylized facts, the autonomous-agent designs may become more helpful, as a few recent studies have already indicated.

• Nevertheless, it remains an issue whether one should seriously estimate these complex agent-based models. Why and how?

• The SFI-like agent-based models should not be evaluated purely on the basis of their econometric or forecasting performance.

ACE and Econometrics

• Instead of searching for an econometric foundation for these models, one may think in the reverse way: the best role for them to play is to serve as an agent-based foundation of econometrics, as they can contribute to our study of the aggregation problem.

• Solving the aggregation problem involves various uses of micro-macro models, and these complex agent-based models may enable us to learn more about the complex micro-macro relations than the simple agent-based models do.

Stylized Facts

[Diagram: the 50 surveyed agent-based financial (ACF) models divide into N-type designs (38), comprising 2-type designs (18), 3-type designs (9), and many-type designs (11), and autonomous-agent designs (12). The stylized facts considered concern returns, trading volumes, trading duration, transaction size, and the spread.]

Collection of ACF Models

• In this paper, we survey a large number of agent-based financial market models, 50 to be exact.

• A survey of this size allows us to examine models across many different classes.

• While there are already some taxonomies of agent-based financial models in the literature, our perspective here is concerned more with the simplicity or complexity of the models, in particular the number of possible behavioral rules used in a model.

• This concern draws our attention to the software-agent designs, and divides the literature into the following two groups:
  • N-type designs (N can be few, such as 2 or 3, or many)
  • Autonomous-agent designs

The Two Groups

• The first group corresponds to the survey given by Hommes (2006), whereas the second group corresponds to the survey given by LeBaron (2006).

• The two groups can also be put into an interesting contrast. If we consider heterogeneity, adaptation, and interaction as three essential ingredients of ACF, then the first group tends to be simpler in each of these three elements, while the latter is more complex in each of the three.

• This contrast, from simple to complex, therefore enables us to reflect upon the heated discussion of the simplicity principle in modeling complex adaptive systems. The specific question, for example, is what the ``marginal gains'' are from making the models more complex. Alternatively put, what is the minimum number of clusters of financial agents required to replicate the financial stylized facts?

N-Type Models and the SFI models

• Models with the N-type designs mainly cover the three major classes of ACF, namely:
  • Kirman's Ant models (Kirman, 1991, 1993)
  • Lux's IAH models (Lux, 1995, 1997, 1998; Lux and Marchesi, 1999, 2000)
  • Brock and Hommes' ABS models (Brock and Hommes, 1998)
• They also include some others which may be distinguished from the three above, such as the Ising models, minority-game ($-game) models, prospect-theory-based models, and threshold models.
• Models with the autonomous-agent designs are mainly either SFI (Santa Fe Institute) models or their variants.

Distribution of the 50

• This sample is by no means exhaustive, but we hope that it represents the underlying population well.
• Sample size: 50
  • N-type designs: 38
    • 2-type designs: 18
    • 3-type designs: 9
    • Many-type designs: 11
  • Autonomous-agent designs: 12

Demographic Structure

• These four tables are by no means exhaustive, but just a sample of a large body of existing studies.
• Nonetheless, we believe that they represent well some basic characteristics of the underlying literature.

• The largest class of ACF models is the few-type design (50%).

Two-Type Models: Facts Explained

Three-Type Models: Facts Explained

Many-Type Models: Facts Explained

AA-Type Models: Facts Explained

Facts Explained

Two Remarks

• We do not verify the models, and hence are not in a position to double-check whether the reported results are correct. In this regard, we assume that the verification of each model was confirmed during the refereeing process.

• We do, however, make a minimal effort to see whether proper statistics have been provided to support the claimed replication. A study which does not satisfy this criterion is not taken seriously.

• There are four stylized facts which obviously receive more intensive attention than the rest:
  • fat tails (41 counts),
  • volatility clustering (37),
  • absence of autocorrelations (27), and
  • long memory of returns (20).

• Second, we also notice that all the stylized facts explained pertain exclusively to asset prices; in particular, all these efforts are made to tackle low-frequency financial time series.

The Role of Heterogeneity and Learning

• Do many-type models have additional explanatory power over the few-type models?

• Many-type models do not perform significantly better than the few-type models.

• Would more complex learning behavior help?

• There is little marginal gain over the baseline models (2- or 3-type models).

• Furthermore, baseline models facilitate the estimation or calibration work, which characterizes the second-stage development.

Building ACE with Econometrics

• In the second stage, an ACE model is treated as a parametric model, and its parameters are estimated using real financial data.

• What concerns us is no longer just the stylized facts, but also the behavior of financial agents and their embeddings.

• Up to the present, only the three major N-type models (ANT, IAH and ABS) have been seriously estimated. Given the differences among the three models, what is estimated obviously differs, but generally it includes two things, namely the behavior of financial agents and their embeddings.

Existing Econometric Agent-Based Financial Models

What to Estimate and What to Know

• Despite their technical details and differences, the three estimation works share a common interest, namely the evolving fractions of financial agents.
• Two features are involved:
  • first, large swings between fundamentalists and chartists;
  • second, dominance of one cluster of financial behavior for a long period of time.
• Putting them together, we may call it the market fraction hypothesis.

What to Estimate

• In addition to the evolving market fractions, more details of financial agents' behavior can be estimated, such as
  • beliefs: reverting coefficients, extrapolating coefficients,
  • memory: memory in fitness and memory in belief formation,
  • intensity of choice,
  • risk perceptions,
  • the length of the moving-average window (fundamentalists),
  • the fitness measure (realized profits or risk-adjusted profits),
but these have received relatively less attention.

• Amilon (2008) addressed the behavioral aspects found in his empirical study of 2-type and 3-type ABS models.

Aggregation Problems: Aggregation over Evolving Interacting Heterogeneous Agents

• "Aggregation problems are among the most difficult problems faced in either the theoretical or empirical study of economics. ... There is no quick, easy, or obvious fix to dealing with aggregation problems in general." (Blundell and Stoker, 2005, JEL)

Aggregation Problems

Individual behavior: s_i = f_i(x_i)

Even if f_i = f for all i, in general

  E(S) = E\big(f(X)\big) \neq f\big(E(X)\big),

so that

  E(S) = f\big(E(X)\big) + \text{bias}.

A representative-agent estimation based on aggregate data therefore recovers

  \text{plim}\, \hat{f} = f + \text{bias}, \quad \text{not} \quad \text{plim}\, \hat{f} = f.

How big is the bias? Can we gauge a possible range of the bias? (See the numerical illustration below.)
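A minimal numerical illustration of why f(E(X)) differs from E(f(X)) once agents are heterogeneous and f is nonlinear; the individual behavioral function and the distribution of the x_i are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)   # heterogeneous individual states x_i

    f = np.sqrt                      # illustrative nonlinear individual behavior s_i = f(x_i)

    mean_of_f = f(x).mean()          # E(f(X)): the true average behavior
    f_of_mean = f(x.mean())          # f(E(X)): the representative-agent shortcut

    print("E[f(X)] =", round(mean_of_f, 3),
          " f(E[X]) =", round(f_of_mean, 3),
          " bias =", round(f_of_mean - mean_of_f, 3))

With these assumptions the representative-agent shortcut overstates the average behavior, which is exactly the kind of aggregation bias the agent-based CCAPM example below is designed to expose.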

Example: Agent-Based CCAPM

• Chen and Huang (2008, JEBO) and Chen, Huang and Wang (2009).

• We assume that all financial agents have a unitary risk aversion coefficient, and starting from there we can generate a series of artificial data from the artificial market.

Data Generated

\{c^i_t\}: individual consumption
\{\delta^i_t\}: individual saving rate
\{\omega^i_{m,t}\}: individual portfolio (weight on asset m)
\{q^i_{m,t}\}: individual holdings of shares of each asset
\{R^i_t\}: individual return, \; R^i_t = \sum_{m=1}^{M} \omega^i_{m,t} R_{m,t}

\{c_t\}: aggregate consumption
\{p_{m,t}\}: asset prices
\{R_{m,t}\}: asset returns

Estimated Risk Aversion Coefficient

Estimated Risk Aversion Coefficient

• So, basically, regardless of whether we use data at the individual level or at the macro level, the estimates are far away from the true value (which is one), but the estimates based on aggregated data are even further away.

• If we ignore this error and take the econometric findings without hesitation, then we can even come up with spurious relations, for example, a relation between risk aversion and wealth.

Information Sciences (2007)

 ``If agents are heterogeneous, some standard procedures (e.g. cointegration, Granger causality, impulse response functions of structural VARs) loose their significance. Moreover, neglecting heterogeneity in aggregate equations generates spurious evidence of dynamic structure.’’

ACE and Networks

• Most of the ACE models which we consider in this lecture do not contain an explicit network for interaction (Thomas Schelling's model is the only exception).

• However, a lot of ACE models do take networks (physical networks or social networks) into account.

• In these models, the network topology becomes an exogenous variable or an additional parameter of the ACE model, which may have non-trivial real effects.

• However, recent interdisciplinary studies overarching economics, game theory and sociology have started to endogenize the formation of network topologies using ACE models.

• Of course, further complexification can arise when one considers a full cycle between ACE and the embedded network topologies; they do feed back to affect each other.

• This has become a very active area of study as data from the WWW have become extremely large.

Concluding Remarks

• Q: If I have no experience with ACE, but am interested in learning more about it, and am possibly considering investing in it, where should I start?

• A: The website maintained by Prof. Leigh Tesfatsion provides a total solution for beginners. With this website, I can peacefully stop here.
• http://www.econ.iastate.edu/tesfatsi/