Transcript: Surveys

Surveys
Josh Lerner
Empirical Methods in Corporate
Finance
A long tradition
• Critical to much of social science research,
e.g.:
– Human resource management.
– Psychology.
– Marketing.
– Technology diffusion.
– Political science.
• But relatively little use in corporate finance.
The birth of political surveying:
The Straw Poll
• Many early censuses (in U.S. dating to
1790; elsewhere, far earlier), but surveys
far later:
– The “straw poll”: first conducted by the
Harrisburg Pennsylvanian in 1824.
– Mail out ballots and tally returned votes.
– Also used as a marketing ploy.
Reliability of straw polls
• Typically, no effort to control for respondents.
• Depend on people to return mail-in cards.
– Pierre du Pont straw poll concerning Prohibition
was only returned by people who favored
repealing it.
• People polled can be unrepresentative
(haphazard sample):
– 1936 Literary Digest poll predicted Alf Landon
(57%) would be elected president over FDR (43%).
• Same issues surface in Internet polling.
The birth of the modern survey
• Gallup in 1936 predicts an FDR win (55.7%, though FDR actually won 60.8%).
• Used scientific “quota sampling” of only about
1,200 people compared to the 2 million in the
Literary Digest straw poll.
– Illustrated the power of the model…
– Though, of course, the possibility of error should be something that can be quantified (see the worked example below).
Standard Error
$\text{SE}(\hat{p}) = \sqrt{\hat{p}\,(1-\hat{p})/N}$
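A worked sketch of that quantification, in Python, using Gallup's reported 55.7% share and the roughly 1,200-person sample cited above; this is purely illustrative (the actual 1936 quota design complicates the simple formula).

```python
import math

# Illustrative only: Gallup's reported 55.7% for FDR and the ~1,200-person
# sample cited above (the actual 1936 design differed).
p_hat = 0.557
n = 1200

se = math.sqrt(p_hat * (1 - p_hat) / n)      # standard error of a proportion
moe_95 = 1.96 * se                           # approximate 95% margin of error

print(f"standard error: {se:.4f}")                           # ~0.014
print(f"95% margin of error: +/- {moe_95 * 100:.1f} points")  # ~ +/- 2.8 points
```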
Types of sampling
• Simple Random Sampling –
everyone has an equal
chance of being selected
• Quota Samples – use the
census to find a certain
number of people in
different groups to force
sample to be representative
of population:
– This method famously failed in 1948 (the “Dewey Defeats Truman” election).
• Multi-stage Cluster Sampling – a combination of the two approaches (see the sketch below).
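A minimal sketch, in Python, contrasting simple random sampling with a quota-style (stratified) draw that forces the sample to mirror population shares; the sampling frame and the 70/30 urban/rural split are invented for illustration.

```python
import random

random.seed(0)

# Hypothetical sampling frame: 10,000 people, 70% urban / 30% rural
# (made-up shares, for illustration only).
frame = [{"id": i, "region": "urban" if i < 7000 else "rural"} for i in range(10000)]

# Simple random sampling: every person has the same chance of selection.
srs = random.sample(frame, 100)

# Quota-style (stratified) sampling: force the sample to mirror the
# population shares within each region.
def stratified_sample(frame, shares, n):
    sample = []
    for region, share in shares.items():
        stratum = [p for p in frame if p["region"] == region]
        sample.extend(random.sample(stratum, round(n * share)))
    return sample

quota = stratified_sample(frame, {"urban": 0.7, "rural": 0.3}, 100)

print(sum(p["region"] == "urban" for p in srs))    # varies around 70
print(sum(p["region"] == "urban" for p in quota))  # exactly 70
```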
The problem of non-response
• One example:
– 56% of people contacted responded to the 2000 National
Election Survey.
– 5% of households don’t have phones
– Solution is to weight the surveys to match the census (sketched below), but…
• Census is not entirely accurate
• People who choose not to respond may hold different opinions
than those that do, even within the same demographic category
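A minimal sketch of the weighting idea, with invented census and sample shares for two education groups; real post-stratification uses many more cells and, as noted above, still cannot fix within-cell differences between respondents and non-respondents.

```python
# Hypothetical post-stratification: reweight respondents so the sample's
# demographic mix matches the census (shares below are invented).
census_share = {"college": 0.35, "no_college": 0.65}
sample_share = {"college": 0.50, "no_college": 0.50}   # college grads over-respond

weights = {g: census_share[g] / sample_share[g] for g in census_share}
print(weights)   # {'college': 0.7, 'no_college': 1.3}

# A weighted mean of some survey response y, by group:
responses = [("college", 1), ("college", 0), ("no_college", 1), ("no_college", 1)]
num = sum(weights[g] * y for g, y in responses)
den = sum(weights[g] for g, _ in responses)
print(num / den)   # weighted estimate; the unweighted mean would be 0.75
```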
Bias
• Bias refers to anything that causes the estimate from
the survey to differ from the true population
– Sampling: how representative the poll is
• Sampling error
• Non-response error
– Survey design:
• Question wording
• Item ordering
• Question ordering
– Interviewer and response:
• Social desirability
Question wording
CNN/USA Today/Gallup Poll.
Dec. 9-11, 2005. Nationwide.
N=503, MoE ± 5
"Do you think legal immigrants mostly
help the economy by providing low cost
labor, or mostly hurt the economy by
driving wages down for many
Americans?" Options rotated.
Mostly Help: 42%
Mostly Hurt: 52%
Neither (vol.): 3%
Both (vol.): 2%
Unsure: 1%
Time Poll conducted by
Schulman, Ronca & Bucuvalas
(SRBI) Public Affairs. Nov. 29-Dec. 1, 2005. N=1,004 adults
nationwide. MoE ± 3.
"Overall, do you think illegal immigrants hurt
or help the U.S. economy?" Options rotated
Help: 26%
Hurt: 64%
Depends (vol.): 5%
Unsure: 5%
Item ordering
FOX News/Opinion Dynamics Poll. Latest: Aug. 6-7, 2002. N=900 registered
voters nationwide. MoE ± 3.
"...If the congressional election were held today, would you vote for the
Republican candidate in your district or the Democratic candidate in your
district?" If undecided: "Well, if you had to vote, which way would you lean?"
Rep: 39%
Dem: 36%
Other/Not Sure: 25%
CNN/USA Today/Gallup Poll. Latest: Aug. 19-21, 2002. N=689 registered
voters nationwide. MoE ± 4
"If the elections for Congress were being held today, which party's candidate
would you vote for in your congressional district: [rotate] the Democratic
Party's candidate or the Republican Party's candidate?" If undecided: "As of
today, do you lean more toward [rotate] the Democratic Party's candidate or
the Republican Party's candidate?"
Rep: 42%
Dem: 50%
Other(vol.)/Undecided: 8%
Framing
1980
“The U.S. should let Communist newspaper reporters
from other countries come here and send back to
their papers the news as they see it”
55% “Yes”
When preceded by a question about U.S. reporters sent
to Communist countries
75% “Yes”
Problem: We often don’t know all of the questions
asked and in what order
Other problems with polls
• Do they capture true feelings on sensitive
issues such as race?
– Efforts to combine answers with a coin flip (randomized response; see the sketch after this list).
• Interviewer bias
– Can control partly with fixed effects.
• Multiple stimuli versus balanced arguments
• Non-attitudes and response acquiescence
• The surprise poll draws attention, but is it
representative of the population?
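One standard randomized-response design behind the coin-flip idea, sketched under simple assumptions (not necessarily the specific protocol referenced above): each respondent privately flips a coin, says “yes” automatically on heads and answers truthfully on tails, so no individual answer is revealing, yet the population rate is still recoverable.

```python
import random

random.seed(1)

# Hypothetical: 20% of the population truly holds the sensitive view.
TRUE_RATE = 0.20
N = 100_000

def respond(truth: bool) -> bool:
    # Heads: say "yes" regardless; tails: answer truthfully.
    return True if random.random() < 0.5 else truth

answers = [respond(random.random() < TRUE_RATE) for _ in range(N)]
p_yes = sum(answers) / N

# P(yes) = 0.5 + 0.5 * true_rate  =>  true_rate = 2 * (P(yes) - 0.5)
estimate = 2 * (p_yes - 0.5)
print(round(estimate, 3))   # close to 0.20
```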
Well-defined best practices
• Advisory and technical committees.
• Development of multiple questions to detect “gaming.”
• Pre-testing.
• Translation and back-translation.
• Sampling frames.
• Stratification.
• Field staff training.
• Field staff incentivization and supervision.
• Post-survey data processing, e.g., interviewer fixed effects (sketched below).
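A minimal sketch of interviewer fixed effects using statsmodels; the data frame, variable names, and values are hypothetical. Dummying out interviewers absorbs interviewer-specific response tendencies before other comparisons are made.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey extract: one row per respondent.
df = pd.DataFrame({
    "response":    [3, 4, 2, 5, 4, 3, 2, 4],   # e.g., a 1-5 attitude scale
    "interviewer": ["A", "A", "B", "B", "C", "C", "A", "B"],
    "age":         [34, 51, 28, 45, 39, 60, 22, 48],
})

# Interviewer fixed effects: C(interviewer) adds a dummy for each interviewer.
model = smf.ols("response ~ C(interviewer) + age", data=df).fit()
print(model.params)
```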
Well-defined best practices (2)
• Documentation:
– Objectives
– Survey organization – actors in planning and
implementation
– Target population
– Sample design
– Questionnaires
– Other survey tools
– Data collection – fieldwork
– Data processing
– Analysis
Skepticism in finance
• George Stigler and profit maximization survey.
• Cliff Smith and the chef story.
• These prejudices are deeply engrained in the
field!
Barriers to dissemination in finance
• Focus on corporations as subject of study:
– Limited interest in surveys.
– Will the right person fill out?
• GM R&D survey story.
• Interest in observable behavior.
• Distrust of reported motivations.
• Unfamiliarity with methodology.
Views of Financial Economists on
the Equity Premium…
Welch
JB, 2000
A practical problem
• $k_E = r_F + \beta\,[E(r_M - r_F)]$
– Crucial in valuation problems.
– But few systematic treatments.
– Most practitioners rely on rules of thumb; academics rarely discuss it.
– Many methodologies have relied on…
• Historical average premia (but arithmetic or geometric? Over what time period? Which country mix?); see the sketch after this list.
• Dividend yields? (But nonsensical answers)
• Theory? (But non-robust and typically very low)
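A minimal sketch of why the arithmetic vs. geometric choice matters and how the chosen premium feeds into the CAPM cost of equity above; all returns and parameters below are invented for illustration.

```python
import numpy as np

# Invented annual equity excess returns (market minus risk-free), for illustration.
excess = np.array([0.10, -0.05, 0.20, 0.02, -0.12, 0.15])

arith = excess.mean()                                   # simple average premium
geom = np.prod(1 + excess) ** (1 / len(excess)) - 1     # compound-average premium
print(f"arithmetic premium: {arith:.3%}, geometric premium: {geom:.3%}")

# Plugging a premium into the CAPM cost of equity: k_E = r_F + beta * ERP
r_f, beta = 0.04, 1.2
for label, erp in [("arithmetic", arith), ("geometric", geom)]:
    print(f"{label}: k_E = {r_f + beta * erp:.2%}")
```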
Survey design
• Survey 1:
– Web posting.
– Scattered mail-out:
• 11 schools, UCLA colleagues, associate editors.
– Ambiguous questions.
– No reporting of response rates.
• Survey 2:
– Web posting only.
– Clearer question.
– Prompting with Ibbotson number.
– Year delay, with rising equity markets.
Analysis
• Aggregate two sets of responses:
– Need to adjust first responses for failure to correct
for arithmetic/geometric confusion.
• Eliminate one high outlier:
– No clear justification.
• Figure 1 and Table 2.
Looking at consensus
• Ask economists to identify consensus pick:
– Higher than actual average.
– Lower standard deviation.
• Not surprisingly, belief in higher premium also
translates into higher consensus estimate.
• Tables 4 and 5.
Concerns
• Methodology is disappointingly crude:
– Poor question wording.
– Lack of re-testing.
– Lack of real sampling frame.
– Non-reporting of key statistics.
• Suggests why surveying has a bad name in
corporate finance.
The Theory and Practice of
Corporate Finance: Evidence from
the Field
Graham and Harvey
JFE, 2001
A different approach
• Rather than looking at a single firm, survey a large number of firms.
• Ask about a wide variety of financial
strategies.
• Seen as a complement to large-sample and clinical studies.
Objective of study
• Seek to understand firm’s choices regarding:
– Capital budgeting.
– Cost of capital.
– Capital structure.
Methodology
• Sent surveys to two groups:
– Selected ~1/4 of Financial Executives Institute (FEI) members (4,400):
• Diverse array of firms.
– All Fortune 500 CFOs.
– 9% response rate:
• 392 responses in all.
• Not great, but consistent with earlier surveys.
Addressing non-respondent bias
• Comparing on-time and late respondents (see the sketch below):
– Late respondents should provide clue as to
features of non-respondents.
• Look at industry mix, public status vs. FEI
population as a whole.
• Compare financial policy of firms to random
samples of Compustat firms as a whole:
– Sample based on firm size.
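A minimal sketch of the on-time vs. late comparison: a chi-square test of whether the industry mix differs across the two response waves, on the logic that late respondents proxy for non-respondents. The industry counts below are invented.

```python
from scipy.stats import chi2_contingency

# Hypothetical industry counts for on-time vs. late respondents.
#                 manuf.  finance  tech   other
on_time_counts = [  90,     60,     40,    55 ]
late_counts    = [  30,     25,     10,    20 ]

chi2, p_value, dof, expected = chi2_contingency([on_time_counts, late_counts])
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
# A large p-value gives some comfort that late (proxy non-) respondents
# resemble on-time respondents, at least on observables.
```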
Capital budgeting
• NPV and IRR by far most common.
– NPV more common in larger firms.
– Similarly for public, dividend-paying firms.
• Payback is just as common among small firms and those with older CEOs (especially without MBAs); all three criteria are sketched below.
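A minimal sketch of the three criteria on an invented project; the IRR is found by simple bisection rather than a library routine.

```python
def npv(rate, cash_flows):
    """Net present value of cash_flows[0] at t=0, cash_flows[1] at t=1, ..."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return by bisection (assumes a single sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback(cash_flows):
    """Number of periods until cumulative cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return t
    return None   # never pays back

project = [-1000, 300, 300, 300, 300, 300]   # invented cash flows
print(round(npv(0.10, project), 2))   # NPV at a 10% hurdle rate
print(round(irr(project), 4))         # IRR
print(payback(project))               # payback period in years
```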
Cost of capital
• CAPM by far most common way to compute
cost of capital:
– Especially among large, public firms.
• Little use of Fama-French, other factors.
• Few distinctions between firm, project risk.
Capital structure
• Not surprisingly, taxes and credit ratings matter.
• Only 44% of firms have a real target capital structure.
• Little evidence that common hypotheses are
influential:
– Signaling, product market concerns, free cash
flow, etc.
Concerns
• Could do more to address selection biases?
– Non-respondent surveys?
• Could do more to assess consistency of
responses?
– Multiple questions?
Final thoughts
• Typically, empirical research looks at actual
behavior.
• In-depth clinical studies ideally allow one to
get behind simple responses.
• To what extent do survey responses reflect reality, rather than just beliefs (or what respondents would like to believe)?