Wealthy people cannot drive consumption, because they spend a far smaller part of their income on it.
edit: I intentionally use the prime age employment rate (as is convention of late) in order to avoid objections that the low employment rate is the fault of a large segment of baby boomers retiring.
That unemployment graph is rather breathtaking, both because it illustrates how impactful the '08 crash was, and how little we've recovered since then. Is there any chance that the graph could be misleading, or should it be believed at face value that one in four people are unemployed?
Neither is really misleading or particularly enlightening by itself. In this particular case the Y-axis alone is not enough to draw conclusions, because we don't know what magnitudes of movement are normal, which are healthy, and which are unhealthy.
Choosing 0 to 100 doesn't add anything, because what would it even mean to have 0% or 100% employment? Those are not reasonable bounds to start with, given the topic at hand. I can't imagine any conditions under which either extreme is plausible.
A peer comparison here would be far more illustrative than any particular Y-axis scale. If, for example, the same employment figures for all other countries were plotted on the same graph, we would see the extremes, from the country with the lowest employment rate to the one with the highest. Those figures would serve as reasonable bookends for a graph. If no economy in the world has an employment rate of less than 45%, then a lower bound of 40% would be reasonable. If no country exceeds 85%, then an upper bound of 90% would be reasonable. The scale chosen should illustrate plausible ranges of employment for any modern economy.
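The peer-bookend idea can be sketched mechanically; the country figures below are invented purely for illustration:

```python
# Sketch: derive Y-axis bounds from peer economies rather than from
# the plotted series itself. All rates here are invented.
def peer_axis_bounds(peer_rates, pad=5.0):
    """Round the peer minimum down and the peer maximum up to the nearest `pad`."""
    lower = (min(peer_rates) // pad) * pad      # e.g. 48.2 -> 45.0
    upper = -(-max(peer_rates) // pad) * pad    # e.g. 83.4 -> 85.0 (ceiling)
    return lower, upper

# Hypothetical prime-age employment rates for a set of peer economies:
peers = [48.2, 62.5, 71.3, 76.8, 80.1, 83.4]
print(peer_axis_bounds(peers))  # (45.0, 85.0)
```

With bounds anchored to peers rather than to the plotted series, the same graph shows where a country sits in the plausible range instead of magnifying every wiggle.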
What does matter in all this is the time-series plotting, and whether this data suggests a virtuous/vicious feedback loop when employment deviates from some stable level. For example, once employment falls below a certain point, does that contribute to a perception that businesses should be tightening their belts, further exacerbating the problem? Likewise, when the figures are moving up, businesses anticipate economic growth and therefore start hiring in preparation for it.
This graph is the equivalent of measuring distance when measuring the first (velocity) or second (acceleration) derivative might be more illustrative of how serious all this is. What matters is not how much something has fallen or risen, but how fast it is falling or rising, and whether it is starting to speed up or slow down.
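A minimal sketch of that distance/velocity/acceleration framing, on an invented monthly series:

```python
# First and second differences of a monthly employment-rate series,
# as rough proxies for "velocity" and "acceleration". Data are made up.
rates = [80.3, 79.9, 78.1, 75.5, 74.8, 74.9, 75.2]  # hypothetical %

velocity = [b - a for a, b in zip(rates, rates[1:])]       # first difference
acceleration = [b - a for a, b in zip(velocity, velocity[1:])]  # second difference

# The level says "we fell"; the second difference shows the fall was
# already decelerating before the level itself turned around.
print([round(v, 1) for v in velocity])
print([round(a, 1) for a in acceleration])
```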
The recession is a tiny little 2% blip, if you're one of the ~85% of people who were not laid off and didn't have any trouble finding a job. And it's a very large 2% blip if you're one of the people who are.
My point is that perspective matters - a lot. By the numbers, this recession is worse than any since the Great Depression. By the numbers, this recession is only a small percentage of the total U.S. economy. Which numbers are correct? Well, they're actually the same numbers, what matters is how you use them and what conclusion you're trying to draw.
From my personal perspective, the recession was great. It meant I could actually get decent rents in the Bay Area for the first two years after I moved out here, and I could pick up stocks for relatively cheap, and I had no problem getting a job. I realize this is not the perspective of many other people, and the numbers help tell me how many other people. And one of my professional startup interests is finding better ways to help people manage their careers, so I'm quite interested in hearing other people's perspectives (if you have stories or want to vent, feel free to e-mail me...I'm quite happy to listen, my e-mail address is in my profile.) But understand that numbers always require interpretation - whether the graph is misleading or not depends entirely on where you're leading people.
Very good points! From my perspective, the recession was devastating. Not for me, but for a friend's family. One of them owned a side business involving construction. Not enough to pay all the bills, but in '07 suddenly it was. So he quit his regular full-time job, even though he had a full family to support. It wasn't a stupid decision though, because he had so many contracts (guaranteed cash flows) that he could support his whole family twice over. I remember that business was so good that he made a big show of unveiling a wonderful grand piano for the whole family. He had the contracts to support such a lifestyle. Then '08 happened, and the contracts were all canceled. (Illegally, but that's beside the point; he couldn't afford to pay any bills, let alone a lawyer to go after anybody.) I later found out he had contemplated an elaborate suicide to collect on death insurance to provide for his own family.
Not trying to contradict anything you've said; merely adding an alternate perspective.
Agreed. My point is that the data that we're examining are the changes in employment to population surrounding the housing bubble. Measuring on a 0% to 100% scale whether we've begun a recovery from a recent drop in the rate of employment is like measuring the tides from the sea floor.
Personally, my salary has tripled since the recession, and my wealth increased by a factor of five or six. The adult, established upper-middle class is doing fine. My pre-tax savings rate is over 50% though, so I'm not doing a consumption-driven economy any favors.
I was mostly thinking about the "it illustrates how impactful the '08 crash was" in the original comment. Of course it looks like a huge change if you restrict the Y-axis, but if you plot it on the full axis you realize that it was actually still relatively small. It did of course affect millions of Americans and might have been the biggest decrease in a long time, but it was nowhere near the 80% drop that it looked like in the original plot.
This is an employment to population ratio, so 1 in 4 people, ages 25-54 do not work. That does not mean that they are unemployed! Unemployed people are those looking for work that don't have employment. Plenty of people voluntarily leave the "labor force" (employed + unemployed people), such as stay-at-home parents, trust fund kids, or independently wealthy and retired folks.
That said, "unemployment" is a very tough figure to calculate (as are most macroeconomic figures), and there's a lot of pseudo-controversy about what the numbers mean. Wiki does a good job summarizing how the BLS calculates unemployment [1].
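The distinction between the two ratios is easy to show with invented round numbers (thousands of prime-age people):

```python
# U-3 unemployment vs. the employment-to-population ratio.
# All counts below are invented, in thousands.
employed = 96_000
unemployed = 6_000             # jobless AND actively looking for work
not_in_labor_force = 26_000    # students, stay-at-home parents, retirees...

population = employed + unemployed + not_in_labor_force
labor_force = employed + unemployed

u3_rate = unemployed / labor_force      # the headline "unemployment rate"
epop_ratio = employed / population      # what the graph actually plots

print(f"U-3: {u3_rate:.1%}, employment-population: {epop_ratio:.1%}")
```

With these numbers, "one in four prime-age people not working" coexists with an unemployment rate of about 6%; the two figures answer different questions.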
It does mean that they are unemployed. Merriam-Webster defines unemployment as "the state of not having a job". It's very binary: you either have a job or you don't.
You have mistaken bureaucratic doublespeak for English. I think it's clear we are speaking standard international English here. Letting the US government redefine words such as "unemployment" to mean something other than "the state of not having a job" is something straight out of a dystopian novel.
The graph doesn't start at 0 on the Y-axis. It would be a lot less striking if the scale ran from 0-100 instead of the cherrypicked 74-81 scale.
Also, this is labor force participation, not unemployment. If you carried this back to the 1950s, it would be around 60%, because most women didn't work back then.
The scale is cherry-picked to fit the change in the data - you can see that just by looking at the Y-axis. It makes sense for what I assume the BLS intended the graph for: visualizing the direction of the trend over time. However, you can't draw conclusions about the magnitude when the scale is arbitrarily chosen.
So yes, I am really accusing the Bureau of Labor Statistics of cherry-picking the scale. I don't assume any malice on their part, only that they intended the graph to be used for purposes other than how it is used here.
You honestly think that the scale was hand-picked, rather than determined by the software used to generate every graph on the site? The scale looks to be determined by the floor of the lowest value and the ceiling of the highest.
I didn't say hand-picked. By cherry-picked I mean that the software picked out two arbitrary values that are just outside of the range of the data. The values are still arbitrary: they show direction well but magnitude poorly.
That's not what cherry-picked means. Cherry-picked means more what you are referring to when you say "hand-picked" here. It means to carefully choose the best.
That's also not what arbitrary means. If you select numbers that are just outside the range of a particular set of data, that's anything but arbitrary. Arbitrary means chosen based on someone's discretion or judgment rather than determined by rule. It comes from the same root as arbitration.
I'm not saying all of that to criticize your use of words; I'm saying it because I honestly don't know what you're trying to say. I think you're probably moving the goalposts.
> It also is only 'prime' so 25-54. 55-65, people still work. Same with under 25.
Does the Social Security Administration release data on the number of people collecting SS benefits who are also working? Very interested in the number of people delaying retirement in order to keep working (either because base SS doesn't cover their living expenses, or because they're delaying collecting SS to increase their monthly benefit).
Turning 65 is no guarantee that you will be ready or willing to retire. A rapidly growing number of Americans are continuing to work beyond their 65th birthday. The proportion of people age 65 and older in the workforce grew to 16.1 percent by 2010, up from 12.1 percent in 1990, according to a recent Census Bureau report. And the percentage of people between ages 65 and 69 who are working grew 9 percentage points to 30.8 percent in 2010.
It's still age 65 for the people who have already hit age 65, so that's close enough, I think?
One potentially misleading aspect of the chart is that it covers only 25-54 year olds and one result of the downturn was that older workers have remained in their jobs longer, delayed retirement, etc.
It indeed does; the graph is the labor force participation rate among ages 25 to 54, that is, the portion of people that age who are working over the total people that age eligible to work. The unemployment numbers (U3) that you hear on TV typically do not include these people, i.e. the discouraged workers and those who are underemployed; U3 only counts people who are actively looking for a job.
The article said that if you take health care out of the index, the economy still shrank at an annualized rate of 2.7%. So it had some effect, but you can't blame the whole headline number on it.
TL;DR: annual US GDP declined 2.9% because consumers spent less. In fact, they are not yet spending at rates anywhere near the rates at which they were spending before the financial crisis in 2008.
When consumers spend less, businesses in the aggregate sell less of everything, because every dollar spent by a consumer is a dollar earned by someone else -- usually a business.
Before the financial crisis, consumers borrowed aggressively to finance consumption, like drunken sailors... but unlike the financial sector, they never got a bailout. They were forced by the circumstances to follow Steve Martin's advice from Saturday Night Live: "don't buy stuff you cannot afford."[2]
Are we really that surprised that many consumers are reluctant to borrow and/or spend like before the crisis?
--
Edits: modified first paragraph to convey what I actually meant to say, in response to ctl's comment. The original paragraph was poorly written. (Thanks for pointing out the inconsistency, ctl!)
Part of this slowdown in consumer spending also has to do with the tightening of credit as well, making it more difficult for people without means to spend. In my opinion this is a great thing. When people are backed against a wall because they have no other way to pay for their livelihood than their salary, they're not going to be ok with stagnant wages and companies low balling them.
Bring on the pain, I say. Real growth comes from companies investing in their workforce, not people drowning themselves in debt.
Put another way: it used to be that you could buy a house for the equivalent of a year's salary. Now it seems most houses go for about 5-6x the average salary of a given area. If the old ratio still held today, most Bay Area residents should be making near $1 mil/year, or houses here should be closer to about $150-200k; more likely the latter, since credit is primarily the reason for the inflated cost.
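That affordability arithmetic can be made explicit; the price and salary figures below are assumed round numbers, not actual Bay Area statistics:

```python
# Price-to-income arithmetic, with assumed round figures.
median_price = 1_000_000   # hypothetical Bay Area home price
median_income = 150_000    # hypothetical Bay Area salary

old_ratio = 1      # "a year's salary buys a house"
new_ratio = 5.5    # midpoint of the 5-6x claim

# At the old 1x ratio, either incomes or prices would have to move:
print(median_price / old_ratio)          # salary implied by today's prices
print(median_income * old_ratio)         # price implied by today's salaries
# Even at the looser modern ratio, the income needed is substantial:
print(round(median_price / new_ratio))   # income implied by a 5.5x ratio
```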
Interesting observation. It's really hard to shake off the shackles of presentism and imagine what the US would look like on an alternative timeline where the US government had not aggressively promoted home ownership and easy credit. The economy and legislation are now so thoroughly distorted in favor of home ownership that it's difficult to imagine how things could have been different.
Specifically, I'm wondering if there was a time when we were having the same debate about housing that we are now having about college tuition, but that it has been long enough (and there have been economic gains for homeowners) that we conveniently forget how much our longstanding policies drive up housing prices. The idea of buying a home on a full year's salary sounds crazy, but then again there was a time when 4 years of college could be paid off fully via part time work while at school, something that is distant pipe dream these days for the majority of jobs within the reach of most college students.
A cost revolution in creating housing is much less likely than MOOCs killing the 3rd tier colleges.
I'd like to be in an alternate universe without the Iraq and Afghanistan wars. Afghanistan is famously The Graveyard of Empires, and I wonder if we are not just staggering around with a mortal wound, with another crash coming to bring us down comprehensively. Nationalizing the ibanks instead of TARP would be a nice alternate universe, too.
I think we are the alternate universe. Bush's win over Gore in 2000 was so ridiculous that a half-assed intervention by time-traveling experimental historians almost seems like a reasonable explanation.
It is interesting, and sad, to think of how much may have ended up hanging on a couple of hundred Florida voters. Of course, it may be wrong to think that a Gore presidency would have been all that different.
I wouldn't say that credit is driving up the cost of Bay Area homes so much as the number of wealthy workers in the area.
And also consider that homes built today go through tons of regulatory costs (building codes, politics with permits), and there are new technologies installed: AC, internet, upgraded roofing, better foundations, upgraded wiring, possibly an HOA and rec center, paved sidewalks in front of every house. The US Census Bureau has also determined that the average size of a home is 2,480 square feet, vs. 1,600 sq ft in 1970. All the progress we have made with homes costs.
Sorry, but I vehemently disagree. With my salary and credit history, I can qualify pretty easily for a $6-800k home. That all-of-a-sudden puts me in a market that I wouldn't have dreamed of being part of before the loan was offered, if, instead, I had to pay cash, or pay a substantial portion of the amount of the home in cash (50-75%, for instance).
If this was true across the board (tightened credit), homes simply would not sell at their current prices. Most people, with the expenses of day-to-day living, would have a very difficult time scrounging up $600k for the down payment of a home, if they could do it at all in their lifetime. Meanwhile the stock of unsold homes and anxious homeowners looking to move would grow. Market forces would eventually push these prices down.
This isn't even theory. This literally happened after the crash when banks stopped loaning money. People wanted to buy houses, and why not? Loan rates were insanely low. But a substantial portion of the population found it difficult to extend their credit further, or get first-time credit, and thus house prices tanked around the country.
You're right that the Bay Area is unique. We have a dangerous combination of the aforementioned credit, many highly paid people, and people who are younger and perhaps less intelligent about how they evaluate the value of things, using credit as a sledgehammer to get what they want. But even well-paid engineers couldn't afford the houses around here without substantial credit extension.
I know that people will make political hay of this, but there is danger in managing to quarterly statistics. If your economy needs some adjustments to achieve long-run sustainability, then it's possible that GDP may fall for a quarter or two, and there's nothing wrong with that.
GDP is a great metric for long-term or inter-national comparison. It's terrible as a quarterly national management metric.
Spending was muted because of the harsh winter. Recent economic indicators have been very positive so expectations are for a significant rebound for Q2.
Looks to me like the way the government is accounting for healthcare spending is the biggest contributor to these numbers. Next quarter they will report big gains, just in time for campaign season and the run-up to the election.
Weather and botched implementation of the healthcare exchanges probably also played a part in that. But for whatever reason seems the increase in healthcare spending will be delayed for a quarter or two.
I'm talking only about consumer borrowing and spending prior to the crisis in 2008. My understanding is that, five years after the crisis, consumer spending still has not recovered[1], which seems rather unusual for an economic recovery.
PS. I modified the first paragraph in my comment above, so it more clearly conveys what I actually meant to say.
> Are we really that surprised that many consumers are reluctant to borrow and/or spend like before the crisis?
Seems like it is not only unsurprising but also a good thing. People learned a lesson about spending recklessly and some economists would claim that's a bad thing. GDP is not the end-all, be-all metric of a healthy economy.
edit: "Personal Savings in the United States increased to 4 percent in April of 2014 from 3.60 percent in March of 2014. Personal Savings in the United States averaged 6.82 Percent from 1959 until 2014, reaching an all time high of 14.60 Percent in May of 1975 and a record low of 0.80 Percent in April of 2005."
A lower Personal Savings Rate doesn't necessarily mean people are "consuming" more. At least not the sort of consumption (discretionary) that many businesses care about. It simply means that the proportion of income being saved is lower. There are any number of factors that could make that true, including rising costs of living or inflation vis-a-vis stagnant income.
In fact, while personal income has been increasing at a low nominal rate, the income growth rate has been slowing.
Yes, PSR accounts for "disposable" income, but that term can be somewhat misleading. Disposable income is simply the net of income minus taxes. It's not necessarily available for discretionary purposes.
I would say that savings rates are quite low right now, and that accordingly, nominal consumption is high. But there's a deeper story there.
For example, let's say I made $50k in 2008 after taxes and saved 10% ($5k), and in 2013 I made $55k but saved only 5% ($2.75k). My PSR went down by 50%. The first conclusion one might draw is that I decided to spend more and save less, i.e. I've become more consumptive. An alternate explanation, however, is that inflation has outpaced my income growth, and I am in fact consuming the same amount (in terms of measurable benefits), but it costs me more to consume that same amount. So using the example above, where my income grew 10%, it is also possible that cumulative inflation over the same period was about 16%, outpacing my income growth enough to reduce my savings rate without any qualitative increase in my consumption.
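Here is that hypothetical worked in code (note that 5% of $55k is $2,750, which puts the break-even inflation near 16%):

```python
# Hypothetical figures from the savings-rate example above.
income_2008, psr_2008 = 50_000, 0.10
income_2013, psr_2013 = 55_000, 0.05

consumed_2008 = income_2008 * (1 - psr_2008)   # 45,000
consumed_2013 = income_2013 * (1 - psr_2013)   # 52,250

# Cumulative inflation that would make the two consumption bundles
# identical in real terms, i.e. the PSR drop reflects prices, not behavior:
breakeven_inflation = consumed_2013 / consumed_2008 - 1
print(f"{breakeven_inflation:.1%}")
```

Any measured inflation at or above that break-even level is consistent with unchanged real consumption despite the halved savings rate.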
Inflation inflates income dollars and cost dollars, because they're both made of dollars.
Are you talking about some kind of consumption that isn't measured in dollars? I'm not talking about the amount of things that people are consuming, I'm talking about the percentage of their income that they are spending on consumption. Whether that buys a can of peas one year and a yacht the next makes very little difference. If during that first year, everybody bought a can of peas, and in the second, everybody bought a yacht, consumption hasn't increased.
In other words, if your savings rate had to drop because you make less money and things are more expensive, your consumption (as a share of income) has gone up.
edit: I'm not trying to make a statement about Americans being wild spenders; I'm trying to make a statement about the possibility of a consumption driven recovery. Americans are very close to the limit of the ability that they have to spend more.
Nominal dollar values are but one way to measure consumption.
The original comment I was elaborating on was talking about other factors that can impact PSR without a change in consumption patterns, as measured by what goods and services are received rather than the nominal value spent on them. One important and very real factor is that nominal income for most workers has not grown as fast as inflation. On a macro level you're right, but at the individual level it's that mismatch that matters.
I wasn't talking about measuring consumption in dollars but in value received. If I buy a can of peas this year for $1 and a can of peas next year for $1.10 (assuming no factors specific to the peas or can market), then I have consumed the same amount in terms of the benefit I get. You're also right that I've consumed more in terms of nominal dollar values, but nominal dollar values wasn't the measurement stick I was using when I said they consume the same amount.
It's not that there's something left after saving and spending; it's that there are different types of spending. PSR is a useful starting point, but not a complete story, in assessing consumer health and markets.
For instance, it is not incongruous right now to say that a) people are saving proportionately less, and b) people are not buying as much stuff. Obviously they are spending in direct proportion to what they're not saving -- but the categories of their spending matter a great deal.
Tl;dr is I'm not disagreeing with you at all; I'm just expanding.
The main thriving product of the 21st century American economy is Stock of publicly traded US companies. The stock market is largely decoupled from economic reality. As long as the total market cap of all US companies is going up (which was the case in Q1), there is a "recovery". The settlement between this recovery and the real economic decline will only occur around 2040, when US dollar is debased and finally ceases to serve as the world reserve currency. Feel free to downvote.
The decline is seasonally adjusted vs Q4 2013. Since both periods are after the recession, your TL;DR makes no sense and is a misleading interpretation.
The most interesting component of this was the healthcare drag in the revisions taking it down from the initial ones.
While a large element of this was the Affordable Care Act, it is also notable that healthcare inflation went negative last quarter for the first time in 20-odd years (education did too, but that's another story).
Given that the US currently spends twice as much as other countries on health care as a % of GDP (14%) for similar health outcomes, and that individuals are being significantly disincentivized from spending on healthcare via subsidies and high deductibles, I think we will see healthcare spending increasingly normalize to under 10% of GDP, a huge drag on GDP.
This will be further accelerated by smartphone and biometric proliferation allowing for easier and more reliable diagnosis and treatment choice.
The root of the problem is that GDP doesn't measure changes in capital stock. For example, after Fukushima, Japan's GDP ticked up slightly because of the expenditures from disaster recovery. Obviously it's absurd to conclude that Fukushima was good for the economy--the loss of capital more than outweighed the money spent fixing things. The same principle has many different implications. For example, if we reduced air pollution, expenditures on treating lung cancer would go down, hurting GDP. Countries that are heavily based on oil or mineral exports have artificially inflated GDP, because the measure excludes the decline in the value of reserves.
Normally, I make the same point[1], but I think you're falling into the opposite mistake here: if you're using GDP as a proxy for economic activity, regardless of cause, then it is correctly capturing the insight that "people started doing more stuff as a result of the disaster [though they may be poorer]". This (usefully) distinguishes it from e.g. "people were impeded from rebuilding, and so continued on as before".
The problem you allude to, rather, is that:
1) People use GDP as a proxy for some kind of "economic goodness" (not simply "activity").
2) The two concepts generally correlate.
3) But there are known cases where the two diverge -- when economic badness happens, and yet GDP goes up.
4) But people still call it good despite the recognition of the special case.
[1] I think the appropriate way to economically characterize window-breaking/repair scenarios is: "It used to be hard to identify good uses of scarce resources; now that vital stuff got destroyed, it's easy." See: http://blog.tyrannyofthemouse.com/2011/12/broken-windows-par...
But getting a true measure of the ability of the US economy to create a comfortable life for as many people as possible also needs to take a very hard look at military spending and the share of the manufacturing economy that goes into actual and potential destruction of capital. Are those police rifles and APCs "economic output?"
When the Soviet Empire was stripped of an outsize military and internal security apparatus, what was left was a GDP the size of Italy.
So it cuts both ways: Health care deflation is probably good, and calls for a higher quality of life number. But that monstrous security state we are dragging around also should get moved to the negative column.
Interesting point regarding the parable, thank you for sharing.
If GDP did include stock, would that really be accurate? I'm not an expert, but isn't stock priced on the future value of something (i.e. dependent on future sales forecasts)? Since GDP is backward-looking, I don't think it should be counted. Stock could instead be used as a predictor of future GDP, since the stock price is trying to put a value on future sales.
Also, wouldn't that double-count certain sales? I.e., each iPhone sold would increase both goods sold and the company's stock price.
In this context, "capital stock" just refers to the accumulated wealth of the country, not shares of a corporation. It's the value of existing bridges, oil reserves, factories, etc. In the sense I'm using it, I'd even include human health (i.e. human capital).
>Obviously it's absurd to conclude that Fukushima was good for the economy--the loss of capital more than outweighed the money spent fixing things.
That depends on your definition of "economy," doesn't it? When I picture an economy, I picture goods and services being consumed and money changing hands. That's what GDP measures. Should money locked up as capital be counted as part of the economy?
Unfortunately the ACA doesn't do much to address increasing healthcare costs. There are a few initiatives (comparative medicine funding is one that comes to mind) that are good starts, but the ACA is likely to lead to overall higher GDP expenditures on healthcare in the US, not lower, as more and more uninsured have access to care.
Your analysis assumes healthcare is like any other commodity.
While the primary aim of the ACA is not overall cost reduction (but, rather, make it affordable for the uninsured to get insurance), there are several reasons it could go down that path.
The marketplaces and minimum standards for insurance plans make it easier to comparison shop.
The fact that people will have insurance will lead to earlier and cheaper treatments for potentially serious disease.
Businesses can get out of the insurance business altogether. As businesses remove themselves from the equation individuals will be more attuned to the cost of their insurance plans and shop accordingly.
There are some direct provisions for paying providers for disease prevention and for effective treatments that keep patients from having to return for the same issue, rather than paying per treatment, which reverses that twisted financial incentive.
Granted, there are plenty of ways for these potential cost benefits to go sour but there is a good chance that the ACA will actually reduce costs relative to what they would have been without the ACA.
Check out the Oregon Medicaid study and you'll find evidence that counters your claim that access to healthcare improves health outcomes: early detection and treatment did not lead to measurably better outcomes. [1]
It's a pretty impressive study. They took a number of uninsured in Oregon and held a lottery. Half of the people got free access to Medicaid and the other half remained uninsured.
The study found that costs go up (because people now have health insurance), but people weren't better off in terms of health (except for those with depression).
Can anybody who understands this stuff better than I do comment on the winter weather factor?
To me, the "winter sucked" excuse sounds like complete BS, a transparent lie along the lines of, "Well, I just didn't want to order the tide to stop right now." This is a massive change (down a total of 7% or so from two quarters prior) and surely the weather, while unusual, wasn't that strong.
But maybe it really is a big factor. I'm far from an expert. Can anyone comment on whether the weather (heh) reasoning is sane?
A quarter is roughly 91 days, about 65 of which are weekdays, and that is when most economic activity happens.
Suppose that bad weather costs you 30% of economic activity in half the country for just one of those weekdays. That works out to losing 0.23% of the quarter's economic activity. But we use a quarter to estimate annual growth, so that now looks like losing 0.92% of annualized growth, from one day of disruption over part of the country.
As you can see, a blip from a bad winter storm on the East Coast really can make the economy look like it is headed in a much worse direction than it is for a quarter.
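That back-of-the-envelope arithmetic, spelled out:

```python
# One bad-weather day hitting half the country, per the estimate above.
weekdays_per_quarter = 65
activity_lost = 0.30     # fraction of that day's output lost
region_share = 0.50      # fraction of the country affected
days_disrupted = 1

quarterly_hit = activity_lost * region_share * days_disrupted / weekdays_per_quarter
annualized_hit = quarterly_hit * 4   # simple annualization, as in the comment

print(f"quarterly: {quarterly_hit:.2%}, annualized: {annualized_hit:.2%}")
```

The 4x annualization is what turns a 0.23% quarterly disruption into a headline-sized 0.92% hit to the reported growth rate.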
- Damage due to harsh weather causing a net drag on businesses, causing them to be more conservative with spending.
- All of the store, school, etc closures due to inclement weather.
- Even without store closures, many people are less likely to leave their homes (i.e. spend money) when the weather is bad, even if the weather isn't 'snow day' bad.
I feel that it's probably unlikely to be entirely due to the weather, but I don't think that we can discount the effects a harsh winter can have.
It's not intuitive, but since building sites must halt work on days of severe winter weather, a large increase in the number of 'snow days' has an outsize effect on the total output of the construction sector (one of the largest sectors of the US economy) during Q1.
This past winter was peculiar in that regard because not only were there more snowstorms overall than the previous year, but there were also major snowstorms in regions that seldom see that kind of weather, like the South. And it shows in the figures for residential and commercial building during the quarter.
These seasonalities, however, are well understood and don't matter nearly as much, in part because construction companies often catch up on their delayed projects during the subsequent quarter. This is one of the reasons why reading too much into annualized quarterly fluctuations can be misleading at times [1].
Interesting point about catching up on work in the subsequent quarter. This sort of thing is why I thought the impact of weather was overstated, but I thought that the catchup would have happened during the same quarter. If it takes just a little bit longer then that would skew things a lot.
Also, I love how I can ask this question and get a ton of quality answers in quick succession.
There are a few adjustments made to the data to give a better impression of what is going on. For example, "real" data attempt to minimize the distortions that come from changes in prices, while the raw "nominal" data simply measure economic activity at current prices. Another adjustment is for the seasonality of the data. Here's a graph of a few retail data series, where the not seasonally adjusted series appear alongside the seasonally adjusted data. [1]
Here you can take a look at the most recent release of the Employment Situation.[2] It states that the survey of businesses indicated 217,000 more people working. Now, look at the table of the data.[3] You have
May: 139,192 thous
Apr: 138,272 thous
=> Surveys of businesses therefore indicated that 920 thousand more people were working!
So, what's going on? No one cares about reporting the seasonal cycles in employment, because that isn't news that helps you understand economic trends. So people actually look at the seasonally adjusted data.
May: 138,463 thous
Apr: 138,246 thous
=> 217 thousand more people
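The arithmetic behind both headline numbers is just month-over-month differences on the two series:

```python
# Payroll employment from the BLS tables cited above, in thousands of workers.
nsa = {"Apr": 138_272, "May": 139_192}  # not seasonally adjusted
sa  = {"Apr": 138_246, "May": 138_463}  # seasonally adjusted

print(nsa["May"] - nsa["Apr"])  # 920  (raw change -- mostly seasonal hiring)
print(sa["May"] - sa["Apr"])    # 217  (the headline "+217,000 jobs" number)
```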
In order to come up with the seasonal factors, the Census (and BLS and BEA) use an autoregressive integrated moving average (ARIMA) model. Back to your original question: what does it mean to say that winter sucked? It means that people's prosperity was lowered by the winter, just like it always is, but while the seasonal factors usually remove that effect, this time some of it remains in the seasonally adjusted data, because the seasonal factors are derived from past data with less severe winters.
Really it comes down to the fact that a single score is inappropriate for an understanding of the economy. GDP measures GDP, and something else with a different methodology would measure something else. You can find the fallacy acted out online with sites that purport to improve the data with adjustments without explaining the tradeoffs that come with adjustments by definition.
Real interest rates are negative and credit is tight. Credit has been tight for some time, but there is some debate as to when the real interest rate went below zero. These two factors will cause the US economy to (eventually) shrink, because ~65% of its component parts reflect economic activity based on consumption. The economy cannot expand because dollars that have historically gone into consumption are instead being invested in Treasuries. This has the effect of keeping yields on Treasury notes down, which pushes real interest rates below zero. A negative real interest rate environment implies investors are willing to take a small loss if it means they can avoid bigger losses on other investments.
Everything and nothing. Nothing to do with computers, everything to do with hype and name recognition.
Besides, computers are just tools. A means to an end. To the hammer designer, is a discussion about home building relevant? The economics of home building? The economics of people buying homes? The psychology of homebuyers? Is psychology relevant to hammer design? Maybe. Relevance is complicated.
"Recession-like. It is astounding that we are using any variation of that word five years into the recovery."
No, it's not. There is little to no recovery. The only surprising thing about analysis like this is how far the author goes to avoid explicitly stating this when it's so implicit in the article.
Seems to me the idea of declaring a shrinking economy in terms of dollars doesn't make sense. What if the average price of things is dropping (e.g. healthcare)?
To me it seems the goal should just be: how much stuff did we make? If that number went up, then good (except corn).
>Seems to me the idea of declaring a shrinking economy in terms of dollars doesn't make sense. What if the average price of things is dropping (e.g. healthcare)?
We know what inflation is, and that the price of healthcare is not dropping.
>To me it seems the goal should just be: how much stuff did we make? If that number went up, then good (except corn).
Do you mean the total creation of goods in the US? To compare apples to oranges (literally), we compare their dollar values and call the sum GDP. Comparing the 'count' of things wouldn't be as useful, since we could make a whole bunch of things that no one wants and call it productivity.
Math pedant to the rescue! Rates actually combine multiplicatively -- losing 3% and then 4% does not lose 7% (though it's close). What if the economy lost 1% each day for a year? It would not lose 365%.
If we assume the annualized rate, the economy would be 97.1% of its current size in a year. The fourth root is 99.267%, so the economy shrank about 0.733% this quarter. If it does so for four consecutive quarters, it will decrease by 2.9%.
I know that's what you said in the first place, but when extrapolating further into the future, it makes a difference.
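Both points are easy to verify numerically; a quick sketch using the figures from the comment above:

```python
# De-annualizing: an annualized -2.9% implies a quarterly factor equal to
# the fourth root of the annual factor, not simply -2.9% / 4.
annual_factor = 0.971                    # economy at 97.1% of its size after a year
quarterly_factor = annual_factor ** 0.25
print(f"quarterly shrink: {1 - quarterly_factor:.3%}")  # ~0.733%

# The multiplicative point: losing 1% a day for a year does not lose 365%.
yearly_loss = 1 - 0.99 ** 365
print(f"1% daily for a year loses {yearly_loss:.1%}")   # ~97.4%, not 365%
```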
If you calculate 1-(1-x)^(1/4), the difference between the true answer and x/4 is approximately 3 * x^2/32. For a figure of y percent, this is a relative error of 0.375 * y percent. Ignore it when y is small.
The GDP figure is only known to something like 1 or 2 decimal places.
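A quick numerical check of that approximation, plugging in the x = 2.9% figure from the thread:

```python
# Gap between the exact de-annualized rate 1-(1-x)^(1/4) and the naive x/4
# is approximately 3*x^2/32, i.e. a relative error of about 0.375*x.
x = 0.029
exact = 1 - (1 - x) ** 0.25
naive = x / 4

print(exact - naive, 3 * x**2 / 32)        # both ~8e-5
print((exact - naive) / naive, 0.375 * x)  # relative error ~1.1%
```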
I remember idle moments during soccer games I played as a kid. I would sometimes think to myself "OK, we're up 1-0. If the rest of the game goes at this rate, we will definitely win." The problem with that reasoning is the word "if".
"If every quarter this year was the same as this one, it would be a 2.9% decline for the year."
sp332 uses the ~= symbol for "approximately equal" because technically you can't just divide the percentage by four like that. 2.9%/4 = 0.725% (exactly), but four quarters at a 0.725% shrink each would produce 100% - ((100% - 0.725%)^4) for the year, which is a 2.869...% decrease, not exactly 2.9%. Still, for small percentages compounded a small number of times it is a decent approximation -- in this case the error is thoroughly dominated by measurement inaccuracies and noise.
Anyway, for the reasons given above, it's indistinguishable. If the economy were changing rapidly enough for the compounding to matter, we'd probably all have better things to do than argue about doing it correctly...
But it is worth reminding people that you can't do that in general, because it is easy to forget. See also the Rule of 72: http://en.wikipedia.org/wiki/Rule_of_72
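To make both points concrete (the 2.9% figure is from the thread; the Rule of 72 comparison is just illustrative):

```python
import math

# Compounding the de-annualized 0.725% over four quarters gives ~2.87%, not 2.9%.
q = 0.029 / 4
annual_loss = 1 - (1 - q) ** 4
print(f"{annual_loss:.3%}")  # ~2.869%

# Rule of 72: halving time is roughly 72 / (rate in percent). The rule is only
# a rough approximation, and it drifts at low rates like 2.9%.
exact_halving = math.log(0.5) / math.log(1 - 0.029)  # years to halve at -2.9%/yr
print(exact_halving, 72 / 2.9)                       # ~23.6 vs ~24.8
```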
If the economy shrinks by 0.05% on the Sunday before Thanksgiving, you could annualize that by multiplying by 365 and get an 18.25% shrink in the size of the American economy on that day.
It would be an extremely false statement to write, "The U.S. Economy contracted by 18.25% on Sunday before Thanksgiving", which is how our title reads. ("The economy shrank almost 3 percent in Q1").
The number 3% doesn't appear anywhere, just as the number 18.25% doesn't appear anywhere. sp332's very good point is that it should include the word "annualized" or read "0.7%". A huge difference!
Meaning that if the rate of decline experienced in Q1 continued throughout the year, then the annual shrinkage would be about 3%. 1.0074^4 is about 1.03, so the actual shrinkage for the quarter in question was 0.74%.
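The annualization works the same way in both directions, using the numbers from this subthread:

```python
# A one-day -0.05%, naively annualized by multiplying by 365:
daily_drop = 0.0005
print(daily_drop * 365)          # 0.1825 -> the scary "18.25%" figure

# And the Q1 number: a quarterly rate of ~0.74% compounds to the ~3% headline.
quarterly_rate = 0.0074
print((1 + quarterly_rate) ** 4 - 1)  # ~0.030
```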
Prime-age employment: http://data.bls.gov/timeseries/LNS12300060
Median wealth, and the median wealth of various segments: http://finance.yahoo.com/blogs/daily-ticker/for-most-familie...
Median income: http://advisorperspectives.com/dshort/updates/Median-Househo...
Wealthy people cannot drive consumption, because they spend a far smaller part of their income on it.
edit: I intentionally use the prime age employment rate (as is convention of late) in order to avoid objections that the low employment rate is the fault of a large segment of baby boomers retiring.
The general numbers look worse, not better: http://data.bls.gov/timeseries/LNS12300000