
I'm sorry, but this is a bit ridiculous.

There are dozens of completely arbitrary assumptions, and nobody could possibly know whether any of them are anything close to right.

> If you disagree, write code.

Ok..

"With a Basic Job, you can’t - your choices are a Basic Job for $14480 or possibly a McJob for marginally more (say $15000). I’ll model this by assuming the number of non workers will increase by up to 5%, or decrease by up to 20%:"

I disagree; I made my model assuming that the number of non-workers increases by between 60% and 80%.

Oh look! My results show that basic income is better! Great, that's what I believed beforehand, so my model must be good.

edit: I'm all for mathematical modelling, but there needs to be some kind of basis for the assumptions, and it helps if there are fewer than a bajillion assumptions too. Anyone could tweak all the assumptions so they still sound reasonable and come up with any conclusion they wanted from this model.
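
To make that concrete, here's a toy sketch (every number below is made up for illustration and has nothing to do with the article's actual parameters) of how the assumed range alone drives the answer:

  import numpy as np

  rng = np.random.default_rng(0)

  def mean_cost(low, high, runs=10_000):
      # Cost driven entirely by the assumed change in the number of non-workers
      non_workers = 86e6 * (1 + rng.uniform(low, high, runs))  # made-up baseline
      return (non_workers * 15_000).mean()                     # made-up per-person cost

  print(mean_cost(-0.20, 0.05))  # the article's assumed range
  print(mean_cost(0.60, 0.80))   # the 60-80% range I picked above

Same toy model, different "reasonable-sounding" range, and the headline number nearly doubles.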



The purpose of doing a simulation like this is to bring the discussion down to more concrete levels.

E.g. "I’ll model this by assuming the number of non workers will increase by up to 5%, or decrease by up to 20%: I disagree, I made my model assuming that the number of non-workers increases by between 60 and 80%."

Okay, now instead of arguing about basic income in the abstract, you're arguing about projections of how much the number of non-workers changes. The latter is a lot more concrete and easier to analyze further.


No. Making up a model is an absurd way to predict anything, because there is no validation. Useful models have been shown to be predictive over time (except when they aren't, and then someone amends the model). "I'll model this by assuming..." is only fooling yourself through confirmation bias. A new model can tell you anything you want it to tell you... An accurate model is the only useful one, and there is rarely any basis for accurate macroeconomic models until decades after a change happens.

Statistics don't come after "lies and damned lies" by accident. A naive model of liquid cooling ends in a fatal explosion and the discovery of phase change.


I think models of this type are a useful way of "thinking things through" further than you'd be able to do in your head, and a push to make sure you're being concrete about what you are and are not considering. It's not "doing science" and it's not meant to give reliable predictions - there are too many degrees of freedom; it's meant to help with reasoning. It would follow, though, that "The simulation says X, therefore we should Y" is an inappropriate use of it.


Right. The "doing science" thing is a red herring. It's not claiming to generate reliable conclusions because it is scientific. It's claiming to generate insightful analysis because it is systematic.


Very well said.


Compulsive gamblers and tarot card readers are also systematic. Thinking that systematic implies insightful is deeply flawed.

(Disappointed by the downvotes... I'm not trying to be pithy.)


That it is systematic implies you can use it to generate insights into things that are causally linked. In the case of tarot cards, that's maybe a little about the traditions it grew out of but not a lot about the present or future. In this case, it's about the reasoning and assumptions you are using in trying to work out whether the policy is a good idea. This is only indirectly tied to the question of whether the policy is in fact a good idea. Insights about that need support of, y'know, evidence. But this kind of thing can totally be useful for understanding what questions need to be asked.

Incidentally, I am not convinced that tarot cards cannot be useful at generating insights - privileging random hypotheses and reconsidering your situation could be a useful means of reducing the impact of hypotheses you're privileging for other reasons. Everything must still be evaluated in light of actual evidence, though, of course, before conclusions are drawn.


> That it is systematic implies you can use it to generate insights into things that are causally linked.

Only if the model is accurate. If (as in the case of tarot reading and failed gambling) you have a system that doesn't correspond to the world, you will draw conclusions that are detailed and reproducible but not insightful about the world. Inaccurate models are only misleading, not vaguely insightful.


Systematic interaction with a model can be used to learn things about the model whether or not the model is accurate. Otherwise we are in agreement, I think.


It works for the IPCC, so it can work for this guy too. If your model doesn't predict anything, just make a new model predicting even worse things and go get more funding.


Having once looked at an IPCC report, I can tell you they incorporate a huge amount of data. Whether they are treating that data correctly, and whether the process is warping the conclusions in inappropriate ways, are separate questions. In this case, there is no testing of the model at all.


In addition, after making the 60%-80% adjustment, if you still think the results look wrong, you now know that the model itself may require tweaking or redesign.

I really don't understand all the negativity here.


Because this is just mathematical cargo-cult scientism. If the basis for deciding "the model itself may require tweaking or redesign" is that "you still think the results look wrong", you're not doing science.


It's not scientism. You're simply jumping to an invalid conclusion. Stucchio could not have been clearer that simple Python Monte Carlo models were tools for discussion, not magic oracles.


Tools for discussion about what? Python coding? Monte Carlo methods?


Unrealistic results are an indicator that something might be wrong with the model design or model choice, and that deserves investigating. Why is that such a bad thing?

If you're modeling investment growth and your model is:

  future_value = investment_amt * (interest_rate ** num_years)
or

  future_value = investment_amt * (1 + (interest_rate * num_years))
you might realize after running it a few times and seeing incorrect results that the model is flawed (first case) or inappropriate (second case) and fix it to:

  future_value = investment_amt * ((1 + interest_rate) ** num_years)
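
A quick sanity check with made-up inputs makes the difference obvious:

  # Made-up inputs, just to compare the three formulas
  investment_amt, interest_rate, num_years = 1000.0, 0.05, 10

  flawed   = investment_amt * (interest_rate ** num_years)        # ~0.0: shrinks toward zero
  simple   = investment_amt * (1 + (interest_rate * num_years))   # 1500.0: simple interest only
  compound = investment_amt * ((1 + interest_rate) ** num_years)  # ~1628.89: compound growth

  print(flawed, simple, compound)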


You are doing science. The first part: forming a hypothesis.


You really don't understand the negativity? The OP offered a ridiculously simplistic model in an obnoxious and aggressive style and, oblivious to the gaps in his own knowledge, claimed that this analytic framework was the end-all in terms of analyzing the policy.

So, despite the fact that there was some trifling value to the post, he's really going to garner a lot of negativity.


Can you quote where he "claimed that this analytic framework was the end-all in terms of analyzing the policy"?


At the start:

> The basis of any informed discussion is a mathematical model.

And then towards the end:

> My conclusions are simply the logical result of my assumptions plus basic math - if I’m wrong, either Python is computing the wrong answer, I got really unlucky in all 32,768 simulation runs, or one of my assumptions is wrong.

> My assumption being wrong is the most likely possibility. Luckily, this is a problem that is solvable via code.

That sounds to me like him asserting that mathematical models, expressed in code, are the only reasonable means we have of producing knowledge. I don't think he's quite made that case.


For the proper definition of "analytic framework", see:

"shut the fuck up and write some fucking code"

This is explicitly an assertion that nothing important can be added to the debate that cannot be expressed in code.

On the flip side, he absolutely doesn't assert that the particular model is the end-all in terms of anything - he expresses a desire to see improvements.


I was contrasting using a mathematical model with simply repeating talking points and poorly-thought-out assertions (i.e., every discussion about BI that shows up on HN).

Obviously gathering data would be an incredibly valuable contribution to the discussion. I should have been clearer on that point; I guess I just thought it went without saying.


I can't speak for others in this thread, but I for one certainly didn't interpret anything you'd said as devaluing evidence. Obviously, data can be expressed as code...


Run the model with your altered assumption. The result might surprise you.

http://imgur.com/KwJIZoM

Even with what I consider to be a wildly unrealistic assumption, the Basic Job is still cheaper than the Basic Income in well over 50% of possible worlds.

I repeat: write some fucking code. Don't just say you did; actually do it.
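
The shape of the exercise is roughly the sketch below. The population and cost figures are placeholder round numbers rather than the model's real inputs, so treat the printed percentage as illustrative only; the point is that swapping in the 60-80% assumption is a one-line change.

  import numpy as np

  rng = np.random.default_rng(0)
  runs = 32_768                              # same run count as the original simulation

  # Placeholder round numbers -- swap in the real parameters from the model
  adults = 200e6
  current_non_workers = 80e6
  annual_payment = 14_480.0                  # the Basic Job wage quoted above
  overhead = rng.uniform(0, 7_500, runs)     # assumed per-job administrative cost

  # Basic Income: every adult receives the payment
  bi_cost = np.full(runs, adults * annual_payment)

  # Basic Job: paid only to non-workers, whose numbers rise 60-80% (the altered assumption)
  growth = rng.uniform(0.60, 0.80, runs)
  bj_cost = current_non_workers * (1 + growth) * (annual_payment + overhead)

  print(f"Basic Job cheaper in {(bj_cost < bi_cost).mean():.0%} of simulated worlds")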


All of your concerns are addressed in his article:

> The basis of any informed discussion is a mathematical model. The best way to think of a mathematical model is a way to force everyone to clearly enumerate all assumptions being made, and to accept all logical reasoning that follows from those assumptions. Given a model, everyone involved in a discussion can agree that either the conclusions of the model are correct, or one of the assumptions going into the model must be false. This is very important when disagreement is reached - disagreement in the conclusions implies disagreement with some assumption, so it makes sense to figure out which assumption is the cause of disagreement.

> It’s also important not to be blinded by a model. The involvement of numbers does not make an argument empirically correct - it simply makes it more understandable and less likely to be logically flawed.

He knows the limitations of mathematical modeling and is simply presenting a basic simulation of these two policies subject to various assumptions.


As I understand it, the recent economic turmoil in various parts of the world seems to have shed new light on the mathematical modelling of real economies. While the models might be a good discussion tool, forcing the parties to explicitly state various assumptions and ideas, they are essentially useless for predicting a real-world economy.

And what’s worse, they seem like something dependable. Isn’t it very hard to judge how much a model has to do with reality? A discussion about a model might be a perfectly scientific, constructive and relevant discussion about a theoretical world that has almost nothing to do with the real one.


No, it might not. Not unless you use the word "scientific" in some creative way.

The "real world" is a necessary component for the scientific method. By comparing model predictions to real-world experiments, it allows refuting and refining models and theories. How else would you evaluate theories, to separate the chaff from the grain?

The feedback from observation is absolutely vital.


Please don’t generalize my argument to the scientific method as such; I am just talking about economics. A real-world economy is a wildly nonlinear system, a bit like the weather. And there’s always something important you may fail to take into account, like the role of the private financial sector in the recent economic crisis. You may argue that we are now wiser and our models better, but the plain fact is that the degree of correspondence between our models and the real economic world is very poor. When you tweak a model of such a complex system, how do you scientifically know it’s better at predicting future behaviour? It’s madness to think that you could reasonably model what would happen after introducing a basic income.


I agree with you, but that changes nothing.

You're basically saying that economics is not a science, in the sense above. Which I agree with. It is madness.

This doesn't mean that things outside science cannot be useful -- just that they are not science.


He would do well to read Hayek's Nobel Prize Lecture, "The Pretence of Knowledge".

"it simply makes it more understandable and less likely to be logically flawed" -- yes to the first part, no to the second.


I think it does make it less likely to be logically flawed, in that you can better see what follows from your assumptions. It's like stronger typing or an additional test in your test suite. It's perfectly possible to still get wrong results, but it's less likely. Whether it's "less likely enough" is another, extremely important question.


But writing some fucking code gives a good basis for a more informed debate, since there could be a pool of models in existence which could then be refined. It also makes it a lot easier to test various assumptions and determine the sensitivity of various models. For example, take the number of non-workers: a more plausible assumption is that everyone earning less than minimum wage + n, where n is the cost of maintaining the job, would stop work immediately (see the sketch below). That's a step forward from both figures used so far.

A set of IPython notebooks on GitHub could inspire the inner economist in us all.
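
As a sketch of how that non-worker assumption could be dropped into such a notebook (the wage distribution and the job-maintenance cost below are made-up placeholders, not real data):

  import numpy as np

  rng = np.random.default_rng(0)

  # Made-up hourly wage distribution for the current workforce
  wages = rng.lognormal(mean=2.9, sigma=0.5, size=1_000_000)

  minimum_wage = 7.25
  job_cost = 2.00   # assumed hourly cost of maintaining the job (commuting, childcare, ...)

  # Proposed assumption: everyone earning less than minimum wage + n stops working
  would_stop = wages < (minimum_wage + job_cost)
  print(f"{would_stop.mean():.1%} of current workers stop working under this assumption")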


I think that is the main takeaway from this article: models are useless for gaining consensus because anyone can tweak assumptions or input variables and come up with their own conclusion.

I think the climate change debate and the securitized-credit crisis also taught me this lesson: people with models can persuade people into believing anything. It's just a new way to lie with statistics.



