Hacker News

In order to make this post a hacker-relevant discussion (I enjoyed the article, by the way) how about this quote:

"Everybody tries to 'game' the system on their route to vast personal fortunes - whether short-selling, packaging up dud mortgages as prime mortgages or telling lies about their financial viability - and the result is that the system is getting wise. The best course today in any financial transaction is to presume zero integrity. Credit is drying up and with it the very lifeblood of the economy."

When I read this, the first thing I thought of was the inevitable enhancement of credit-rating systems, at both personal and corporate levels, that will arise over the next few years as a result of all this.

If we're going to continue abandoning trust and integrity as ends in themselves, treating them only as features of self-serving transactions, then it seems natural that we will have to push our current credit-rating systems much further out to the (temporal and relational) edges than they currently reach.



I've been doing some thinking along the lines of using statistical mechanics and information theory to produce a self-consistent measure of risk (unlike the current rating systems). Does anyone know if there's been any work in the area?


What would it mean, in this context, for a measure of risk to be "self-consistent"?


At the first pass, it would mean that possibilities would add up correctly.

Here's one example: give every investment (stocks, bonds, cash, commodities, etc.) a distribution of possible returns. Then every holding company also has a distribution of possible returns, obtained by scaling and adding the distributions of the stocks it holds. So long as the market changes substantially faster than the holdings do, this is a self-consistent model of stocks, in that the total expected stock value is conserved. One can then examine the distribution of possible returns of a given investment and assess one's own risk position.
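A minimal sketch of the "scale and add" idea, with the return distributions represented by Monte Carlo samples (the two assets, their weights, and all distribution parameters are invented for illustration):

```python
import random
import statistics

random.seed(0)

# Hypothetical one-period return distributions for two assets,
# represented by Monte Carlo samples. Parameters are illustrative.
N = 100_000
stock_returns = [random.gauss(0.07, 0.20) for _ in range(N)]
bond_returns = [random.gauss(0.03, 0.05) for _ in range(N)]

# A holding company with 60/40 weights inherits a return distribution
# by scaling and adding the component samples pointwise.
weights = (0.6, 0.4)
portfolio = [weights[0] * s + weights[1] * b
             for s, b in zip(stock_returns, bond_returns)]

# Self-consistency check: the portfolio's expected return equals the
# weighted sum of the components' expected returns (nothing is created
# or destroyed by the aggregation).
lhs = statistics.mean(portfolio)
rhs = (weights[0] * statistics.mean(stock_returns)
       + weights[1] * statistics.mean(bond_returns))
assert abs(lhs - rhs) < 1e-9
```

The same pointwise-sample representation also gives you the full shape of the holding company's distribution (tails, quantiles), not just its mean, which is what you'd examine to assess risk.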

The problem with this is how limited it is. It does not, for example, account well for hedge fund pricing, since in hedge funds the holdings change nearly as fast as the market does. Nor does it account for the wiles of any particular fast-trading investor. It does, however, lend itself to a fairly simple model for static futures, forwards, options, and swaps.
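For instance, given a sampled return distribution for an underlying, a static European call can be valued as the discounted average payoff across the samples. This is only a sketch of the idea: the strike, rate, and distribution parameters are assumptions, and a real pricer would use a risk-neutral measure rather than the raw return distribution.

```python
import random
import statistics
from math import exp

random.seed(2)

# Illustrative assumptions, not from the thread.
spot, strike, rate = 100.0, 105.0, 0.03

# Sampled one-year return distribution for the underlying.
returns = [random.gauss(0.05, 0.20) for _ in range(100_000)]

# Discounted average call payoff over the samples
# (ignoring the risk-neutral adjustment a real pricer would apply).
payoffs = [max(spot * (1 + r) - strike, 0.0) for r in returns]
call_price = exp(-rate) * statistics.mean(payoffs)
```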

Why is this line of reasoning interesting? The credit systems we used to have couldn't handle the complexity of the derivatives being created. By one method or another, bad investments, especially mortgages, were being bundled together, and in the process the market found it difficult to track how risky the summed investments were. Credit rating agencies don't even make public what their ratings represent! An investor is then compelled to trust them blindly, which in this case failed spectacularly.


How does this concept differ from the stochastic-calculus-based implementations of VaR?
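For reference, the textbook parametric VaR that question alludes to reduces, under a normal-returns assumption, to a closed-form quantile of the P&L distribution. A sketch with invented position and volatility numbers:

```python
from math import sqrt
from statistics import NormalDist

def parametric_var(value, mu, sigma, horizon_days, confidence=0.99):
    """One-sided parametric VaR under a normal-returns assumption:
    the loss threshold exceeded with probability (1 - confidence).
    mu and sigma are annualized drift and volatility."""
    t = horizon_days / 252                    # trading-day year fraction
    z = NormalDist().inv_cdf(1 - confidence)  # e.g. about -2.33 at 99%
    worst_return = mu * t + sigma * sqrt(t) * z
    return -value * worst_return

# Illustrative: $1M position, 7% drift, 20% annual vol, 10-day 99% VaR.
var_10d = parametric_var(1_000_000, 0.07, 0.20, 10, 0.99)
```

The sampled-distribution approach from the parent comment generalizes this: instead of assuming normality, you'd read the same quantile directly off the aggregated samples.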


I don't know. Maybe it doesn't differ from it at all. I'm not a domain expert by any means...

Thanks for the tip.


I've been thinking of applying agent-based simulations as a new way to measure risk. That's a bit different from what you're thinking of, but I believe it would be self-consistent, at least as I understood the term.


Agent-based simulations would probably be an excellent approach. Unfortunately, the system is highly nonlinear and quite chaotic, so you'd have to put some work into engineering dissipation into the system to keep it from fluctuating wildly.
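A toy illustration of the dissipation point, assuming trend-chasing agents around a single price (all agent behavior and parameters are invented): the trend-following term is a positive feedback, and a mean-reversion term toward a fundamental value is the engineered dissipation that damps it.

```python
import random

random.seed(1)

# Toy agent-based market: each agent's demand chases the last return
# (positive feedback), plus idiosyncratic noise. A dissipation term
# (mean reversion toward a fundamental value) damps the feedback.
# All names and parameters are invented for illustration.
N_AGENTS = 100
FUNDAMENTAL = 100.0
TREND_GAIN = 0.9
DAMPING = 0.2        # set toward 0.0 and the fluctuations grow

price, prev_price = 100.0, 100.0
history = []
for step in range(500):
    last_return = (price - prev_price) / prev_price
    # Aggregate per-agent demand: trend-chasing plus noise...
    demand = sum(TREND_GAIN * last_return + random.gauss(0, 0.001)
                 for _ in range(N_AGENTS)) / N_AGENTS
    # ...pulled back toward the fundamental by the damping term.
    demand -= DAMPING * (price - FUNDAMENTAL) / FUNDAMENTAL
    prev_price, price = price, price * (1 + demand)
    history.append(price)
```

With the damping term active the simulated price stays near the fundamental; weakening it lets the trend feedback amplify noise, which is the "fluctuating wildly" failure mode above.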


Charles Stross's vision of a post-Singularity Net dominated by spam-bots, con-bots, and very convincing AIs with bridges in Alaska for sale seems more and more plausible.



