There is. When the Fed buys bonds they get the coupon payments for those bonds, and they can wipe that money off the books if they want. They can also sell the underlying bond if they want to remove the cash from the economy immediately. And, since they got a good deal by buying when the market was down, they can afford to sell a little below what the same bond would currently trade at, and fixed-income investors will line up around the block to buy.
> since they got a good deal by buying when the market was down
Assuming the price goes back up later, you mean. But if that were guaranteed then the price wouldn't have been down much in the first place. It could stay down, or even go lower, in which case the Fed will lose money when they go to sell.
Denver has these, you can see them on Larimer and Market street in LoDo. Around the 1890s, the locals dug out a series of tunnels through the city. The snows would come in and people would just go underground for a few days for business. There was a bar that closed about a year ago called the Blake Street Vault—it used to be a bank—and if you asked they would take you into the basement to see the vault and the dumbwaiter. You can see down where they plastered over some of the wall, it used to have a teller window right there open to the tunnels for customers.
Supposedly, you could go from Union Station all the way to the capitol building underground (but I doubt that).
I’m sure most of the tunnels aren’t passable anymore: collapsed, filled in, or flooded. But I seriously want to go down and try to map out some of them so they could be restored, like they did in Seattle.
I've got some friends in Minneapolis who, from November to about May, don't leave the skyway system. Seems similar, but it connects buildings at the second story instead of underground.
We have these old glass grids in the sidewalks along College Ave up here in Fort Collins as well. Some are even still in use by restaurants to load inventory from the Sysco/Shamrock trucks.
Snow also scatters light a lot, if it's not too dirty. One example I found: it takes about 8 cm (3 inches) of snow to cut the light irradiance in half.
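If you treat that as simple exponential (Beer–Lambert style) attenuation — an assumption on my part, with the 8 cm halving depth taken from that one example — the transmitted fraction at any depth falls out directly:

```python
def transmitted_fraction(depth_cm, halving_depth_cm=8.0):
    """Fraction of light remaining after `depth_cm` of snow, assuming
    exponential attenuation with a fixed halving depth (hypothetical model)."""
    return 0.5 ** (depth_cm / halving_depth_cm)
```

Under that model, 24 cm of snow already blocks 87.5% of the light, so the prism tiles would not have bought much during a real storm.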
They weren't clearing the sidewalks to let light through, they were (and we are) clearing the sidewalks so that people could walk on them! Anyway, with or without prism tiles.
I was in the same boat in 2014. I went a more traditional route by getting a degree in statistics and doing as much machine learning as my professors could stand (they went from groaning about machine learning to downright giddy over those two years). I worked as a data scientist for an oil-and-gas firm, and now work as a machine learning engineer (same thing, basically) for a defense contractor.
I’ve seen some really bad machine learning work in my short career. Don’t listen to the people saying “ignore the theory,” because the worst machine learning people say that and they know enough deep learning to build a model but can’t get good results. I’m also unimpressed with Fast AI for the reasons some other people mentioned, they just wrapped PyTorch. But also don’t read a theory book cover-to-cover before you write some code, that won’t help either. You won’t remember the bias-variance trade-off or Gini impurity or batch-norm or skip connections by the time you go to use them. Learn the software and the theory in tandem. I like to read about a new technique, get as much understanding as I think I can from reading, then try it out.
If I had to do it all over again, I would:
1. Get a solid foundation in linear algebra. A lot of machine learning can be formulated as a series of matrix operations, and sometimes it makes more sense to think of it that way. I thought Coding the Matrix was pretty good, especially the first few chapters.
2. Read up on some basic optimization. Much of the time it makes sense to formulate the algorithm as an optimization problem. Usually you want to minimize some loss function, and that's simple, but regularization terms make things tricky. It's also helpful to learn why you would regularize in the first place.
3. Learn a little bit of probability. The further you go the more helpful it will be when you want to run simulations or something like that. Jaynes has a good book but I wouldn’t say it’s elementary.
4. Learn the common statistical distributions: Gaussian, Poisson, Exponential, and Beta are the big ones I see a lot. You don't have to memorize the formulas (I still look them up), but know when to use them.
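Points 1 and 2 meet in just a few lines of NumPy. Here's ridge-regularized least squares, L(w) = ||Xw - y||^2 + lambda*||w||^2, minimized with plain gradient descent — the data is synthetic, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))          # 50 samples, 3 features (made up)
y = X @ np.array([1.0, 2.0, 3.0])     # targets generated from known weights

lam, lr = 0.1, 0.001                  # regularization strength, step size
w = np.zeros(3)
for _ in range(5000):
    # gradient of ||Xw - y||^2 + lam * ||w||^2 with respect to w
    grad = 2 * X.T @ (X @ w - y) + 2 * lam * w
    w -= lr * grad
```

The regularization term nudges the weights toward zero, which is exactly the "why would you regularize" question in point 2: you trade a little bias for stability when features are noisy or correlated.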
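And for point 4, the classic "when to use it" check: Poisson is the default for count data, and its signature is that the mean equals the variance. A quick simulated sanity check (the rate of 4 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
counts = rng.poisson(lam=4.0, size=100_000)  # e.g. events per hour

# For Poisson-distributed counts, sample mean and variance should agree.
print(counts.mean(), counts.var())
```

If your real count data shows variance well above its mean (overdispersion), that mismatch is the cue to reach for something like a negative binomial instead.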
While you’re learning this, play with linear regression and its variants: polynomial, lasso, logistic, etc. For tabular data, I always reach for the appropriate regression before I do anything more complicated. It’s straightforward, fast, you get to see what’s happening with the data (like what transformations you should perform or where you’re missing data), and it’s interpretable. It’s nice having some preliminary results to show and discuss while everyone else is struggling to get not-awful results from their neural networks.
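A minimal example of one of those variants: polynomial regression is just linear regression on expanded features, so NumPy's `polyfit` recovers the exact coefficients from noiseless data (the quadratic here is made up):

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 50)
y = 1.0 + 2.0 * x - 3.0 * x**2     # a known quadratic, no noise

coeffs = np.polyfit(x, y, deg=2)   # returns highest-degree coefficient first
```

With real data you'd see noise and misfit in the residuals, which is exactly the "see what's happening with the data" payoff described above.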
Then you can really get into the meat with machine learning. I’d start with tree-based models first. They’re more straightforward and forgiving than neural networks. You can explore how the complexity of your models affects the predictions and start to get a feel for hyperparameter optimization. Start with basic trees and then get into random forests in scikit-learn. Then explore gradient boosted trees with XGBoost. And you can get some really good results with trees. In my group, we rarely see neural networks outperform models built in XGBoost on tabular data.
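To see why trees are a good place to build intuition, here is the core of a decision-tree split written from scratch — Gini impurity plus a threshold scan — on a tiny made-up dataset:

```python
import numpy as np

def gini(labels):
    # Gini impurity: 1 minus the sum of squared class proportions
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(x, y):
    # Try each candidate threshold; keep the one with the lowest
    # size-weighted impurity of the two resulting children.
    best_t, best_score = None, float("inf")
    for t in np.unique(x)[:-1]:
        left, right = y[x <= t], y[x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

x = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])  # one toy feature
y = np.array([0, 0, 0, 1, 1, 1])                 # two cleanly separated classes
```

`best_split(x, y)` lands on the threshold at 3.0, which separates the classes perfectly (weighted impurity 0). Random forests and gradient boosting are layered on top of exactly this primitive.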
Most blog posts suck. Most papers are useless. I recommend Geron’s Hands-On Machine Learning.
Then I’d explore the wide world of neural networks. Start with Keras, which really emphasizes the model building in a friendly way, and then get going with PyTorch as you get comfortable debugging Keras. Attack some object classification problems with-and-without pretrained backends, then get into detection and NLP. Play with weight regularization, batch norm and group norm, different learning rates, etc. If you really want to get deep into things, learn some CUDA programming too.
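Before reaching for a framework, it's worth solving one toy problem by hand so you know what Keras and PyTorch are automating. A sketch: the classic XOR problem with a tiny two-layer network and hand-written backprop (architecture and hyperparameters chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])           # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer, 8 tanh units
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # sigmoid output layer
lr = 0.5
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                     # forward pass
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))     # predicted probabilities
    dz2 = (p - y) / len(X)                       # grad of cross-entropy at output
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dh = (dz2 @ W2.T) * (1.0 - h ** 2)           # backprop through tanh
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1               # gradient descent step
    W2 -= lr * dW2; b2 -= lr * db2
```

Every line here corresponds to something a framework does for you: the forward pass is your model definition, the gradient lines are autograd, and the update lines are the optimizer.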
I really like Chollet’s Deep Learning with Python.
After that, do what you want to do. Time series, graphical models, reinforcement learning— the field’s exploded beyond simple image classification. Good luck!
This is the correct progression IMHO. I can tell you’ve been in industry because it mimics my experiences.
Always start with a simple model and see how far you can get. Most of the improvement I’ve seen comes from “working the data” anyway: you’d be surprised how much model performance improves just from fixing the quality of the underlying data. Simple models also give you a baseline. What’s the point of reaching for neural networks if you don’t have a baseline performance metric to compare against? XGBoost is a godsend. It trains extremely quickly and is surprisingly difficult to beat in practice.
As you say, constantly sharpen your saw with regards to probability theory and mathematics in general. There is simply no way around this in the long run.
Excellent detailed advice! This is THE roadmap for ML study.
PS: While many of us may not have the time/resources for a graduate course, one can absolutely get the mandatory theoretical ideas from books/courses/videos/etc.
I've seen a couple comments about Arthur Andersen, Enron, and Accenture in this thread and I just want to detail everything that went down in that situation.
Full Disclosure: I recently left a position with one of Accenture's subsidiaries. I also hate that company.
The Supreme Court actually vacated Arthur Andersen's conviction for their actions in Enron. The SC basically said the jury instructions were too vague, and the jury could have believed that Arthur Andersen thought they did everything right and legally but still voted to convict. Arthur Andersen was never retried since there wasn't much left at that point anyway.
Arthur Andersen Accounting (or whatever they formally called themselves) and Arthur Andersen Consulting had already broken up into two different companies. They set up a weird agreement where the more profitable of the two would pay a cut of the profit difference to the other company. Consulting is always more profitable than accounting (which is why all the Big 4 have gotten back into consulting, even in the age of Sarbanes-Oxley, when they can't audit a company they consult for), so Arthur Andersen Consulting had to send a big-ass check to the accountants every quarter, and they wanted to get out of that. The blow-up of Arthur Andersen Accounting presented the perfect opportunity. Arthur Andersen Consulting changed their name to Accenture to distance themselves from the Enron scandal, but it was mostly PR because they didn't have anything to do with Enron anyway.
McKinsey, however, had consultants all through Enron. I believe Skilling worked for McKinsey right out of school and threw them a ton of work. There's no way people at McKinsey didn't know what Enron was up to, but they somehow got off scot-free.
> Andersen Consulting changed their name to Accenture to distance themselves from the Enron scandal
The name change was in place before Enron went bankrupt, and was a condition of the settlement with Andersen Consulting. It was very fortunate timing indeed.
McKinsey wasn't their legal auditor, blessing their financial statements. That's like saying any management consultant or contractor at Enron at the time should be punished. That makes no sense.
McKinsey was far more than just a management consultant for Enron. They were deeply embedded in the decision making process and even if they didn't do anything outright criminal they certainly share moral responsibility for Enron's misdeeds.
You're both right and wrong. The auditor has professional ethical and legal obligations. Clearly that is a special case of being particularly culpable.
However, everyone who knew what was happening, and knew about the scam being run, should probably have been deemed complicit.
The author jumped from an experiment he did which suggests that our brains can't pull different objects from our memories at the same time to "There is no unconscious thought."
If anyone should have known about Enron, it was McKinsey. They were all through that place, Arthur Andersen just looked at the financial reports. AA collapsed and the government went after them, but the Supreme Court vacated AA's conviction on the grounds that the instructions given to the jury were so vague the jury could have believed AA thought they were doing everything right but still voted to convict. Since the firm was destroyed, no one bothered to retry the case.
And I don't know why you say nothing substantially changed: Sarbanes-Oxley came out of that, and accounting students are beaten over the head with the lessons of Enron and what changed afterward.
Plus, Arthur Andersen Consulting still exists as Accenture, but they were separate companies at that point. (Accenture was actually pretty psyched because they had nothing to do with Enron, but when AA accounting and AA Consulting broke up they had a deal where the more profitable business would send a cut of their profits to the other company. Since consulting is always more profitable, AA Consulting was looking for a way out of that deal when AA accounting imploded.)
As an aside, I've always wanted to read a book on the cultural origins of Frankenstein: the shift in climate, advances in medicine and science, the decline of alchemy, the treatment of women, etc. I've read a lot of different works, but I've never seen it all in one place.
Luigi Galvani investigated the connection between living creatures and electricity. Earlier scientists had observed that static electricity and lightning were the same phenomenon. Galvani made frog legs twitch by applying two dissimilar metals. He came to the conclusion that animals generate electricity conducted by the metals. Alessandro Volta believed the opposite causation: that the setup generated the electricity and the legs reacted. Volta invented the battery after experimenting with various elements as the metals. Galvani's nephew Giovanni Aldini carried on his work and even experimented on the body of a condemned prisoner.
I had always thought the rise of Frankenstein's monster was a hilariously nonsensical application of the literary device of extending scientific explanations of natural phenomena into magic (e.g. the endless potential of all things atomic in the 50s, electric disturbances in ghost movies, reversing the polarity, quantum mechanics, genetic engineering, endless applications for graphene). We all know that lightning kills and injures. But it's not such a random jump from hearing of a corpse flinching, making a fist, and opening an eye (Aldini) to writing a horror story about a crazed scientist reversing death.
Funnily enough it turns out that all living things do generate electricity through pumping out sodium ions and pumping in potassium ions, creating a voltage across their cell membranes. But nowhere near enough to be observable on a macro scale.
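The size of that voltage is given by the Nernst equation. As a rough illustration (using typical mammalian potassium concentrations, which are textbook values rather than anything from this thread), the potassium equilibrium potential at body temperature works out to tens of millivolts:

```python
import math

R = 8.314      # gas constant, J/(mol*K)
T = 310.0      # body temperature, K
F = 96485.0    # Faraday constant, C/mol
z = 1          # charge of the K+ ion

K_out, K_in = 5.0, 140.0   # mM, typical extracellular / intracellular K+
E_K = (R * T / (z * F)) * math.log(K_out / K_in)   # volts
print(f"{E_K * 1000:.1f} mV")
```

About -89 mV: real, measurable with an electrode, and indeed nowhere near enough to notice on a macro scale.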
Why Nations Fail by Acemoglu and Robinson
Poor Economics by Banerjee and Duflo
The first deals with why nations stay poor; the second covers how the poor in the third world actually live and how to help them.