
So will Nature be comfortable with having scientists cross publishing on GitHub in the name of "Open Science"?


NFI FTI?


"For Their Information", i.e. I wasn't telling e40. (Yes, I am aware of the irony here).


FTI: FYI typo on QWERTY. NFI: Not found on the internetz?


Also known as “no (expletive) idea”


I think you're proving the point OP was making. In the past you found people by proximity, as, not so long ago, there was no online.


Bloke, surely?


I was heavily into reinforcement learning around the turn of the century, and at the time, "Reinforcement Learning - An Introduction" (Barto and Sutton) https://mitpress.mit.edu/books/reinforcement-learning was an absolute goldmine for getting started. I think parts of it are online somewhere, including all their pseudocode and solutions.



The complete first edition can be found here: http://incompleteideas.net/book/ebook/the-book.html

If you're interested in some well documented C++ implementations of the algorithms shown in the book, feel free to check out https://github.com/Svalorzen/AI-Toolbox. I started the project because when I was first reading the book I had no reference implementation to compare the book to, and personally I learn better with practical examples, so maybe it can help you too.
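For anyone who learns the same way, the book's tabular methods are short enough to sketch directly from the pseudocode. Here is a minimal Q-learning loop in Python on a hypothetical toy "corridor" MDP (the environment is my own invention for illustration, not from the book):

```python
import random

def q_learning(n_states=6, n_actions=2, episodes=2000,
               alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning on a toy corridor: action 0 moves left,
    action 1 moves right; reward 1 for reaching the rightmost state."""
    Q = [[0.0] * n_actions for _ in range(n_states)]

    def step(s, a):
        s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        done = (s2 == n_states - 1)
        return s2, (1.0 if done else 0.0), done

    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            # the Q-learning update from the book's TD-learning chapter
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

After training, the greedy policy at the start state should prefer moving right, since that is the only way to reach the reward.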


If you are going to start in RL, you should really consider reading the second edition even though it is not released yet. I am guessing that Sutton is getting closer to the finishing line as there have been numerous revisions already. The second edition has better notation and benefits from the field having matured a lot since the first book was written. http://incompleteideas.net/book/the-book-2nd.html


It's a fantastic book! The authors have been working on a second edition for a few years, and I think it's finally finished. A draft is generously available here: http://incompleteideas.net/book/the-book-2nd.html


As a self contained, foundational course, Georgia Tech's OMSCS offering [1] is solid. Charles Isbell and Michael Littman are great at building intuition into equations.

[1] https://www.udacity.com/course/reinforcement-learning--ud600


Isbell's course in person was great. And if the exams for the online version are anything like the in person ones, it really does test your understanding of foundational concepts.


Yup, just took the online RL class and the average grade for the final exam was 45 out of 100, high score of 76. The format was true/false with a short explanation for your answer. I never thought I'd be proud about getting a 53 on a true/false exam, but it was an extremely challenging and rewarding class.


I was wondering how you were over 120 years old there, for a moment, then I realized turn of the century doesn't mean that century any more.


Barto & Sutton is excellent.

You can definitely find it online but be sure to find the right version - the latest version has great illustrations and is a lot clearer.

Also, check out the RL jupyter notebook here by my friend Ryan Sweke who does work on RL for quantum computing: https://github.com/R-Sweke/CrashCourseInNeuralNetworksWithKe...


Great suggestion! The blog was based on a large portion of the book. A friend of mine asked for a version of the first chapter that was digestible for an audience that is in high-school to undergrad college level. I wrote this blog with that in consideration, while adding my own observations as well. I am planning to write up some python solutions for the MDP chapter as well. Thanks for reading :)


Thanks for this, I have read a couple books on deep learning but struggled to find anything on Reinforcement Learning. Maybe an Ask HN is in order.


I highly recommend the documentary "From Bedrooms to Billions: The Amiga Years" http://www.frombedroomstobillions.com/amiga

The significance of the architecture and Jay Miner's brilliance in system design cannot be overstated.


Can you outline the draw for people?


Imagine Alexa/Siri 20 years from now, running on VR glasses that actually work and look cool. Available today.

That’s what the Amiga felt like in 1985.


The draw, as in the attraction to Amiga? TL;DR: it was a technologically superior product for a while, but never became mainstream. This allowed it to garner a decent, very faithful following. Kind of like if the iPhone had failed, but you still had people swearing it's the greatest thing since sliced bread.

I can't comment too much regarding the development side of things, as I was quite young at the time. While I dabbled in coding and have heard many comments about various ahead-of-its-time ideas even on the dev side of things, I suspect what set it apart was in large part the hardware.

For its price, it came surprisingly well equipped. For context, we're talking 1985, so most PCs you ran into would have had CGA (EGA only having come out in '84). On the Amiga, at the lower / standard resolutions you could have 32 or 64 colours at a time, picked from a 12-bit (4096) palette. For static images, there was a special hack which allowed all 4096 colours on screen at the same time, which made for breathtaking visuals. There were a set of co-processors (Copper and Blitter), which allowed for specialised, primarily graphical programming. Kind of like a 2D GPU, but in 1985.
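For a sense of where the 4096 figure comes from: a 12-bit palette is just four bits per colour channel. A small Python sketch of the arithmetic (the 0xRGB register layout shown is my assumption for illustration):

```python
def rgb4(r, g, b):
    """Pack a 4-bit-per-channel colour into a 12-bit palette
    entry, assuming a 0xRGB layout (one hex digit per channel)."""
    assert all(0 <= c <= 15 for c in (r, g, b))
    return (r << 8) | (g << 4) | b

# 4 bits per channel -> 16 levels each -> 16 ** 3 = 4096 colours
palette_size = 16 ** 3
```

So "picked from 4096" just means each palette register holds one of 16 x 16 x 16 possible colours, of which a screen mode could display 32 or 64 at once.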

The OS itself had fairly well implemented true pre-emptive multitasking, which allowed for a number of apps to be run simultaneously. Again, this is 2 years before Windows 2.0, and probably 10 years before Windows had anything better than switch-tasking. Apps could run as a window, or get their own fullscreen mode, but surprisingly the fullscreen mode was stable (sometimes games still crash on Windows when you drop to desktop), and you could even drag the full screen window down, and see half the desktop, half the game, etc. Along with the various co-processors offloading work from the CPU, this meant that the entire experience was very fluid. I still remember playing various simple games by setting them to high priority on the CPU, while a 3D render was merrily taking 100% of the CPU in the background. There was literally no noticeable lag in the game. For various architectural reasons, this just doesn't seem to work on modern machines.

Couple that with very decent audio (again, I suspect it would be many years before PCs routinely outperformed the Amiga), and you have a very decent product. It's very difficult to explain the degree to which this technical lead existed; the market is much more competitive now, and everything is much closer together. But, as an example, it's as if someone came out with a console with full VR capabilities at a $500 price point. It simply shouldn't be feasible.

Since it was quite different, a lot of in-house technologies sprung up. An example is the AmigaGuide format, which was a kind of hyperlinked text document (kind of like man pages?). I suspect a bunch of these esoteric and quirky aspects gave the Amiga a lot of its flavour and soul.

For various reasons, possibly mismanagement by Commodore, possibly the PC's open platform simply being economically superior, the Amiga never became mainstream. But therein lies its draw - the people who knew it and loved it simply couldn't believe this, and became rather fanatical followers.


That’s what I was not grasping. We were too poor to have a computer until I was 16 (in the 90s) so I missed that era. Thank you!


You're welcome! One of the appeals of Amiga was that it offered all this at a very competitive price (when compared to IBM compatible PCs). Still, I recall a Commodore 64 costing my parents about 4 months of salary, and that was without even a tape drive. This was Eastern Europe, so perhaps we were poorer than most on HN. Commodores, Ataris and other alt machines were pretty popular around those parts, due to their price point.


I don't understand the downvoting going on here. Even if you don't agree with the comment, it's still a valid point.


Clojure has clojars: https://clojars.org/


Not sure about the inspiring talks or whatnot, but I've been writing clojure/clojurescript (reagent, re-frame et al.) professionally for a couple of years and I've never failed to find something I needed for a project. With simple js interoperability, native use of js/react components in reagent, and the ease of wrapping any vanilla js library, there really is nothing you can't do with clojurescript. You should take a closer look!


If you mean something like contribution guidelines, then they're everywhere. Here's one of mine as an example: https://github.com/benhowell/react-grid-gallery/blob/master/...

