One factor seems to be vertical integration. These days a bunch of large companies are succeeding by having the users of their hardware work closely with the designers of that hardware. Tesla is definitely doing it. I think Google is doing it with machine learning. And Apple has definitely been doing it with their phones and tablets. The feedback loop is tight and valuable. No need to make big compromises for backward compatibility, either; just update the software.
Check out meteorjs. JavaScript has a huge amount of buzz and usage these days, and meteorjs is a fairly simple, fully integrated environment that lets you do just about anything. I don't use it - I have been developing software since the early 90s and I am currently working in RoR. But when I want to play around, it's in meteorjs. In my opinion, too many teams are using an environment that is so complex that nothing is predictable.
Although I do like Meteor, I don't think it's the right choice for the OP. Sure, Meteor is easy to grasp, but its fundamentals rely on several technologies (React/Blaze, MongoDB, NodeJS) that an out-of-practice developer may struggle with a little.
That being said, I'd recommend that the OP dive into modern CSS frameworks (Bootstrap, Semantic UI, Materialize, to name a few) and learn more JavaScript. After that he will probably have a better feel for what he likes and dislikes, and can find the most suitable framework for his needs.
It seems to me that we have been doing this more and more as memory has gotten bigger and bigger, we care more and more about performance, and more and more people are writing more and more code that will be used for shorter and shorter times under tighter and tighter deadlines.
I was interested in NoSQL. Went to a Mongo presentation. Both examples would have been easier and faster in SQL - even the SQL I used 20 years ago. I asked for an example that would show a performance advantage. I got a tired, vague statement about a supposed performance advantage. Seems like snake oil to me.
Unfortunately you were downvoted, but it wasn't so long ago on HN that every second story was NoSQL this, NoSQL that. There were even "SQL is dead"/"relational DBs are dead" posts, just ridiculous. So it's nice to see stories like this.
I've been writing a lot of recursive queries for PostgreSQL lately using CTEs. Quite cool, though a little mind-bending at times.
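For anyone who hasn't run into them, here's a minimal sketch of a recursive CTE (the employees table and its columns are invented for illustration):

    -- Walk a reporting hierarchy from the root downward.
    -- Assumes a table like: employees(id, name, manager_id),
    -- where manager_id is NULL for the person at the top.
    WITH RECURSIVE org AS (
        SELECT id, name, manager_id, 1 AS depth
        FROM employees
        WHERE manager_id IS NULL
      UNION ALL
        SELECT e.id, e.name, e.manager_id, org.depth + 1
        FROM employees e
        JOIN org ON e.manager_id = org.id
    )
    SELECT * FROM org ORDER BY depth;

The mind-bending bit is that the CTE references itself: each pass of the UNION ALL joins the rows found so far against the base table, until no new rows turn up.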
I remember those days. I was never sold on NoSQL, but if anything can be said, it lit a fire under SQL and competition in the space was ultimately a good thing.
Absolutely, but it pays to be very skeptical and to pay closer attention to the negative "I have used this technology and it sucks because [links to the bug tracker]" articles than the ones that gush about how great they are.
There's probably a trendy Hacker News technology lifecycle chart to be drawn, step 3 or 4 of which is "developer is bitten by deficiency in the technology, writes blog post saying it sucks and not to use it, gets 200 points and front page".
The main reason I read HN is to keep up with whatever manic fancy will catch the developers' eyes this week, leading to me supporting it for a couple of years.
"NoSQL" is generally a misnomer. It's not SQL that is/was the problem, but that there are a lot of cases where specific common properties of RDBMS' are limiting. The NoSQL moniker is a result of the fact that most of these RDBMS's uses SQL as the query language, and most of the "new" database engines does/did not.
Since then, a lot of the RDBMSs have adopted features that have narrowed the gap. E.g. Postgres' rapidly improving support for indexed JSON data means that for cases where you have genuine reasons to have data you don't want a schema for, you can just operate on JSON (and best of all, you get to mix and match).
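Something along these lines works in Postgres 9.4+ (the table and keys here are made up for illustration):

    -- A schemaless-ish jsonb column next to ordinary relational columns.
    CREATE TABLE events (
        id         serial PRIMARY KEY,
        created_at timestamptz NOT NULL DEFAULT now(),
        payload    jsonb NOT NULL
    );

    -- A GIN index makes containment queries on the JSON fast.
    CREATE INDEX events_payload_idx ON events USING gin (payload);

    -- Find rows whose payload contains a given key/value pair.
    SELECT id, created_at
    FROM events
    WHERE payload @> '{"type": "signup"}';

That's the mix-and-match: the relational columns stay typed and constrained, while the payload stays free-form but still indexable.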
For some of the NoSQL databases that puts them in a pickle because they're not distinguishing themselves enough to have a clear value proposition any more.
But it is not the lack of SQL that has been the real value proposition.
I love Postgres but its support for sharding, multi-master, and most forms of scaling that aren't just "buy a bigger box" is still way behind most of the NoSQL solutions.
Lots of use cases don't need that kind of scalability, but if you do then Postgres can be more difficult to work with.
Skype used to run entirely on a Postgres cluster before Microsoft bought them. There are lots of examples of large Postgres clusters in the wild. Have you considered hiring an experienced Postgres admin? These types of setups are not impossible.
Our current use case for NoSQL is that CouchDB can be replicated to mobile devices - I'm not aware of Postgres being able to do this. So I think there still are cases where it can be useful.
(accidentally posted this to the root of the story)
PostgreSQL 9.4 with jsonb sends Mongo to the dustbin, IMHO. If you have to write it in JS close to the data, or if plpgsql is too steep a learning curve, you can play with the experimental plv8.
But you should really pick up plpgsql. It's "Python" powerful, with an awesome DB underneath (and an API about as consistent as Python 2.x's, sadly, but the docs are very good). There is a great Sublime Text 2 package that makes writing and debugging functions in one file just awesome. Write an uncalled dummy function containing a lot of the API at the top of your file, and you'll get autocomplete on that part of the API.
Specifically, do not miss getting acquainted with json and hstore - in particular, using json as a variable-size argument-passing and return mechanism. It's just hilariously effective.
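To illustrate the json-as-arguments idea, a rough sketch (the function, the products table, and its columns are all mine, not anything standard):

    -- Pass a variable set of options as one json value
    -- instead of a long list of optional parameters.
    CREATE OR REPLACE FUNCTION search_products(opts json)
    RETURNS SETOF products AS $$
    BEGIN
        RETURN QUERY
        SELECT *
        FROM products p
        WHERE (opts->>'category' IS NULL OR p.category = opts->>'category')
          AND (opts->>'max_price' IS NULL
               OR p.price <= (opts->>'max_price')::numeric);
    END;
    $$ LANGUAGE plpgsql STABLE;

    -- Callers only supply the options they care about:
    SELECT * FROM search_products('{"category": "books"}');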
cheers, and keep making this place (not only HN, our blue dot) better, F
sorry for the confusion, no, I'm really referring to plpgsql; it's quite a powerful language. The APIs for the essential extensions such as hstore, and integrated types such as json, arrays, strings, etc., have evolved over several years, so they lack a little consistency in naming conventions - something Python 2 also suffers from, IMHO. plpython is quite cool too, but using that, or JS with plv8, IMHO obscures some of the power of the underlying DB server.
> Both examples would have been easier and faster in SQL.
That's easy, so long as we mean the whole project when we say "faster". When I worked at Timeout.com they were importing information about hotels from a large number of sources. For some insane reason, they were storing the data in MySQL. Processing had two steps:
1.) the initial import was done with PHP
2.) a later phase normalized all the data to the schema that we wanted, and this was written in Scala
The crazy thing was that, during the first phase, we simply pulled in the data and stored it in the form that the 3rd party was using. That meant we had a separate schema for every 3rd party we imported data from. I think we pulled data from 8 sources, so we had 8 different schemas. When a 3rd party changed their schema, we had to change ours. If we added a 9th source of information, we would have to create a 9th schema in MySQL. We also validated the 3rd party schemas at this phase, which struck me as silly: it did not mean that step 2 could stay ignorant of the schemas; rather, both step 1 and step 2 had to know the structure of those foreign schemas. But it was necessary because we were writing to a database that had a schema.
The system struck me as verbose and too complicated.
It's important to note that most of the work involved in step 1 could have been skipped entirely if we had used MongoDB. Simply import documents, and don't care about their schema. Dump all the data we get into MongoDB. Then we can move straight to step 2, which is taking all those foreign schemas and normalizing them to the schema that we wanted to use.
For ETL situations like this, NoSQL document stores offer a huge convenience. Just grab data and dump it somewhere. Simplify the process. Your transformation phase is the only phase that should have to know about schemas; the import phase should be allowed to focus on the details of getting data and saving it.
You can do that in SQL databases too: store the XML / JSON / whatever in a blob. There is no need to have a normalized import schema, especially since you are doing the transformation using an external application (Scala program).
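A staging table for raw imports can be as simple as this (a sketch; all the names here are invented):

    -- One staging table covers every source; no per-source schema needed.
    CREATE TABLE raw_hotel_imports (
        id         serial PRIMARY KEY,
        source     text NOT NULL,           -- which 3rd party this came from
        fetched_at timestamptz NOT NULL DEFAULT now(),
        body       jsonb NOT NULL           -- the document exactly as received
    );

    -- Step 1 collapses to one insert per document:
    INSERT INTO raw_hotel_imports (source, body)
    VALUES ('acme_hotels', '{"hotelName": "Grand Budapest", "stars": 5}');

The normalization step is then the only code that has to understand each source's structure, which is exactly the property the parent comment wants from MongoDB.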
I'm not about to go to bat for Mongo in an SQL thread (there are plenty of real problems with that platform), but I rather enjoy their query syntax; it's very AST-like, even through the aggregation pipeline.
I don't believe it's much faster even in the best case (and I'm sure an experienced SQL expert wouldn't find it any "easier"), but on a grammatical level I do find it a fresh take on query structure, and writing queries and map-reduce jobs in CoffeeScript was extremely satisfying because of how terse, elegant, and pseudocode-like it turned out.
They allow some small number of articles for each browser before putting up the wall. And I think you can defeat it by removing your cookies. I think that qualifies it for free linkage.
I think the fault is in the whole idea of giving a reduced sentence for confessing. I understand that a confession makes things easier and there should be some incentive to make one, but it seems like the police coerce defendants into confessing by threatening a longer sentence if there is no confession. While this may result in true confessions sometimes, it also inevitably results in some false confessions. It is just too easy for a prosecutor, in the absence of a judge and jury, to make their evidence sound very strong and scare the defendant into a confession.
The sadder thing is that the accused typically has so little faith in the justice system that they'll accept a "deal" rather than take their chances with a judge / jury.
That cynicism translates back into broader society and causes antipathy if not hostility in the general public towards the police. The whole thing creates much more harm than good.
Yes, and also a fairly basic research fail: referring to the ECMWF's old supercomputer at position 60 as the UK's weather-predicting system. They have a couple of new systems at positions 19 and 20. ECMWF is for Europe, not the UK, even if it is based in Reading. The key is in the name.