"It is very annoying when you need to add a dependency and suddenly you have to touch 50+ injection points because that thing is widely used"
You don't have to update the injection points, because the injection points don't know the concrete details of what's being injected. That's literally the whole point of dependency injection.
Edited to add: Say you have a class A, and this is a dependency of classes B, C, etc. Using dependency injection, classes B and C are passed instances of A, they don't construct it themselves. So if you add a dependency to A, you have to change the place that constructs A, of course, but you don't have to change B and C, because they have nothing to do with the construction of A.
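To make that concrete, here's a minimal sketch in TypeScript (the class names A, B, C and the Logger dependency are illustrative, not from any real codebase):

    // Illustrative names only: A is the shared dependency, B and C depend on it.
    interface Logger {
      log(msg: string): void;
    }

    class A {
      // Adding a new dependency to A changes only this constructor...
      constructor(private logger: Logger) {}
      doWork() { this.logger.log("A did some work"); }
    }

    class B {
      // ...while B and C keep receiving an already-built A and stay untouched,
      // because they never call `new A(...)` themselves.
      constructor(private a: A) {}
      run() { this.a.doWork(); }
    }

    class C {
      constructor(private a: A) {}
      run() { this.a.doWork(); }
    }

    // The composition root is the one place that has to change when A's
    // constructor signature changes.
    const a = new A({ log: (msg) => console.log(msg) });
    const b = new B(a);
    const c = new C(a);
    b.run();
    c.run();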
Yeah, I'm very confused by this. Either the change is to the constructor of the object being injected, in which case there is no difference either way, or the change is to the constructor receiving the injection, in which case there's no difference either way.
I think you're being downvoted because you're agreeing with the post you're quoting, but arguing as if they're wrong: the example in question was there to show how DI can be useful, so there's nothing to argue against.
"because it does preloading directly in javascript, it can't possibly follow the HTTP semantic of not actually applying cookies until later when the cached route is used"
I may be wrong, but I don't think using JavaScript vs using the standard HTML <link> element to prefetch makes a difference here. I don't see anything in the HTML specs about preload or prefetch delaying cookie setting to sometime after the resource is actually loaded (although admittedly I find this bit of the spec somewhat hard to read, as it's dense with references to other parts of the spec). I tried it out, and both Firefox and Chrome set the cookies for preloaded and prefetched links when the resource is loaded, even if the resource is never actually used.
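For reference, the two approaches being compared look roughly like this (the /next-page URL is made up):

    // Declarative prefetch: equivalent to putting
    // <link rel="prefetch" href="/next-page"> in the document head.
    const link = document.createElement("link");
    link.rel = "prefetch";
    link.href = "/next-page"; // made-up URL
    document.head.appendChild(link);

    // JavaScript prefetch: request the route ahead of time so it lands in the
    // HTTP cache. A Set-Cookie header on this response is applied by the
    // browser when the response arrives, not when the cached route is later used.
    fetch("/next-page", { credentials: "include" }).catch(() => {
      // Ignore network errors; this is just a warm-up request.
    });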
I initially interpreted "unaccounted-for null values may cause compile-time warnings, but not compile-time errors" as meaning "in some cases, an unaccounted-for null value might not cause a compile-time error", but in the context of the rest of the spec, I think it actually means "unaccounted-for null values are not permitted to cause compile-time errors, only warnings", which seems like a bad idea to me. I can see why allowing implicit conversion from unannotated "Object" to "Object!" is a reasonable compromise to work with existing code, but I don't see why conversion from "Object?" to "Object!" would not cause a compile-time error.
Worse, permitting this conversion at compile time means developers will ignore the warning, so we'll have actual codebases which include these conversions. Any later change to enforce nullability checking at compile time will then have a significant backwards compatibility cost.
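For contrast, here's a rough TypeScript analogue (not the spec being discussed, just an illustration of what the stricter behaviour looks like with strictNullChecks enabled):

    // Stand-in for a value typed "Object?" in the spec's notation.
    function maybeObject(): object | null {
      return Math.random() > 0.5 ? {} : null;
    }

    // const definitely: object = maybeObject();
    // ^ with "strictNullChecks": true this assignment is rejected outright
    //   (a hard error, not a warning), so it can't quietly accumulate in a codebase.

    // The null case has to be handled explicitly instead:
    const definitely: object = maybeObject() ?? {};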
That's right, I think it's really "soft deletion as a blanket rule" which is the anti-pattern; soft deletion is one option which (IMO) is used too often without thinking about specifically what you need to achieve. If soft deletion is used as a blanket rule, you're more likely to want to try to abstract it away via an ORM or similar, which tends to be fragile (I agree views aren't fragile, but they do add another layer of complexity in defining the relationship between the application logic and the schema). If soft deletion is chosen judiciously and represented explicitly in the business logic, it's less likely to cause problems (the "archived state" in the post is kind of an explicitly represented soft delete).
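A minimal sketch of the explicit version, with made-up TypeScript names (nothing here is from the post itself): the archived state is part of the domain model and queries spell out which states they want, rather than an ORM silently filtering a deleted_at column.

    // Made-up domain type: archiving is an explicit business state.
    type Document = {
      id: string;
      title: string;
      status: "active" | "archived";
    };

    // Queries state which records they want, so nothing is excluded behind your back.
    function listActiveDocuments(docs: Document[]): Document[] {
      return docs.filter((d) => d.status === "active");
    }

    function archiveDocument(doc: Document): Document {
      return { ...doc, status: "archived" };
    }

    // Hard deletion remains a separate, deliberate operation where the business
    // actually requires it.
    function deleteDocument(docs: Document[], id: string): Document[] {
      return docs.filter((d) => d.id !== id);
    }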
Yeah, I also think that it should be a part of the business requirements rather than a purely technological decision that applies everywhere. A developer shouldn't be asking “do we need soft deletion” in a vacuum, because it's a decision to be made higher up, where the workflows live.
It all probably stems from a rule that, as a developer, you must never [force/allow anyone to] lose expensive input or make it hard to recover. So ORM and platform developers try to ensure that no one really deletes anything, as the presumably simplest solution. It's okay-ish sometimes, but it's really a poor assignment of responsibilities. If the data is valuable, then its owner is the most responsible for it by definition, so the actual responsibility should be moved there, with explicitness and reasonable safety nets where needed. Otherwise a developer has to get defensive on all fronts, which comes with additional costs for both them and the user, for reasons that are never well defined.
I don't think there's an explicit reference in the JavaScript spec to numbers being treated this way, because this is how all variables are treated in JavaScript - the relevant part of the specification is probably the definition of the "PutValue" abstract operation[1], which doesn't include any special cases for numbers (or other primitive types) vs. objects.
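Without the original question it's hard to be sure, but assuming it was about assignment semantics, the point is that the behaviour is uniform across types; something like:

    // Assignment behaves identically for numbers and objects: PutValue just
    // rebinds the target reference, it never mutates the old value in place.
    let n = 1;
    let m = n; // m gets its own binding to the value 1
    n = 2;
    console.log(m); // 1 — reassigning n doesn't affect m

    let a = { x: 1 };
    let b = a;       // b binds to the same object value
    a = { x: 2 };    // rebinding a doesn't touch the object b still refers to
    console.log(b.x); // 1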
The language in the UK version of the law is "strictly necessary for the provision of an information society service requested by the subscriber or user", which the ICO interprets as meaning "it must be essential to fulfil their request". I don't think tracking page views counts, because it's technically possible to serve a page without using a cookie to track that it was viewed.
The paper linked to in the Criticism section of the article [1] is well worth reading, IMO.
It points out that AOP only works when there is "obliviousness of application", i.e. the code to which the aspect is applied doesn't need to know about the aspect, and argues that this kind of obliviousness is pretty rare.
It's an API for a specific hypermedia client, i.e., a web browser. It's not obvious that it's the best hypermedia API for a different client, for example, a JavaScript application that happens to be running in a web browser.
"quicky discover I couldn't post on the timeline of the music or videogames one"
That's technically true, but who cares? The instance timeline isn't a "community", it's just a particular filter applied to the global timeline. The communities of videogame and music enthusiasts are larger than any one instance, and can follow one another across instances, and see each other's tagged posts in searches across instances.
> The instance timeline isn't a "community", it's just a particular filter applied to the global timeline.
Defaults matter. Usability matters. If that particular filter is the one that the community uses to communicate with each other, then who's included or excluded from that filter becomes very relevant.
While I agree with the principles of what you're saying, the facts are a bit off here: the instance timeline isn't the default, and the lead dev has tried to nerf it a lot in many ways – I think it doesn't even show up in the official iOS app? Also, anything posted "to the instance timeline" also shows up in federated timelines if you follow the poster, unless you're running a fork allowing "local-only" posting.