"In v1, a nil Go slice or Go map is marshaled as a JSON null. In contrast, v2 marshals a nil Go slice or Go map as an empty JSON array or JSON object, respectively. The jsonv2.FormatNilSliceAsNull and jsonv2.FormatNilMapAsNull options control this behavior difference. To explicitly specify a Go struct field to use a particular representation for nil, either the `format:emitempty` or `format:emitnull` field option can be specified. Field-specified options take precedence over caller-specified options."
That reality may make the fundamental flaws of the if statement more noticeable, but at the end of the day the problem is still that the if statement itself is not great. If we're going to put in effort to improve upon it – and it is fair to say that we should – why only for a type named error?
Because the type named error is used in that flawed way orders of magnitude more than any other type. If there were other types that were consistently used as the last return value in functions that short-circuited when calling other functions that returned specific sentinels in their final value when called, there would be reason to do it for them too.
In fact, this is exactly what Rust's `?` operator already does, and something that's obscured by the oddness of using pseudo-tuples to return errors alongside non-error values rather than requiring exactly one or the other; `Result` in Rust can abstract over any two types (even the same one for success and error, if needed), and using the `?` operator will return the value from the containing function if it's wrapped by `Err` or yield it in the expression if it's wrapped by `Ok`. In Go, the equivalent would be to have the operator work on `(T, E)` where `T` and `E` could be any type, with `E` often but not always being an error. Of course, this runs into the issue of how to deal with more than two return values, but manually wrapping the non-error values into a single type in order to use the operator would solve that with far less boilerplate overall than what's currently required, since it would rarely be needed.
> Because the type named error is used in that flawed way orders of magnitude more than any other type.
That does not give reason to only solve for a narrow case when you can just as well solve for all cases.
> If there were other types that were consistently used as the last return value in functions that short-circuited when calling other functions that returned specific sentinels in their final value when called, there would be reason to do it for them too.
Which is certainly the situation here. (T, bool) is seen as often as (T, error) – where bool is an error state that indicates presence or absence of something. Now that your solution needs to cover "error" and "bool", why not go all the way and include other types too?
Errors are not limited to "error" types. Every value, no matter the type, is potentially an error state. bool is an obvious case, but even things like strings and integers can be errors, depending on business needs. So even if you truly only want to solve for error cases, you still need to be able to accommodate types of every kind.
The computer has no concept of error. It is entirely a human construct, so when handling errors one has to think about it from the human perspective or there is no point, and humans decidedly do not neatly place errors in a tightly sealed error box.
> rather than requiring exactly one or the other
That doesn't really make sense in the context of Go. For better or worse, Go is a zero value language, meaning that values always contain useful state. It is necessarily "choose one or the other or both, depending on what fits your situation". "Result" or other monadic-type solutions make sense in other languages with entirely different design ideas, but to try and graft that onto Go requires designing an entirely new language with a completely different notion about how state should be represented. And at that point, what's the point? Just use Rust – or whatever language already thinks about state the way you need.
> but manually wrapping the non-error values into a single type in order to use the operator would solve that
I'm not sure that is the case. Even if we were to redesign Go to eliminate zero values to make (T XOR E) sensible, ((T AND U) XOR E) is often not what you want in cases where three or more return arguments are found. (T, bool, error) is a fairly common pattern too, where both bool and error are error states, similar to what was described above. ((T AND U) XOR E) would not fit that case at all. It is more like ((T XOR U) OR (T XOR E)).
I mean, realistically, if we completely reimagined Go to be a brand new language like you imagine then it is apparent that the code written in it would look very different. Architecture is a product of the ecosystem. It is not a foregone conclusion that third return arguments would show up in the first place. But, for the sake of discussion...
> That does not give reason to only solve for a narrow case when you can just as well solve for all cases.
...
This clearly can't be solved "just as well" because nobody can figure out how to do it. The second half of your comment alludes to this, but a lot of what makes this hard to solve is pretty inherent to the design of the language, and at this point, there's a pretty large body of empirical evidence showing that there's not going to be a solution that elegantly solves the issue for every possible theoretical case. Even if someone did manage to come up with it, they're literally saying that they wouldn't entertain a proposal for it at this point! I don't understand how you can come away from this thinking it's realistic that this would get solved in some general way.
> The computer has no concept of error. It is entirely a human construct, so when handling errors one has to think about from the human perspective or there is no point, and humans decidedly do not neatly place errors in a tightly sealed error box.
That's exactly the argument for solving this for what you're calling a "narrow" case. Providing syntax just for (T, E) that uses the zero value for T when short-circuiting to return E would improve the situation from a human perspective, even if it meant that to utilize it for more than two return values you need to define a struct for one or both of T or E. The only objections to it that you're raising are entirely from the "computer" perspective of needing to solve the problem in a general fashion, which is not something that needs to be done in order to alleviate the issues for humans.
> This clearly can't be solved "just as well" because nobody can figure out how to do it.
Fine, but then that means there is no other solution for Go unless you completely change the entire fundamental underpinnings of the language. But, again, if you're going to completely change the language, what's the point? Just use a different language that already has the semantics you seek. There are literally hundreds of them to choose from already.
> That's exactly the argument for solving this for what you're calling a "narrow" case.
Go has, and has had since day one, Java-style exception handlers. While it understandably has all the same tradeoffs as Java exception handling, if you simply need to push a value up the stack, it is there to use. Even the standard library does it when appropriate (e.g. encoding/json). The narrow error case is already covered well enough - at least as well as most other popular languages that have equally settled on Java-style exception handling.
Let me be clear: It is the general case across all types that is sucky. Errors, while revealing, are not the real problem and are merely a distraction.
I assume with "Java-style exception handlers" you're referring to panics? If this were sufficient, then people would be using it more for error handling. I'd argue that a large part of why people don't write code using them as error handling more often is because of the syntax, and that's ultimately what this whole discussion is for me. This is probably another area where we just fundamentally disagree, because I don't really consider there to be a necessity to fully solve the "real problem" instead of providing something smaller that alleviates specific minor warts.
What's strange to me is that the main reason this seems like the best path forward to me is that it doesn't require large fundamental changes to the language, and I'm skeptical that there's any fix to the underlying issues you're concerned about that wouldn't require those sorts of changes, but somehow your objections seem to mostly be on the grounds that you also don't want those types of changes.

To me, the insistence that the entire real problem needs to be solved for anything to be worth doing is in practice incompatible with the requirement not to change the language fundamentally, so the question becomes whether it's worth considering changes that don't fully solve what you consider to be the real issue. I think that's where the disconnect is: I'm not really trying to argue for a solution to the problem you're concerned about, because I don't consider it realistic that it will ever get solved. I'm arguing that making smaller changes to reduce the impact of the problem, without solving it fully, would be worthwhile compared only to the status quo rather than to some ideal solution that I don't think exists.

I'm not trying to say that it's an absolute certainty that there's no way to fix the issues you're concerned about without fundamentally changing the language, and I'd be just as happy as you if I turn out to be wrong! It doesn't really feel like you're rebutting my actual suggestion, though, because you're interpreting my claim that there isn't any way to solve the problem generally without changing the language fundamentally as advocacy for those fundamental changes, which is not what I actually think, and from my perspective isn't what I've been saying at all.
> I'd argue that a large part of why people don't write code using them as error handling more often is because of the syntax
I don't see meaningful difference in the syntax as compared to other languages with a similar feature: https://go.dev/play/p/RrO1OrzIPNe How deep are we really going to split hairs here?
If it were commonly used you could clean it up a little, like how the somewhat recently added iterators feature cleaned up what people were already doing with iteration, but in this case since it is so rarely used in the first place, why bother? "If you build it, they will come" is Hollywood fantasy. Unlike this, the use of iterators was already prevalent before the iterators feature was added.
Let's be honest: If it were useful, people would already put up with the above being slightly less than perfect. People will actually put up with a lot of shit when something is useful at its core! But that they are doing this almost never is quite telling.
> What's strange to me is that the main reason this seems like the best path forward to me is because it doesn't require large fundamental changes to the language
Or maybe no changes at all. Would using the above really be so bad from a syntactical point of view? The much bigger problem, and why pretty much all modern languages have moved to returning errors as the de facto solution, is that it exhibits all the same fundamental problems as errors under Java exception handling. That is something syntax cannot fix.
And, well, for the exceptional (pun intended?) cases where Java-style exception handling really is the best option for your circumstances: it's there to use already!
Any time I write `if err == nil` I add a `// inverted` comment just to make it stick out. It would be nice if it were handled by the language, but I just wanted to share a way to at least make it a bit more visible.
> I hope we get monthly or at least quarterly Ladybird Newsletter just to keep the attention of the project along with attracting those who still dont know.
I've heard people complain about Django many times on HN. I started using it back in the 0.96 version, so maybe it's just a familiarity thing.
But I built 3 large, successful applications in it in that time. I loved it. I don't use it regularly anymore since I mostly moved away from webdev, but I recently came back into contact with my largest project, which I built in 2018/2019, and it's been running perfectly this whole time and was a pleasure to dive back into.
Django just felt logically organized, documentation was on point, core was very readable (at least then).
I always just felt so productive in it. I know everyone has different opinions, experiences, and products they are building, but I'm always surprised by the negative comments. I definitely prefer SSR where it's reasonable though, so maybe that's part of it.
tbf it was borderline unusable until they added async DB query support in 4.1 (2022). Before that you had to wrap every DB query with sync_to_async/async_to_sync, which generated too much boilerplate. And even in 4.1 the DB queries themselves were still sync/blocking, not truly async, because at that point they hadn't yet rewritten their database "backends" to use async querying. I believe that as of now Django's DB engine still doesn't natively support async DB queries/cursors/transactions.
Also, lots of the "batteries included" in Django don't have async interfaces yet. For example, the default auth/permission system will only get async functions like acreate_user, aauthenticate, and ahas_perm in 5.2, which is expected in April 2025, so as of now these still have to be wrapped in sync_to_async to work.
My complaint with Django is/was that it's fantastic for building brand new apps from scratch, but less pleasant for integrating with existing databases. The last time I tried to add Django models to a DB we were already using, there was an impedance mismatch which made it hard to fully model, and I gave up trying to get the admin to work well with it. The ORM and admin are 2 of Django's biggest draws, perhaps the biggest. Without them, it's not so pleasant.
That's when I first came to love Flask. SQLAlchemy will let you model just about anything that looks vaguely database-like, and Flask doesn't really care what ORM (if any) you use.
TL;DR Django's opinionated. If those opinions match what you're trying to do and you can stay on the golden path, it's freaking great! Once you get off in the weeds, it quickly becomes your enemy.
> If those opinions match what you're trying to do and you can stay on the golden path, it's freaking great!
That's a great summary. I wrote a few significant Flask apps many years ago as well, and I'm a huge fan of SQLAlchemy. My Flask apps were greenfield, so I ended up building crappier versions of a lot of what Django provides. I still enjoyed it, but I wasn't as productive. But with a legacy integration, it would be hard to beat SQLAlchemy (I think it's great for greenfield too). I've basically landed on your comment above as well.
Sure but would they? Currently they get it totally for free. If they had to finance the development themselves then it would get real hard to justify real quick. $20bn is a lot of money even for Apple
It's not about whether or not Apple have the resources to make their own browser engine, it's about whether it makes sense from a business point of view to make their own browser engine. Currently it does, because Google pay them huge amounts of money to do so. But what business case would there be to pay that $20bn themselves if Google did not fund them? Would it be worth that just to avoid Chromium?
Tbf - they don't pay for WebKit, they pay to be the default search engine. If Apple wanted, they could switch to Chromium and still have the same captive audience and bargaining power (but a lot less control of the direction web standards go)
That’s not necessarily true, even Microsoft has its own tweaks of Chromium:
> We’ve seen Edge adding some privacy enhancements to Chromium pioneered by Safari. Edge shipped those, but Chrome did not. And as more browsers start using Chromium and large companies will work on improving Chromium, more of these disagreements will happen. It will be interesting to see what happens.
> Just because a browser is based on Chromium, that does not mean it is identical to Chrome and that Google is in control. Even if the unthinkable happens and Apple is forced to adopt Chromium, that will only ensure that Google is not the only one having a say about Chromium and the future of the web.
Fwiw, I agree it's problematic to lock down phones the way Apple does. I won't use them because I'm not buying a device where I don't get to decide what runs on it.
And for sure they would put their twist on Chromium, like Edge or Brave or Vivaldi.
I still think they have a lot more control the way it is now, for better or worse
This is insipid. Why would Apple adopt a fork of WebKit when they’ve been using WebKit just fine for so long? Why would Apple of all companies defer to something in Google’s realm besides search? Do you have a single technical justification for Apple to overturn decades of WebKit use that’s baked into its frameworks and its control over iOS to use Blink?
IE was too long in the tooth; Microsoft was behind on several trends at that point, mobile being one of them. Don’t think the situation with Safari and WebKit is comparable.
As a small correction that somewhat matters to this hypothetical, Microsoft had already moved away from Internet Explorer/Trident to Microsoft Edge/EdgeHTML. It was quite competitive and modern already.
So, they did not "move away from IE to catch up". They "dropped the Edge engine in favour of Blink (Chromium)". It feels very much like Microsoft just did not want to compete on the engine (run-to-stand-still) but rather just on the feature set. Who can blame them?
If you think about why Microsoft really switched, I think it is a fair question why Apple would not just do the same thing. I mean, as long as WebKit is the only engine allowed on iOS, it makes sense for them to control it. But as regulators force them to open that up, and perhaps put an end to the Google gravy-train, I think it is a fair question why Apple would spend that much money on a web engine when they do not have to.
You cannot fall behind the competition using Chromium as a base, because they are all using it too! It is the ultimate in safe corporate options.
While the Apple-Google rivalry seems to have waned compared to a decade ago, I just don’t see Apple completely capitulating their platform/browser engine like Microsoft did.
Not to mention even if Apple switched to Chromium, they’d just end up taking over that engine, even forking it later down the road:
> We can only imagine what would have happened if Chrome kept using WebKit. We probably would have ended up with a WebKit monoculture. Trident is dead, and so is EdgeHTML. Presto is long gone. Only Gecko is left, and frankly speaking, I will be surprised to see it regain its former glory.
But Chrome did fork, and today, we can also see similar things happen in Chromium. I don’t expect somebody to fork Chromium, but it could happen.
We’ve seen Edge adding some privacy enhancements to Chromium pioneered by Safari. Edge shipped those, but Chrome did not. And as more browsers start using Chromium and large companies will work on improving Chromium, more of these disagreements will happen. It will be interesting to see what happens.
Just because a browser is based on Chromium, that does not mean it is identical to Chrome and that Google is in control. Even if the unthinkable happens and Apple is forced to adopt Chromium, that will only ensure that Google is not the only one having a say about Chromium and the future of the web.
And that is what is crucial here. The choice between rendering engines isn’t about code. It isn’t about the rendering engine itself and the features it supports or its bugs. Everything is about who controls the web.
There are plenty of scenarios which can be discussed in detail which have no possibility of coming to pass. Zombie apocalypse fiction, for instance.
I never had any beef against Ladybird. To bring this conversation to full circle, I merely clarified there are at least a few other promising new indie browsers that don’t use Chromium. In the event that Apple does abandon WebKit- which wouldn’t mean the termination of the project anyway!- I would simply use one of those alternative browsers.
Edit: while we are on the subject of wild hypotheticals, there’s also the DOJ suggesting Google split off Chrome into its own company for antitrust.
Safari is now the browser lagging the furthest behind. And it has not gotten better recently either.
Apple even got into "AI", so I would not put it beyond them to kill a browser team.
As per my reply to the sibling comment, I don’t think Apple is anywhere near to the capitulation that Microsoft was when it came to abandoning their browser engine.
Can you elaborate? I’ve been using it for over 10 years and it might just be my favorite piece of software. It’s central to all development I do.
I use many features of git that I probably wouldn’t otherwise due to having to remember flags and copying around hashes. It also makes discovering git functionality very easy.
"In v1, a nil Go slice or Go map is marshaled as a JSON null. In contrast, v2 marshals a nil Go slice or Go map as an empty JSON array or JSON object, respectively. The jsonv2.FormatNilSliceAsNull and jsonv2.FormatNilMapAsNull options control this behavior difference. To explicitly specify a Go struct field to use a particular representation for nil, either the `format:emitempty` or `format:emitnull` field option can be specified. Field-specified options take precedence over caller-specified options."