Google has no customer service and no reasonable way to appeal problems, so its headlessness with respect to users is self-inflicted and undeserving of any sort of pity.
I am no apologist for the messy git UI. But I also know that any competent software engineer can be given a sufficient understanding / mental model of the basic git way of doing things in 1-2 hours, after which they can Google for the exact syntax of the commands they need (if they are doing complicated things). I feel that if they are unwilling to invest even that much time, then they are going to waste an eternity on endless tools and GUIs.
It's like tar or rsync or ffmpeg. Yes, it's hard to keep track of all the command-line flags, but thanks to the internet we don't need to. It's far more useful to understand the underlying concepts.
Git is intrinsically a graph model. It's well-suited for very rich and effective visual representations.
If the CLI "matched up" with a visual way of thinking, then for many of us the CLI would become second nature. But it just doesn't match up. It's fugly and needlessly hard to remember and become fluent in, at least for us "incompetent" engineers who already have too much to work on.
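To make concrete what I mean by "graph model", here is a minimal sketch of the mental model in Python (all names here are mine for illustration, not git's actual internals): commits are immutable nodes pointing at their parents, and a branch is nothing more than a movable label on one node.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Commit:
    id: str                  # stands in for the SHA-1 hash
    message: str
    parents: tuple = ()      # merge commits have two parents

# A tiny history: a feature branch off "b2", merged back into main.
a = Commit("a1", "initial commit")
b = Commit("b2", "add login page", (a,))
c = Commit("c3", "feature: dark mode", (b,))       # branched off b2
d = Commit("d4", "fix typo on login page", (b,))   # meanwhile, on main
m = Commit("e5", "merge dark mode", (d, c))        # merge: two parents

branches = {"main": m}       # a branch is just a name -> commit

def log(commit):
    """Walk the ancestor graph, roughly what `git log` does."""
    seen, stack = set(), [commit]
    while stack:
        node = stack.pop()
        if node.id in seen:
            continue
        seen.add(node.id)
        print(node.id, node.message)
        stack.extend(node.parents)

log(branches["main"])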
And how do you KNOW this? My experiences, and many others' experiences, have suggested otherwise.
Also, it's foolish to suggest that one of the 21st century's most complex pieces of repository-management software is just like tar or rsync. What planet are you from?
Thanks for sharing this! I feel this is a cautionary tale for us HN-minded folks, since I see a rather unusual love for the look and feel of the "old internet" and what I like to call the Craigslist style of design. As someone who remembers the internet of the 90s and early 2000s, before it was taken over by ads and SEO spam,
I understand the nostalgia. But as a web developer, I also know that I need to do right by my clients and build things for them that make their businesses successful. An old, outdated website turns away many customers.
Ironically, a badly designed modern website turns me away (sometimes because the thing literally doesn't work).
I remember trying to book a place where some z-index issue meant I couldn't click to confirm the dates on the pop-up calendar. Made me wonder how much money they could be losing, because the percentage of people willing to (and who know how to) delete the offending element must be pretty small.
I agree 100% with what you're saying though. Older websites appear more "complex" to a lot of people. That said, there are good middle grounds. The new Netflix app for iOS is really nice, imo. It leans more towards form but still functions very well.
This is one of those examples where the exception proves the rule. Since I introduced my friends and family to a CLI-based booking tool I wrote for them, they have sworn off website UIs. Every other week I get an excited e-mail stating how great the tool is, and how they have also convinced their own friends to give up JavaScript and turn towards Rust.
> A film that exists is better than one that never gets made.
Some might disagree. The democratization of content creation that came with the internet and social media was in general a good thing, but it can't be denied that it also massively reduced the signal-to-noise ratio in content quality. The reason the Google search results page is much worse today is precisely that there exists a ton of content today that should never have been made.
Sturgeon's law applies: 90% of everything is crap. 90% of movies shot on film were terrible too. Everything in the direct-to-video bargain bin at Blockbuster in the 80s and 90s was shot on film, and trust me, they were not all Tremors II.
So when more stuff gets made, more crap gets made. But the 10% that's good gets bigger too.
You want a world that contains Everything Everywhere All At Once? Then you have to accept The Hobbit: The Battle of the Five Armies as well. Sorry.
> "recognize that trying to parse any meaning from an email address' local-part is blatantly ignorant of IETF specifications and almost certainly will create bugs"
I am sorry, but this makes no sense. You do realize that the only reason you are able to use aliases is that your email provider chooses to parse meaning out of the supposedly "opaque" text, right? If your email provider is free to "break" the spec, so are people you give your id to.
And that is solely between me and my email provider. It's my email address, and therefore I am within my rights to assign whatever internal meaning I choose. It is absolutely not the business of someone sending an email whether or not that opaque text has further-parseable meaning, and pretending otherwise absolutely will cause bugs (say, when sending emails to mailservers which don't use that alias syntax).
EDIT:
> If your email provider is free to "break" the spec, so are people you give your id to.
Wrong. See above. The email provider is free to "break" the spec because it is the thing in control of that email address and can therefore process it as it sees fit. The people to whom I give an ID are not my email provider, and therefore do not have the same degree of control; consequently, attempting to parse meaning from that opaque string will cause bugs, and also is a dick move which will not be tolerated.
If you're defending this practice because you, too, are parsing the opaque components of email addresses which you do not control, then I will make a note to look into your code contributions as well and avoid anything you've touched.
Do. Not. Parse. The. Local-part. For. Aliases. Full stop. It's my email address, not yours. Respect how I enter it, or else remove it from your system entirely. Anything different is asking for bugs and is blatantly disrespectful to users.
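To spell out the bug class, here's a minimal sketch (the normalizer is hypothetical, but it's representative of code I've seen in real signup flows):

```python
def naive_normalize(address: str) -> str:
    """WRONG: assumes every provider treats '+tag' as an alias."""
    local, _, domain = address.partition("@")
    local = local.split("+", 1)[0]       # strip the supposed alias
    return f"{local}@{domain.lower()}"

# On a mailserver that does NOT do '+' aliasing, these can be two
# entirely different mailboxes, owned by two different people:
a = "pat+smith@example-corp.com"
b = "pat@example-corp.com"

assert naive_normalize(a) == naive_normalize(b)   # silently merged!
# Result: misdirected mail, or worse, account takeover for 'pat'.
```

The domain owner is the only party who knows whether '+' means anything there; everyone else is just guessing.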
> If your email provider is free to "break" the spec, so are people you give your id to.
There is no reasoning behind this argument; it is purely a verbal construct memetically derived from some inapplicable equality ethic that might make sense in a completely unrelated situation.
The correct application of ethics is that an agency which is given abc+def@gmail.com, and infers from it that this gives them permission to send email to abc@gmail.com (or, worse, to sell that address to harvesters), is behaving unethically.
^ this is the best explanation I've seen so far in the thread.
Software translates directly into money. The company I work for makes physical devices; they have all kinds of engineers, but what we're making goes directly into making the company money. Like, I can straight up say they make millions off the stuff we produce for them.
If your work makes the company a lot of money, it's not frivolous to ask for a small chunk of it lol.
It's hard to say whether it _should_ go to journalists. Even before the digital era, newspapers largely made money through advertisements, not subscriptions. They were monetizing eyeballs just as much as the Facebooks or Googles of the world do. In their case they brought in the eyeballs through their content (whether responsible journalism or tabloid trash) and monetized them by showing ads to those same eyeballs.
The difference is that journalism no longer has a pseudo-monopoly on the kinds of things they historically did (content, distribution, eyeballs etc.).
My grandfather read the whole newspaper every morning. In one day I think I read a LOT more content than he did, but it is spread over a wide variety of surfaces: print, websites (no single author), etc.
People change jobs at the drop of a hat for reasons that are well within companies' control. It's not like people want all that stress and hassle, they do it because they're incentivized to do so.
Sometimes. Other times they are just sampling what's out there to see what suits them the best, or playing the comp boost game until even their best face forward isn't able to garner a higher offer.
The former happens, but I'm not convinced it's a common occurrence. Changing jobs is genuinely stressful; I don't think people do it lightly. The latter is usually something the company could fix, but won't.
Even so, it's obvious that comp alone isn't enough to retain employees. Even FAANG companies, which pay extremely well, have pretty low retention numbers. Facebook does best here, at an average job length of only 2.02 years. If comp were enough, people would stay there longer. This implies that people are changing jobs for reasons other than "this other company will pay more".
It's not just about absolute comp. If Google will pay you more, then maybe you leave Facebook. That's not because Google pays more than Facebook across the board; it's just that it's easier to get a "promotion" by taking a higher role elsewhere than it is to get an actual promotion internally. It doesn't mean you don't pay market wages, but it does mean you don't pay that person their market wage.
That's exactly something companies have under their control. If people are leaving because it is seen as the surest route to a promotion, then providing clear advancement opportunities internally would reduce this phenomenon and help keep your high-performing talent.
The present employer and the prospective employer have different perspectives on the individual. It's entirely possible, and indeed somewhat common, for individuals to be hired into levels at new employers beyond what they could justify for a promotion at their existing employer. The only way to eliminate this is a prolonged and comprehensive interview process, which is what TFA is railing against.
I disagree that this is the only way to eliminate this problem. Companies could loosen promotion criteria to be more in line with what external candidates bring. Ultimately the cost of lost knowledge and backfilling is quite high, and could easily justify faster promotion cycles on a monetary basis alone.
Even if some of them genuinely get promoted before they’re ready and wash out, you’re not really that much worse off than if you’d lost them before. Besides, there’s always the risk that your new hire is unprepared too, which is a much harder thing to quantify.
Incredible that you're the first person I've seen mention a second-order effect in this conversation.
I don't know what it is, but I feel like people have forgotten just, like, basic truths about how humans work. Maybe it's because managerialism has infected everything.
That works if this is a "one-shot game" (in the game-theory sense) as opposed to a repeated game.
If an employer does do training, it means they'll probably continue to do more training over time, which helps the employee become more valuable.
If the employer doesn't do training, yes they may be able to allocate the training budget to salary, but they are not going to spend anything training you or letting you work on projects to increase your skills while you're there, unless they absolutely must.
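A crude back-of-envelope of that repeated-game point, as a sketch (every number here is invented for illustration; the assumption is that the gains from training compound only if the relationship lasts multiple rounds):

```python
TRAIN_COST = 20_000     # one-time training cost, paid up front
GAIN_PER_YEAR = 8_000   # extra value per year once trained
P_STAY = 0.75           # chance the employee is still here next year

def value_of_training(years: int) -> float:
    """Expected net value of training one employee over `years`."""
    total, still_here = -TRAIN_COST, 1.0
    for _ in range(years):
        total += still_here * GAIN_PER_YEAR
        still_here *= P_STAY
    return total

for years in (1, 3, 10):
    print(years, round(value_of_training(years)))
# 1  -> -12000  (one-shot framing: training looks like a pure loss)
# 3  -> -1500   (almost break-even)
# 10 -> 10198   (repeated framing: training clearly pays)
```

Viewed as a one-shot game the employer rationally skips training; viewed as a repeated game, the same numbers favor it.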
I think people also have some human perception of how they're being treated, and prefer to work for people that invest in them.
Not necessarily. It's not easy to find a boss who you genuinely trust to consider your best interests, for example.
Out of curiosity, what sort of "non-monetary" benefits were you thinking about? There's usually not a reliable way to turn (small amounts of) money into the sorts of things that really build loyalty.
So the answer is to not spend the money at all? How much are you costing your company by putting candidates through 8 hours of interviews only to reject them? Rinse. Repeat.
All the while, productivity suffers as the remaining team falls further behind due to short-staffing and being pulled away from their real jobs to interview.
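For a rough sense of that cost, here's a back-of-envelope sketch (every number is a guess; plug in your own):

```python
interview_hours = 8        # candidate-facing hours, one interviewer each
interviewers = 5           # engineers in the loop
overhead_hours = 1.0       # per interviewer: prep + writeup + debrief
loaded_rate = 120          # fully loaded cost of an engineer-hour, in $

eng_hours = interview_hours + interviewers * overhead_hours
cost_per_loop = eng_hours * loaded_rate          # 13 h * $120 = $1,560

accept_rate = 0.2          # say 1 in 5 loops ends in an accepted offer
cost_per_hire = cost_per_loop / accept_rate      # $7,800
print(f"${cost_per_loop:,.0f} per loop, ${cost_per_hire:,.0f} per hire")
```

And that's before counting the productivity lost by the short-staffed team doing the interviewing.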
Professions mandate training minimums per year in order to maintain credentialled status. They're low, sure, but they at least create a need for ongoing professional education.
I literally said it isn't all about money. Most people leave because of managers. [0]
> In general, people leave their jobs because they don’t like their boss, don’t see opportunities for promotion or growth, or are offered a better gig (and often higher pay); these reasons have held steady for years.
And if it is about money, then this is called paying competitively.
But lastly, recognize that if everyone is training employees, you're still not really losing out unless you're only hiring entry-level employees. Sure, you might be training someone who leaves, but so is your competitor. And if you're only hiring junior engineers, then you're probably doing something wrong that's much bigger.
That works when you're employing a bunch of WordPress monkeys who do nothing all day but mess with CSS and install plugins. Not so much when you've got a mature SaaS product, parts of the system are tricky to work with, and stakeholders are breathing down your neck to implement new features, so you don't have time to cross-train your teams.
Losing people who are experts within the domain of the software they're maintaining because you refuse to invest in them is going to cost you thousands of dollars... the only question is whether that's tens or hundreds of times that amount... and in some cases it can cause you to lose your entire business.
Yep. Apprenticeships solved this problem in the past (and of course created many others). Actually it’s almost a fun little exercise in economics.
Basically there are two types of efficiency: investment efficiency and allocative efficiency. (There may also be other types I don't know about.)
Investment efficiency means people are incentivized to make positive-expected-value investments. Think about how people are incentivized to invest in their house, e.g. preventative maintenance, because if the expected value is positive then they will recoup that value when they sell the house. If you’re renting you don’t have this with respect to where you live - water damage or no, not really the renter’s problem. Investment efficiency is maximized by private property, where you know that no one will take your property without your consent.
Allocative efficiency means things go to whoever is willing/able to pay the most for them. Renting does have this property - if both of us want to rent a house, and I’m willing to pay more, in most cases I’ll end up getting the house. This is why gentrification can cause displacement - when wealthier people come into a city and are able to outbid the current renters, they win and the current renters lose. Allocative efficiency is maximized by auctions and things like them, where the good goes to whoever is willing to pay the most.
Bringing it back to your comment, job training isn't worth it because our careers as programmers are dominated by allocative efficiency, not investment efficiency. If you can train a programmer to create $50,000/year more value in general (i.e. it's not training that's only useful to your company), they can now get paid about that much more by any of your competitors, and you will have to pay them about that much more to stop them from leaving. So you gain nothing from giving them general-skills training.
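The same point in arithmetic, as a sketch (all numbers invented; the 0.9 is just an assumption about how completely the market bids away the gain):

```python
C = 30_000                 # one-time cost of the training
V = 50_000                 # extra portable value per year once trained

# Because the skills are general, competitors bid the employee's
# market wage up by (almost) V, and you must match to retain them.
raise_needed = 0.9 * V     # assume the market captures 90% of the gain

yearly_capture = V - raise_needed    # $5,000/yr the employer keeps
expected_tenure = 2.0                # years, roughly FAANG-like churn

net = yearly_capture * expected_tenure - C
print(net)   # 10,000 - 30,000 = -20,000: general training doesn't pay
```

Flip it to firm-specific training, where competitors can't bid the gain away, and the same arithmetic comes out positive, which is roughly why employers prefer it.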
Another way of solving this problem is with sectoral bargaining. If you have a sector-wide union, they can make all companies start training simultaneously, or assume some of the costs themselves. It’s a win-win for the industry and for the programmers, but it doesn’t happen nearly as much as it could because of that coordination problem.
>Yep. Apprenticeships solved this problem in the past (and of course created many others). Actually it’s almost a fun little exercise in economics.
But it makes Reginald the investor angry that his ROI isn't exactly 20% each quarter, so companies jettison apprenticeships and start cooking the books to make that possible.
So, in this hypothetical, the sector-wide union is preventing individuals who learn to create an additional $50K/yr in value from realizing the increase in pay which would otherwise accrue to them?
In return for wasting training on employees that will leave for other companies, the company is getting its employees trained for free by other companies in the same way. In aggregate everyone wins because employees now get training.
The union is ensuring that no company can ruin it for everyone.
Yeah, or at least they aren’t able to capture the entire $50k/year in value. It kind of sounds bad but it’s a trade and there has to be something in it for both sides for it to happen.
You say that and I’ve seen several companies in practice echo what you’re saying. However, I fail to understand why they don’t simply make better use of contracts and probationary periods to solve that specific problem.
The problem of a probationary period is that it pushes all the risk to the employee.
While I agree interviewing has gotten ridiculous, with all the leetcoding and ten rounds of interviews and FAANG cargo-culting and whatnot, one small advantage (assuming I'm not desperate for a paycheck) is that it gives me, as a prospective employee, time to consider, and to withdraw my application if I see too many red flags or I just prefer the devil I know.
A short interview process with a probation period, on the other hand, is a big roll of the dice. Maybe I'm not able to ramp up in time, or I make a silly mistake due to unfamiliarity with the codebase or the underlying business logic. Maybe I don't get on with the team or manager. Maybe I'm going to be dumped into a doomed death-march project on day 1. I could find myself unemployed a month later with an embarrassing gap in the resume. Perhaps, on the other hand, a better interview process (not longer, just staffed with properly trained people and continuously improved with feedback) would save us all that pain.
In a world where short, high-risk interviews dominated, you could just go roll the dice again. It would be a negative signal (why is @foo interviewing after only 60 days?), but nowhere near as bad as “why is @foo still interviewing after 6 months in this job market?!”.
Probationary periods could work, but it's a coordination problem. Such periods are the norm in Europe (because it's very hard to fire someone there), but in an at-will place like the US, given that the industry doesn't really do probationary periods in general, any employer who starts doing it would be at a disadvantage.
You can't force a private entity to sue another private entity. The regulator (government) can sue it if it finds it at fault. And a private entity can choose to complain to the regulator if it wants. The agreement here between MSFT and GOOG is to try and resolve disputes internally if possible without suing each other or complaining to the regulator. There's nothing illegal in that.
Also, the vast majority of contracts these days have "arbitration clauses" that require all disputes to go through an arbitration process rather than lawsuits in the courtroom.
While there is a great debate to be had over the way many "arbitration clauses" far too often curb consumers' rights to the civil courts (so-called "tort reform": putting limits on what your legal rights are as a consumer), arbitration has "always" been seen as the default first step and lawsuits as the necessarily harder next step if arbitration fails (and if a given arbitration clause preserves someone's right to a fair trial in a court of law).
Just... wow. The jokes write themselves at this point. So instead of posing a simple question that even the most tech-illiterate customer can understand and answer reliably ("Is your router next to a microwave oven or an aquarium, by any chance?"), you make them download an app (which asks for god knows how many permissions), ask them to take a photo, which may or may not even contain the information you need and could be in terrible lighting, blurred, or subject to a hundred other failure modes, and then run image processing on it, which has its own precision and recall issues.
All of this instead of asking a simple question. If this is not technology fetishization, I don't know what is.
This only happens when a customer requests support via the app. You would be amazed at the number of customers who not only put their router in such places but then also deny it was ever there. The picture doesn't lie. Support tickets that arise from such cases can take months to close, usually only after a technician visits and sees the router's position.
It's painful to see what was once such a loved and admired brand reduced to this. :(