"For this is the source of the greatest indignation, the thought 'I’m without sin' and 'I did nothing': no, rather, you admit nothing."
- Seneca, "On Anger"
Sad to see such an otherwise wise/intelligent person fall into one of the oldest of all cognitive errors, namely, the certainty of one’s own innocence.
You can still enable remote work while putting guard rails around video conferencing, given the cognitive load and emotional drain it clearly causes.
> My experience is that the people who love video are highly correlated with the people who love useless meetings.
Strong agree. If you want your video on, I am cool with that. If you want it off, also cool. If you're not present, I'm going to know either way, but I want you to be comfortable while we work together. I care about the output and outcomes, not the control. n=1, ymmv, etc.
As a big advocate of remote work, I've come to agree with this less and less over the years. Done well, remote work is great. Done poorly, it's killing me. Somehow it often saps even more of my energy than office work did.
On days at the office I get less done in terms of raw output, but it feels more satisfying than remote work, because I come away with a better understanding of the situation and the sense that I'm doing the right thing at the right moment.
Offloading the use of your brain to proprietary and legally murky third-party services that can deny you access for any reason whatsoever seems shortsighted. What happens when you lose access to these services and find out you don't actually know how to do most of what you need to do?
And you risk all of your work being owned by some entity you have no hope of fighting, leaving you with nothing to show for it but an atrophied brain, because you've offloaded all your thinking to a machine that doesn't belong to you and can't be audited.
What is to stop the owners of these AI systems from denying service to users who try to build a product that competes with them? Or from straight up taking your work and using it themselves?
You still need to be basically literate to understand what you're doing; otherwise you're adding zero value. Having AI tools solve problems for you means you're not learning how to solve those problems yourself. It's especially problematic when you don't have a clue what you're doing and just take the AI at its word.
I think you still have to be pretty good at programming in order to bend a GPT to your will and produce a novel program. That's the current standoff, and it might remain this way for a long time.
I strongly disagree. I believe someone who has never programmed could likely solve multiple Advent of Code tasks using GPT-[x] models plus some copy/pasting and retries, and I'm 100% convinced that a poor programmer (i.e. not "pretty good at programming", but with some knowledge) can do so.
"Learning how to use an AI" is a good phrase; it's indeed not just "using an AI". It's a process in itself, and it involves learning, or already knowing, how to code.
Maybe this will be true in 2030, but in 2023 AIs can help you quickly get off the ground in unfamiliar domains but expert knowledge (or just being knowledgeable enough to write code) is still king.
That is: if your goal is to quickly get out a prototype that may or may not work (even if you don't understand it very well), using AIs is great. But if you want to improve as a programmer, it may not be the best (or only) path.
Can you provide some data to support this? I’ve had good luck with applying through LinkedIn and company portals. What other channels are you thinking of?
In my experience, both as an applicant and a hiring manager, a warm introduction to a hiring manager or a referral works something like 10x better than an online application.
James Surowiecki's book "The Wisdom of Crowds" explores this idea of harnessing collective intelligence in detail.
The basic idea is that when everyone in a crowd makes a prediction or an estimate, the average of all those guesses will often be more accurate than any individual's opinion because individual errors, biases, and idiosyncrasies tend to cancel each other out in a large enough group.
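To make the error-cancellation point concrete, here's a minimal simulation (my own sketch, not from the book; the jellybean-jar setup and all numbers are invented, and the effect relies on the errors being reasonably independent and unbiased):

```python
import random

# A crowd estimates a true quantity, each person with independent noisy
# error. The mean of all guesses lands much closer to the truth than a
# typical individual guess, because independent errors largely cancel out.

random.seed(42)
TRUE_VALUE = 1000   # e.g. jellybeans in a jar (made-up scenario)
CROWD_SIZE = 500

guesses = [TRUE_VALUE + random.gauss(0, 300) for _ in range(CROWD_SIZE)]

crowd_estimate = sum(guesses) / len(guesses)
mean_individual_error = sum(abs(g - TRUE_VALUE) for g in guesses) / len(guesses)

print(f"crowd estimate: {crowd_estimate:.0f} "
      f"(error {abs(crowd_estimate - TRUE_VALUE):.0f})")
print(f"mean individual error: {mean_individual_error:.0f}")
```

With 500 guessers the crowd's error shrinks roughly with the square root of the crowd size, so the average ends up far closer to the truth than the typical individual.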
There's also the related idea of Superforecasting (explored initially by Tetlock): some people seem to just be really damn good at assigning probabilities to events. A platform like Metaculus allows finding those people and, at least to some extent, training them.
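I don't know Metaculus's actual scoring internals, but the standard textbook tool for identifying strong forecasters is the Brier score: the mean squared error between each stated probability and the 0/1 outcome, where lower is better. A minimal sketch (all names and numbers below are invented):

```python
# Score each forecaster's track record with the Brier score and rank them;
# consistently low scorers are the "superforecaster" candidates.

def brier_score(forecasts: list[tuple[float, int]]) -> float:
    """forecasts: list of (predicted probability, actual outcome 0 or 1)."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

history = {
    "forecaster_a": [(0.9, 1), (0.2, 0), (0.7, 1), (0.1, 0)],
    "forecaster_b": [(0.6, 1), (0.5, 0), (0.5, 1), (0.4, 0)],
}

for name, record in sorted(history.items(), key=lambda kv: brier_score(kv[1])):
    print(f"{name}: Brier score {brier_score(record):.3f}")
```

Here forecaster_a, whose probabilities are both confident and correct, scores 0.038 against forecaster_b's hedgy 0.205, which is the kind of gap that lets a platform surface its best predictors over time.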
> If we're lucky, that means a federated, open, mostly-ad-and-suggestion-free open source social media experience can fill the power vacuum for intimate, interpersonal, high-latency communication over the internet.
Worth mentioning is Jimmy Wales' effort in this vein: https://wt.social/
Money is the most powerful mechanism for influencing human behavior that mankind has ever devised. The phenomena this article describes (banks in New York, chain outlets, etc.) are emergent - they arise from the aggregate behavior of lots of ordinary people working (at least in part) for money. If the goal is to change the emergent phenomena, then it's necessary to give the average individual an incentive that's more powerful than money. I don't see that here.