The thin phone is supposedly the first step toward the iPhold foldable. They'll probably slam two iPhone Airs sandwiched together for the fold, so this is the first step, I guess.
Can someone with experience doing this shed some light? I've been offered this kind of role at a Series C startup: going from engineer to a split between IC work and managing, 50/50 as I feel it, 80/20 as they describe it. I feel like it's never good to context switch. I've never seen a tech lead or manager who did well in both roles at once. Am I crazy to think that the tech lead or manager role should be 100%? Either go the IC track or the manager track. But I lack evidence to substantiate this idea of mine.
I’m in this position now. The longer I’ve been in it, the more I’ve come to realize it can be summarized as:
You experience some of the benefits of being a manager but bear all the responsibilities of managing others. It becomes challenging to make sound judgments when you must consider two different perspectives on a problem. Essentially, you’re taking on the duties of two jobs. I’ve found it incredibly difficult to step back and allow the team to make decisions without my input. My technical bias compels me to intervene when I perceive a decision as clearly incorrect. However, this approach hinders growth and may be perceived as micromanagement. While it’s a challenging position, it’s an excellent opportunity to explore management and determine if it’s a long-term career path you’re interested in.
At early startups, where people are focused on building and you have self-motivated, mostly senior+ engineers or hands-on founder types, the 80/20 thing can work. The problems start when you bring in a lot of other roles and less experienced folks, and more and more distractions build up. The 80% will become more like 30%.
I feel 4-7-8 is more natural than box breathing; its lengths seem to align better with natural breathing instead of constraining it to fixed, equal phases. Exhaling usually takes longer than inhaling.
I feel the opposite. Especially the part about NOT breathing for 7 (!) seconds, which doesn't feel natural at all. Something like 4-2-5 would have been much closer to my natural rhythm. To me the benefit of this thread is the comments recommending other apps/methods.
Airbnb is generally fine, but I would probably never use it again if I could avoid it. They have very high fees and zero support when something goes wrong. I now pretty much prefer hotels: the price is the same but the service is not comparable, at least for shorter stays.
I don’t get the point of going open source, aside from a tiny boost in marketing. What is the objective and value proposition here, considering, as others have said, it's not really open source? If I were the founder I would not do that. It’s like if Airbnb went open source or something.
Yeah I went 100% to cash about 5 days ago. Jobs report came in ok, I think Trump is enjoying the side effect of forcing the fed to lower rates to stave off recession. On the other hand the market is super emotional right now and not making sense. I’m out until the dust settles. I have to admit, it’s nice to see Wall Street twist in the wind when they’ve been running the show for so long.
Remember the financial crisis when Bush (R) wrote them a $800B check and Obama (D) delivered it hat in hand? Suck it wall st.
Meh, I shorted the Dow a couple of months ago. I was early, but it's paying off gangbusters now. He had been shouting about what he was going to do for months; free money.
Put it on a ZFS dataset and back up the data at the filesystem level (using sanoid/syncoid to manage snapshots, or any of their alternatives). It will be much more efficient than other backup strategies of similar maintenance complexity.
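For reference, a minimal sanoid policy might look like this (the dataset name and retention numbers are just assumptions for illustration):

```ini
# /etc/sanoid/sanoid.conf -- sketch only; adjust dataset and retention to taste
[tank/postgres]
        use_template = production

[template_production]
        frequently = 4
        hourly = 24
        daily = 7
        monthly = 3
        autosnap = yes
        autoprune = yes
```

Replication to another box is then one command, e.g. `syncoid tank/postgres backuphost:backup/postgres`, which sends only the incremental snapshot deltas.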
Filesystem backups may not be consistent and may lose transactions that haven't made it to the WAL. You should always try to use database backup tools like pg_dump.
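For example (the database name here is an assumption), a logical dump in PostgreSQL's custom format is internally consistent, because pg_dump works from a snapshot of the database taken when it starts:

```shell
# Dump in custom format (-Fc): compressed, and restorable selectively via pg_restore
pg_dump -Fc -f myapp.dump myapp

# Restore into a freshly created database:
# createdb myapp && pg_restore -d myapp myapp.dump
```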
Transactions that haven't been written to the WAL yet are also lost when the server crashes or when you run pg_dump. Stuff not in the WAL is not safe by any means; it's still a transaction in progress.
If a filesystem backup isn't consistent, the app isn't using sync correctly and needs a bug report. No amount of magic can work around an app that wants to corrupt data.
For most apps, the answer is usually "use a database" that correctly saves data.
Entire companies have been built around synchronizing the WAL with ZFS actions like snapshot and clone (i.e. Delphix and probably others). Would be cool to have `zpgdump` (single-purpose, ZFS aware equivalent).
I self-host Postgres at home and am probably screwing it up! I do at least have daily backups, but tuning is something I have given very little thought to. At home, traffic doesn't cause much load.
I'm curious as to what issues you might be alluding to!
Nix makes experimenting a breeze (I recently adopted deploy-rs to make sure I keep SSH access across upgrades, for rolling back or other troubleshooting). Rolling back to a working environment becomes trivial, which frees you up to just try stuff. Plus, things are reproducible, so you can try something on a different set of machines before going to "prod" if you want.
I was using straight filesystem backups for a while, but I knew they could be inconsistent. Since then, I've set up https://github.com/prodrigestivill/docker-postgres-backup-lo..., which regularly dumps a snapshot to the filesystem, which regular filesystem backups can then consume. The README has restore examples, too.
I haven't needed to tune selfhosted databases. They do fine for low load on cheap hardware from 10 years ago.
Getting my backup infrastructure to behave the way I'd want with filesystem snapshots (e.g. ZFS or Btrfs snapshots) was not trivial. (I think the hurdle was my particularity about the path prefix that was getting backed up.) Write-once pg_dumps could still have race conditions, but considerably fewer.
So if you're using filesystem snapshots as the source of backups for the database, then I agree, you _should_ be good. The regular pg_dumps are a workaround for other cases, for me.
Why would tuning be necessary for a regular setup? Does it come with such bad defaults? Why not upstream those tunings so it works out of the box?
I remember spending time on this as a teenager, but I haven't touched my MariaDB config in probably a decade now. Ah no, one time a few years ago I turned off fsyncing temporarily to do a huge batch of insertions (it helped a lot with qps, especially on the HDD I used at the time), but that's not something to leave enabled permanently, so not really tuning it for production use.
PostgreSQL defaults (last I looked, it's been a few years) are/were set up for spinning storage and very little memory. They absolutely work for tiny things like what self-hosting usually implies, but for production workloads tuning the db parameters to match your hardware is essential.
Correct, they're designed for maximum compatibility. Postgres doesn't even do basic adjustments out of the box and defaults are designed to work on tiny machines.
IIRC the default shared_buffers is 128MB, and it's usually recommended to set it to around 25% of system RAM (with effective_cache_size at 50-75%).
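As a sketch, a tuned postgresql.conf for a dedicated 16 GB machine might look something like this (rule-of-thumb values, not official recommendations):

```ini
shared_buffers = 4GB           # ~25% of RAM
effective_cache_size = 12GB    # 50-75% of RAM; a planner hint, not an allocation
work_mem = 64MB                # per sort/hash operation, per connection
maintenance_work_mem = 1GB     # vacuum and index builds
random_page_cost = 1.1         # assumes SSD storage
```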
I run a few PostgreSQL instances in containers (Kubernetes via the Bitnami Helm chart). I know running stateful databases there is generally considered not best practice, but for development/homelab tinkering it works great.
https://pgtune.leopard.in.ua/ is a pretty good start. There's a couple other web apps I've seen that do something similar.
Not sure about "easy" backups besides just running pg_dump on a cron job, but it's not very space efficient (each backup is a full backup; there's no incremental).
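The cron approach can at least be made low-maintenance. A hypothetical crontab entry (paths and database name are assumptions) that combines a compressed dump with simple retention:

```
# Nightly dump at 03:00, keep two weeks (each file is still a full backup)
0 3 * * * pg_dump -Fc myapp > /var/backups/pg/myapp-$(date +\%F).dump && find /var/backups/pg -name '*.dump' -mtime +14 -delete
```

Note the escaped `\%` -- literal percent signs must be escaped in crontab entries.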
I've got an OpenBSD server with Postgres installed from the package manager and a couple of apps running against it. My backup process just stops all the services, backs up the filesystem, then starts them again. Downtime is acceptable when you don't have many users!
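That stop-the-world approach is easy to script. A sketch using OpenBSD's rcctl (service names and the backup command are assumptions; swap in your own):

```shell
#!/bin/sh
set -e
rcctl stop myapp          # stop the apps first so writes drain
rcctl stop postgresql
# ... run the filesystem backup here, e.g.:
# restic backup /var/postgresql/data
rcctl start postgresql
rcctl start myapp
```

With `set -e`, a failing backup leaves the services stopped, which is at least loud rather than silently skipping a night.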