kopirgan's comments | Hacker News

That's exactly what history should be about: ordinary lives of ordinary people. But it's mostly about which king fought which emperor and slept with which socialite.

Depends on the sources: earlier historical writing is definitely like that, whilst more modern writing often takes a more nuanced approach.

“histoire vue d'en bas et non d'en haut” (“history seen from below, not from above”)


There's an Ericsson GH388, a phone I used in the 90s!

IIRC it was my first mobile.

Never used Nokia, though it had the major market share in those days.


I used a Nokia in the early 2000s. But my fondest memories are of my W810i (much “newer” than the GH388… by about 11 years).

I notice most of the phones seem to be missing SIM cards. Intentional disposal? Or have they just come apart over time?


Yeah, likely just thrown away.

My early phones were all Ericsson, later Alcatel, which had a nice AA-battery-powered one! That was in 2000-01. My first camera phone, I think, was a Siemens.

What a decline for European brands!


I am not much of a programmer, I only fool around a bit for fun and occasional profit. I find Helix to be very good for coding. Compared to Neovim, I could get LSPs going for Go and C without any effort. The only thing is I haven't figured out much of debugging, which I guess is a must-have for a serious coder. My favourite is printf, and it's enough for me across Go, awk, C, Excel VBA macros and JS!


As a backend database that's not multi-user, how many web connections doing writes can it realistically handle? Assuming writes are small, say 100+ rows each?

Any mitigation strategy for larger use cases?

Thanks in advance!


After 2 years in production with a small (but write-heavy) web service... it's a mixed bag. It definitely does the job, but not having a DB server has drawbacks as well as benefits. The biggest is the lack of caching of the file/DB in RAM. As a result I have to do my own read caching, which is fine in Rust using the moka caching library, but it's still something you have to do yourself that would otherwise come for free with Postgres. This of course also makes it impossible to share the cache between instances; doing so would require employing Redis/memcached, at which point it would be better to use Postgres.

It has been OK so far, but I will definitely have to migrate to Postgres at some point, sooner rather than later.
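
For what it's worth, a minimal sketch of that do-it-yourself read cache, assuming the moka and rusqlite crates; the table, key type and TTL here are illustrative, not from the parent:

    use std::time::Duration;
    use moka::sync::Cache;
    use rusqlite::Connection;

    fn main() -> rusqlite::Result<()> {
        let conn = Connection::open("app.db")?;
        // Keep read results in RAM for a while, since SQLite has no
        // server process to do this caching for us.
        let cache: Cache<i64, String> = Cache::builder()
            .max_capacity(10_000)
            .time_to_live(Duration::from_secs(300))
            .build();

        let user_id = 42_i64;
        let name = match cache.get(&user_id) {
            Some(hit) => hit, // served from RAM, no database access
            None => {
                let v: String = conn.query_row(
                    "SELECT name FROM users WHERE id = ?1",
                    [user_id],
                    |row| row.get(0),
                )?;
                cache.insert(user_id, v.clone());
                v
            }
        };
        println!("{name}");
        Ok(())
    }

As the parent notes, each process gets its own copy of such a cache, which is exactly the sharing problem a DB server's shared cache avoids.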


How would caching on the db layer help with your web service?

In my experience, caching makes the most sense at the CDN layer, which caches not only the DB requests but the result of the rendering and everything else. So most requests do not even hit your server, and those that do need fresh data anyhow.


As I said, my app is write-heavy. So there are several separate processes that constantly write to the database, but often, before writing, they need to read in order to decide what/where to write. Currently they need their own read cache in order to not clog the database.

The "web service" is only the user facing part which bears the least load. Read caching is useful there too as users look at statistics, so calculating them once every 5-10 minutes and caching them is needed, as that requires scanning the whole database.

A CDN is something I don't even have. It's not needed for the amount of users I have.

If I were using Postgres, these writer processes plus the web service would share the same read cache for free (coming from Postgres itself). The difference wouldn't be huge if I migrated right now, but I already have the custom caching.


I am no expert, but SQLite does have an in-memory store? At least for tables that need it... of course, syncing the writes from this store may need more work.
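
A small sketch of what that looks like, assuming rusqlite; an in-memory database lives inside one process and vanishes on exit, so the sync-to-disk part really is the extra work:

    use rusqlite::Connection;

    fn main() -> rusqlite::Result<()> {
        // In-memory store: fast, but private to this process and not durable.
        let mem = Connection::open_in_memory()?;
        mem.execute_batch(
            "CREATE TABLE hot_stats (key TEXT PRIMARY KEY, value INTEGER);
             INSERT INTO hot_stats VALUES ('visits', 1);",
        )?;
        let v: i64 = mem.query_row(
            "SELECT value FROM hot_stats WHERE key = 'visits'",
            [],
            |row| row.get(0),
        )?;
        println!("visits = {v}");
        Ok(())
    }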


A couple thousand simultaneous should be fine, depending on total system load, whether you're running on spinning disks or SSDs, and your p50/p99 latency demands; and of course you'd need to enable the WAL pragma in the first place, so reads don't block on the single writer. Run an experiment to be sure about your specific situation.
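
A minimal sketch of that pragma setup, assuming rusqlite; the busy timeout is an extra I'd add so a blocked writer waits instead of erroring:

    use rusqlite::Connection;

    fn main() -> rusqlite::Result<()> {
        let conn = Connection::open("app.db")?;
        // The pragma returns the resulting journal mode, so read it back.
        let mode: String =
            conn.query_row("PRAGMA journal_mode = WAL", [], |row| row.get(0))?;
        assert_eq!(mode.to_lowercase(), "wal");
        // Wait up to 5s on contention instead of failing with SQLITE_BUSY.
        conn.busy_timeout(std::time::Duration::from_secs(5))?;
        Ok(())
    }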


You also need BEGIN CONCURRENT to allow simultaneous write transactions.

https://www.sqlite.org/src/doc/begin-concurrent/doc/begin_co...
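
Worth stressing that BEGIN CONCURRENT only exists in SQLite's begin-concurrent branch (and requires WAL mode); a stock SQLite build will reject the syntax. A sketch of the usage per the linked doc, with an illustrative table:

    use rusqlite::Connection;

    fn main() -> rusqlite::Result<()> {
        let conn = Connection::open("app.db")?;
        // Writes inside a CONCURRENT transaction proceed optimistically;
        // COMMIT can still fail with SQLITE_BUSY if modified pages conflict.
        conn.execute_batch(
            "BEGIN CONCURRENT;
             INSERT INTO events (payload) VALUES ('x');
             COMMIT;",
        )?;
        Ok(())
    }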


Why have multiple connections in the first place?

If your writes are fast, doing them serially does not cause anyone to wait.

How often does the typical user write to the DB? Often it is like once per day or so (for example on Hacker News). Say the write takes 1/1000 s. Then you can serve

    1000 * 60 * 60 * 24 = 86,400,000 writes/day ≈ 86 million users
And nobody has to wait longer than a second when they hit the "reply" button, as I do now ...
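
A sketch of that single-writer idea in Rust: one thread owns the connection and drains a queue, so request handlers never contend for the database. The channel payload and SQL are illustrative:

    use std::sync::mpsc;
    use std::thread;
    use rusqlite::Connection;

    fn main() -> rusqlite::Result<()> {
        let (tx, rx) = mpsc::channel::<String>();

        // The only thread that ever touches the database.
        let writer = thread::spawn(move || -> rusqlite::Result<()> {
            let conn = Connection::open("app.db")?;
            conn.execute_batch("CREATE TABLE IF NOT EXISTS replies (body TEXT)")?;
            for body in rx {
                // ~1ms per write keeps up with the 1000 writes/s budget above.
                conn.execute("INSERT INTO replies (body) VALUES (?1)", [&body])?;
            }
            Ok(())
        });

        // A request handler just enqueues and returns immediately.
        tx.send("hello".to_string()).unwrap();
        drop(tx); // closing the channel lets the writer thread exit
        writer.join().unwrap()?;
        Ok(())
    }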


> If your writes are fast, doing them serially does not cause anyone to wait.

Why impose such a limitation on your system when you don't have to? Just use some other database actually designed for multi-user systems (Postgres, MySQL, etc.).


Because development and maintenance are faster and easier to reason about, increasing the chances you really get to 86 million daily active users.


So in this solution, you run the backend on a single node that reads/writes from an SQLite file, and that is the entire system?


That's basically how the web started. You can serve a ridiculous number of users from a single physical machine. It isn't until you get into the hundreds-of-millions-of-users ballpark that you need to actually create architecture. The "cloud" lets you rent a small part of a physical machine, so it feels like you need more machines than you do. But a modern server? Easily 16-32+ cores, 128+ GB of RAM, and hundreds of TB of space, all for less than $2k per month (amortized). Yeah, you need an actual (small) team of people to manage that, but that will get you so far that it is utterly ridiculous.

Assuming you can accept 99% uptime (that's ~3 days a year of downtime): if you were on a single cloud in 2025, that's basically what last year looked like.


I agree...there is scale and then there is scale. And then there is scale like Facebook.

We need not assume FB-level internet scale for typical business apps, where one instance may support a few hundred users max, or even a few thousand. Over-engineering under such assumptions is likely cost-ineffective and may even increase the surface area of risk. $0.02


It goes much further than that... a single moderately sized VPS web server can handle millions of hard-to-cache requests per day, all hitting the DB.

Most will want to use a managed DB, but for a really basic setup you can just run Postgres or MySQL on the same box. And running your own DB on a separate VPS is not hard either.


That depends on the use case. HN is not a good example. I am referring to business applications where users submit data. Of course, in these cases we are looking at hundreds, not millions, of users. The answer is good enough.


>How often does the typical user write to the DB

Turns out it's a lot when you have things like "last accessed" timestamps on your models.

Really depends on the app

I also don't think that calculation is valid. Your users aren't going to access the app uniformly over the course of a day. Invariably you'll have queuing delays at a significantly smaller user count (but maybe the delays are acceptable).


Never tried that on my Japan trips, as life is too rushed. But I have seen old Japanese cafes in Singapore where Japanese patrons sit for hours reading manga and sipping coffee. I'm sure the culture is there in Japan too...


Agree on the paper cup burning the tongue. Hate that too. But then coffee gets cold within minutes in a porcelain cup.

Solution: I bring along a flask and use the paper cup as a cup and the flask as a cache. It means I lose the discount offered on BYO, but that doesn't matter.


Done that many times, only I never thought of writing a nice piece like this.

Certainly not with pen and paper. Lol, that skill is gone these days; I can't write a sentence I can read later.


Democracy being restored, one oil well a day.


Hey, show some respect, you’re talking about the first ever winner of the prestigious FIFA peace prize!


Is your opinion that we're going to get less oil and less democracy? It seems likely the opposite would be true.

Venezuela GDP is all upside - it's practically a free lunch if they stop punching themselves in the face.


And simultaneously denying China a source of oil and (yet another) foothold in South America on the USA's doorstep?

It's like Cuba all over again.


Oil is fungible. If America takes Venezuelan oil for themselves and doesn't let China have any, China will just buy more from Russia and the Middle East instead. There's no oil blockade to China, those ships aren't being stopped, and the minute such a blockade would be announced is the minute this 5th generation warfare WW3 turns into a 3rd generation war.


You need to understand marginal economics and why China will have to pay slightly more for oil as a result of this, leaving China with less money to spend on other things.

Plenty of things are fungible, that doesn't eliminate scarcity.


Trump himself just now said he is good friends with Xi and they will get their oil.


Does the movie The Maltese Falcon also enter the public domain?


There are two (1931 and 1941), but no: a movie is its own work. It’s the same with translations.


Thanks.

Was referring to the Bogart version.


It’ll arrive in the US public domain in 2037, so a little wait.


I have the MAD archives, bought in the 90s on CDs, but can't use them...


The issues on the Absolutely MAD DVD (1952-2005) are just plain PDF files, no DRM; they work perfectly.

https://files.catbox.moe/x4np6u.png


The CDs I have seem to be proprietary for Windows from the late 90s. But I also have PDFs through 2005 on my computer which I must have "acquired" at some point.


The browser app might be some outdated Windows application (that's the case with the MAD DVD too), but you can find the actual issue files in some folders.


Yes, the file names are something unknown. It has its own software for access. They did a damn good job.

For instance, on Disk 1 there is a big binary file, mad.m1, 492 MB. That seems to hold the content, but I'm not sure what file type it is or which program can open it. The rest of the files are very small.


No, mine were pre-DVD era. On CD. Older. They had a surprisingly good UI with its own funny stuff. You install that and insert disks 1-7 based on which issue you select. It even scolds you for inserting the wrong disk, with comments like 'you can insert a CD of Yanni if you prefer screeching' or something like that. Lol, don't know what MAD has against him; their comments are always funny.


I have MAD archives somewhere. I thought they were in some standard format but maybe not.

A lot of the gen-1-or-so CD content isn't easily accessible, although a more industrious person could probably get to it in some manner.


I have the CDs backed up as ISO files which I can mount, since these days laptops don't have CD drives.

Need to try on the latest Windows 11; I gave up earlier. For a while I had a Windows 2000 virtual machine that worked.

