That model still works for streaming. You have a central source that streams only to the distributed edge locations, and clients stream only from their local edge location. Even if one region is overwhelmed, the rest can still work, and load on the central source is bounded.
Humans have a massive capacity to vary energy use. Highly trained endurance athletes like professional road cyclists and triathletes can sustain 3x or more the typical daily energy expenditure of a non-athlete on a long-term basis. The idea that psychological stress can overwhelm the body's ability to produce energy does not seem credible to me.
Those people have trained very deliberately over years to reach that level of performance, on top of an innate genetic disposition.
Undoubtedly, in absolute terms they have a higher capacity to withstand the negative physical effects of psychosocial stress as described in the paper, precisely because of these physiological adaptations.
If regular people trained themselves to deal with stress then they would have a higher capacity too.
The paper is referring to the maximum capacity of a particular organism at a particular moment in time. It doesn't assert that the capacity is uniform across a species or doesn't change over time.
I'd be interested in knowing more about your API use. Are you using the official 3rd party API or one of their internal APIs used by their own web front-end?
It's a good idea for a project like this. One bit of feedback is that it would be helpful to have a bit more context for the images - the titles get ellipsized on mobile, and when viewing a full image you can't see the title.
From my understanding, it's used a lot by moderators. They may well block me since I don't pass any authentication or identifying info, but I don't think that'll happen soon, as my use case is extremely simple (knock wood).
Thanks! That's right, it's pure Python. It's certainly a lot slower than C or Rust, but I've not done any benchmarking to get a good sense. I believe Deno has a Rust implementation that they use, so in principle a Python binding could make use of that. I think V8's own C++ implementation is too integrated with V8 itself to be reusable, at least without refactoring.
My immediate use case is very IO-bound and won't use huge message sizes, so decoding/encoding performance probably isn't a huge problem. My hunch is that it should be fast enough for event handling with small messages, and it'd also be fine for passing binary buffers between Python and JS. (E.g. using an array library like numpy and shipping an array over as a buffer, with some other JS objects for extra metadata.) (More so if I implemented reading/writing an mmap.)
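To make the buffer-plus-metadata idea concrete, here's a minimal sketch using the stdlib array module in place of numpy so it's self-contained (the metadata field names are made up for illustration; a real binding would ship the bytes and the metadata object across the Python/JS boundary):

```python
import array
import json

# Sender side: pack the numeric data as a raw binary buffer, plus a small
# JSON-serializable object describing how to interpret it.
values = array.array("d", [1.0, 2.5, 3.0])
buffer = values.tobytes()
metadata = json.dumps({"typecode": "d", "count": len(values)})

# Receiver side: rebuild the array from the buffer using the metadata.
info = json.loads(metadata)
restored = array.array(info["typecode"])
restored.frombytes(buffer)
assert restored.tolist() == [1.0, 2.5, 3.0]
```

With numpy it's the same shape of API: `arr.tobytes()` on one side, `np.frombuffer(...)` plus a dtype/shape from the metadata on the other.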
MappingProxyType is another handy one. It wraps a regular dict/Mapping to create a read-only live view of the underlying dict. You can use it to expose a dict that can't be modified, but doesn't need copying.
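A quick sketch of that live-view behavior, using only the standard library:

```python
from types import MappingProxyType

config = {"debug": False}
view = MappingProxyType(config)   # read-only live view; no copy is made

assert view["debug"] is False
config["debug"] = True            # mutations to the dict show through the view
assert view["debug"] is True

try:
    view["debug"] = False         # writing through the proxy is rejected
except TypeError:
    pass
```

Handy for exposing internal state from a class attribute without letting callers mutate it.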
Two main situations, I think. The first is just interactive use in any shell to encode ad-hoc JSON. If you have a next-gen shell which can handle structured data directly, then you probably don't need it.
Second is situations where you'd rather not add an additional dependency, but bash is pretty much a given. For example, CI environments, scripts in dev environments, container entrypoints. Or things that are already written in bash.
I don't advocate writing massive programs in bash, for sure it's better to turn to a proper language before things get hairy. But bash is just really ubiquitous, and most people who do any UNIX work will be able to deal with a bit of shell script.
> Second is situations where you'd rather not add an additional dependency, but bash is pretty much a given. For example, CI environments, scripts in dev environments, container entrypoints. Or things that are already written in bash.
"Dependency" generally means "external dependency".
It's only a dependency if your solution's build process fetches it from its upstream repo. (Or worse: it's just mentioned in some manual build instructions as one of the things your solution needs.)
This is small enough to copy into the source tree of your solution, in which case it's no longer a dependency.
It is, but if you already have bash, adding another shell script isn't much of a jump. e.g. I'd feel OK about committing jb to another repo for use from a .envrc file to set up an environment, whereas committing a binary would not feel good.
> Biggest crime of the Unix world probably.
Sorry if I'm perpetuating this! :) My take is that the problem is not with bash; the problem is that it's hard for more advanced tools to replace it.
But for when you don't want an extra dependency, awk and perl are better than bash and just about as ubiquitous. (I might dare to say more ubiquitous, since macOS in particular ships with an ancient version of bash that can't even use this jb tool. But the versions of awk and perl it comes with are fine.)
This is just the kind of use case I had in mind. Something I've considered is publishing a mini version with only the json.encode_string function, as that's enough to create an array of JSON-encoded strings and use a hard-coded template with printf to insert the JSON string values.
That would be a fraction of the overall json.bash file size.