As an example, I would assume the worker is making requests with the internal view of the site, but cannot have an internal view of other sites, or security problems would ensue. So what happens when two of my sites have service workers fetching something from each other on each request?
As you guessed, when your worker makes a subrequest to your own zone, it goes directly to your origin server, but when you subrequest to some other domain, it goes in "the front door", and that other domain's scripts apply.
If a request bounces back such that the same worker script would need to run twice as a result of a single original request, then it fails with an error. There's nothing else we can do here: we can't let the request loop, but we also can't let it skip your script after it's bounced through a third-party script.
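To make that loop rule concrete, here is a minimal Python sketch of the behavior described above: each zone's script may run at most once per original request. The header name and the trace mechanism are made up for illustration; Cloudflare's actual internal accounting is not public.

```python
# Hypothetical model of the "same worker script can't run twice per
# original request" rule. We pretend the trace travels as a header
# listing the zones whose scripts have already handled the request.

LOOP_HEADER = "x-worker-trace"  # made-up header name

def trace_subrequest(headers, zone):
    """Record that `zone`'s worker handled this request; raise on a loop."""
    seen = [z for z in headers.get(LOOP_HEADER, "").split(",") if z]
    if zone in seen:
        # The request bounced back to a script that already ran:
        # per the comment above, this fails with an error.
        raise RuntimeError(f"request loop: worker for {zone} already ran")
    new = dict(headers)
    new[LOOP_HEADER] = ",".join(seen + [zone])
    return new

h = trace_subrequest({}, "site-a.example")   # original request hits site A
h = trace_subrequest(h, "site-b.example")    # A's worker subrequests site B
# trace_subrequest(h, "site-a.example")      # B fetching A again would raise
```

This is only a sketch of the stated semantics, not an implementation detail of Workers; the point is that the trace must survive the hop through the third-party zone, which is why the request can neither loop nor skip your script.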
If I make a device that lets half of society sit on its ass for 100 years then murders everyone, I think the costs should be factored in by suing me before everyone is dead. But maybe that's just me.
Every technology has negatives that companies try to turn into externalities. The ones that hurt their non-customers are the most egregious and should be subject to the highest corrections. Hardly a new theory.
To give an example, we should be paying more for many electronics as a result of suits against most existing companies for failing to address tantalum sourcing.
This is the kind of attitude that disturbs me about recent journalism and what appears to be the dominant opinion in the US.
Getting quantitative results is entirely useless if you don't want to think about the subjective, qualitative reasons we thought some arbitrary metric might matter, and how those reasons might be limited. Having a high GDP or salary is about raising your quality of life and options. If you are actively prevented from living all sorts of other lives simply because they are not as profitable, then you are poorer than the wild man.
So I can create antisocial companies that hurt more than they help and take/distribute profits until damning evidence is assembled using public resources, then pay back a small portion from whatever I couldn't get off my book value?
I'd say that's giving organizations rights in civil matters that are only supposed to apply to individuals in criminal ones. The end result is thousands of companies actively trying to kill us and manipulating evidence and politics for much higher profits than the ones doing net positive work.
You're presuming guilt before guilt has been established.
Why do companies make profit? It's because they're offering a product or service that people buy. Why do people buy the product or service? It's because the value they get from the product outweighs the cost. What this really means is that all successful companies are doing social good, simply by being in business. If a company is making a big profit, they're probably doing so by providing much more value in their product than what they are charging.
The people who buy stolen goods also get more value than they spend, so all thieves are inherently good?
Some companies produce more good than bad. Some companies produce more bad than good, but the bad is primarily not affecting their customers. The latter is inherently easier which is why civil courts must be eternally vigilant.
Thieves are using force or fraud, and that's obviously wrong.
I agree that we should include the negative externalities in any product's price, and that can be done by adding taxes to products. And that we should have a healthy civil court system. And I agree that companies often get to settle for far too little. But the way you phrased your previous comment was _really_ putting the cart before the horse.
Flying cars not existing has everything to do with the resources they use (public, federally controlled airspace, which requires every driver to be a pilot).
Quantum computers have the same barriers as other private space technology. The question to me is how long they will take to go from research toys to hobby toys, which has a lot to do with cost and something to do with consumer safety.
Actually, I mentioned flying cars due to fuel inefficiency (I automatically assumed it was too low to make them practical), but I neglected to actually check their energy usage. Now that I have, it seems the maximum efficiency could be on par with a normal car, depending on the actual trip, so my mistake on that.
A better example might be artificial intelligence and the AI winters. Like the failure of machine translation despite high hopes in the 1960s. It took us some 50 years to get to where we are now with Google Translate, and it still only works well for some languages. I have no idea how close to human accuracy it will get, so I would neither be surprised if it never does, nor if it does someday.
Anyway, hopefully instead of picking on the particular examples you can see that my point is obviously that people imagined we'd have a lot of things now that we still don't have the technology for today. So the fact that we've done amazing things in certain areas doesn't really say anything regarding this particular issue.
> my point is obviously that people imagined we'd have a lot of things now that we still don't have the technology for today. So the fact that we've done amazing things in certain areas doesn't really say anything regarding this particular issue.
And there's plenty of things that people said were impossible that were achieved.
That doesn't tell us anything regarding quantum computing, either.
Looking at what people said about past things is completely irrelevant to the prospects for some particular potential technology. The only way you can try to judge the prospects for that technology is by actually getting into the details of that technology and how it relates to our current understanding of the world.
I'm not refuting anything. All your comments have focused on things that people have promised that haven't turned out, which gives one the impression that that somehow applies to quantum computing as well. I'm trying to paint a more balanced picture of the situation.
I see your point. I'm just trying to point out that failures of general tech to emerge are quite different from failures of use cases. You'd need something more like a hard limit preventing the emergence of a Moore's law for entanglements to really make quantum computing go away, rather than just having it solve problems other than those originally assumed to be "easy" and months away.
A community that hasn't established a semaphore for updating the code isn't a community. It is hard to build that with consensus instead of input from its creator.
The nature of what succeeds in open source has everything to do with whether the creator puts community work in themselves, nominates someone, or just publishes read-only code.
Naturally, anyone is free to do whatever they like, but most people seem to expect a result from their actions, and publishing pure code is rarely going to have any result.
It's really about blasting advertising and hiring "good enough" cheap. The engineers with a network will know not to go through your process and would expect a salary appropriate to the job; the ones who developed a lot of skills on their own with no network (or who are afraid of honest discussions with peers, where they might not be the best) will settle for what they are offered.
When I look at who from my network (or, really, who is missing from my network) is at the places with these tactics, losing time and failing to get the job is not the worst outcome.
Is File Stream a kernel module reimplementing NFS or Samba? Or are they killing standard FS access (achieved via rsync) and trying to position a webapp bundled in Chrome in its place?
The following (linked from the tool's about page) will give you a good idea of the underlying technology: https://support.google.com/drive/answer/1716931. In a nutshell, you're mounting your drives via FUSE to gain a filesystem interface, on-demand streaming, and the option for offline access. Google has wrapped quite a few niceties around the native file interface (Finder/Explorer), and I imagine there are more to come (e.g., integration with native indexing/search).
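The FUSE-based design implies semantics along the lines of "fetch a file on first read, then serve it from a local cache for offline access." Here is a minimal Python sketch of that idea; the class and names are hypothetical, and the real client exposes this through a FUSE mount rather than an object like this.

```python
# Hypothetical sketch of on-demand streaming with an offline cache.
# First read downloads the bytes; later reads are served locally.
import os

class StreamedFile:
    """Fetches file bytes lazily and caches them on disk."""
    def __init__(self, name, fetch, cache_dir):
        self.name = name
        self.fetch = fetch                       # callable returning remote bytes
        self.cache = os.path.join(cache_dir, name)

    def read(self):
        if not os.path.exists(self.cache):       # first access: stream down
            with open(self.cache, "wb") as f:
                f.write(self.fetch())
        with open(self.cache, "rb") as f:        # subsequent reads hit the cache
            return f.read()

import tempfile
remote = {"report.txt": b"quarterly numbers"}
with tempfile.TemporaryDirectory() as d:
    f = StreamedFile("report.txt", lambda: remote["report.txt"], d)
    data = f.read()                              # downloads and caches
    data = f.read()                              # served from the cache
```

A real FUSE implementation would hang `read()` off the filesystem's read callback, which is what lets the native Finder/Explorer interface work unchanged.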
Interesting article, but I don't really agree with their conclusions. I think social networks are a legacy of first engaging people on the real web, and will share a fate similar to AOL's.
Experiments in decentralized social networks are more of a test run of components for "subversive" networks that will eventually engage the global youth politically on Tor, rather than competing with Facebook for grandparents sharing photos and recipes, or with Myspace as MP3 storage for forty-somethings with bands.
Ellison bought Sun, then disparaged the cloud and killed Sun's cloud ventures, only to watch Microsoft recover by selling cloud to a market that no longer wants proprietary desktops.
I was very negative on Sun, but watching Microsoft eventually fill the gap, I have to assume IBM would have made me eat my shoe if they had gotten the deal instead of Oracle.
Fujitsu was interested; the US threatened to block any foreign sale. IBM was interested; a loudmouth blocked negotiations and pushed his golf buddy instead.
Sun made some strange moves toward the end, but there absolutely could have been a product line left if the developer market felt neutral about the buyer and the buyer tried to focus on upsells and professional services. The way Oracle tried to sneak this EOL in is very much evidence of there still being support licenses and professional services money for a few years more.
But it was worth it if the Sun curse took down Ellison. May your foot never leave your mouth again, cloud boy. Now go play golf with the network's owner.