I would like to see a language use decimal by default. Under the hood, it could use floating point for certain types of calculations where less precision is required. If I want to know the answer to 0.2 + 0.1, chances are I want the answer to be 0.3. If I'm writing performance-sensitive code, maybe I can drop down a level and opt for float(0.2) + float(0.1). Are any languages doing this currently?
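For illustration, here's roughly how that plays out in Python, which takes the opposite default (binary float) and makes you opt in to decimal via the standard-library decimal module. Just a sketch of the idea, not a language that defaults to decimal:

    from decimal import Decimal

    # Binary floating point: 0.1 and 0.2 have no exact representation,
    # so the sum picks up a tiny error.
    print(0.1 + 0.2)                    # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)             # False

    # Decimal arithmetic gives the answer most people expect,
    # at some cost in speed.
    print(Decimal("0.1") + Decimal("0.2"))                    # 0.3
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True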
Perl 6 uses rationals by default. They have the advantage of being base-agnostic, able to accurately represent any recurring digit expansion regardless of eventual base, and also faster (since, especially if you normalize (convert to lowest terms) lazily, most operations are just a few integer instructions with no branching, looping, or bit-twiddling involved).
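Python's fractions module can sketch the same idea (not Perl 6 itself, just an illustration of rational arithmetic): decimal literals parsed as rationals stay exact regardless of base, and results come out in lowest terms:

    from fractions import Fraction

    # Rationals represent any terminating or recurring expansion exactly.
    a = Fraction("0.1")     # 1/10
    b = Fraction("0.2")     # 1/5
    print(a + b)            # 3/10 -- exactly 0.3, no rounding error

    # Values that binary floats can't represent exactly are fine too.
    third = Fraction(1, 3)
    print(third * 3 == 1)   # True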
Scheme has rationals, but decimal literals aren't rational by default. You have to type #e in front of it, or express it as a fraction.
It really ought to be fixed at the implementation level (how much stuff would it really break if floating point errors went away?), but failing that I'd love a macro that did it for me. I can't figure out how to write one robustly.
I find it highly unlikely this was a coordinated PR stunt. If revealed, this would damage the reputation of both parties, particularly Taylor. On a risk-adjusted basis, it’s not worth it for such established players to do this sort of thing. There’s also the secondary issue of how Apple would initially raise this topic. What are the chances she would have wanted to play along? Maybe she has a close friend at Apple who’d feel safe suggesting this, but it seems unlikely.
It’s very hard to exactly predict how stories will develop. What if lots of other artists came out and publicly supported Taylor’s stance? Then, even if Taylor agreed to Apple’s next deal, the other artists might not. In which case, Apple finds itself in a pickle and nobody would be suggesting that this was coordinated – even if it was.
Programmers are not better than programs if the program they are competing against is a Black-Scholes estimator, compiler, interpreter, or assembler, and time is an input to the cost function.
A compiler, interpreter, or assembler is basically a translator. It doesn't have to think; it's simple computation following a set of rules, so of course a computer would win in that case.
But can a computer write code from scratch that's as good as a human's? No(t yet).
Human minds are nothing more than computers running on unoptimized substrates. It's only a short matter of time on an evolutionary scale before machines surpass us.
unoptimized? Hardly. Evolution has optimized our nervous systems for the task of heuristic information processing over tens or hundreds of millions of iterations, in parallel, over billions of different implementations.
That part of your sentence is wrong, and the rest (minds are nothing more than computers running on a substrate) is wrong or meaningless, too. It just so happens that computation in the Turing model is one of the things human minds do, and badly. It's very much not the only thing. (Unless you mean "computer" in a sense so vague and abstract that the statement becomes vacuous.)
> unoptimized? Hardly. Evolution has optimized our nervous systems for the task of heuristic information processing over tens or hundreds of millions of iterations, in parallel, over billions of different implementations.
The biological method of information processing, calcium ion transfer, is demonstrably orders of magnitude slower than artificially devised methods using semiconductors. So it is physically not optimal, and that is easily proved. So that part of my sentence is correct.
The rest of your comment is too ill-defined to refute, unless you hold that there is some non-material, extra-physical quantity of mental process that cannot be duplicated by engineering.
Having slower individual components running at ridiculously low power is a valid optimization because it allows very close packing of those components and levels of interconnection that we can only dream of in our designed systems.
The optimization is a subtle one but extremely powerful, and it will be a while before we can pack equivalent computational power in something of similar size and power requirements.
Think about the amount of hardware required to simulate a cat brain at reduced speed, then think about the amount of hardware in a cat brain.
People have been making this claim since computers have existed, and it hasn't happened yet. It's similar to the way that fusion power is always 20 years away.
I think it will happen eventually, but given the long record of failed predictions in this area, I think that a blanket statement that it's going to be a "short matter of time" is unwarranted.
> 3) How do you determine which devices and OS versions we should test on?
> This should generally be an easy question for the candidate. Good candidates will point to app analytics as the best measure, looking for the most used devices for their particular app. Another good answer would be to check the app reviews, where people might have complained about specific issues happening on their devices. Less creative candidates might simply suggest “looking at the top devices” on the market.
Looking at top devices is also incredibly important, and it seems foolish to dismiss that strategy.
What if your website doesn't work on iPhones, and so you have no iPhone users? What if, because people can't even sign up on an iPhone, they don't bother complaining about specific features not working?
Surely it makes more sense to test based on your target market, rather than the subset of people who are self-selecting to use your product?
Ideally you should be looking at both usage data and market data. If your target market contains lots of iPhone users but all of your current users are on Android, that points to a potential problem.
As an Android developer, if hiring for an Android QA position I'd be thrilled to interview a QA Manager or QA Person who was specifically emphatic about testing on a wide array of Samsung devices.
Not because they are among the "top devices" (though they usually are for each subcategory), but because it shows they have enough experience to have seen first-hand that a surprising number of the device- and/or OS-version-specific issues that Android apps run into occur on Samsung/TouchWiz devices.
The purpose of QA is to serve the existing user base first, so it makes sense to find out what that user base is first; you're missing the point.
This is possibly the most source-friendly piece of tech journalism I've read in a while. I don't think there is a single point in that piece that would make Yahoo PR the least bit uncomfortable. Credit where it's due: the article is incredibly well written and beautifully presented. It's just a shame to see such unquestioning faith in a source. One by one, Levy hammers away at every recent criticism of Yahoo. Anything that could not be justified was deemed to be overblown.
"Some of the various flaps involving her leadership were crazily overblown, like her personal child care accommodations, or an edict against working at home that affected a tiny percentage of Yahoo’s workforce."
Every original fact in this article is information Yahoo would obviously want to share, like the success of its revamped ad platform. Based on the way he has failed to question anything else that Mayer has done during her tenure, it is hard to believe that these facts were questioned and examined in any detail.
Don't get me wrong: I think that Mayer may be doing a good job. I'm not too sure either way. I just think, as a piece of tech reporting, this is embarrassing.
And in his own words (https://medium.com/@stevenlevy/im-moving-to-medium-6869c0e32...), he was hired by Medium to "establish a tech hub that strives to bring well-reported, lively, and meaningful reporting and writing to what is already shaping up as a terrific platform for the written word."
I agree the article is favorable to Yahoo, but it strikes me as likely that the author conducted interviews, reviewed the facts, and wrote what he thought was the truth. Some folks on this thread may disagree with his conclusions, but that by itself does not make it bad journalism.
Disclaimer: Steven is a friend, occasionally a competitor (before I left to found Recent.io), and someone who once approached me to work with him.
PR piece: embarrassing.
Hit piece: equally embarrassing.
Why are the critical articles more appealing in this case? Is there some inherent desire to see the criticism turn out to be true which is greater than the desire to see the criticism turn out to be unfounded?
As much as you can disagree with the content of this article, it's hard to argue that it's as light and fluffy as the PR piece served up by Levy.
Regarding Carlson's stuff on Yahoo, you might disagree with some of his opinions. You might even think his book is a hit piece. But, at the very least, it's a well-researched hit piece.
Levy only seemed to speak with Yahoo PR and republish their opinions directly. I don't think any reasonable reader could assume that Carlson only spoke to a hedge fund manager short on Yahoo.
Looking at the release as a whole, I'd be fascinated to hear which features took the most time and effort. What turned out to be much harder than expected?
One big challenge is finding the sweet spot between flexibility and optimizability. Perl 6 is designed for extensibility, and yet we still want to know at compile time where a given subroutine call will dispatch to.
Getting the right trade-offs there was no easy task.
Another challenge was designing the type system and built-in types so that things work intuitively for a Perl programmer while the type system still has sane rules.
Finally, one thing that's surprisingly tricky is precompilation. You need to serialize types and objects, and then another compilation unit comes along and needs to modify something, so it claims ownership of a piece of serialized data or code. There are far too many nasty corner cases in that area.
If you filter by Python, you can compare some results running on both CPython and PyPy. I would be curious to know what it is about these benchmarks that makes PyPy perform poorly. I would also be interested to see how Nuitka performs.
I'm supremely sceptical of those benchmarks. Go take a look at the code being tested. I would welcome a serious look at development time versus resources used, using code that is plausible in production. I read the code used to test Django. It's not reasonable code.
Correct me if I'm wrong here, but isn't that only helpful if you have multithreaded python programs? I have found that if my process is too slow, I can consider porting it to numpy/numba, cython, using pypy or dividing up the work using multiprocessing. multiprocessing is barely more work than using the threading module, and completely avoids the GIL AFAIK.
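For what it's worth, the swap from threads to processes really is small in many cases. A minimal sketch (the crunch function and the input range are made up for illustration):

    from multiprocessing import Pool                 # process-based, sidesteps the GIL
    # from multiprocessing.pool import ThreadPool    # thread-based, same API, GIL applies

    def crunch(n):
        # hypothetical CPU-bound work on one item
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        with Pool(processes=4) as pool:
            results = pool.map(crunch, range(10000))
        print(len(results))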
Well, multiprocessing (the module) is problematic because not everything can be pickled. If you are working with simple functions this can be fine, but often we use third-party libraries that use lambdas (which can't be pickled). Often you don't even know why something can't be pickled.
What STM does (as I understand it from blog posts by the PyPy team) is provide real threading without having to worry about pickling.
And yes, it only helps if you have multithreaded programs. However, multithreaded and parallel programs are very relevant today.
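A small sketch of the pickling pitfall mentioned above: a module-level function works, while a lambda fails because pickle looks functions up by name. On recent Python 3 this typically surfaces as a PicklingError raised from map; the exact behaviour varies by version:

    from multiprocessing import Pool

    def double(x):              # module-level function: picklable, works fine
        return 2 * x

    if __name__ == "__main__":
        with Pool(2) as pool:
            print(pool.map(double, [1, 2, 3]))      # [2, 4, 6]

            try:
                pool.map(lambda x: 2 * x, [1, 2, 3])
            except Exception as e:                  # typically a PicklingError
                print(type(e).__name__, e)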
I found that PyPy sometimes has unexpected slowdowns. When we were porting some offline processing tools from CPython to PyPy, the craziest one was building strings via += and flattening arrays via sum(arrays, []), both of which are much slower than on CPython.
I find that unexpected. Java has had the string-builder optimization for a long time, and CPython is much, much faster in this respect. It's not always easy to use "".join when building a string, so you end up having to build a separate array of strings in some cases. And building arrays isn't always that fast either. And [].join doesn't exist, so summing arrays is always kinda slow.
Anyway, all that is to say: I really like PyPy, and we use it a lot, but those _unexpected_ crazy slowdowns are unfortunate.
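For reference, the patterns in question look roughly like this. The string += slowdown is implementation-specific (CPython special-cases in-place concatenation when the string has only one reference, an optimization PyPy apparently doesn't replicate), while sum(lists, []) is quadratic on any implementation:

    import itertools

    parts = ["chunk%d" % i for i in range(10000)]
    lists = [[i, i + 1] for i in range(2000)]

    # Pattern reported as slow on PyPy: repeated string concatenation.
    s = ""
    for p in parts:
        s += p                  # quadratic without CPython's refcount trick

    # The usual fix: collect the pieces, join once.
    s2 = "".join(parts)         # linear

    # sum(lists, []) copies the accumulator on every step -- quadratic everywhere.
    flat_slow = sum(lists, [])
    # A linear alternative:
    flat_fast = list(itertools.chain.from_iterable(lists))

    assert s == s2 and flat_slow == flat_fast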
This looks pretty elegant, but it's a shame that Elixir uses Ruby-style multi-line code blocks with do...end. Looking at those code examples, 'end' takes up around 25% of the lines containing code. Does anyone know if they considered taking the Python approach? I'd be curious to hear the arguments behind the decision. I've noticed that most people seem to favor the Python approach after trying it, but that it's rarely used in new languages.