Yeah, this is probably better for a game like Go where evaluating raw point differentials is a common thing that professionals (and even strong amateurs) do.
As a Go player myself, I agree that Go-wise this result isn't impressive. But isn't the point that it's an "attack" on the AI? As with password leaks, broken hash functions, etc., we don't care whether it's 100% broken, just that there are tiny artifacts that can be exploited.
I strongly agree. In addition, statements like "Before social media advertising would have to hitch a ride with some content produced for something else. Billboards, newspapers, television shows, and magazines all tried to provide a use value to their customers and audiences" are pretty incorrect. As bad as many people agree Facebook is for mental health, it was clearly created to provide value, and it does. The author does a fairly poor job of carrying the argument without form-fitting irrelevant details to their desired narrative.
Wait, I think you’re misunderstanding a bit. The culture of professionalism ALREADY inherently represses non-straight people (and gives straight people the privilege to be who they want). For example: suits for men, dresses for women. There is no culture of professionalism that I know of that extends that privilege to all people. A dress code would be fine if it weren’t obviously biased towards certain cultural standards (in this case western/white and straight).
Edit: to put it another way, professionalism in clothing, for example, would be fairer if there were such a thing as a professional qipao or a professional burka, rather than only a professional western option, aka the suit.
I don’t understand the logic here. Yes, of course a wealthy person will be able to pay for procedures and avoid going broke. Isn’t this universally true? Perhaps you meant the US is poorly designed for poor people, which is not something you can say about every country.
I mean, I have no idea who the expert climate science groups are, but I still trust that they have a better idea of what's going on than I do. In this case, I just happen to know that this is one of the expert AI groups.
This is absolutely not cheating; every hand-designed algorithm can access and compare everything in the array!
The real point of the passage you italicized is that giving the net full access hinders efficiency, so they deliberately restrict it. Almost the opposite of cheating.
It's not a pointer to the array I'm concerned about, but the "neighbour diff vector", or whatever you'd call it, that is provided by the "environment". See A.1.2.
Doing so many comparisons and storing them has a cost. Also, the model can't decide on its own whether it is done, so at each step the "environment" has to iterate over the array to check whether it is sorted. Are they only counting function calls? I guess so. The paper is really hard to follow and the pseudocode syntax is quite maddening.
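To make the point about the hidden check concrete, here's a minimal sketch of how I read their setup (names and structure are mine, not the paper's):

```python
def is_sorted(arr):
    # the O(n) scan the "environment" runs after every agent step
    return all(arr[i] <= arr[i + 1] for i in range(len(arr) - 1))

def swap_first_inversion(arr):
    # toy stand-in for one agent action: fix the first out-of-order pair
    for i in range(len(arr) - 1):
        if arr[i] > arr[i + 1]:
            arr[i], arr[i + 1] = arr[i + 1], arr[i]
            return

def run_episode(arr, agent_step):
    # only the agent's actions get counted; the is_sorted scans are "free"
    actions = 0
    while not is_sorted(arr):  # hidden O(n) check, not billed to the agent
        agent_step(arr)
        actions += 1
    return actions
```

If the reported cost is just `actions`, all those termination scans (and whatever work goes into building the state vector) never show up in the count.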
If I understand the paper correctly, of course; I could be wrong.

If so, the claim that "our approach can learn to outperform custom-written solutions for a variety of problems" is bogus.
>> Are they only counting function calls? I guess so.
Oh, that: yes, it's true. They're listing "average episode lengths" in Tables 1-3, and those are the main support for their efficiency claim. By "episode length" they mean the number of instructions or function calls made by the student agent during an episode, which they compare against the instructions/function calls made by the teacher agent. So, no asymptotic analysis, just a count of the concrete operations performed to solve e.g. a sorting task.
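For what it's worth, that style of metric (counting concrete operations on a given input rather than doing asymptotic analysis) looks roughly like this; my own sketch, not the paper's harness:

```python
def insertion_sort_comparisons(data):
    # hand-written "teacher" baseline; returns (sorted list, comparison count)
    a = list(data)
    cmps = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            cmps += 1  # one concrete comparison
            if a[j - 1] <= a[j]:
                break
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return a, cmps

def timsort_comparisons(data):
    # count comparisons made by Python's built-in sort via a wrapper key
    count = [0]

    class Key:
        def __init__(self, v):
            self.v = v
        def __lt__(self, other):  # CPython's sort only uses <
            count[0] += 1
            return self.v < other.v

    a = [k.v for k in sorted(Key(v) for v in data)]
    return a, count[0]
```

The catch is that such counts only compare two implementations on the inputs you happened to measure; they say nothing about how the cost grows with input size.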