Hacker News
It's All About Time – Remote timing attacks in PHP (ircmaxell.com)
83 points by ingve on Nov 28, 2014 | hide | past | favorite | 51 comments


I think the title on HN is a bit misleading. This issue applies to ALL languages and PHP has specific functions (since version 5.6) to avoid timing issues for you: hash_equals and password_verify. PHP is specifically trying to address poor authentication mechanisms by adding the (ingenious) password module to PHP's core.
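For illustration, a minimal sketch of how those two functions are used (`checkToken` is a hypothetical wrapper name, and the values are made up):

```php
<?php
// Hypothetical token check: compare a user-supplied value against the
// stored secret without short-circuiting on the first differing byte.
function checkToken($stored, $supplied) {
    // hash_equals() (PHP >= 5.6) runs in time dependent only on the
    // length of the known string, not on where the strings differ.
    return hash_equals($stored, $supplied);
}

// password_hash()/password_verify() handle salting, hashing, and
// constant-time comparison internally.
$hash = password_hash('correct horse', PASSWORD_DEFAULT);
var_dump(password_verify('correct horse', $hash)); // bool(true)
var_dump(password_verify('wrong guess', $hash));   // bool(false)
```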


I don't really find it misleading. It's about all sorts of remote timing attacks (including cache-timing attacks), not just passwords.

I'm the author of the pull request that prompted the discussion, though, so I'm probably biased. :)


It's misleading because it implies this is a problem unique to PHP. That is factually incorrect, and hence misleads readers into assuming this article is about PHP's poor security against timing attacks.

It'd simply be nice if the scope of the discussion was about how all languages have more or less the same flaws. Anyone writing code with security in mind should understand how to avoid those flaws.

There are many ways to alleviate the problem specified in the pull request that may be better. PECL is one route that was mentioned on internals. Another is to just cache the hex value in memory and avoid the bin2hex conversion outright (with randomization of your cold-cache times).

Edit: As an aside, I also want to stress that the bar for attacking the bin2hex vulnerability is so high that in most cases the attacker would require complete, private, 1-hop, uninterrupted access to a single target machine. Usually this implies the attack is not remote, but I do understand the mental exercise.


The second you say "install this PECL extension" people hop on the fast train to nopesville. That is the problem I ran into getting people to adopt libsodium :\


I honestly don't see why. It's easy enough to install pecl extensions.


It's an extra dependency. Often, people don't even have php5-dev installed, so it's more than one step to get it done. (Even that first additional step is more work than people are willing to put forth!)

Especially since all this does is patch the behavior of one function. :|

And if you have PECL, you might as well go all the way and install libsodium.


The audience is PHP users. It's only misleading to outsiders.


The title of the article is not the same as HN's title (which is a HN guideline). I was commenting on the editorializing on HN.


How viable is this in the real world? I could imagine it working on a lone, quiet server at 4am with no other people on it. The average server with 100s of accounts, 1000s of users all doing various stuff, that's going to introduce enough random delay to throw off your analysis. Maybe you could try to calculate averages but that's going to increase the number of requests. Would a good hosting company block an attack like this at the hardware level? I'd guess yes, but I don't know.


> How viable is this in the real world?

Not really viable against a native code memcmp. Potentially viable against a comparison implemented in PHP or Java.

Here's a video looking at actually implementing such attacks over LAN and internet: http://rdist.root.org/2010/11/09/blackhat-2010-video-on-remo...

> Would a good hosting company block an attack like this at the hardware level?

No, there's no generic way to defeat this. The only real limit is how granularly your attacker can measure. If you're on shared hosting, you're probably the most vulnerable you can get to this (but also the most screwed already, so you probably don't care).


Shared hosting might actually add more noise, since all the traffic to the other sites on the server will affect timings too.


I think the reality is that this style of attack is difficult enough that it would be something of a last resort, and probably only tried on high value targets. It'd take a long time (you'd probably need 100s of millions of requests) and could be easily noticed.

That said, while it's not instant game over, you don't really want this vulnerability if you can avoid it. Especially in things like authentication libraries.


Pretty viable when you do statistical analysis across a large number of requests:

http://matasano.com/research/TimeTrial.pdf

https://github.com/dmayer/time_trial


Interesting article, I never took the time to really think about this.

For optimal security, OP mentions that we should fix the underlying code, but I kind of disagree. Understanding all the underlying code is a lot of work and, except for extremely secure applications, is not really needed. Also, hindering performance for security is not always the way to go (he mentions returning only at the end).

If you have ultra-secure tasks to do, you might want to measure the function's execution time and sleep() to pad it. Example sketch (the 100-microsecond boundary is illustrative):

    function login() {
        $execstart = microtime(true);
        // whatever code
        $execduration = microtime(true) - $execstart;
        // pad total runtime up to the next 100-microsecond boundary
        usleep(100 - (($execduration * 1000000) % 100));
    }

You'd have to carefully choose your padding so the execution time is (normally) not revealed in the code.


OP here. Seeing as this question is getting asked a lot, I'll edit something into the post, but I wanted to answer you here as well.

So, there are a few problems with this technique.

1. It ignores the local timing leak

An attacker who can get code running on the server (shared hosts, for example) can carefully monitor the CPU usage to see when the process is actually doing work vs. when it sleeps. So really, it's not hiding anything.

2. The resolution of the sleep call is WAY too coarse. We're talking about detecting differences down to 15 nanoseconds. Sleeping in blocks of microseconds or even milliseconds will be far too coarse-grained, and will introduce block-like patterns in the requests that should be pretty easy to detect with statistical means.

3. It's basically identical to a random delay. Considering it depends on the system clock, and the original request comes in at a random point, it's functionally identical to calling sleep(random(1, 100)). And over time (many requests), that will average out.

Now, what if we took a different approach. What if we made the operation fixed-time?

    execstart = utime()
    // whatever code
    // clamp to always take 500 microseconds
    sleep( 500 - (utime() - execstart) )
That might work (assuming you have a high enough resolution sleep function). Again, it suffers the local attacker problem (which may or may not matter in your case).

However, there are two reasons I wouldn't recommend it: It requires guesswork and idle CPU.

You would either need to actively guess every single operation (and remember to clamp it) or clamp the overall application.

If you do it for every operation, that sleep time can become expensive (if you have a lot of them).

If you do it at the application level and you do too little, an attacker can use other expensive operations (like larger input introducing memory allocation latency) to push the runtime past the sleep clamp (hence allowing them to attack the vulnerability anyway). If you do too much, the attacker can leverage it to DoS your site (since even a sleeping process is non-trivially expensive).

There are two valid ways of protection IMHO:

1. Make sensitive operations actually constant time.

2. Implement strong IP based protections to prevent the large amount of requests that would be needed to collect enough data to analyze noisy environments. (I need to add this to the post now that I write it).

Personally, you should be doing #2 anyway. But since I also believe in defense-in-depth, I'd do #1 as well.
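Point 2 could be sketched roughly like this; `RateLimiter` is a hypothetical in-memory helper, and a real deployment would back it with a shared store (APCu, Redis, a database) rather than a per-process array:

```php
<?php
// Sliding-window request limiter keyed by client IP.
class RateLimiter {
    private $hits = array();
    private $limit;
    private $window;

    public function __construct($limit, $windowSeconds) {
        $this->limit = $limit;
        $this->window = $windowSeconds;
    }

    // Record a hit at time $now and report whether it's within the limit.
    public function allow($ip, $now) {
        $cutoff = $now - $this->window;
        $prev = isset($this->hits[$ip]) ? $this->hits[$ip] : array();
        // Keep only the hits still inside the window, then count this one.
        $this->hits[$ip] = array_values(array_filter($prev,
            function ($t) use ($cutoff) { return $t > $cutoff; }));
        $this->hits[$ip][] = $now;
        return count($this->hits[$ip]) <= $this->limit;
    }
}

$rl = new RateLimiter(100, 60); // at most 100 guesses per IP per minute
var_dump($rl->allow('203.0.113.7', time())); // bool(true)
```

A limit this tight makes the hundreds of millions of samples a statistical timing attack needs impractical to collect.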


Thank you for answering. It's true that if someone has access to the server you might be in trouble, but if they have access to a CPU monitor, they might also have access to RAM and could just get the data from there.

For the precision of sleep, http://php.net/manual/en/function.time-nanosleep.php might be more appropriate

Also, you would only need to slightly clamp very important functions, so DoS attacks aren't that likely on it (and a constant timed function would also take the same time).


Well, access to the CPU happens with every user (since you can see the current mode of every core as an unprivileged user - idle, wait or running).

Accessing RAM requires system level access (privileged users, super user really) or running as the same user as the other process.

So unless the server is horribly misconfigured, or you exploit another vulnerability, reading from RAM isn't as likely as monitoring the CPU.


Most operating systems will not idle on a sleep() call, as far as I remember. Since the server is executing multiple applications, it is very likely that the processor will be assigned to another running application. The only way to really know this would be to know the state of the specific process php is using for the request, which seems unfeasible in a production environment (except if you have admin of course).


Well, it won't idle if there is another process ready to execute (load is greater than 1). If there is no process wanting to execute, it will idle.

Again, I'm not saying this is practical. I'm saying it might be possible (even if improbable).

And don't get me wrong, I'm not saying "OMG YOU ARE BAD IF YOU DON'T PROTECT THIS RIGHT". I'm leaning more toward "if there's a chance, I assume someone could possibly figure out a way".


I'm not an expert on timing attacks, but without clamping it seems quite tricky to guarantee that sensitive operations actually take constant time. There can be numerous subtle ways that timing information leaks while the code appears to be constant time. And programmers who touch sensitive code can easily forget the requirement for constant-time behaviour. Yes, having constant time operation without clamping is the best solution but it seems too easy to accidentally slip from this ideal.

I'm leaning toward the approach of having a simple clamping library at the application level that (a) throws an exception if the sensitive code takes longer than the 'clamp time'; and (b) has some simple heuristic to determine the clamp time, such as "double the maximum execution time recorded during the first 20 runs". It might have a drawback if the CPU is not idle, but the benefit is that it is dead simple to implement. (Assuming the platform supports nanosecond wait times)


The far better approach is to just make the operations not depend on the secret.

You only really need to worry about timing attacks for values that the attacker doesn't know, and you don't want them to know.

So it's only things like encryption keys, passwords, session identifiers, reset tokens, etc that you need to worry about.

> And programmers who touch sensitive code can easily forget the requirement for constant-time behaviour.

And that's why I support the discussion we were having on PHP's internals list where we talked about making functions which are commonly used with secrets timing safe by default. As long as there isn't a non-trivial performance penalty to it at least.

As far as worrying about it, I'd rather people understand SQLi and XSS better. They are both FAR bigger surface areas than a timing attack ever will be. And likely going to be the bigger threat to 99.99% of applications.


How can you actually make sensitive operations take constant time? This sounds impossibly hard. For example, your operating system could be context switching thousands of times per second. Your password comparison function could cause a page fault because the trailing end of the password spans onto another page of virtual memory. These are all factors that would throw any calculation for constant time out of the window.


> How can you actually make sensitive operations take constant time? This sounds impossibly hard. For example, your operating system could be context switching thousands of times per second.

Sorry, it appears that I didn't actually define constant time anywhere. What I really mean is that:

    Runtime does not depend in any way on the *value* of secret data.
So while actual runtime may vary, it's not varying because of the value of something we want to protect.

So it's not about keeping "absolute" time constant, but only the impact of the secret on runtime.


I'm not quite sure I understand what you are saying here. I don't think anyone is saying that you need to understand all the underlying code, but this is just something you should be aware of as you are building network-connected applications. Just like a web developer needs to be aware of CSRF, XSS, etc. Every language that I know of short-circuits string comparisons like this. You should know that you need to seek out a constant-time compare though when comparing hashes or other secrets.

Your sleep hack is probably less secure. I remember reading somewhere that doing random sleep() calls is flawed and doesn't completely mitigate the attack, but does increase the number of requests needed to get a good statistical analysis.


Here's part of the response I wrote last time this came up (https://news.ycombinator.com/item?id=8602494):

An approach where you watch the clock will be inherently less portable and actually much harder. Not only will the timing calls be hardware or OS specific, but so will the worst-case time. Imagine having to deal with a chip going into low power mode during your computation. Also you probably don't want to count time that your thread wasn't scheduled to run, so now you're talking about integrating with the scheduler.


I don't know how to do [code] tags


At this time, these few options are available:

https://news.ycombinator.com/formatdoc

>Text after a blank line that is indented by two or more spaces is reproduced verbatim. (This is intended for code.)


I've been on this site for something like five years, but...

  I never knew that this
  functionality existed


Thank you!


I've looked into timing attacks in the past but failed to exploit one. Even running php locally from the command line, eliminating any possibility of network randomness, I couldn't get even a few bytes with any amount of certainty.

Not saying it's not an issue, but I'm not so sure it's a big deal either.


I wrote a tool a while ago for testing network-based timing attacks. Getting the measurement right is really hard: just taking the average doesn't generally work, while the 10th percentile measurement is much better. I did find PHP fairly hard to exploit, though, but I was trying over a network, which is much more difficult than locally.

The code for the tool (and a presentation pdf) is here if you're interested: https://github.com/aj-code/TimingIntrusionTool5000


> just taking the average doesn't generally work

Oh, well yeah that's what I did. Guess I'll have to look further.

> the 10th percentile measurement is much better

That sounds like something to try, thanks!


Even taking the straight minimum may be better.


You'd think so! But that's not what I found: the minimum, max, and average were all crap. Even the median wasn't very good compared to the 10th percentile (an idea I got from http://www.cs.rice.edu/~dwallach/pub/crosby-timing2009.pdf)
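The low-percentile idea can be sketched in a few lines; `percentile` here is an illustrative helper, not part of the tool:

```php
<?php
// Summarize repeated timing samples (in seconds) by a low percentile
// rather than the mean, which noise spikes skew badly.
function percentile(array $samples, $p) {
    sort($samples);
    // Nearest-rank-style index into the sorted samples.
    $idx = (int) floor(($p / 100) * (count($samples) - 1));
    return $samples[$idx];
}

// Mostly ~10ms responses with a couple of noisy outliers:
$samples = [0.010, 0.011, 0.010, 0.050, 0.012, 0.010, 0.090, 0.011, 0.010, 0.013];
echo percentile($samples, 10), "\n"; // 0.01 -- the outliers don't move it
```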


This is also what I think. Start adding in database latency and other functions that execute alongside your string comparison, and this seems incredibly unreliable. It's not like most calls to a page only execute a single string comparison. Good to know, but seems low on the attack vector priority list.


> NOTE In general, it's not possible to prevent length leaks. So it's OK to leak the length.

I would imagine that you can prevent length leaks by looping through the characters of the known value and then returning that comparison with an additional check of length.

    function timingSafeEquals($safe, $user) {
        $safeLen = strlen($safe);
        $userLen = strlen($user);

        $result = 0;
        for ($i = 0; $i < $userLen; $i++) {
            $result |= (ord($safe[$i]) ^ ord($user[$i]));
        }

        return $result === 0 && $userLen === $safeLen;
    }


If you follow the link in the post you'll find a pretty convincing explanation why it's not possible to prevent leaking the length in the general case. I'm not certain that this applies to PHP code as well, since the problem is pretty low level.

However, leaking the length is much less of a problem than allowing the attacker to guess characters one by one.


The for loop duration here will vary depending on the length of the string


That was a quick copy/paste.

But imagine the loop would be the string we know (i.e. the password). Looping through the 10-character pass should be identical every time, regardless of what the user entered.


I imagine the strlen() function is not constant-time, so I'd remove it, but it might work, yes (though you'd have to check PHP's source code and have a deep understanding of it to be really sure).


libc `strlen()` is O(n), but PHP strings are binary safe (i.e. http://3v4l.org/47iaP ) so it must be storing a length as part of the string zval. So PHP strlen should be constant time.

+1 for recommending checking the PHP source code to be sure, though


strlen() in PHP is constant-time O(1), we don't use C strings. (Well, we do, but we store length information and reference count them)

See: http://lxr.php.net/xref/PHP_TRUNK/Zend/zend_builtin_function...


So, I tried this in the past.

The problem you'll run into with PHP specifically is that reading an undefined string offset (past the end) will result in a notice: http://3v4l.org/nIkf5

Which means that errors are triggered. So you can increase the length of the user string and note a linear increase in runtime until you increase it past the length of the string, at which point it becomes MUCH slower on a per-character basis (even if you don't do anything with the notice, the error mechanism is still triggered internally, which isn't cheap).

Actually, my original code was more robust as it never read past the end of the string, preventing the notice:

    /**
     * A timing safe equals comparison
     *
     * To prevent leaking length information, it is important
     * that user input is always used as the second parameter.
     *
     * @param string $safe The internal (safe) value to be checked
     * @param string $user The user submitted (unsafe) value
     *
     * @return boolean True if the two strings are identical.
     */
    function timingSafeEquals($safe, $user) {
        // Prevent issues if string length is 0
        $safe .= chr(0);
        $user .= chr(0);

        $safeLen = strlen($safe);
        $userLen = strlen($user);

        // Set the result to the difference between the lengths
        $result = $safeLen - $userLen;

        // Note that we ALWAYS iterate over the user-supplied length
        // This is to prevent leaking length information
        for ($i = 0; $i < $userLen; $i++) {
            // Using % here is a trick to prevent notices
            // It's safe, since if the lengths are different
            // $result is already non-0
            $result |= (ord($safe[$i % $safeLen]) ^ ord($user[$i]));
        }

        // They are only identical strings if $result is exactly 0...
        return $result === 0;
    }
There are a few non-trivial problems here, though, as explained in this post: http://security.stackexchange.com/questions/49849/timing-saf...

Basically, while it may keep the length %64 safe (since cache lines are 64 bytes wide), it doesn't keep the length safe in general. Some length information will be leaked on larger strings. And considering it's impossible to protect the length in the general case, a function that claims to protect the length is lying. Therefore I don't even try, and hence save the complexity.

But let me ask this: in what cases are you trying to protect the length? Anything with variable-length input (like a password) should likely be one-way hashed anyway. So you'd be comparing fixed-length hashes. So where's the possible leak?
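The hash-first approach can be sketched like this; `fixedLengthEquals` is an illustrative name, not an existing function:

```php
<?php
// Hash both sides before comparing: sha256 output is always 32 raw
// bytes, so the values actually compared are fixed-length regardless
// of how long the inputs were.
function fixedLengthEquals($known, $user) {
    return hash_equals(hash('sha256', $known, true),
                       hash('sha256', $user, true));
}

var_dump(fixedLengthEquals('secret-token', 'secret-token')); // bool(true)
var_dump(fixedLengthEquals('secret-token', 'guess'));        // bool(false)
```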


I suppose preg_match(), previously popularly opposed on performance grounds, is now arguably better than a straight string comparison in security terms due to probable timing-attack resistance. :)


Regular expressions will exhibit the same behavior. They will also short-circuit if no match is possible. In essence, the behavior that causes the problem is a desirable performance optimization for the general case that you just need to prevent here.


I seriously doubt regex with grouping, greediness and repetition exhibit the temporal properties of purely linear character-by-character matches...


It doesn't have to exhibit "the temporal properties of purely linear character-by-character matches", and no one claimed that strawman.

It just has to exhibit the "correct matching strings return faster" property, whatever more complicated search it does.


Doesn't it have to exhibit "correctly matching strings return faster ... on a per-character or other tiny chunk basis" in order to work the match towards correctness?


Yes, and it will indeed try by all means to exhibit that behavior -- unless it's some contrived regex designed to counter this.

The key phrase here is "short-circuit" the match, and any regex engine worth its salt will try to do that as much as possible, transforming the regex into the fastest FSM it can.


Timing attacks don't really work in practice, for many reasons. You know how many times hackers have actually used one? I bet zero. It's only a little bit faster than brute force.

But that's how you can make them faster http://homakov.blogspot.com/2014/07/timing-attack-666-faster...


And if you have a PHP app, a timing attack is the very last thing to worry about... go prevent XSS first :) Re http://klikki.fi/adv/wordpress.html#details



