
OP here. Seeing as this question is getting asked a lot, I'll edit something into the post, but I wanted to answer you here as well.

So, there are a few problems with this technique.

1. It ignores the local timing leak

An attacker who can get code running on the server (on a shared host, for example) can carefully monitor CPU usage to see when the process is actually doing work versus when it sleeps. So really, it's not hiding anything.

2. The resolution of the sleep call is far too coarse. We're talking about detecting differences down to 15 nanoseconds. Sleeping for blocks of microseconds or even milliseconds will introduce block-like patterns in the response times that should be pretty easy to pick apart with statistical means.

3. It's basically identical to a random delay. Since it depends on the system clock, and the original request comes in at a random point, it's functionally identical to calling sleep(random(1, 100)). And over time (many requests), that noise will average out.
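To see why the averaging matters, here's a quick simulation (Python for illustration; the 50-unit timing gap, the sample count, and the sleep range are all made up):

```python
import random

def observe(base, n=20000):
    """Simulate n timing observations of an operation that takes `base`
    time units, each padded by the equivalent of sleep(random(1, 100))."""
    return [base + random.uniform(1, 100) for _ in range(n)]

random.seed(1)
slow = observe(550)  # e.g. the comparison matched one more chunk
fast = observe(500)

# The random padding adds ~50.5 units on average to *both* sides, so the
# difference of the means still exposes the underlying 50-unit gap.
gap = sum(slow) / len(slow) - sum(fast) / len(fast)
```

With enough requests, `gap` lands right around the real 50-unit difference: the random delay raised both means equally and hid nothing.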

Now, what if we took a different approach. What if we made the operation fixed-time?

    execstart = utime()
    // whatever code
    // clamp so the whole operation always takes 500 microseconds
    usleep(500 - (utime() - execstart))
That might work (assuming you have a high enough resolution sleep function). Again, it suffers the local attacker problem (which may or may not matter in your case).
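For illustration, here's a runnable version of that clamp (sketched in Python rather than PHP; the 500-microsecond budget and the dummy operation are my assumptions):

```python
import time

CLAMP_SECONDS = 500e-6  # assumed budget: 500 microseconds

def clamped(op):
    """Run op(), then sleep away whatever is left of the budget so the
    total wall time is (roughly) constant."""
    start = time.perf_counter()
    result = op()
    remaining = CLAMP_SECONDS - (time.perf_counter() - start)
    if remaining > 0:
        time.sleep(remaining)  # real sleep resolution may be much coarser
    return result

t0 = time.perf_counter()
result = clamped(lambda: sum(range(100)))
elapsed = time.perf_counter() - t0
```

Note that if the operation overruns the budget, `remaining` goes negative and nothing is hidden, which is exactly the guesswork problem described below.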

However, there are two reasons I wouldn't recommend it: It requires guesswork and idle CPU.

You would either need to actively guess every single operation (and remember to clamp it) or clamp the overall application.

If you do it for every operation, that sleep time can become expensive (if you have a lot of them).

If you do it at the application level and you sleep too little, an attacker can use other expensive controls (like larger inputs that introduce memory-allocation latency) to push the runtime past the sleep clamp, letting them attack the vulnerability anyway. If you sleep too much, the attacker can leverage it to DoS your site (since even a sleeping process is non-trivially expensive).

There are two valid ways of protection IMHO:

1. Make sensitive operations actually constant time.

2. Implement strong IP-based protections to prevent the large number of requests that would be needed to collect enough data to analyze noisy environments. (I need to add this to the post now that I write it).

Honestly, you should be doing #2 anyway. But since I also believe in defense-in-depth, I'd do #1 as well.
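As a sketch of what #2 could look like (Python; the window size, per-IP budget, and function names are mine, not from the post), a fixed-window counter per IP is enough to cap the request volume a timing attack needs:

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 60   # assumed window
MAX_REQUESTS = 100    # assumed per-IP budget

_counters = defaultdict(lambda: [0.0, 0])  # ip -> [window_start, count]

def allow(ip, now=None):
    """Return True if this IP is still under its per-window budget."""
    now = time.time() if now is None else now
    window_start, count = _counters[ip]
    if now - window_start >= WINDOW_SECONDS:
        _counters[ip] = [now, 1]  # start a fresh window
        return True
    if count >= MAX_REQUESTS:
        return False
    _counters[ip][1] = count + 1
    return True

# The 101st request inside one window gets refused.
results = [allow("10.0.0.1", now=1000.0) for _ in range(MAX_REQUESTS + 1)]
```

A real deployment would need shared state across workers and eviction of stale entries, but the principle is just: starve the attacker of samples.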



Thank you for answering. It's true that if someone has access to the server you might be in trouble, but if they have access to a CPU monitor, they might also have access to RAM and could just get the data from there.

For the precision of sleep, http://php.net/manual/en/function.time-nanosleep.php might be more appropriate

Also, you would only need to clamp a few very important functions, so a DoS attack against them isn't that likely (and a constant-time function would take the same time anyway).


Well, every user has access to the CPU, since even an unprivileged user can see the current mode of every core (idle, wait, or running).

Accessing RAM requires system level access (privileged users, super user really) or running as the same user as the other process.

So unless the server is horribly misconfigured, or you exploit another vulnerability, reading from RAM isn't as likely as monitoring the CPU.


Most operating systems will not idle on a sleep() call, as far as I remember. Since the server is executing multiple applications, it is very likely that the processor will be handed to another running application. The only way to really know would be to track the state of the specific process PHP is using for the request, which seems infeasible in a production environment (unless you have admin, of course).


Well, it won't idle if there is another process ready to execute (load is greater than 1). If there is no process wanting to execute, it will idle.

Again, I'm not saying this is practical. I'm saying it might be possible (even if improbable).

And don't get me wrong, I'm not saying "OMG YOU ARE BAD IF YOU DON'T PROTECT THIS RIGHT". I'm more leaning on the side of "if there's a chance, I assume someone could possibly figure out a way".


I'm not an expert on timing attacks, but without clamping it seems quite tricky to guarantee that sensitive operations actually take constant time. There can be numerous subtle ways that timing information leaks while the code appears to be constant time. And programmers who touch sensitive code can easily forget the requirement for constant-time behaviour. Yes, having constant time operation without clamping is the best solution but it seems too easy to accidentally slip from this ideal.

I'm leaning toward the approach of having a simple clamping library at the application level that (a) throws an exception if the sensitive code takes longer than the 'clamp time'; and (b) has some simple heuristic to determine the clamp time, such as "double the maximum execution time recorded during the first 20 runs". It might have a drawback if the CPU is not idle, but the benefit is that it is dead simple to implement. (Assuming the platform supports nanosecond wait times)
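For what it's worth, that library might look something like this (Python sketch; the "double the max of the first 20 runs" heuristic is taken straight from the comment above, everything else is assumed):

```python
import time

class ClampExceeded(Exception):
    """Raised when the sensitive code overruns the clamp time."""

class Clamp:
    CALIBRATION_RUNS = 20

    def __init__(self):
        self.samples = []
        self.clamp_time = None  # set once calibration finishes

    def run(self, op):
        start = time.perf_counter()
        result = op()
        elapsed = time.perf_counter() - start

        if self.clamp_time is None:
            # Heuristic from the comment: double the max of the
            # first 20 recorded execution times.
            self.samples.append(elapsed)
            if len(self.samples) >= self.CALIBRATION_RUNS:
                self.clamp_time = 2 * max(self.samples)
            return result

        remaining = self.clamp_time - elapsed
        if remaining < 0:
            raise ClampExceeded(
                f"took {elapsed:.6f}s > clamp {self.clamp_time:.6f}s")
        time.sleep(remaining)  # pad out to the clamp time
        return result

clamp = Clamp()
for _ in range(20):
    clamp.run(lambda: sum(range(1000)))
```

One design caveat: during the calibration runs the timing is unprotected, and a noisy calibration inflates the clamp (and thus the sleep cost) permanently.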


The far better approach is to just make the operations not depend on the secret.

You only really need to worry about timing attacks for values that the attacker doesn't know, and you don't want them to know.

So it's only things like encryption keys, passwords, session identifiers, reset tokens, etc that you need to worry about.

> And programmers who touch sensitive code can easily forget the requirement for constant-time behaviour.

And that's why I support the discussion we were having on PHP's internals list where we talked about making functions which are commonly used with secrets timing safe by default. As long as there isn't a non-trivial performance penalty to it at least.
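One portable way to get a timing-safe comparison without language support is the "double HMAC" trick: compare MACs of the two values under a random key, so even an early-exit == only leaks bytes the attacker can't predict. A Python sketch (names mine):

```python
import hmac
import os

# A fresh random key per process; the attacker never learns it (assumes
# comparisons only need to be consistent within this process).
_CMP_KEY = os.urandom(32)

def timing_safer_equals(known: bytes, supplied: bytes) -> bool:
    """Compare HMACs of the two values with a plain ==. Even if == exits
    early, the timing only reveals bytes of a MAC under a secret random
    key, which the attacker cannot predict or choose."""
    a = hmac.new(_CMP_KEY, known, "sha256").digest()
    b = hmac.new(_CMP_KEY, supplied, "sha256").digest()
    return a == b
```

It also hides the length of the secret, since both sides are compared as fixed-size digests.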

As far as worrying about it, I'd rather people understand SQLi and XSS better. They are both FAR bigger surface areas than a timing attack ever will be. And likely going to be the bigger threat to 99.99% of applications.


How can you actually make sensitive operations take constant time? This sounds impossibly hard. For example, your operating system could be context switching thousands of times per second. Your password comparison function could cause a page fault because the trailing end of the password spans onto another page of virtual memory. These are all factors that would throw any calculation for constant time out of the window.


> How can you actually make sensitive operations take constant time? This sounds impossibly hard. For example, your operating system could be context switching thousands of times per second.

Sorry, it appears that I didn't actually define constant time anywhere. What I really mean is that:

    Runtime does not depend in any way on the *value* of secret data.
So while actual runtime may vary, it's not varying because of the value of something we want to protect.

So it's not about keeping "absolute" time constant, but only the impact of the secret on runtime.
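Under that definition, an equality check can be made "constant time" by making the work independent of where the inputs differ. A minimal Python sketch:

```python
def ct_equal(a: bytes, b: bytes) -> bool:
    """Runtime depends on the (usually public) lengths, but not on where,
    or whether, the bytes differ: every byte is always examined and the
    result is accumulated with OR instead of an early return."""
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```

Scheduler noise and page faults still make the absolute time jitter, but none of that jitter is a function of the secret's value, which is the property that matters here.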



