Are passwords stored in memory safe? (security.stackexchange.com)
164 points by lucb1e on March 18, 2013 | 83 comments


If you can't trust the OS you're screwed regardless. If you are concerned about physical access based attacks (like cold boot) then there are alternatives.

Here's some interesting reading: http://en.wikipedia.org/wiki/TRESOR


If the OS is actively plotting against you, it's a losing game, but if it's just security-oblivious then there may be some things you can do to mitigate. Consider, for example, an OS with a habit of writing out random parts of your process memory onto disk.


Here's a question I've been pondering: suppose you are a program delivered from an origin (trusted) to a client machine (untrusted.) You start without any credentials, but you have the option of dialing out and asking the origin for a shared secret (e.g. a private key.) Is there any useful way for the origin to require that you, the delivered program, prove that the client machine you're running on can be trusted with the shared secret? If it's possible at all, I'm guessing it involves a "secure boot" with a TPM chip.


Why would I want a machine which I own to trust someone other than the owner?

And no, I do not trust the TPM in its current iteration. We mere owners are prevented from knowing its private key. Nor can we generate and store away a private key ourselves (or buy a chip with a known private key).


So you can, for example, participate in a distributed computing project where the results sent in by your machine can be trusted.

(An online game that calculates physics client-side is a special case of a distributed computing project ;)

It doesn't require you to give up ownership over your entire computer, mind you. If your own OS ran in a hypervisor that was in one TPM "domain" (you have the key to this domain), but then applications could request to be run directly on the hypervisor with a separate TPM domain (and thus keep keys your own OS wouldn't ever be able to touch), that'd be good enough to allow for any secure distributed computation you might want to do. At any time, you'd still be able to wipe out those domains (and thus kill the apps running in them)--but you wouldn't be able to otherwise introspect them.

Basically, it's like the duality of "OS firmware" and "baseband firmware" on phones--except it would all be handled on the same physical CPU.


"you wouldn't be able to otherwise introspect them"

Can't implement it until you define its behavior. If you define its behavior you can emulate it (which, outside this discussion, is really useful). If you can emulate it, you can single step it, breakpoint it, dump any state of the system including memory, reboot it into "alternative" firmware...

Your only hope is playing games with timing. So here's a key, and it's only valid for one TCP RTT. Well, if they want to operate over satellite they must allow nearly a second, so move your cracking machine next door, emulate a long TCP path, and you've got nearly a second. On the other hand, if instead of running over the internet you merely wanted to prove bluetooth distances or Google Wallet NFC distances, suddenly you've gone from something I can literally do at home easily to a major lab project.

Another thing that works is "prove you're the fastest supercomputer in the world by solving this CFD simulation in less than X seconds". Emulating that would take a computer much faster than the supposedly fastest computer. So this is pretty useful for authenticating the TOP500 supercomputer list, but worthless for consumer goods.


> Your only hope is playing games with timing.

This is inane. My question was about mathematically provable secure computation, not kludges that any old advanced alien civilization could bypass by sticking your agent-computer in a universe simulator. :)

Let's ignore the computers. You are a spy dispatched from Goodlandia to Evildonia. You want to meet with your contact and exchange signing keys. You can send a signal at any time to Goodlandia that will tell them to cut off all contact with you, because you believe you have been compromised. (A certificate revocation, basically.)

Your contact, thus, expects one of three types of messages from you:

1. a request for a signing key with an attached authentication proof;

2. a message, signed with a key, stating you have been compromised and to ignore all further messages sent using that key;

3. or a message, signed with a non-revoked key, containing useful communication.

Now, is there any possible kind of "authentication proof" that you could design, such that, from the proof, it can be derived that:

1. you have not yet been compromised;

2. you will know when you have been compromised;

3. and that, in the case of compromise, you will be allowed to send a revocation message before any non-trusted messages are sent?

You can assume anything you like about the laws of Evildonia to facilitate this--like that it is, say, single-threaded and cooperatively multitasking--but only if those restrictions can also carry over to the land of Neoevildonia, a version of Evildonia running inside an emulator. :)


It might be possible to exclude enough realistic current day threats to eventually end up with something that "works" but I don't think that's useful in any way.

Nonetheless, if you want to exclude computers, the human equivalent of "stick it in an emulator" is the old philosopher's "brain in a vat" problem. That's well-traveled ground: there is no proof you're not in a vat.

There is no way to prove you have not been compromised because there is no way to prove no theoretical advancement will ever occur in the field in the future. (or not just advancement but NSA declassification, etc) So you're limited to one snapshot in time, at the very least.

You're asking for something that's been trivially broken innumerable times outside the math layer.

It's hard to say whether you're asking for steganography (which isn't really "math"), an actual mathematical proof, or just a Wikipedia pointer to the Kerberos protocol, which is easily breakable but might eventually fit your requirements if you add enough constraints.


> It's hard to say whether you're asking for steganography (which isn't really "math"), an actual mathematical proof, or just a Wikipedia pointer to the Kerberos protocol, which is easily breakable but might eventually fit your requirements if you add enough constraints.

None of those; I know the current state of the art in cryptography/authentication, and that it doesn't quite cover what I'm asking for. I'm basically just waiting for you to say that the specific kind of designed proof I asked for is impossible even in theory, so I can go and be sad that my vision for a distributed equivalent to SecondLife[1] will never happen.

My own notion would be that the Goodlandian agent would simply request that his contact come and look at the machine itself, outward-in, and verify to him that he's running on a real, trusted piece of hardware with no layers of emulation, at which point the contact gives him an initial seed for a private key he will use to communicate with from then on. The agent stores that verification on his TPM as a shifting nonce (think garage-door openers), so that whenever the TPM is shut down, it immediately becomes invalid as far as the contact is concerned--and must be revalidated by the contact again coming and looking at the physical machine. All we have to guarantee after that is that any method of introspecting the TPM on a piece of currently-trusted-hardware fries the keys. Which is, I think, a property TPMs already generally have?

Besides being plain-ol' impractical [though not wholly so; it'd be fine for, say, inspecting and then safe-booting military hardware before each field-deployment], I'm sure there's also some theoretical problem even here that renders it all moot. I'm not a security expert. :)

---

[1] More details on that: picture a virtual world (technically, a MOO) to which any untrusted party can write and hot-deploy code to run inside an "AI agent"--a self-contained virtual-world object that gets a budget of CPU cycles to do whatever it likes, but runs within its own security sandbox. Also picture that people who are in the same "room" as each AI agent are running an instance of that agent on their own computers, and their combined simulation of the agent is the only "life" the agent gets; there is no "server-side canonical version" of the agent's calculations, because there are no servers (think Etherpad-style collaboration, or Bitcoin-style block-chain consensus.)

Now, problematically, AI agents could sometimes be things like API clients for out-of-virtual-world banks. How should they go about repudiating their own requests?


"Bitcoin-style block-chain consensus"

Majority rule not consensus. Given a mere majority rule protocol, I think your virtual world idea could work.


Eh, either way, it's the same problem. Imagine you're an agent for BigBank, thinking you're running on Alice's computer. If you authenticate yourself to BigBank, BigBank gives you a session key you can use to communicate securely with them--and then you will take messages from Alice and pass them on to BigBank.

But you could also be running, instead, on an emulator on Harry's computer--and Harry wants Alice's credit card info. So now Harry reaches in and steals the key BigBank gave you, then deploys a copy of you back into the mesh, hardcoded to use that session key. Alice then unwittingly uses Harry's version of you--and Harry MITMs her exchange.

In ordinary Internet transactions, this is avoided because Alice just keeps an encryption key (a pinned cert) for BigBank, and speaks to them directly. If you, as an agent, are passed a request for BigBank, it's one that's already been asymmetrically encrypted for BigBank's eyes only. And that works... if the bank is running outside of the mesh.
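
For illustration, a minimal sketch of that pattern in Java, assuming the bank's public key has already been pinned out-of-band; the class and helper names are made up, and a real system would use hybrid encryption rather than raw RSA:

  import java.security.KeyFactory;
  import java.security.PublicKey;
  import java.security.spec.X509EncodedKeySpec;
  import javax.crypto.Cipher;

  // Sketch: encrypt a request so only the holder of the pinned key's private half
  // (the bank) can read it; a relaying agent only ever sees ciphertext.
  class PinnedBankChannel {
      private final PublicKey bankKey;  // pinned out-of-band, e.g. shipped with the client

      PinnedBankChannel(byte[] pinnedDerEncodedKey) throws Exception {
          bankKey = KeyFactory.getInstance("RSA")
                  .generatePublic(new X509EncodedKeySpec(pinnedDerEncodedKey));
      }

      byte[] sealRequest(byte[] plaintextRequest) throws Exception {
          // A real system would use hybrid encryption (RSA for a session key,
          // AES for the body); raw RSA-OAEP keeps the sketch short.
          Cipher rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
          rsa.init(Cipher.ENCRYPT_MODE, bankKey);
          return rsa.doFinal(plaintextRequest);
      }
  }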

But if the bank is itself a distributed service provided by the mesh? Not so much. (I'm not sure how much of a limitation that is in practice, though, other than "sadly, we cannot run the entire internet inside the mesh.")


There is no way to prove you have not been compromised, as it's possible to be compromised without knowing it. E.g. listening to the EM leakage as information is sent from one chip to another.


Without regressing into Plato's-cave and brain-in-a-jar emulation-type philosophical discussions...

The standard way of handling distributed clients is with trust calculations. A simple trust calculation is to duplicate a packet of work multiple times and/or process questionable ones yourself. If the results match, you can bump up the trust score for that client.
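
A minimal sketch of that redundancy check (the class and method names are invented purely for illustration):

  import java.util.HashMap;
  import java.util.Map;
  import java.util.Objects;

  // Sketch: send the same work unit to a client and to a reference (another
  // client, or yourself); agreement bumps the client's trust score, while a
  // mismatch is penalized much harder.
  class TrustTracker {
      private final Map<String, Double> trust = new HashMap<>();

      void recordDuplicateCheck(String clientId, Object clientResult, Object referenceResult) {
          double score = trust.getOrDefault(clientId, 0.5);
          score = Objects.equals(clientResult, referenceResult)
                  ? Math.min(1.0, score + 0.05)
                  : Math.max(0.0, score - 0.25);
          trust.put(clientId, score);
      }

      boolean isTrusted(String clientId) {
          return trust.getOrDefault(clientId, 0.5) > 0.9;
      }
  }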

Public key crypto is all fine and good. It's great for executables so you know who they came from (hmm, trust again..).

So why can't I have the private key to a TPM I buy or have integrated in my motherboard?


I do not think there is a way of doing this for a general-purpose computer, but I remember hearing a talk about doing this with small embedded devices (like, say, the microcomputers monitoring the coolant system for your power plant). The idea is that you send the device enough random bits to completely fill its read/write memory, then require it to send that same stream of bits back. After this, you can re-send it the actual program you want it running.
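
A rough sketch of the verifier side of that scheme; the device RAM size and the raw-socket wire protocol here are just assumptions for illustration, and the timing bound is what does the real work, since a device hiding extra code has nowhere to keep both the random bits and itself:

  import java.io.DataInputStream;
  import java.io.DataOutputStream;
  import java.net.Socket;
  import java.security.SecureRandom;
  import java.util.Arrays;

  // Verifier side: fill the device's RAM with random bits, demand them back,
  // and only then push the firmware we actually want it to run.
  class EmbeddedAttestation {
      static final int DEVICE_RAM_BYTES = 64 * 1024;

      static boolean attestAndProgram(Socket device, byte[] firmware) throws Exception {
          byte[] challenge = new byte[DEVICE_RAM_BYTES];
          new SecureRandom().nextBytes(challenge);

          DataOutputStream out = new DataOutputStream(device.getOutputStream());
          DataInputStream in = new DataInputStream(device.getInputStream());

          out.write(challenge);        // device must store all of it: no spare room
          out.flush();                 // left for hidden code to keep a copy of itself

          byte[] echo = new byte[DEVICE_RAM_BYTES];
          long start = System.nanoTime();
          in.readFully(echo);          // device streams the same bits straight back
          long elapsedMs = (System.nanoTime() - start) / 1_000_000;

          // A compromised device would need extra time to regenerate or offload the
          // bits; the acceptable window has to be calibrated on genuine hardware.
          if (!Arrays.equals(challenge, echo) || elapsedMs > 200) {
              return false;
          }
          out.write(firmware);         // now load the program we trust
          out.flush();
          return true;
      }
  }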


"prove that the client machine you're running on can be trusted"

Well, your remote server prover is going to need a bunch of unit tests and some kind of load-testing "system" for scalability testing, so rather than piling dozens of physical clients on your desk you may as well make a client emulator that can run in a virtual image... Oh whoops. That means anyone can run the emulator until it authenticates, then hit pause and start dumping memory to see your "secret"

Basically, if you can define "it" to run on a Turing-complete machine, "it" can be run on any other Turing-complete machine. You may have some interesting games to play with timing and speed, but that's usually not too hard to work around if the emulator is more powerful than what's being emulated, or if the protocol is sloppily implemented (which is the norm).


> That means anyone can run the emulator until it authenticates, then hit pause and start dumping memory to see your "secret"

I would presume that "semantically-correct" emulation of a TPM-based authentication protocol would require that the VM software runs in ring 0 and uses the host's TPM for guest storage. Anything else would be emulating "something that acts like TPM, but offers none of the guarantees of TPM."

I'm just not sure how an emulated TPM would be able to figure out that it's not fulfilling its semantic function.

That seems to be the key, here: is there anything a TPM chip could investigate before saying "yep, I'm not running emulated--I can see the Real Hardware I was created for right there--so I'll let programs trust me"?


> anyone can run the emulator until it authenticates, then hit pause and start dumping memory to see your "secret"

TPMs are implemented as tamper-resistant hardware modules. As long as you can't read out the keys, you can't have the emulator authenticate.

It's all explained here http://en.wikipedia.org/wiki/Trusted_Computing


I do apologize for using "secret" in a context where it could apply to either a public key encryption scheme or to the content which is being restricted by the whole scheme.

Another issue is that "tamper resistant" is just a big wrapper around a security-through-obscurity design. It's not "mathematically provable"; it's just "here's a secret number we hope doesn't show up on WikiLeaks."


But the TPM is just a chip on the LPC bus, right? Couldn't you do a man-in-the-middle and have the TPM think it's talking to real hardware when in reality it's talking to an emulated system?


I think the idea is that if TPM is enabled, the ROM bootstrap code only gives control to a signed, trusted bootloader, which only gives control to a signed, trusted kernel, which carefully prevents untrusted code from making requests to the TPM hardware. Like DRM, it's Game Over when the first vulnerability in this trusted code is found, though it'll continue to inconvenience legitimate users (because vendors have little incentive to ensure the machine remains practically usable with the TPM disabled or with signers the user trusts).


Interesting point. I don't know what, if anything, they do to defend against putting the chip in a hostile system.


Yeah, the "remote attestation" part of the trusted computing stuff was aiming to do that. Didn't really take off.

Nowadays people have dreamed up applications for it in server hosting, where you can talk to the DRM in your rented server and get some assurance your software running there hasn't been compromised.


What's your definition of trust? Trust to execute code faithfully or trust to be who you think it is? In either case, what's required is a base unit that you trust on faith. In the former case the base unit is the signed firmware; in the latter case, the root CA.


For security beyond just trusting root (and by proxy, mostly the OS itself), there are options, but the amount of work needed increases quite steeply. I am pretty sure that SELinux could provide some defense against memory reading by root; however, it takes a lot of configuration and setup to get it done "right".

If one wants to leave the OS altogether, one could also use an external physical security device, though that too needs to be done correctly. Some hardware encryption acceleration devices might have an option to store keys only locally and never give them to the host, but I doubt this is true for all.




There are attacks on passwords stored in RAM. There's an example against the Apple keychain. Root can run the software and it collects a bunch of passwords for logged in users (http://juusosalonen.com/post/30923743427/breaking-into-the-o...)

But there are best practices for passwords, and those reduce the risks; and most attacks need privileges and access to the machine, which again reduces the risk.

If you're worried about stormtroopers kicking the door down and squirting liquid nitrogen on the RAM you probably have enough money to have very strong perimeter defences.


An air duster suffices for this purpose. No need for something so fancy as liquid nitrogen.


Depends on whether you need minutes or hours :-)


You can probably reapply it. :)

I wouldn't know, though.


What does liquid nitrogen on the RAM do? I was under the impression they plugged the computer into a UPS or something and took it.


The bits stored in DRAM remain readable for a time (sometimes minutes) even after the power is cut. In normal operation frequent refresh is needed to avoid decay, but the decay doesn't happen nearly as fast as the refresh cycles would make it seem. Cooling the cells lengthens that time, from minutes up to hours (depending on the temperature), permitting an adversary to read them without much time pressure.

Paper on that: http://citp.princeton.edu.nyud.net/pub/coldboot.pdf


I see, thank you very much.


I would be slightly surprised if national police use such raw methods as liquid nitrogen. It is fast, but it risks damaging the RAM.

Much safer (but slower) is to hook directly into the bus and communicate with the RAM itself. I guess it's a trade-off between speed and safety, which means it depends on the specifics of the case.


It freezes the circuits and prevents them from being discharged. When removed from the fridge, the attackers have some extra time (while thawing) to read the status of the RAM.


As pfortuny and ygra say, freezing the RAM makes the memory readable for longer.

(https://citp.princeton.edu/research/memory/)


At the company I worked for previously we frequently used a FireWire DMA attack such as Inception (http://www.breaknenter.org/projects/inception/) to gain access to computers and dump RAM to recover other passwords.


I'm familiar with DMA attacks, but it's always shocking to see publicly available GPL code that just works against popular and recent versions of Windows, OS X and Linux. UEFI Secure Boot is no help if you signed a 1394 driver : )

Everyone should read the mitigation steps and caveats as appropriate.

If you have physical access to an unlocked Windows machine, I'd reach for mimikatz. Instant plaintext.


There are well-known attacks that allow reading memory, and even writing it, through DMA.

See for example:

0wned by an iPod - hacking by Firewire http://md.hudora.de/presentations/#firewire-pacsec

More papers are linked from the Wikipedia page: http://en.wikipedia.org/wiki/DMA_attack


Reading comments here and on stackexchange I'm surprised that no mention is made of the Data Protection API (DPAPI) on Windows, which is designed specifically for this purpose.

http://msdn.microsoft.com/en-us/library/ms995355.aspx

I've been using it for years, and while nothing is infallible, any sensitive plain text in my apps isn't there for longer than it takes to encrypt and then destroy it.

I can't comment on Linux or OSX but would be surprised if the OS didn't offer a similar API tied to the principal to protect in-memory data.


    > Reading comments here and on stackexchange I'm surprised
    > that no mention is made of the Data Protection API (DPAPI)
    > on Windows, which is designed specifically for this purpose.
It was mentioned and quickly dismissed as not being effective:

1) if it can be decrypted by the API, then it can be cracked by any process given enough time and resources.

2) further to point #1, the Data Protection API was reverse-engineered in 2010.

3) security is only as strong as your weakest link, and that API doesn't address the "weakest link" of running malware locally. eg it's much easier to keylog passwords to begin with than to scan the RAM.

I'm inclined to agree with those comments. While that API is a nice idea, I think it's a little ineffective in practice.


Can't find anything online about a fix from Microsoft (didn't give it much effort - a little pressed for time), but it seems that decryption is possible because the master key timestamp isn't protected by an HMAC mechanism (perpetuating access to the secret). Also, none of the user's previous SHA1 hashes are salted.

Both of those seem easy enough to fix, which I imagine Microsoft has done (the exploit was discovered 3 years ago). I'm going to do a little more research when I have time.

Lastly, and importantly, this attack is an offline attack. I wasn't able to find anything that compromises in-memory data. Granted, nothing other than 2FA will protect anyone (or DPAPI) against key loggers, but that's true for all OSs.

If local malware is the strength of the argument against DPAPI then I might as well go so far as to say that the most secure system is no system.


    > Both of those seem easy enough to fix, which I imagine Microsoft has
    > done (the exploit was discovered 3 years ago). I'm going to do a little
    > more research when I have time.
Well, by your own confession, you've not read about a fix thus far, and Microsoft have been known to let poorer encryption APIs go unpatched for great lengths of time even when a problem is known (eg NTLM passwords aren't salted), so I wouldn't be the slightest bit surprised if this hasn't been patched either.

However, regardless of whether it has or hasn't, said patches only increase the CPU time it takes to crack the passwords; they don't make the passwords impossible to crack. And, as I'd already said, there are weaker links that can be exploited anyway.

    > Lastly, and importantly, this attack is an offline attack.
All the attacks we're talking about here are offline attacks.

    > Granted, nothing other than 2FA will protect anyone (or DPAPI) against key
    > loggers, but that's true for all OSs.
Of course it's true for all OS's. I never once suggested otherwise. Honestly, I'm puzzled why you'd even bring that point up.

    > If local malware is the strength of the argument against DPAPI then
    > I might as well go so far as to say that the most secure system is no system.
Which is what myself and pretty much everyone else in this discussion have already been saying. At some point, there needs to be a balance between usability and security. I think NT (and Linux / BSD / Mach too for that matter) offer enough security to make in-memory password hacking awkward enough that it's a minor security risk while still leaving the host OS highly usable.

For the average user, social engineering will always remain the most exploited vector of attack, and for ultra-sensitive servers, all we can do is lock them down as best as we can based on the latest fixes and proof-of-concepts. But a sufficiently determined attacker will usually find a way in if there's a high enough incentive. Our job (or at least mine) is to make it sufficiently hard that attackers lose interest in trying. But if they've gained root access (in order to run any of the aforementioned attacks), then it's already game over - regardless of whether they manage to decrypt your RAM or not. Which is why I think the aforementioned API is akin to the emperor's new clothes (ie all hype, no practical security)


For this reason the Java JPasswordField getPassword() method returns a char array with no other copies around. The array can also easily be zeroed out with Arrays.fill().

http://docs.oracle.com/javase/tutorial/uiswing/components/pa...
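
The usual pattern looks something like this (authenticate() here is just a stand-in for whatever actually consumes the password):

  import java.util.Arrays;
  import javax.swing.JPasswordField;

  class PasswordFieldUsage {
      static void readAndWipe(JPasswordField field) {
          char[] password = field.getPassword();  // the only copy handed to the caller
          try {
              authenticate(password);             // stand-in for whatever consumes it
          } finally {
              Arrays.fill(password, '\0');        // overwrite the array in place when done
          }
      }

      static void authenticate(char[] password) {
          // ... hash or transmit the password here; avoid ever wrapping it in a String,
          // since String copies are immutable and can't be wiped.
      }
  }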


Isn't the JRE using a compacting GC, which makes the "no copies" guarantee void anyway?


The char or int datatypes do not have their values stored as objects. When you change the value of a char inside a char[] array, that value is directly changed in RAM.

This will leave the "hello" in ram (subject to GC):

  String x = "hello"
  x = null;
This will clear the "hello" from ram:

  char[] x = new char[]{'h','e','l','l','o'};
  x[0] = '0'; x[1] = '0'; ...
This will leave the "hello" in ram (subject to GC):

  char[] x = new char[]{'h','e','l','l','o'};
  x = null;


The point was about the garbage collector compacting memory regions, thus moving objects around. If you don't pin your array it could leave "hello" somewhere in memory when it's moved before you zero it.


Been a while since I've done Java, but I'd assume it simply pins it?


Is it possible that a (non-privileged) process could read data of a process that has previously terminated by looking at uninitialized memory and gain access to sensitive information that way?


Most operating systems clear RAM pages when the process exits, or when another process requests a page, so no.


One method that I've thought about in the past is to hash your password using bcrypt, then zero and free the original password, and check all future authentication attempts against the bcrypt hash. Nobody, not even you, knows the password now, just whether a given password is correct.
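
A sketch of that idea, assuming the jBCrypt library (org.mindrot.jbcrypt.BCrypt); note that its API takes Strings, so an immutable copy of the password briefly exists anyway, which somewhat undercuts the wiping:

  import java.util.Arrays;
  import org.mindrot.jbcrypt.BCrypt;

  class HashedCredential {
      private final String hash;

      HashedCredential(char[] password) {
          // jBCrypt's API takes a String, so this conversion makes one more copy
          // that we cannot explicitly wipe -- a real limitation of this approach.
          hash = BCrypt.hashpw(new String(password), BCrypt.gensalt());
          Arrays.fill(password, '\0');   // at least wipe the caller's char[]
      }

      boolean matches(char[] attempt) {
          boolean ok = BCrypt.checkpw(new String(attempt), hash);
          Arrays.fill(attempt, '\0');
          return ok;
      }
  }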


Hashing passwords for storage is standard practice in all systems that involve password based authentication.

Even then, the password must reside in memory at some point in order to compute the hash of your password [using bcrypt or whatever scheme], which is necessary both for generating the hash the first time AND for generating the hash for authentication attempts. This is the issue described in the given link.

http://en.wikipedia.org/wiki/Cryptographic_hash_function#Pas...


Of course. But just using hashes doesn't mean you're safe - watch out for pass-the-hash and replay attacks, as well as session hijacking and other possible side channels.


The question is about storing the password for you to authenticate to somewhere else, not the reverse.

Storing a hash doesn't help because the remote site won't accept it. And if it did then the hash is essentially the password and you've violated the goal.


Random Access Memory Memory?


It's where ATM Machines store their application code while running.


Here's what I do. The login page in my browser is sent to my phone, which creates an https session with the remote website and then hands the session back to the browser. The mechanics of doing this are a bit tricky, but it's only a few days of coding. The advantage: your password never ever enters your computer's RAM (or HDD/network)... take that, keyloggers!


What about phone malware? And what stops someone reusing the https session as you move it from your phone to your desktop?


Smart cards can move the private key from the PC to a dedicated, self-contained and (supposedly) safer machine - the card itself.


My worry is that even if the smart cards and their software are well designed, you still have to rely on other vendors to do their bit securely.

Fravia said of dongles that they were often great, with nice libraries, but when software vendors implemented them they would use stupid methods.

(http://home.scarlet.be/detten/tuts/dongle_zeezee.htm)

> Don't panic when you read all info about dongle security. They ARE secure. OK. You can't crack them unless they're done by complete idiots. OK. But you want to crack the application, NOT the dongle. When you read about RSA encryption, one-way functions and see in the API some interesting Question/Answer hashing functions, remember that it's only API. No one uses it. Only simple functions like Check/Serial Number/Read and sometimes Write are used.


The long-term solution is probably some kind of super-smartcard (essentially an HSM) which can put per-application logic inside the secure envelope. Things like rate limits on decryption requests, heuristics to require higher levels of authentication as transactions are more suspicious, etc.

Combine that with per-application virtualization and various forms of user authentication (other than passwords), and public key cryptography, and you could probably start to build substantially more secure services. Same stuff on clients and servers.

ARM's TrustZone is actually more interesting than TPMs on x86; you can essentially start the general-purpose CPU as a trusted device and then partition off less-trusted pieces. If you're going to have a single processor, vs. a specialized security processor, this is probably how to do it, not the x86 + TPM + TXT way.

Probably all meaningless until there's a framework as simple as Ruby on Rails was vs. everything else in 2005, or php, which makes doing things securely the easy default.


Safety is a float - not a boolean.

A more appropriate question would have been: 'How safe is it to store passwords in memory?'


A quick CTRL+F didn't find anything, so I might actually be the first to point out that "RAM memory" is a case of the RAS Syndrome :)

http://en.wikipedia.org/wiki/RAS_syndrome


All you can do if you don't trust the OS is to assemble the password at the point of use (each time) and erase the memory location directly afterwards.

And even that is not 100% foolproof (the OS can detect it in between these two steps).


Also, is there a way to erase specific memory locations, and a guarantee that in the meantime that memory location has not been cached somewhere else?


Or keep it encrypted in memory


And how exactly would that work? You would need to decrypt it to use it - but then you need to store the decryption key in memory.

Gaining you exactly nothing.


Here is a concrete implementation: SecureString in .NET http://msdn.microsoft.com/en-us/library/system.security.secu...

It's using DPAPI, which derives the encryption key from the user's password.


You could store the decryption key on the disk, only loading it when needed, and possibly byte-by-byte. This is all hackable, especially when such techniques are used mainstream, but it increases the amount of work needed to hack something. In the end it's the OS's responsibility of course.


What's the point of that? If you are going to do that, just store the original key that way.

Not that it helps in any way at all.


The .NET framework has a SecureString class which does this. It mitigates the risk of sensitive information being discovered through running 'strings' on memory dumps and the like. It's meant to prevent trivial recovery, not stop an attacker with sufficient time and skill.


that was good!


You wouldn't need to encrypt it really.

Just don't store it character for character the same as the password when used to authenticate the user.

You could have the even characters in one array and the odd characters in another and this would make it a bit safer.

I'm not sure every app needs to be this paranoid but I offer it as an idea that can be easily implemented for those that do.
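
For what it's worth, a toy sketch of that split, reassembling only at the point of use and wiping afterwards (as suggested further up the thread):

  import java.util.Arrays;

  // Toy obfuscation only: keep the even- and odd-indexed characters in separate
  // arrays so a naive strings-style scan of memory never sees the secret contiguously.
  class SplitSecret {
      private final char[] even, odd;

      SplitSecret(char[] secret) {
          even = new char[(secret.length + 1) / 2];
          odd = new char[secret.length / 2];
          for (int i = 0; i < secret.length; i++) {
              if (i % 2 == 0) even[i / 2] = secret[i]; else odd[i / 2] = secret[i];
          }
          Arrays.fill(secret, '\0');     // wipe the contiguous original
      }

      // Reassemble only at the point of use; the caller must wipe the returned array.
      char[] assemble() {
          char[] out = new char[even.length + odd.length];
          for (int i = 0; i < out.length; i++) {
              out[i] = (i % 2 == 0) ? even[i / 2] : odd[i / 2];
          }
          return out;
      }
  }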


> Just don't store it character for character the same...

You've just described encryption here, whilst saying that it's not encryption.

The principle is right though. A simple obfuscation will deter most attacks.


And how would that help? If you can read RAM you can certainly read the executable file.


This is why ssh-agent is awesome: it doesn't hold passwords in memory (only the keys), so you minimize the number of times you type your password and the number of places it is stored.


But the key is what you authenticate with after all, so why go after the password when reading memory when you can go after the key directly?


The keys expire, so even if you have one it won't be useful later.


No, I found a password in the page file of an old Windows install.


Hey security experts, are microprocessor registers or caches subject to any security attacks?


It depends on whether you consider attaching a JTAG ICD and just reading the whole state of the CPU out a security attack. In some respects just attaching an ICD to a desktop CPU is simpler than attacks on physical security that involve freezing DRAM chips and reading their contents with a patched BIOS or whatever. On the other hand it mostly requires the attacker to have a JTAG ICD that supports that particular CPU. Almost all x86 chips have some kind of ICD interface, usually a very low-level and complete one. But because of the low-level nature of the registers you see through it, JTAG register maps and so on are NDA-only, and thus ICDs that support non-embedded CPUs tend to be very rare and in the realm of "you can't afford it if you have to ask the price". They exist, though, and buying one is more about the price than about any questions of the kind "why do you need that?". But because of the rarity of these things, most current research simply ignores this attack vector.

Bottom line: there is no such thing as security against local attacks on almost any kind of commodity hardware (including TPMs, excluding devices explicitly designed to be reasonably secure, like gaming consoles). When you need that, you also need proper tamper-proof hardware.


Power analysis can also be used if you can force the CPU to do work on the key/password. It's been used to extract key material from smartcards.

edit: finish the sentence


Maybe some timing attacks would be the closest thing to a CPU cache-based attack.



