Theoretically, RSA keys that are 2048 bits long should be good until 2030. If so, isn't it a bit early to start using the 4096-bit keys that have become increasingly available in encryption-enabled applications? It depends.
Alright. So now we know 2048-bit keys are indeed acceptable until 2030, per NIST. So where does that leave our 4096-bit keys? Incidentally, the document is silent about this particular key length. However, because its tables indicate that 3072-bit keys (whose security strength is 128) and 7680-bit keys (whose security strength is 192) are good beyond 2030, we can safely say that 4096-bit keys, which fall between the two, should likewise be considered secure enough then.
In fact, since 2048-bit keys are supposed to be disallowed after 2030, we know for certain that 4096-bit keys are going to be more suitable in production environments than 2048-bit keys when that time comes. But since we're still at least a decade away from 2030, it's probably not yet necessary to migrate from 2048 to 4096, right?
But if the more secure 4096-bit keys are already available and it's just a matter of clicking the 4096 option, what's stopping us from doing just that? One factor that needs to be considered is performance. Longer keys take more time to generate and require more CPU time and power when used for encryption and decryption. So, in the case of file transfer servers, if your physical server is relatively old and has limited computing resources, then 4096-bit keys may impact its performance.
Bear in mind, though, that the RSA key is normally used only during the initial handshake and authentication; the file data itself is encrypted with a much faster symmetric cipher. So the performance hit due to a 4096-bit key will only be felt within a small fraction of the entire file transfer session. Of course, if your server carries out a large number of concurrent file transfers, the performance hits can add up. But just how significant are these performance hits? That would depend on several factors, like your server's CPU, the number of concurrent file transfers, network bandwidth, and so on.
JSCAPE MFT Server v10.2, which is due for release on December 8, 2017, already supports 4096-bit keys. So if you want to run some tests against it to see if the performance hits are substantial in your specific environment, then you may download an evaluation edition as soon as it's available. We shall update this blog post with a download link once version 10.2 is out.
You can run performance tests against that JSCAPE MFT Server instance using the load testing feature of JSCAPE MFT Monitor. Some time ago, we wrote a blog post featuring a rudimentary load testing session involving different key lengths. To get some ideas, read that post:
We have neither the number of qubits needed (4099) nor the quality of qubits (perfectly stable). The biggest quantum computer currently has 72 qubits (Google Bristlecone), but it has an error rate of 0.6%. The hardest problem, though, is coherence time. Coherence time is a measure of how quickly qubits lose their special quantum properties, so any calculation needs to finish within the coherence time. Coherence times at the moment are typically between 50 and 90 microseconds, so you can forget about any calculation that takes a while!
Well, the problem is that innovation always comes in waves and sometimes breakthroughs are exactly that: they break through the established prediction. With the massive amount of research going into this field, it is hard to keep track of all the efforts.
The GNFS complexity measurement is a heuristic: it's a tool to help you estimate the relative strengths of different RSA key sizes, but it is not exact. Implementation details, future vulnerabilities in RSA, and other factors can affect the strength of an RSA key. An attack that breaks RSA-2048 could also break RSA-4096.
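For a rough sense of what that heuristic says, you can plug different modulus sizes into the GNFS running-time formula and convert the result to equivalent symmetric-security bits. The sketch below is only a back-of-the-envelope estimate: it drops the o(1) term, so the absolute numbers run a bit high compared with NIST's published figures, but the relative strengths come through clearly.

```python
import math

def gnfs_security_bits(modulus_bits: int) -> float:
    """Rough equivalent-symmetric-security estimate for an RSA modulus
    of the given size, from the GNFS heuristic running time
    exp((64/9)**(1/3) * ln(n)**(1/3) * ln(ln(n))**(2/3)),
    with the o(1) term dropped."""
    ln_n = modulus_bits * math.log(2)   # ln(n) for an n of this bit length
    work = (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return work / math.log(2)           # convert natural-log work to bits

for bits in (1024, 2048, 3072, 4096):
    print(f"RSA-{bits}: ~{gnfs_security_bits(bits):.0f}-bit security")
```

Note how doubling the key size from 2048 to 4096 bits adds only about 40 bits of estimated strength, far less than doubling it.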
Bigger RSA key sizes may slow down handshaking from the user's point of view. On a Mac or Linux machine, you can compare the time taken to sign with a 2048-bit RSA key versus a 4096-bit one using the openssl speed rsa command:
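If openssl isn't handy, the same trend can be illustrated in pure Python by timing the modular exponentiation at the heart of an RSA private-key operation. This is only a rough sketch using random full-size numbers rather than valid RSA keys, and real implementations are much faster in absolute terms (they use the Chinese remainder theorem, among other optimizations), but the relative growth in cost is visible.

```python
import random
import timeit

def modexp_time(bits: int, reps: int = 5) -> float:
    """Average the time of one modular exponentiation of the shape used
    by an RSA private-key operation: pow(c, d, n) with `bits`-bit values.
    These are random numbers, not valid RSA keys; only the relative cost
    between key sizes is meaningful here."""
    random.seed(bits)                                     # reproducible inputs
    n = random.getrandbits(bits) | (1 << (bits - 1)) | 1  # full-size odd modulus
    d = random.getrandbits(bits) | (1 << (bits - 1))      # full-size exponent
    c = random.getrandbits(bits) % n
    return timeit.timeit(lambda: pow(c, d, n), number=reps) / reps

t2048 = modexp_time(2048)
t4096 = modexp_time(4096)
print(f"2048-bit: {t2048 * 1e3:.1f} ms  4096-bit: {t4096 * 1e3:.1f} ms  "
      f"ratio: ~{t4096 / t2048:.1f}x")
```

On typical hardware the 4096-bit operation comes out several times slower than the 2048-bit one, which matches the roughly cubic cost growth of modular exponentiation.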
No. We can re-key pretty quickly, so deploying a 4096-bit key would be pretty easy, but we feel that a 2048-bit key provides a reasonable speed/security/compatibility tradeoff. As we might move to AWS in the future, the last point is also a concern for us.
On the other hand, what do we think about using a 4096-bit key? Is 4096-bit RSA horribly slow? No. Looking at the results, the server CPU use and additional latency could be reasonable for some sites that desire the gain in strength.
Note: a previous version of the article mentioned 'there are theoretically 2^112 possibilities to brute force the private key' - this was incorrect. There are theoretically 2^112 possibilities to crack the private key using techniques other than brute force. Thanks to dchest on Hacker News for pointing out the error.
The idea of an asymmetric public-private key cryptosystem is attributed to Whitfield Diffie and Martin Hellman, who published this concept in 1976. They also introduced digital signatures and attempted to apply number theory. Their formulation used a shared-secret-key created from exponentiation of some number, modulo a prime number. However, they left open the problem of realizing a one-way function, possibly because the difficulty of factoring was not well-studied at the time.[4]
Clifford Cocks, an English mathematician working for the British intelligence agency Government Communications Headquarters (GCHQ), described an equivalent system in an internal document in 1973.[8] However, given the relatively expensive computers needed to implement it at the time, it was considered to be mostly a curiosity and, as far as is publicly known, was never deployed. His discovery, however, was not revealed until 1997 due to its top-secret classification.
RSA involves a public key and a private key. The public key can be known by everyone and is used for encrypting messages. The intention is that messages encrypted with the public key can only be decrypted in a reasonable amount of time by using the private key. The public key is represented by the integers n and e, and the private key by the integer d (although n is also used during the decryption process, so it might be considered to be a part of the private key too). m represents the message (previously prepared with a certain technique explained below).
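The relationship between n, e, d, and m is easy to see with textbook-sized numbers. The following toy example uses the classic illustration values p = 61 and q = 53; it is wildly insecure and skips the message-preparation step mentioned above, but it shows the mechanics.

```python
# Textbook toy RSA with tiny primes -- for illustration only (requires
# Python 3.8+ for the modular inverse via pow(e, -1, phi)).
p, q = 61, 53
n = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent: e * d ≡ 1 (mod phi) -> 2753

m = 65                         # the prepared message, as an integer < n
c = pow(m, e, n)               # encrypt: c = m^e mod n -> 2790
assert pow(c, d, n) == m       # decrypt: c^d mod n recovers m
print(f"n={n} e={e} d={d} c={c}")
```

Anyone can encrypt with (n, e), but recovering m from c without d requires factoring n, which is trivial here and infeasible at real key sizes.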
The first RSA-512 factorization in 1999 used hundreds of computers and required the equivalent of 8,400 MIPS years, over an elapsed time of about seven months.[30] By 2009, Benjamin Moody could factor a 512-bit RSA key in 73 days using only public software (GGNFS) and his desktop computer (a dual-core Athlon64 with a 1,900 MHz CPU). Just under 5 gigabytes of disk storage was required, along with about 2.5 gigabytes of RAM for the sieving process.
As of 2020, the largest publicly known factored RSA number had 829 bits (250 decimal digits, RSA-250).[32] Its factorization, by a state-of-the-art distributed implementation, took about 2,700 CPU-years. In practice, RSA keys are typically 1024 to 4096 bits long. In 2003, RSA Security estimated that 1024-bit keys were likely to become crackable by 2010.[33] As of 2020, it is not known whether such keys can be cracked, but minimum recommendations have moved to at least 2048 bits.[34] It is generally presumed that RSA is secure if n is sufficiently large, outside of quantum computing.
Kocher described a new attack on RSA in 1995: if the attacker Eve knows Alice's hardware in sufficient detail and is able to measure the decryption times for several known ciphertexts, Eve can deduce the decryption key d quickly. This attack can also be applied against the RSA signature scheme. In 2003, Boneh and Brumley demonstrated a more practical attack capable of recovering RSA factorizations over a network connection (e.g., from a Secure Sockets Layer (SSL)-enabled webserver).[41] This attack takes advantage of information leaked by the Chinese remainder theorem optimization used by many RSA implementations.
One way to thwart these attacks is to ensure that the decryption operation takes a constant amount of time for every ciphertext. However, this approach can significantly reduce performance. Instead, most RSA implementations use an alternate technique known as cryptographic blinding. RSA blinding makes use of the multiplicative property of RSA. Instead of computing c^d (mod n), Alice first chooses a secret random value r and computes (r^e * c)^d (mod n). The result of this computation, after applying Euler's theorem, is r * c^d (mod n), and so the effect of r can be removed by multiplying by its inverse. A new value of r is chosen for each ciphertext. With blinding applied, the decryption time is no longer correlated to the value of the input ciphertext, and so the timing attack fails.
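The blinding step can be sketched in a few lines. This is only an illustration using the textbook toy parameters n = 3233, e = 17, d = 2753 (from p = 61, q = 53); real implementations work with full-size keys and constant-time big-integer arithmetic, which this sketch does not attempt. It requires Python 3.8+ for the modular inverse via pow(r, -1, n).

```python
import math
import secrets

def blinded_decrypt(c: int, d: int, e: int, n: int) -> int:
    """Decrypt c as (r^e * c)^d * r^-1 (mod n) with a fresh random r,
    so the exponentiation's timing no longer depends on c itself."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if math.gcd(r, n) == 1:                # r must be invertible mod n
            break
    blinded = (pow(r, e, n) * c) % n           # r^e * c (mod n)
    m_blinded = pow(blinded, d, n)             # = r * c^d (mod n), since e*d ≡ 1 (mod phi)
    return (m_blinded * pow(r, -1, n)) % n     # strip r via its inverse

# Textbook toy parameters (p = 61, q = 53): n = 3233, e = 17, d = 2753.
n, e, d = 3233, 17, 2753
c = pow(65, e, n)                              # ciphertext for m = 65
assert blinded_decrypt(c, d, e, n) == 65
```

Because a fresh r is drawn for every call, two decryptions of the same ciphertext exponentiate different values, which is what breaks the correlation a timing attacker needs.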
I was wondering how secure 512-bit RSA keys are, and some "googling" showed me that "they can be broken". Moreover, it seems that this can be done rather *easily*, in a *reasonable* amount of time. But I couldn't figure out what's exactly meant by that "easily" and "reasonable". So, what would it take to break a 512-bit RSA key? A massive network of high-end computers? A simple desktop PC? And how long would it take? 10 seconds? More than a year? Of course, the time needed depends on the computing power used... So, more specifically: how long would it take to break a 512-bit RSA key on, say, a desktop PC with 1 GB RAM and a 3 GHz processor? Could you refer to the words of some authority in this field who has "proven" this? Some founded references? Many thanks, Kristof
All RSA keys can be broken, in time. "Broken" means that the key can be discovered in a way that is more efficient than an exhaustive key search; in the case of RSA, the method used is factorization, which is considerably faster than a key search.
Well, cracking a 512-bit key currently requires executing about 250,000,000,000,000,000 machine instructions, using the most efficient algorithms. Divide that number by the speed of the computer(s) your adversary has available to determine how long a 512-bit key will hold out against cracking efforts. When RSA's 512-bit public challenge key was cracked in 1999, it required almost four months and just under 300 computers. It could be done much faster today, but it still requires a lot of work. For example, with a PC at 3.3 GHz, it would take about 2.5 years of continuous work to crack a 512-bit key. That's certainly technically feasible, but it's not very convenient, so the real question is whether or not your adversary is prepared to invest that much machine time in cracking your 512-bit key. If he is, your key may be unsafe (unless you only need secrecy for a period shorter than the period required to crack the key); if he isn't, you're completely safe.
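The arithmetic behind that estimate is easy to check, under the idealized assumption of one instruction per clock cycle:

```python
instructions = 250_000_000_000_000_000   # ~2.5e17 instructions, the figure quoted above
ops_per_second = 3.3e9                   # a 3.3 GHz PC, idealized at one instruction per cycle
years = instructions / ops_per_second / (365.25 * 24 * 3600)
print(f"~{years:.1f} years of continuous work")   # -> ~2.4 years
```

which lines up with the "about 2.5 years" figure above.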