
The SafeLogic Blog

RSA 2024 Conference Takeaways: Why We Should Not Over-Focus on AI Safety at the Expense of Cryptographic Safety

May 15, 2024 Evgeny Gervis


It is indisputable that artificial intelligence (AI) was the dominant theme at the RSA Conference this year. Much was discussed about the benefits and risks of AI’s advent, with significant implications for both the offensive and defensive realms of cybersecurity. For those walking the trade show floor, it was hard not to notice how many companies have somehow rebranded themselves as AI plays. The hype around AI was undoubtedly on full display, yet what the future actually holds is difficult to predict.

With the overwhelming focus on AI, there was not as much focus on cryptography, specifically the risks to public key cryptography stemming from the rise of quantum computers. At least, that was my observation when compared to last year’s RSA Conference, where I thought there was a more balanced treatment of these two topics in the various sessions. I see the same situation potentially playing out at the level of the US government and public policy, where the focus and resources going to dealing with AI safety seem to be displacing some of the previous emphasis on the need to migrate to post-quantum cryptography.

In my opinion, we must create mental space for both AI safety and post-quantum cryptography and work on both simultaneously - kind of like walking and chewing gum at the same time. We must avoid the situation where “breaking news” displaces all other news. Yes, AI is arguably a sexier topic than “boring” cryptography, and we probably do not need to worry about cryptography taking over the world. However, there are at least two reasons why we cannot afford to take our eye off the ball when it comes to migrating to post-quantum cryptography.

First, achieving AI safety will depend on strong cryptography. For instance, how do we ensure the integrity of the data on which AI is trained? How do we know that the data has not been tampered with? After all, without solid proof that adversaries have not tampered with our training data, we cannot trust the resulting AI models. As in many other contexts where integrity must be ascertained, public key cryptography (specifically, digital signatures) is used.

The same goes for signing entire AI models. It is not hard to imagine that in the future, we may have AI marketplaces (like various App Stores today) where AI models go through a certain amount of safety vetting before being placed in the market. Users will then be able to download and use AI models from these marketplaces with a higher level of confidence regarding their provenance and integrity. 

What key security control will enable this in practice? Again, cryptographic controls: trusted AI models will be signed so that anyone can verify their integrity and origin authenticity. And so, if quantum computers break our commonly used asymmetric (public key) algorithms and we have not migrated to Post-Quantum Cryptography (PQC), we will not be able to verify that the AI model we are using is, in fact, worthy of our trust. These are just a couple of examples, and there are others.
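To make the integrity-checking idea above concrete, here is a minimal sketch of how a marketplace and a downloader might detect tampering with a model artifact. It uses only a SHA-256 digest for illustration; in a real deployment the publisher would sign that digest with an asymmetric algorithm (today, e.g., Ed25519; post-quantum, e.g., ML-DSA from NIST FIPS 204) so that verifiers get origin authenticity as well as integrity. The byte strings standing in for model weights are purely illustrative.

```python
# Minimal sketch: tamper detection for an AI model artifact via a
# cryptographic digest. A real marketplace would additionally sign the
# digest (e.g., with Ed25519 today, or a post-quantum scheme such as
# ML-DSA) so downloaders can verify origin authenticity, not just integrity.
import hashlib

def model_digest(model_bytes: bytes) -> str:
    """Return the SHA-256 digest of a serialized model, as hex."""
    return hashlib.sha256(model_bytes).hexdigest()

# The marketplace publishes the expected digest alongside the model.
published_model = b"weights: [0.12, -0.98, 1.07]"   # illustrative stand-in
expected = model_digest(published_model)

# A downloader recomputes the digest before trusting the model.
downloaded = b"weights: [0.12, -0.98, 1.07]"
assert model_digest(downloaded) == expected   # integrity holds

# Any tampering, however small, changes the digest and is detected.
tampered = b"weights: [0.12, -0.98, 9.99]"
assert model_digest(tampered) != expected     # tampering detected
```

The digest alone only proves the bytes are unchanged relative to the published value; it is the signature over that digest - and the public key infrastructure behind it - that lets a user trust *who* published the model, which is exactly the guarantee that breaks if quantum computers defeat today's asymmetric algorithms.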

There is a second, even more fundamental, reason that goes beyond the linkages between AI safety and cryptographic safety. Even though most people do not think much about cryptography, it is a crucial security control that underpins privacy, security, and trust in the digital world. In that sense, it provides a fundamental service without which the digital world cannot function. Everything from banking and finance to healthcare, secure communications, blockchains, government and military systems, and almost anything else online one can imagine depends on cryptography working. So, while cryptography will not take over the world like AI might, broken cryptography would undoubtedly bring the digital (and increasingly physical) world to a halt.

One way to appreciate the importance of cryptography is to think about the pipes that deliver water to your house. When the pipes are not leaking and the water quality is good, nobody pays much attention to them. However, if the pipes start to leak, or no water is coming into the house at all, that becomes an urgent and immediate priority. After all, people can only survive about three days without water. With the advent of quantum computers, we find ourselves in a situation where the pipes will start leaking not just in one house or neighborhood, but across the entire digital ecosystem. Migrating to better (quantum-resistant) cryptographic pipes will take decades, so the best time to start was yesterday. The next best time is today.

This blog post is not meant to minimize the importance of focusing on the adoption of safe AI. AI is possibly the most disruptive technology we have had since the invention of the Internet, so the safe and responsible development and use of AI are undoubtedly essential focus areas. However, AI safety and cryptographic safety are really two sides of the same coin, and the risk of failing to strengthen the cryptography our world relies on is no smaller and no less urgent than the risks AI poses. Therefore, we must keep progressing on both critical priorities.

Evgeny Gervis

Evgeny is the CEO of SafeLogic.
