3 Comments
Ted H:

I'm by no means a Musk fan. But I think that the sense of urgency in this article is misplaced.

All of this information is available publicly, without AI, and it would take people with real technical expertise to actually pull it off and make use of it. On the flip side, one can argue that the advantage AI gives is its ability to pull the information together in one place, essentially saving the interested party research time. In that sense, AI perhaps enhances a danger that already exists. But AI is also known to get things wrong, so in the end the interested party (perhaps a terrorist) is going to have to do backup research anyway, and is going to need qualified technical people to actually make it work.

For example, there is an entire Wikipedia page on CL-20 and how it is made, with references to research articles. From that point onward, a diligent researcher could track down everything available on the subject in scientific journals.

The real security issue isn't what the author thinks it is. If there is information that really needs to be kept from the public, such as classified information, then it should not be included in the AI model's training set. The idea that AI needs safeguards to prevent the wrong people from getting the wrong information applies the wrong security standard to a new technology. Nobody, for example, is suggesting that we need such controls to prevent the public from accessing the same information by simply researching the topic online.

Non-public information of any sort, from confidential corporate information to secret weapons information, should simply not be in a public AI's training set.

As an example of what is available at your fingertips without the help of AI, found with very little effort:

US9056868B1 - Three-step synthesis of CL-20 - Google Patents

Embodiments of the invention relate to the synthesis of CL-20 using only a three step synthesis and because of increased catalyst activity and lifetimes, a continuous flow process can be used with a tremendous reduction in total cost of producing CL-20.


https://patents.google.com/patent/US9056868B1/en

me ohmy:

Thanks for the well-rounded response. I was scratching my head over this one. Had Elon installed Grok-3 on some government systems that had become publicly accessible, and Krassencast then exploited it? But if all we are talking about is information already on the internet, then I agree with you. While his AI's level of security is just not ready for production -- but who worries about production readiness these days? -- this is not a crisis.

AI is incredibly inefficient for even minimally complex requests. Have you ever asked AI to write a complex PowerShell or KQL query/script? It's a mess. AI is good at fetching, but bad at fact-checking or at compiling multiple complex sources into something accurate and comprehensive.
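
To give a concrete sense of the kind of query I mean, here is a rough sketch of a Sentinel-style KQL hunting query (assuming the standard SigninLogs table and its usual columns; a hypothetical illustration I put together, not something Grok produced). It is exactly the sort of multi-step join-and-summarize logic that, in my experience, AI tends to mangle:

    // Users with many failed sign-ins in the last 24h who also had a
    // successful sign-in from one of the same IP addresses.
    SigninLogs
    | where TimeGenerated > ago(24h)
    | where ResultType != "0"                    // non-zero ResultType = failed sign-in
    | summarize FailedCount = count(), FailedIPs = make_set(IPAddress) by UserPrincipalName
    | where FailedCount > 10
    | join kind=inner (
        SigninLogs
        | where TimeGenerated > ago(24h)
        | where ResultType == "0"                // "0" = successful sign-in
        | project UserPrincipalName, SuccessIP = IPAddress, SuccessTime = TimeGenerated
      ) on UserPrincipalName
    | where set_has_element(FailedIPs, SuccessIP)
    | project UserPrincipalName, FailedCount, SuccessIP, SuccessTime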

I would be curious to know if Grok-3 can be tricked into giving up non-public PII about a person whom it thinks it's talking to -- assuming that information is available to it.

kikritiker:

Yes, for sure. However, the problem emerges when models become so powerful that they could invent new weapons from available data. If such a model shows flaws like Grok-3 does, it is very dangerous.
