Robert Wright (well done on that podcast, Dan) makes the point that, based on current AI projections, the time for a "rational" US adversary to sabotage US programs is now (or, really, last year). Does the apparent absence of such efforts suggest that the more belligerent of these adversaries believe, correctly or not, that their espionage efforts will give them the upper hand over US AI development?
Doesn't an effective MAIM strategy assume both extensive knowledge of adversaries' pending progress and, most importantly, high confidence in those assessments? Don't the history of nuclear proliferation and the recent surprise at DeepSeek-R1's progress contradict that assumption? How might the primary players reduce the risk that such uncertainties about adversaries' progress lead to war?