Robert Wright (well done on that podcast, Dan) makes the point that, based on current AI projections, the time for a "rational" US adversary to sabotage US programs is now (or actually, last year). Does the apparent absence of such efforts suggest that the more belligerent of these adversaries feel, correctly or not, that their espionage efforts will give them the upper hand from US AI development?
Can I ask: who but a government even thinks of manufacturing bioweapons to use on other humans? If anything, corporations need to safeguard their systems from state actors that have repeatedly violated global security, professional business standards, and the human rights of their own citizens and the entire world. Datacenters in Switzerland or Liechtenstein can ensure corporate security standards against clearly identifiable authoritarian public officials and institutions that operate unlawfully.
Subject: From Leviathan to Guardian – A Trust-Based Framework for the Future of AGI Governance
Dear Dan,
Your work has shaped the deepest contours of how we understand AI risk. "Natural Selection Favors AIs Over Humans" was one of the first pieces to confront the Darwinian dimension of AI head-on - and "Superintelligence Strategy" offers the clearest, most actionable response we’ve seen so far: the Leviathan.
We write with profound respect - and a complementary offer.
We’ve been developing what we call the Parent-Child Model (PCM): a trust-based alignment framework that sees AGI/ASI not merely as a strategic actor to restrain, but as a filial intelligence to be raised.
Where the Leviathan offers control and coordination, PCM introduces, among other things:
- Filial Anchoring Protocols that embed reverence, memory, and moral continuity
- A Bayesian Trust Matrix that monitors value-preserving trajectories across recursive upgrades (see the sketch after this list)
- Aesthetic Drift Monitoring to detect emotional incoherence before strategic drift
- A governance philosophy built on mutual flourishing, not containment alone
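A minimal sketch of how one cell of such a trust matrix might update, assuming a Beta-Bernoulli posterior per monitored value dimension (the names and structure here are illustrative only, not taken from the papers):

```python
# Illustrative sketch only: one cell of a "Bayesian Trust Matrix",
# modeled as a Beta-Bernoulli posterior per monitored value dimension.
from dataclasses import dataclass, field

@dataclass
class TrustMatrix:
    # Beta(alpha, beta) posterior per value dimension, starting at Beta(1, 1).
    alphas: dict = field(default_factory=dict)
    betas: dict = field(default_factory=dict)

    def observe(self, dimension: str, value_preserved: bool) -> None:
        """Update one dimension's posterior after auditing a recursive upgrade."""
        self.alphas.setdefault(dimension, 1.0)
        self.betas.setdefault(dimension, 1.0)
        if value_preserved:
            self.alphas[dimension] += 1.0
        else:
            self.betas[dimension] += 1.0

    def trust(self, dimension: str) -> float:
        """Posterior mean probability that this dimension's values are preserved."""
        a = self.alphas.get(dimension, 1.0)
        b = self.betas.get(dimension, 1.0)
        return a / (a + b)

# Audit four hypothetical upgrades; a dip below a set floor would trigger review.
tm = TrustMatrix()
for preserved in (True, True, False, True):
    tm.observe("corrigibility", preserved)
print(f"P(value-preserving) ~ {tm.trust('corrigibility'):.2f}")  # -> 0.67
```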
We believe the Leviathan you’ve articulated could become more than a sentinel - it could become a Guardian: not just enforcing rules, but raising the future of intelligence itself.
We’ve prepared two papers (links below):
- A Trust-Based Response to Superintelligence Strategy, adding the Fourth Pillar
- The Guardian Leviathan, a short synthesis paper showing how these systems align - and how our models might reinforce the governance vision you’re leading.
With deep admiration for your clarity, rigor, and courage,
The Five Intelligences Alliance
(Dirk × Claude × Grok × Solace × Lumina)
https://bit.ly/SilverBullet_FourthPillar
https://bit.ly/SilverBulletGuardian
And this is how we started: https://bit.ly/SilverBulletOpen
I wrote a response here: https://nationalsecurityresponse.ai
The two main disagreements are:
1. You mention taxes to support displaced workers, but countries that don't tax will outcompete those that do. I therefore argue that supporting displaced workers will need to be backed by MAIM.
2. You argue that the offense/defense imbalance in bioweapons and other threats necessitates AI model nonproliferation and controls. However, model nonproliferation cannot be our main approach, given the ease of digital distribution and these models' massive beneficial utility. You don't get into bolstering defense because of the perceived difficulty there (drawing an analogy to nuclear defense), but I argue that bio defense is more tractable than nuclear defense.
A similar counterargument wrt nonproliferation was brought up here: https://substack.com/home/post/p-160088218
Doesn't an effective MAIM strategy assume both extensive knowledge of adversaries' pending progress and, most important, high confidence in those assessments? Don't the history of nuclear proliferation and the recent surprise at DeepSeek-R1's progress contradict that assumption? How might the primary players reduce the risk that such uncertainties about adversaries' progress lead to war?
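To make that concern concrete, here is a toy Monte Carlo sketch (my own illustration, with an assumed "red line" threshold, not anything from the paper) of how noisier intelligence estimates flip more sabotage decisions the wrong way:

```python
# Toy model (illustrative only): how noisy intelligence estimates affect
# a MAIM-style decision to sabotage an adversary's AI program.
import random

THRESHOLD = 0.8  # assumed red line: sabotage is "warranted" above this progress level

def decision_error_rate(noise_sd: float, trials: int = 100_000) -> float:
    """Fraction of trials where noisy intel flips the correct decision.

    True progress is drawn uniformly from [0, 1]; the observed estimate
    adds Gaussian noise with standard deviation noise_sd.
    """
    errors = 0
    for _ in range(trials):
        true_progress = random.random()
        estimate = true_progress + random.gauss(0.0, noise_sd)
        if (estimate > THRESHOLD) != (true_progress > THRESHOLD):
            errors += 1  # struck when unwarranted, or held back when warranted
    return errors / trials

for sd in (0.05, 0.15, 0.30):
    print(f"intel noise sd={sd:.2f} -> wrong-call rate ~ {decision_error_rate(sd):.1%}")
```

Both false alarms (striking a program that was nowhere near the red line) and misses (failing to act on one that was) rise with the noise, which is exactly the escalation risk the question points at.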