Additional context & information on the "Criticisms of Anthropic" section that I think changes the framing you have lead with. Some of this should probably be updated.
2) Being a member of TechNet does not imply or require that you endorse all of its policy stances.
3) The article referenced about the lack of pre-deployment testing ( and indeed the only source on the subject ) has been found to have several inaccuracies, leading to the article as a whole being disputed as a source: https://x.com/AlexTPet/status/1801589884265587118 (+replies & quote tweets)
thanks for this writeup Alexa! I wouldn't have otherwise known about the antitrust investigations. really appreciate the SA summary!
if others find it useful, the other summary that was highly recommended to me was Zvi's: https://thezvi.substack.com/p/quotes-from-leopold-aschenbrenners
Excellent info, as always. Current GenAI-based applications fall rather short in transparency, interpretability, and accountability. Here is a modest proposal to try and improve this situation: https://www.linkedin.com/pulse/rags-orgs-how-make-ai-applications-more-transparent-lammens-ph-d--xh0ef
Additional context & information on the "Criticisms of Anthropic" section that I think changes the framing you have lead with. Some of this should probably be updated.
1) While Clark expressed doubt over some policies, he also expressed support for other polices/regulations in the same essay: https://x.com/AlexTPet/status/1797749076143780336
2) Being a member of TechNet does not imply or require that you endorse all of its policy stances.
3) The article referenced about the lack of pre-deployment testing (and indeed the only source on the subject) has been found to have several inaccuracies, leading to the article as a whole being disputed as a source: https://x.com/AlexTPet/status/1801589884265587118 (+replies & quote tweets)
Since then, the UK AISI have released a report on evaluating multiple frontier models: https://www.aisi.gov.uk/work/advanced-ai-evaluations-may-update
4) Anthropic also released research on alignment this week that has been widely praised by the safety community: https://x.com/AnthropicAI/status/1802743256461046007/quotes
5) Since that article came out, the author has shared newly published information from Anthropic that adds clarity and lessens the severity of the situation: https://www.lesswrong.com/posts/sdCcsTt9hRpbX6obP/maybe-anthropic-s-long-term-benefit-trust-is-powerless?view=postCommentsNew&postId=sdCcsTt9hRpbX6obP&commentId=mEeAQyz2pfBL54npt