2 Comments

A little-discussed but very serious threat is the instability of MI activity. GPT and other similar Large Language Models (LLMs) produce hallucinations resembling various mental illnesses. The widespread use of MI should not continue until the hallucination problem has been resolved, until sufficient stability of MI has been achieved, and until a "psychiatry" of MI has been created.


Like your Newsletter a lot! During planning in my job as HR Director, I try to follow the MOMA principle (minimum output and maximum accomplishment), which is a sort of best-case/worst-case planning.

For AI, I would see three framing concepts being needed to prevent extinction risks:

1. Everyone building AI capabilities ensures it is a non-direct-execution system that needs human intervention before being used (e.g. an AI cancer diagnosis validated by a PhD before a final decision on what to communicate to a patient); a minimal code sketch of this idea follows after this list.

2. A global AI certification label will be developed, mandatory (governed by law) before anyone is allowed to run AIs, which ensures AIs must be able to explain their decision-making at any time. NGOs and governments are allowed to request output explanations.

3. The United Nations agrees on an international law about AI no-go areas: a) ABC weapons of any kind, b) any AI usage for optimisation of the human genetic code, and finally c) that all countries have to share knowledge for the benefit of humankind. Of course that will not be a perfect safety net :)
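To make concept 1 a bit more concrete, here is a minimal sketch in Python of what a non-direct-execution gate could look like: the AI's output stays blocked until a human reviewer signs off, and every suggestion carries an explanation that can be requested at any time, as concept 2 demands. All names here (AISuggestion, require_human_signoff, the reviewer) are hypothetical, just to illustrate the idea, not a real system.

```python
from dataclasses import dataclass

@dataclass
class AISuggestion:
    """A model output that is blocked by default (concept 1) and
    always carries an explanation of its reasoning (concept 2)."""
    diagnosis: str                   # what the model proposes
    explanation: str                 # stated reasoning, requestable at any time
    approved_by: str | None = None   # set only after human review

    @property
    def releasable(self) -> bool:
        # Non-direct execution: a human must approve before use.
        return self.approved_by is not None

def require_human_signoff(suggestion: AISuggestion,
                          reviewer: str, agrees: bool) -> AISuggestion:
    """Record the human decision; the suggestion stays blocked unless approved."""
    if agrees:
        suggestion.approved_by = reviewer
    return suggestion

# Usage: the model proposes, the expert reviews, only then may the
# result be communicated to the patient.
s = AISuggestion(
    diagnosis="benign lesion, routine follow-up",
    explanation="low malignancy score across three imaging features",
)
assert not s.releasable                                   # blocked by default
s = require_human_signoff(s, reviewer="Dr. Example", agrees=True)
assert s.releasable                                       # released only after sign-off
```

The point of the design is that the default state is "blocked": no code path delivers the AI's decision without a recorded human approval attached to it.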

Kind regards

Guido
