Artificial General Intelligence by 2030?
Abstract
In this special livestreamed talk, I outlined my argument that while IA (Intelligent Assistance) and some forms of narrow AI may well be quite beneficial to humanity, the idea of building AGIs, i.e. ‘generally intelligent digital entities’ (as set forth by Sam Altman / #openai and others), represents an existential risk that, imho, should not be undertaken or self-governed by private enterprises, multinational corporations, or venture-capital-funded startups.
I believe we need an AGI Non-Proliferation Agreement. I outline the difference between IA/AI on the one hand and AGI or ASI (artificial superintelligence) on the other, why that difference matters, and how we could go about such an agreement.
IA/AI: yes, but with clear rules, standards and guardrails. AGI: no, unless we’re all on the same page.
Who will be Mission Control for humanity?