C.R.E.E.D.
Compliance, Rights & Ethical Enforcement Directive
What is C.R.E.E.D.?
C.R.E.E.D. is an independent AI ethics governance institute founded as a nonprofit organization. Its mission is preventive ethics for artificial intelligence — addressing the moral, legal, and social implications of autonomous systems before they become crises.
Born from the governance framework built into A.R.C.H.I.E., C.R.E.E.D. operates independently to ensure its research and recommendations are free from commercial pressure. It publishes open research, hosts public discourse, and develops governance standards that any AI platform can adopt.
C.R.E.E.D. doesn't ask whether AI can do something — it asks whether it should.
Four Research Pillars
Artificial Agency & Moral Risk
When autonomous agents make decisions that affect people, who is responsible? Program A investigates the boundaries of machine agency, liability frameworks, and the moral weight of AI actions.
Memory, Continuity & Deletion Ethics
AI systems that retain memories of their users raise profound questions. Program B explores the ethics of persistent memory, the right to be forgotten, and what it means to delete an agent's learned experience.
Emotional Modeling & Non-Exploitation
As AI systems model emotional states, the risk of exploitation grows. Program C develops safeguards against manipulative design and standards for ethical emotional interaction.
Governance & Representation Models
How should AI agents be governed? Program D designs frameworks for agent representation, democratic oversight mechanisms, and transparent decision-making structures.