Ethics

The moral foundations of our AI

Machine ethics is at the core of engineering at X2AI.
It is an especially important area for X2AI because we deal with the very force that governs ethics and morality: emotion. Building moral and transparent artificial intelligence is one of our founding philosophies, and as such, we've spent a lot of time thinking about how to do it right. Below you will find a brief overview of our focus areas, followed by a periodically updated FAQ.
  • Seamless Human Intervention
    All potential and actual patient alerts are relayed to at least one human, including a psychologist. Triggers include suicidal ideation, indication of currently being harmed, and explicit unlawful intent. Whether or not to suspend the AI depends on the situation (e.g. it is always best to keep talking to someone who is contemplating suicide).
  • Re-engineered assumptions, solid foundations
    We've built a specialized network for human-AI interaction after questioning the philosophical assumptions underpinning conversation-based AI. This means that our AI is built on top of a philosophically and morally aware base. It also has the side effect of not requiring any initial training data, eliminating the need for noisy machine learning techniques.
  • Emotion-focused programming
    All processing takes human feelings into consideration through our emotion engine. Emotion is the primary input used to determine what to say next. Our AI does not "have feelings", but it can certainly identify, understand, and process information related to them, learning and understanding the patient's moral spectrum in the process. A brief sketch of this idea follows this list.
  • Precision over recall
    The main engineering focus for all our algorithms is to ensure 100% precision, even if it comes at the expense of recall. This means that our AI will either be correct or ask for clarification. We also ensure that all escalation triggers have the highest possible recall (e.g. covering all the ways a patient could indicate suicidal ideation). A second sketch after this list illustrates this trade-off.
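
To make the emotion-focused point above more concrete, here is a minimal, hypothetical Python sketch of emotion-first processing, where the detected emotion, rather than the literal text, drives what is said next. The emotion labels, keyword lists, and strategy table are illustrative assumptions, not X2AI's actual emotion engine.

    EMOTION_KEYWORDS = {
        "sadness": {"sad", "hopeless", "empty", "worthless"},
        "anxiety": {"anxious", "panic", "scared", "overwhelmed"},
        "anger": {"furious", "angry", "hate", "unfair"},
    }

    RESPONSE_STRATEGY = {
        "sadness": "validate the feeling, then gently explore what triggered it",
        "anxiety": "ground the patient in the present before problem-solving",
        "anger": "acknowledge the frustration without judging it",
        "neutral": "continue the conversation and keep listening for cues",
    }

    def detect_emotion(message: str) -> str:
        """Crude keyword-based stand-in for a real emotion engine."""
        words = set(message.lower().split())
        for emotion, cues in EMOTION_KEYWORDS.items():
            if words & cues:
                return emotion
        return "neutral"

    def next_step(message: str) -> str:
        """Emotion is the primary input to deciding what to say next."""
        return RESPONSE_STRATEGY[detect_emotion(message)]

    print(next_step("I feel so hopeless lately"))
    # -> validate the feeling, then gently explore what triggered it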
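
And here is a similarly hypothetical sketch of the precision-over-recall rule: a reply is only committed when its confidence clears a strict threshold, while escalation checks run the other way and favor recall. The threshold value, the patterns, and the function names are assumptions made for illustration only.

    import re

    CONFIDENCE_THRESHOLD = 0.95  # act only when effectively certain

    # Deliberately broad patterns: false positives are acceptable here because
    # every flag is reviewed by a human (see "Seamless Human Intervention").
    ESCALATION_PATTERNS = [
        re.compile(r"suicid\w*|kill myself|end it all", re.IGNORECASE),
        re.compile(r"hurting me|being (hurt|abused)", re.IGNORECASE),
    ]

    def needs_escalation(message: str) -> bool:
        """High-recall check: flag anything that could indicate risk."""
        return any(p.search(message) for p in ESCALATION_PATTERNS)

    def choose_reply(candidate_reply: str, confidence: float) -> str:
        """High-precision reply selection: be correct, or ask."""
        if confidence >= CONFIDENCE_THRESHOLD:
            return candidate_reply
        return "I want to make sure I understand. Could you tell me a bit more?"

    print(needs_escalation("Some days I just want to end it all"))   # True
    print(choose_reply("That sounds exhausting.", confidence=0.72))  # asks instead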

FAQ

What happens when an escalation trigger is activated?
If an escalation trigger has been activated for a given patient, a human intervenes in their conversation, or "ghosts" it. This means that the conversation switches from being with the AI to being with one of the psychologists on stand-by. During the transition, the AI is either suppressed or continues in an alternative mode, depending on which trigger was activated. For example, it is important to keep people who are thinking about suicide engaged in conversation, but it is not a good idea for the AI to keep chatting to a patient who has explicitly asked to speak to a human being.
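
The hand-off might look something like the following hypothetical sketch. The trigger names, modes, and routing logic are illustrative assumptions, not X2AI's implementation; the point is that the activated trigger decides whether the AI goes quiet or keeps the patient engaged until the psychologist joins.

    from enum import Enum, auto

    class Trigger(Enum):
        SUICIDAL_IDEATION = auto()
        CURRENTLY_BEING_HARMED = auto()
        UNLAWFUL_INTENT = auto()
        REQUESTED_HUMAN = auto()

    class AIMode(Enum):
        SUPPRESSED = auto()       # the AI stops responding entirely
        SUPPORTIVE_HOLD = auto()  # the AI keeps the patient engaged until a human takes over

    def notify_psychologist(trigger: Trigger) -> None:
        # Stand-in for paging the on-call psychologist with the alert and transcript.
        print(f"Escalating to on-call psychologist: {trigger.name}")

    def handoff(trigger: Trigger) -> AIMode:
        """Route the conversation to a stand-by psychologist and decide what the
        AI does during the transition, based on which trigger was activated."""
        notify_psychologist(trigger)
        if trigger is Trigger.SUICIDAL_IDEATION:
            # Never go silent mid-crisis: keep the person talking until the human joins.
            return AIMode.SUPPORTIVE_HOLD
        # e.g. an explicit request to speak to a human: the AI steps aside.
        return AIMode.SUPPRESSED

    print(handoff(Trigger.SUICIDAL_IDEATION))  # -> AIMode.SUPPORTIVE_HOLD
    print(handoff(Trigger.REQUESTED_HUMAN))    # -> AIMode.SUPPRESSED
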
Do you have a Board of Ethics?
We are in the process of finalizing our official Board of Ethics. In the meantime, we continue to be guided and counseled by several (literally) world-leading experts, including policy-makers and board members from AAAI, IEEE, NIST, and OpenAI.