Our mission is to ensure that state-of-the-art AI systems are inherently safe, reliable, and beneficial to society. We focus on improving AI safety through enhanced System 2 task performance and causal reasoning capabilities. Our motto is: Correlation is not Causation!
The concept of System 1 and System 2 thinking was popularized by Nobel laureate Daniel Kahneman in his 2011 book Thinking, Fast and Slow: “System 1 thinking is an immediate response from your intuition while System 2 thinking is a delayed response from a well-thought-out process”.
We contribute to the development of transparent neural architectures suitable for mission-critical applications while prioritizing human-AI alignment.
With human-AI alignment, we aim to build AI systems that elevate us as humans and are aligned with our goals and values. Our human-AI teaming initiatives focus on improving AI transparency and human understandability while leveraging the unique strengths of both AI and human thinking. Our goal is to enable a human to remain responsible for the actions of the joint human-AI system while developing human-AI systems whose whole is greater than the sum of their human and AI parts.
We develop and test new state-of-the-art neural models, such as Transformer-based deep learning architectures, with a focus on safe System 2 task capabilities. Additionally, we create models and conduct user experiments on human-AI teaming and joint task performance, prioritizing safety and reliability. Our AI-assisted learning initiatives, in collaboration with the Steingrimsson Foundation, aim to apply our human-AI teaming work to improve learning and education, with an emphasis on inclusivity and enjoyable learning experiences. We strive for a society where no one is left behind in terms of AI literacy and the ability to use AI in their personal and professional lives.
Learn more about our groundbreaking work in safe neural System 2 systems, or support our mission by making a donation.
We conduct research and user experiments to improve AI safety, focusing on improved System 2 task capabilities and human-AI alignment under the motto “correlation is not causation”. By supporting us, you help advance research on safe AI that is more capable of causal problem solving. You support a better understanding of causal neural AI system behaviors and capabilities. You also support the development and testing of human-AI aligned models suitable for reliable joint human-AI problem solving, where human understandability and accountability are prioritized.
We specialize in socially responsible AI development, focusing on System 2 logical reasoning and planning capabilities. Our work aims to align AI with human values, ensuring it supports human dignity and intelligence. Discover our commitment to safe and reliable AI.
We envision a future where reliable AI operates within ethical boundaries, fostering trust while having a positive impact on society and human well-being.
Our mission is to create a future where AI systems are inherently safe, reliable, trustworthy, and beneficial to society. We conduct theoretical, empirical, and applied research to enhance AI safety through improved System 2 neural architecture capabilities, emphasizing human-AI alignment and AI-assisted learning.
We address both the foreseen and unforeseen risks posed by AI by contributing to inherently safe and reliable AI together with human-AI alignment. Our goal is for AI to contribute positively to society.
We serve AI researchers, engineers, policymakers, and the broader community, including those concerned with safe and socially beneficial System 2 AI, AI ethics, safety, and responsible innovation.
Our project benefits the global community, focusing on areas where AI development is most prominent. Based in Houston, Texas, we publish our work internationally and impact both the global research community and local communities.
Research
Funding and resources for safe System 2 research.
Training Programs
Seminars and educational programs on System 2 AI, with an emphasis on reliability and a positive impact on society.
Collaborative Projects
Partnerships for promoting safe System 2 AI practices.
User Experiments
User experiments to improve human-AI alignment through better human-AI teaming, with emphasis on the System 2 AI domain and AI-supported human learning.
Consulting
Expert advice and guidance for organizations looking to implement safe and socially responsible System 2 AI solutions.