Chess fortresses, a causal test for state of the art Symbolic[Neuro] architectures
Written by Hedinn Steingrimsson
The paper presents a benchmark for evaluating neural networks' causal reasoning in chess through "fortresses": positions in which a unique best move leads to a strong defensive setup. Because each test position has a single correct answer, the benchmark yields conclusive pass/fail results, in contrast to typical engine-versus-engine evaluation, where fortress positions often end in uninformative draws. The findings suggest that modified neural networks can improve at identifying the optimal move while maintaining computational efficiency.
Chess fortresses, a causal test for state of the art Symbolic[Neuro] architectures - Supplementary material
Written by Hedinn Steingrimsson
The supplementary material for "Chess Fortresses: A Causal Test for State of the Art Symbolic[Neuro] Architectures" discusses the challenges in training chess engines like Leela Chess Zero and AlphaZero. It stresses the need for curated datasets and diverse agents to effectively explore the chess state space. The paper notes that mastering chess fortresses requires a deep understanding of critical squares and defensive formations, often allowing human players, including Grandmasters, to outperform advanced AI. It also mentions previous efforts, such as Eiko Bleicher's Freezer program, which narrows the search space by focusing on specific squares and movements. The work aims to enhance AI architecture understanding by tackling difficult chess scenarios.
Representation Matters for Mastering Chess: Improved Feature Representation in AlphaZero Outperforms Switching to Transformers
Written by Johannes Czech, Jannis Blüml, Hedinn Steingrimsson and Kristian Kersting
The paper examines whether switching AlphaZero's network to a Vision Transformer (ViT) improves chess play, and finds that the transformer's computational cost limits its effectiveness. While earlier architecture swaps using MobileNet and NextViT yielded a modest gain of about 30 Elo, the authors achieve a far larger improvement through changes to the input representation and value loss function, boosting playing strength by up to 180 Elo points over the AlphaZero baseline. The study acknowledges the prominence of transformers in AI while noting their resource demands, and validates the contribution of the new input features using the integrated gradients attribution technique.