At first glance, trying to play chess against a large language model (LLM) seems like a daft idea: its weights have, at most, been trained on some chess-adjacent text. It has no concept of board state or stratagems, or even of what a ‘rook’ or ‘knight’ is. This daftness is indeed demonstrated by [Dynomight]
Maya Posch is a journalist for Hackaday, specializing in technology and electronics.