Right. 'Cause neural networks aren't trained to tell the truth or answer questions factually. Things like ChatGPT are language-prediction models: they're set up to respond with whatever, based on their training data, is the most likely response. Like, I dunno, the top Family Feud answers or something, except even less moderated/editorialized.
Something in its training data probably suggested the boards were shuttered (at least at some point), even if that's not factually accurate. It's not like ChatGPT tried to visit the forums before answering you, found them closed, and adjusted its response accordingly. It's more like a less knowledgeable (but more conversational) Google search.
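If it helps, here's a toy sketch of the "most likely response" idea. This is obviously nothing like ChatGPT's actual internals (those are giant neural nets, not word counts), and the training sentences here are made up, but the basic principle is the same: the model picks whatever continuation showed up most often in its training text, with zero fact-checking.

```python
from collections import Counter, defaultdict

# Made-up "training data" for illustration only.
training_text = (
    "the forums are closed . the forums are closed . "
    "the forums are open ."
).split()

# Count which word tends to follow each word (a simple bigram model).
next_words = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_words[current][following] += 1

def predict(word):
    """Return the most common next word seen in training, Family Feud style."""
    return next_words[word].most_common(1)[0][0]

print(predict("are"))  # prints "closed" -- not because it checked the site,
                       # just because that's what the training data said most often
```

So if "the forums are closed" showed up more often in the data it saw, that's the answer you get, regardless of whether the forums are actually closed today.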