LLMs (I refuse to call them AI, as there's no intelligence to be found) are next-token predictors: they sample word sequences from a trained probability model. Of course they suck at math, because they're not actually calculating anything; they're just emitting whatever the model rates as the most probable continuation of the user's input.
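To make the point concrete, here's a toy sketch of that sampling step. The probability table is completely made up for illustration (a real LLM learns it from billions of parameters), but the mechanism is the same: pick the next token by probability, not by arithmetic.

```python
import random

random.seed(0)  # reproducible sampling

# Toy "language model": a hand-written table of next-token probabilities.
# Hypothetical numbers, purely illustrative.
model = {
    ("2", "+", "2", "="): {"4": 0.80, "5": 0.12, "22": 0.08},
}

def next_token(context):
    """Sample the next token from the model's probability distribution."""
    dist = model[context]
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    return random.choices(tokens, weights=weights)[0]

# No arithmetic happens here: "4" is just the most probable continuation,
# and the "wrong" tokens still get sampled some fraction of the time.
counts = {t: 0 for t in model[("2", "+", "2", "=")]}
for _ in range(10_000):
    counts[next_token(("2", "+", "2", "="))] += 1
print(counts)
```

Run it and you'll see "4" dominate, but "5" and "22" still show up hundreds of times; the model never *computes* 2+2, it just reproduces the distribution it was trained on.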
"The ability to speak does not make you intelligent" - Qui-Gon Jin