LLMs doing a really bad job at things outside the scope of their language modeling (spatial relationships, the alphabet, math) isn't new. What is important news, I think, is Google letting an LLM into their search results more than they should.
This is ChatGPT 3.5:
> **Prompt:** If I face north, then turn left, and left again, then 180 degrees to the right, then left and left and left, which way am I facing?
>
> **ChatGPT 3.5:** If you start facing north and turn left, you'll be facing west. If you turn left again, you'll be facing south. Turning 180 degrees to the right from south will also make you face south. If you then turn left three more times, you'll still be facing south. So, after all these turns, you would be facing south.
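For reference, the correct answer is east: north → west → south → north → west → south → east. The bookkeeping is just modular arithmetic, so it's easy to verify mechanically. Here's a minimal Python sketch (the names are mine, purely for illustration): treat the heading as an index into the compass directions in clockwise order and add or subtract 90-degree steps mod 4.

```python
# Compass headings in clockwise order; turning right moves forward
# through this list, turning left moves backward.
HEADINGS = ["north", "east", "south", "west"]

def turn(heading: int, degrees: int) -> int:
    """Rotate by `degrees` (positive = right/clockwise, 90 = one step)."""
    return (heading + degrees // 90) % 4

facing = 0  # start facing north
# left, left, 180 to the right, then left three more times
for step in (-90, -90, 180, -90, -90, -90):
    facing = turn(facing, step)

print(HEADINGS[facing])  # prints "east"
```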
GPT-4 is actually able to solve this, but most other modern language models can't. It's just an easy way to show the cracks in the "it can speak, so it must understand" facade.