ERROR: empty digest from LLM. Raw response:
{"error":{"message":"litellm.APIConnectionError: Ollama_chatException - {\"error\":\"server busy, please try again.  maximum pending requests exceeded\"}. Received Model Group=wile\nAvailable Model Group Fallbacks=['roadrunner']\nError doing the fallback: litellm.APIConnectionError: Ollama_chatException - {\"error\":\"server busy, please try again.  maximum pending requests exceeded\"}No fallback model group found for original model_group=roadrunner. Fallbacks=[{'wile': ['roadrunner']}, {'clusteDaily intel ready: /opt/rag/logs/daily-intel-2026-05-13.md
