How to access the currently used LLM

How do I send custom chat completion requests to the currently configured LLM from within a custom action? I regularly test my Rasa CALM assistant with both gpt-4 and local LLMs served through Ollama, and I want requests to go to whichever LLM server I have configured without having to change the code each time.
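One approach I've been considering, as a minimal sketch rather than an official Rasa API: since Rasa CALM routes its LLM calls through LiteLLM, a custom action can call `litellm.completion` directly and pick the model from environment variables, so switching between gpt-4 and an Ollama model is just a config change. Note that the `LLM_MODEL` and `LLM_API_BASE` variable names and the `action_ask_llm` action are my own inventions here, not Rasa settings:

```python
# Sketch: call whichever LLM is "currently configured" from a custom action
# by routing through LiteLLM (which Rasa CALM also uses internally).
# LLM_MODEL and LLM_API_BASE are hypothetical env vars I set myself, e.g.
#   LLM_MODEL=gpt-4                          (OpenAI)
#   LLM_MODEL=ollama/llama3 LLM_API_BASE=http://localhost:11434  (Ollama)
# They are not Rasa configuration keys.
import os
from typing import Any, Dict, List

import litellm
from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher


class ActionAskLLM(Action):
    def name(self) -> str:
        return "action_ask_llm"

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[str, Any],
    ) -> List[Dict[str, Any]]:
        # Model selection comes from the environment, so the same action
        # code works against OpenAI or a local Ollama server.
        model = os.environ.get("LLM_MODEL", "gpt-4")
        api_base = os.environ.get("LLM_API_BASE")  # leave unset for OpenAI

        response = litellm.completion(
            model=model,
            api_base=api_base,
            messages=[
                {"role": "user", "content": tracker.latest_message.get("text", "")}
            ],
        )
        dispatcher.utter_message(text=response.choices[0].message.content)
        return []
```

This keeps the custom action server decoupled from the bot's own LLM config (the action server does not automatically inherit it), at the cost of having to keep the env vars in sync with what the assistant itself uses.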

Hey Lewis. Please help me out with my issue related to Ollama and Rasa Pro CALM integration, as it seems you were able to integrate Ollama with Rasa Pro CALM perfectly. I'm sharing the link to my issue below. If possible, please share your config and endpoints setup. Any help is highly appreciated.

[Local LLM with RASA CALM]
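For context, this is the kind of setup I'm trying to get working. It's a sketch only, assuming a Rasa Pro version that supports LiteLLM-style model groups in `endpoints.yml` (introduced around 3.10; older versions configure the model inline on the command generator instead), and the `ollama_llm` id, `llama3` model, and port 11434 are my placeholders and Ollama defaults, so please check them against the docs for your version:

```yaml
# config.yml -- point the command generator at a model group
pipeline:
  - name: SingleStepLLMCommandGenerator
    llm:
      model_group: ollama_llm
```

```yaml
# endpoints.yml -- define the model group; swapping this block between an
# Ollama entry and an OpenAI entry switches the LLM without code changes
model_groups:
  - id: ollama_llm
    models:
      - provider: ollama
        model: llama3
        api_base: http://localhost:11434
```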