Ollama as embeddings provider

I understand that Ollama can be used as an LLM provider via its OpenAI-compatible API, but how can it be used as an embeddings provider? According to the blog, Ollama’s OpenAI API compatibility for embeddings is still in the works, and huggingface_hub currently has an open issue that prevents it from being used with langchain. Is there any other way to make open-source models accessible?
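For context, Ollama does expose a native embeddings endpoint (`/api/embeddings`), separate from its OpenAI-compatible API, so one possible workaround is to call it directly. Here's a minimal sketch, assuming a local Ollama instance on the default port `11434` and a pulled embedding model such as `nomic-embed-text` (both the port and the model name are assumptions):

```python
import json
import urllib.request

# Assumption: a local Ollama server on the default port.
OLLAMA_URL = "http://localhost:11434/api/embeddings"

def build_payload(text: str, model: str = "nomic-embed-text") -> bytes:
    """Build the JSON body expected by Ollama's native /api/embeddings endpoint."""
    return json.dumps({"model": model, "prompt": text}).encode()

def parse_embedding(body: bytes) -> list:
    """Extract the embedding vector from an /api/embeddings response body."""
    return json.loads(body)["embedding"]

def embed(text: str, model: str = "nomic-embed-text") -> list:
    """POST the prompt to the local Ollama server and return its embedding."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(text, model),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_embedding(resp.read())
```

If you are on langchain, `langchain_community.embeddings.OllamaEmbeddings` wraps this same native endpoint, so it may sidestep the huggingface_hub issue entirely, though I haven't verified it against the latest versions.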

Hi, I have a similar query. I have experimented extensively with OpenAI’s API, but now I want to move completely to a self-hosted model.

The issue mentioned above is causing a hiccup. Is there any workaround for this?