Is it possible to create an offline conversational AI using the MetaHuman SDK and NVIDIA's Chat with RTX, the large language model (LLM) app that connects to your own content (docs, notes, videos)? I would like to know whether this is feasible before diving deep into it. The main goal is to create a virtual tutor: combine a MetaHuman, the MetaHuman SDK, and the Chat with RTX app into an offline conversational AI specific to the content I have on my drive. Any help and a roadmap would be highly appreciated.
On the surface, mixing Epic products with NVIDIA software is a bit like mixing apples and oranges. A cursory inquiry reveals that NVIDIA's Chat with RTX does not expose a public API at this time, so there is no supported way to wire it into a MetaHuman. There are, however, any number of open-source LLMs available for offline use.
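Since Chat with RTX has no API, a common workaround (not an official Epic or NVIDIA integration) is to run an open-source LLM behind a local HTTP server and query it from your own tutor logic, then feed the reply to the MetaHuman for speech and lip-sync. Below is a minimal sketch of that query step in Python, assuming a locally running Ollama server at its default endpoint; the server URL, model name, and the injectable `opener` parameter (included so the function can be exercised without a live server) are all illustrative assumptions, not part of either SDK.

```python
import json
from urllib import request

# Assumed default endpoint of a locally running Ollama server (illustrative).
LOCAL_LLM_URL = "http://localhost:11434/api/generate"

def ask_local_llm(prompt, model="llama3", url=LOCAL_LLM_URL, opener=None):
    """Send a prompt to a local LLM server and return its text reply.

    `opener` lets callers substitute a fake transport for offline testing;
    by default it uses urllib's urlopen against the local server.
    """
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete JSON reply, not a stream
    }).encode()
    req = request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    open_fn = opener or request.urlopen
    with open_fn(req) as resp:
        # Ollama-style responses carry the generated text under "response".
        return json.loads(resp.read())["response"]
```

In a full pipeline, the string returned here would be handed to the MetaHuman SDK's text-to-speech/animation step; how that handoff looks depends on which MetaHuman SDK features you license.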
If you don’t find what you are looking for here, I suggest checking out the Virtual Beings Facebook group.