HttpGPT is an Unreal Engine plugin that facilitates integration with OpenAI's GPT-based services (ChatGPT and DALL-E) through asynchronous REST requests, making it easy for developers to communicate with these services.
HttpGPT also includes Editor Tools that integrate ChatGPT and DALL-E image generation directly into the Engine.
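For context, an asynchronous REST request to OpenAI in Unreal generally follows the pattern sketched below. This is illustrative only, not the plugin's actual code; the endpoint and JSON body follow the public OpenAI chat completions API, and the function name is hypothetical.

```cpp
// Illustrative sketch only; not HttpGPT's actual implementation.
// Requires the "HTTP" module in your *.Build.cs dependencies.
#include "CoreMinimal.h"
#include "HttpModule.h"
#include "Interfaces/IHttpRequest.h"
#include "Interfaces/IHttpResponse.h"

void SendChatCompletionRequest(const FString& ApiKey, const FString& UserMessage)
{
    const TSharedRef<IHttpRequest, ESPMode::ThreadSafe> Request = FHttpModule::Get().CreateRequest();
    Request->SetURL(TEXT("https://api.openai.com/v1/chat/completions"));
    Request->SetVerb(TEXT("POST"));
    Request->SetHeader(TEXT("Content-Type"), TEXT("application/json"));
    Request->SetHeader(TEXT("Authorization"), FString::Printf(TEXT("Bearer %s"), *ApiKey));

    // Minimal request body following the public OpenAI chat completions schema.
    const FString Body = FString::Printf(
        TEXT("{\"model\":\"gpt-3.5-turbo\",\"messages\":[{\"role\":\"user\",\"content\":\"%s\"}]}"),
        *UserMessage);
    Request->SetContentAsString(Body);

    // The callback fires when the request completes, so the game thread is never blocked.
    Request->OnProcessRequestComplete().BindLambda(
        [](FHttpRequestPtr Req, FHttpResponsePtr Response, bool bConnectedSuccessfully)
        {
            if (bConnectedSuccessfully && Response.IsValid())
            {
                UE_LOG(LogTemp, Log, TEXT("OpenAI response: %s"), *Response->GetContentAsString());
            }
        });

    Request->ProcessRequest();
}
```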
You truly are doing god’s work here, lucoiso. Thanks so much for making these tools freely available, incredible stuff. Very useful.
I was using this today and went to start a new chat in my application built on this plugin, and realised that the conversation is appended under the hood, so I have no way to clear the plugin’s conversation history. Do you have any plans to add a “clear/reset chat” feature? I feel dirty for asking since you’re already doing so much work lol, but this is a fundamental feature, so I figured I’d throw the suggestion in.
I surprise myself with how stupid I can be sometimes. Clear the Chat History array, you idiot lol
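In case anyone else lands here with the same question, here is a minimal sketch of the idea, assuming your project keeps the accumulated messages in an array it owns. The names FMyChatMessage and ChatHistory are illustrative, not the plugin's actual members.

```cpp
#include "CoreMinimal.h"

// Illustrative only: FMyChatMessage and ChatHistory stand in for however your
// project stores the messages it accumulates and resends with each request.
struct FMyChatMessage
{
    FString Role;    // "system", "user" or "assistant"
    FString Content; // the message text
};

struct FMyChatSession
{
    TArray<FMyChatMessage> ChatHistory;

    // "New chat" just means discarding the accumulated messages so the next
    // request starts from an empty (or freshly primed) history.
    void ResetConversation()
    {
        ChatHistory.Empty();
    }
};
```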
Also, a few QoL improvements for you to perhaps consider:
The Send Message to GPT async node should have an enum that lets me choose the model per async call (dynamically changing models on the fly).
The node should also expose the models’ hyperparameters in an options struct, like top_p, top_k, temperature, context length, and so forth (the popular hparams, nothing too crazy), rather than in the Project Settings, so we can program dynamic user interactions where players customise the model themselves (see the sketch after these suggestions).
Expose the initial prompt so we can prime the model with something other than whatever you’ve hard-coded in the plugin. The initial prompt sets the tone of the AI and can make a very big difference in how the network responds. This might be okay in the Project Settings, but it would also be good as a setting in the node’s options struct, since being able to change it on the fly after resetting/clearing the conversation history would be really handy.
People also use stop sequences a lot: generation stops as soon as the output matches one of the strings in this array.
Surfacing error messages from the API would also be handy, for errors that the async node callbacks don’t currently report or that need more context.
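To make the options struct idea concrete, here is a rough sketch of what it could look like. Every type and field name below is hypothetical (this is not the plugin's actual API); the fields simply mirror the common OpenAI chat parameters.

```cpp
// HttpGPTChatOptionsExample.h (hypothetical, for illustration only).
#include "CoreMinimal.h"
#include "HttpGPTChatOptionsExample.generated.h"

UENUM(BlueprintType)
enum class EHttpGPTModelExample : uint8
{
    GPT35Turbo,
    GPT4
};

USTRUCT(BlueprintType)
struct FHttpGPTChatOptionsExample
{
    GENERATED_BODY()

    // Model selected per async call instead of a single project-wide setting.
    UPROPERTY(EditAnywhere, BlueprintReadWrite)
    EHttpGPTModelExample Model = EHttpGPTModelExample::GPT35Turbo;

    // Sampling temperature: higher values make the output more random.
    UPROPERTY(EditAnywhere, BlueprintReadWrite)
    float Temperature = 1.0f;

    // Nucleus sampling: only tokens within the top_p probability mass are considered.
    UPROPERTY(EditAnywhere, BlueprintReadWrite)
    float TopP = 1.0f;

    // Upper bound on the number of tokens generated for the reply.
    UPROPERTY(EditAnywhere, BlueprintReadWrite)
    int32 MaxTokens = 2048;

    // Initial/system prompt used to prime the model for a new conversation.
    UPROPERTY(EditAnywhere, BlueprintReadWrite)
    FString SystemPrompt = TEXT("You are a helpful assistant.");

    // Generation stops as soon as the output contains any of these sequences.
    UPROPERTY(EditAnywhere, BlueprintReadWrite)
    TArray<FString> StopSequences;
};
```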
Hope I don’t come across as too demanding; this is just what I’ve observed so far while exploring your magical work. Thanks again!
Got access to GPT-4 and I’m already working on adding the new models to the options, plus support for streamed responses, and I’m still working on adding more options!
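Roughly, the idea behind streamed responses is that the API sends partial text chunks (“deltas”) as they are generated instead of one final blob, and the client appends them as they arrive. A minimal sketch of the consuming side, with entirely illustrative names (this is not the final delegate signature):

```cpp
#include "CoreMinimal.h"

// Illustrative only: these are not the plugin's real types or callbacks.
struct FStreamedChatAccumulator
{
    FString FullResponse;

    // Called once per streamed chunk as it arrives from the API.
    void OnDeltaReceived(const FString& Delta)
    {
        FullResponse += Delta;
        // A UI widget could be updated here so the reply "types itself" in real time.
    }

    // Called when the stream finishes.
    void OnStreamCompleted() const
    {
        UE_LOG(LogTemp, Log, TEXT("Final streamed response: %s"), *FullResponse);
    }
};
```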
You’re on FIRE, lucoiso. I was just coming here to report a bug in the previous version, but you’ve dropped a whole new release with new features lol. Have to check this bad boi out now!