It doesn’t, though. That’s my biggest gripe - not with this GPT in particular but in general. When an actual (sane) expert / knowledgeable entity does not know something, they will simply tell you that they do not know. It saves a lot of time and frustration. We move on.
ChatGPT, on the other hand, will outright lie, confabulate, fabricate, colourise, meander and make $hit up instead. This is especially noticeable in narrow fields, like Blueprints (BPs). It’s easy-peasy if you ask for something that can be done; asking for something that cannot be achieved opens a can of worms… Here’s an example:
The final answer is still incorrect - and that’s after 5 pages of leading one astray. Following it would have cost someone who is just starting with the engine a good 12h. In ChatGPT’s defence - I did set it up for failure to demonstrate the point.
That’s the crux - there are plenty of inaccurate resources out there: poor tutorials, outdated books and blogs, spaghetti snippets, and plain wrong answers flagged as Accepted. It’s really good at addressing convoluted math / physics issues, it can write C++ and be generally helpful, but I sense its overall understanding of BPs is muddied by the system’s relative novelty and human factors.
The benefits of having a generative transformer dedicated to BPs that can improve over time obviously outweigh any shortcomings. Eager to see how you can improve it.
Somewhat unrelated stuff...
What I’d really like to have is a GPT-based built-in engine tool that automates the mundane.
- “Hey ChatGPT Expert: create a Texture2D array variable using the last 58 images in myImages folder, randomise array order”
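The non-engine half of that request is trivial to script today. A minimal sketch, assuming a hypothetical `myImages` folder on disk: grab the newest 58 image files and shuffle their order. Actually creating the Texture2D array variable would have to go through Unreal’s Python editor scripting API (the `unreal` module), which only runs inside the editor and is omitted here.

```python
import random
from pathlib import Path

def gather_shuffled_images(folder, count=58,
                           exts=(".png", ".jpg", ".tga"), seed=None):
    """Return the `count` most recently modified image files in `folder`,
    in randomised order. `folder`, `count` and `exts` are illustrative
    parameters, not an engine API."""
    images = [p for p in Path(folder).iterdir()
              if p.is_file() and p.suffix.lower() in exts]
    # "last 58 images": interpret as the newest files by modification time
    images.sort(key=lambda p: p.stat().st_mtime)
    picked = images[-count:]
    # Seedable shuffle so the randomised order can be reproduced if needed
    rng = random.Random(seed)
    rng.shuffle(picked)
    return picked
```

From there, an in-editor tool would import each path and append the resulting assets to the array variable - that glue is engine-specific, which is exactly the part a BP-aware GPT tool could generate for you.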