Ask Eddy testing feedback
We've been testing Ask Eddy with our content to see whether it's something we'd want to add to our site. However, we identified a few key issues.

As with other chatbots, Eddy has a major problem with hallucination. It often makes up answers, especially when leading questions are used. In one of our test questions (regarding deleting a category), Eddy made up a response that could potentially cause huge problems for users, including significant data loss. I checked the cited reference pages to make sure we didn't include or allude to the incorrect information anywhere.

Additionally, the reference articles are often unrelated to the answer provided; sometimes Eddy gives the correct answer even though it is not included in any of the cited reference articles. In other instances, Eddy will pull an answer from one page but miss that the correct answer was provided in another article, and will not include that page in the reference articles.

Eddy also seems to have difficulty pulling answers from HTML tables, which I also found when testing ChatGPT separately. For example, when I asked which versions of PHP are supported, the response ignored the version numbers provided in the table on the system requirements page and instead generated an answer from the body text of that page. (A simplified example of the kind of table involved is included at the end of this post.)

Finally, the responses are typically very short and give less detail than a standard GPT would. I assume this is done intentionally to reduce the risk of hallucination, and it wouldn't be much of an issue if the reference articles were more accurate.

I don't know what the solutions to these issues are, but hopefully this feedback is useful. As others have mentioned in their feedback, having some control over the instructions provided to the GPT would help alleviate some of these issues.
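For reference, here is a simplified sketch of the kind of table Eddy struggled with. The component names and version numbers below are placeholders, not our actual system requirements:

```html
<!-- Hypothetical system requirements table; all values are placeholders -->
<table>
  <thead>
    <tr><th>Component</th><th>Supported versions</th></tr>
  </thead>
  <tbody>
    <tr><td>PHP</td><td>8.1, 8.2, 8.3</td></tr>
    <tr><td>MySQL</td><td>5.7, 8.0</td></tr>
  </tbody>
</table>
```

In our testing, questions whose answers lived in table cells like these tended to be answered from the surrounding body text instead of the cell values.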