Sorry, it's me again with another question. Can someone please help me figure out what is going on?
I'm running an answer-to-paragraph OpenAI action. I ask 6 questions and then split out the answers later.
I'm getting incorrect answers if I add one specific question in Dutch:
[Deleted from post]
or translated to English
[Deleted from post]
[Deleted from post]
Any idea what is going on here? I have been debugging this thing for hours now and spent tons of credits on this, to no avail. All suggestions are welcome.
Please help me understand the problem correctly, as there may be a language barrier on my side. I understand this is the Google Sheet in question: BARDEEN-SCRAPER - Google Sheets. Which column is being cut off?
I am looking at your playbook and cannot figure out why the prompt question appears as a variable. Are you pulling the 6 questions you mentioned from the other action? The same goes for the context for the model.
Thanks for the support. Some more background: I'm asking ChatGPT/OpenAI 6 questions and asking it to separate the answers with ***. This is not working well for this playbook, while for a similar playbook for a different website it works fine.
To debug this, I have mapped the unsplit answer to the Short Description column.
[Deleted from post]
The output that I am expecting is something like this:
[Deleted from post]
What I am getting is this:
[Deleted from post]
Note that the first answer is cut off, and the other questions are not answered.
It appears to be related to the first question. If I remove that one, the other 5 answers come out fine, all nicely separated by the separator (***).
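For reference, here is a minimal sketch (in Python, purely as an illustration; the actual splitting happens inside the Bardeen playbook) of how splitting on the *** separator and detecting a cut-off result might look. The `EXPECTED_ANSWERS` count and the heuristic are my own assumptions:

```python
SEPARATOR = "***"
EXPECTED_ANSWERS = 6  # the 6 questions asked in the playbook

def split_answers(raw: str) -> list[str]:
    """Split the combined model output on the *** separator."""
    parts = [p.strip() for p in raw.split(SEPARATOR)]
    return [p for p in parts if p]  # drop empty fragments

def looks_truncated(raw: str) -> bool:
    """Heuristic: fewer answers than questions suggests a cut-off response."""
    return len(split_answers(raw)) < EXPECTED_ANSWERS

# A response that was cut off after (part of) the first answer
raw_output = "First answer, abruptly cut o"
print(split_answers(raw_output))    # one fragment instead of six
print(looks_truncated(raw_output))  # True
```

This matches the symptom described: a truncated first answer yields a single fragment, and the remaining questions never show up after the separator.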
Hope this gives enough context. Let me know if you need more input. I think you have all the access rights you need, but let me know if there is anything you can't access.
I will definitely dive into testing this and let you know as I get results, but I would like to share some thoughts first. From what you are sharing, it seems we are running into some sort of limit. It may be a token or character limit on the OpenAI side, and I presume different OpenAI actions may be configured with different limits. Did you happen to try the OpenAI custom prompt action instead of the question-about-a-paragraph one?
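If it is a token limit, a rough back-of-the-envelope check can help. This sketch uses the common "about 4 characters per token" rule of thumb rather than a real tokenizer, so treat the numbers as an assumption, not a measurement (Dutch text in particular often produces more tokens per character than English):

```python
def rough_token_estimate(text: str) -> int:
    """Very rough estimate: ~4 characters per token for English text.
    Non-English text (e.g. Dutch) often tokenizes into MORE tokens
    per character, so treat this as a lower bound."""
    return max(1, len(text) // 4)

# Hypothetical prompt: scraped paragraph plus the 6 questions
prompt = "paragraph text " * 200 + "question 1 ... question 6"
print(rough_token_estimate(prompt))
# If the prompt estimate plus the expected answer length approaches the
# action's limit (e.g. a few thousand tokens on older models), the tail
# of the output gets cut off.
```

A longer question, especially one in Dutch, pushes the total prompt size up, which would explain why removing that one question lets the other 5 answers come through intact.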
I just tested with only that same question and would like you to evaluate the results: