We don’t have such an action yet. If you use “Create customized text generated by OpenAI”, you can specify the output length, but that’s limited to what the model can generate in a single call (roughly 3000 words, minus the space taken up by the prompt).
We could consider adding such an action in the future, or extending the existing one so that it keeps generating when asked for longer text.
In the meantime, you might be able to chain multiple calls to “Get custom prompt generated by OpenAI” in a single playbook, passing each call its prompt plus a summary of the text generated so far. I’m not sure how easy it will be to get coherent text out of this, though, and you’ll have to make sure each prompt (and its output) fits within a single model call’s context window (4096 tokens).
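To make the chaining idea concrete, here’s a rough sketch in Python. The `generate` callable stands in for one call to the OpenAI action (in a real playbook, each iteration would be one “Get custom prompt generated by OpenAI” step); the function name, parameters, and the crude character-based summary truncation are all illustrative assumptions, not part of the product.

```python
def generate_long_text(generate, topic, n_parts, max_summary_chars=500):
    """Build a longer text by chaining several generation calls.

    `generate` is a placeholder for one model call (e.g. one playbook
    action invocation); each call receives a running summary of the
    text produced so far.
    """
    parts = []
    summary = ""
    for i in range(n_parts):
        prompt = (
            f"Write part {i + 1} of {n_parts} of a text about {topic}. "
            f"Summary of the text so far: {summary or '(nothing yet)'}"
        )
        part = generate(prompt)
        parts.append(part)
        # Carry only a truncated summary forward, so each prompt stays
        # well within the model's context window (4096 tokens here).
        # A real implementation would count tokens, not characters.
        summary = (summary + " " + part)[-max_summary_chars:]
    return "\n\n".join(parts)
```

The key design point is that each call sees only a bounded summary of the previous output rather than the full text, which is what keeps every prompt inside the context limit; the trade-off is exactly the coherence risk mentioned above.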