Hi @Reinaldo Agostinho de Souza Filho,
The token limit cannot be increased for a specific model, but the text-davinci-002 model has a 4,000-token limit. If your document is larger than that, you will need to use some sort of chunking technique; we are working on guides for this. How you chunk depends a bit on the use case, but you can use heuristics, or use embeddings to do a semantic search over portions of the document.
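To make the heuristic approach concrete, here is a minimal sketch of one way to chunk a long document, assuming you approximate token counts with whitespace-separated words (a real implementation would count with the model's actual tokenizer, and the `chunk_text` helper and its parameters are illustrative, not an official utility):

```python
def chunk_text(text, max_tokens=4000, overlap=200):
    """Split text into overlapping word-based chunks.

    max_tokens: rough chunk size (here counted in words, as a stand-in
                for real tokens).
    overlap:    words repeated between consecutive chunks so context
                isn't lost at the boundaries.
    """
    words = text.split()
    chunks = []
    step = max_tokens - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_tokens]))
        if start + max_tokens >= len(words):
            break
    return chunks

# Example: a 10,000-word document splits into 3 overlapping chunks
doc = " ".join(f"word{i}" for i in range(10_000))
chunks = chunk_text(doc)
print(len(chunks))  # -> 3
```

Each chunk can then be sent to the model separately, or embedded so you can retrieve only the most relevant chunk for a given query.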
Thanks!
Chris