Ability to count tokens before sending #200
Related: #62
I think this library should solve this, right? https://github.com/pkoukk/tiktoken-go
Can you give an example of how one would use the library you linked?
@allesan The tiktoken-go library can be used to count tokens, but not to pass them to ChatCompletionMessage.
Is there some library, or a way, to send tokens with go-openai?
@allesan What do you mean? You are already sending tokens, just packed in a string representation (i.e. the prompt string).
But if I understand correctly, with the tiktoken lib you are sending tokens that are shortened and optimized for OpenAI models?
@allesan The OpenAI API does not allow you to send tokenized inputs; see the API documentation: https://platform.openai.com/docs/api-reference/introduction. tiktoken lets you tokenize your inputs to count the number of tokens, or use those tokens to train your own models.
@allesan tiktoken-go does resolve this problem for me. I can now count the tokens in the request and adjust the MaxTokens parameter accordingly; counting the tokens is just a copy-and-paste of the example in tiktoken-go.
|
If there's more to discuss, please post it in issue #62. |
Reading the OpenAI docs on chat completion, they specifically call out counting tokens on input (you are billed for these as well as for output tokens). They use their tiktoken Python library; it would be awesome if this library offered something similar.