| Original string | Translation |
|---|---|
| Mobile Menu | 手机菜单 |
| All Departments | 所有部门 |
| Main Menu | 主菜单 |
| Connection block | (untranslated) |
| Restore default settings | 恢复默认设置 |
| Cannot get settings list | (untranslated) |
| How much to penalize new tokens based on whether they appear in the text so far. Increases the model's likelihood to talk about new topics. | (untranslated) |
| Presence penalty | (untranslated) |
| How much to penalize new tokens based on their existing frequency in the text so far. Decreases the model's likelihood to repeat the same line verbatim. | (untranslated) |
| Frequency penalty | 频率惩罚 |
| Controls diversity via nucleus sampling: 0.5 means half of all likelihood-weighted options are considered. | (untranslated) |
| Top P | Top P |
| Up to four sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence. Enter sequences with ";" separator. | (untranslated) |
| Stop sequences | (untranslated) |
| The maximum number of tokens to generate. Requests can use up to 2,048 or 4,000 tokens shared between prompt and completion. The exact limit varies by model. (One token is roughly 4 characters for normal English text.) | (untranslated) |
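The descriptions above correspond to the sampling settings of an OpenAI-style completion request. As a minimal sketch of how they fit together, the hypothetical helper below (not part of any plugin; only the payload field names follow the public OpenAI API) assembles a request body from these settings, including the ";"-separated stop-sequence convention the table mentions:

```python
def build_completion_payload(prompt, presence_penalty=0.0,
                             frequency_penalty=0.0, top_p=1.0,
                             stop_sequences="", max_tokens=256):
    """Assemble an OpenAI-style request body from the settings in the table."""
    # "Enter sequences with ';' separator": split the string and keep at
    # most four entries, since the API accepts up to four stop sequences.
    stops = [s for s in stop_sequences.split(";") if s.strip()][:4]
    payload = {
        "prompt": prompt,
        "presence_penalty": presence_penalty,    # penalize tokens already present
        "frequency_penalty": frequency_penalty,  # penalize by existing frequency
        "top_p": top_p,                          # nucleus-sampling cutoff
        "max_tokens": max_tokens,                # budget shared with the prompt
    }
    if stops:
        payload["stop"] = stops
    return payload

# Example: two stop sequences, entered as "END;STOP;" in the settings form.
example = build_completion_payload("Hello", top_p=0.5, stop_sequences="END;STOP;")
```

In this sketch an empty stop-sequence field simply omits the `stop` key, matching UIs that treat a blank setting as "no stop sequences".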