| Original string | Translation |
|---|---|
| Mobile Menu | Mobil menu |
| All Departments | Alle afdelinger |
| Main Menu | Hovedmenu |
| Connection block | Tilslutningsblok |
| Restore default settings | Gendan standardindstillinger |
| Can not get settings list | Kan ikke få liste over indstillinger |
| How much to penalize new tokens based on whether they appear in the text so far. Increases the model's likelihood to talk about new topics. | Hvor meget nye tokens skal straffes baseret på, om de optræder i teksten indtil videre. Øger modellens sandsynlighed for at tale om nye emner. |
| Presence penalty | Straf for tilstedeværelse |
| How much to penalize new tokens based on their existing frequency in the text so far. Decreases the model's likelihood to repeat the same line verbatim. | Hvor meget nye tokens skal straffes baseret på deres eksisterende frekvens i teksten indtil videre. Reducerer modellens sandsynlighed for at gentage den samme linje ordret. |
| Frequency penalty | Frekvensstraf |
| Controls diversity via nucleus sampling: 0.5 means half of all likelihood-weighted options are considered. | Kontrollerer diversitet via kerneprøveudtagning: 0.5 betyder, at halvdelen af alle sandsynlighedsvægtede muligheder tages i betragtning. |
| Top P | Top P |
| Up to four sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence. Enter sequences with ";" separator. | Op til fire sekvenser, hvor API'en stopper med at generere yderligere tokens. Den returnerede tekst vil ikke indeholde stopsekvensen. Indtast sekvenser med ";"-separator. |
| Stop sequences | Stopsekvenser |
| The maximum number of tokens to "generate". Requests can use up to 2,048 or 4,000 tokens shared between prompt and completion. The exact limit varies by model. (One token is roughly 4 characters for normal English text) | Det maksimale antal tokens, der skal "genereres". Forespørgsler kan bruge op til 2.048 eller 4.000 tokens fordelt mellem prompt og afslutning. Den nøjagtige grænse varierer fra model til model. (Et token er ca. 4 tegn for normal engelsk tekst) |
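
The rows above describe standard LLM sampling parameters. As a minimal pure-Python sketch of what the presence- and frequency-penalty strings mean (illustrative only, not the plugin's actual code; real implementations adjust logits over the model's full vocabulary):

```python
from collections import Counter

def apply_penalties(logits, generated, presence_penalty=0.0, frequency_penalty=0.0):
    """Lower the scores of tokens that already occur in the generated text.

    presence_penalty: flat penalty if the token has appeared at all
    frequency_penalty: penalty scaled by how many times it has appeared
    """
    counts = Counter(generated)
    adjusted = dict(logits)
    for token, count in counts.items():
        if token in adjusted:
            adjusted[token] -= presence_penalty           # appeared at all
            adjusted[token] -= frequency_penalty * count  # appeared `count` times
    return adjusted

logits = {"cat": 2.0, "dog": 1.5, "fish": 1.0}
out = apply_penalties(logits, ["cat", "cat", "dog"],
                      presence_penalty=0.5, frequency_penalty=0.2)
# "cat" (seen twice) is penalized more than "dog" (seen once); "fish" is untouched
```

This matches the two descriptions in the table: the presence penalty asks only *whether* a token has appeared so far, while the frequency penalty grows with *how often* it has.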
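The "Top P" string refers to nucleus sampling. A small sketch of the idea (the helper name is hypothetical):

```python
def top_p_filter(probs, p=0.5):
    """Keep the smallest set of highest-probability tokens whose cumulative
    probability reaches p, then renormalize (nucleus sampling)."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for token, prob in ranked:
        kept.append((token, prob))
        total += prob
        if total >= p:
            break
    return {token: prob / total for token, prob in kept}

probs = {"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}
nucleus = top_p_filter(probs, p=0.5)
# with p=0.5, "a" alone reaches the threshold, so only "a" is considered
```

With p = 0.5, half of the likelihood-weighted probability mass is enough, exactly as the string above puts it.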
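The stop-sequence and max-token strings also encode concrete behavior: up to four ";"-separated stop strings, and a rough four-characters-per-token budget. A sketch under those stated assumptions (helper names hypothetical):

```python
def parse_stop_sequences(raw):
    """Split the ';'-separated input into at most four stop sequences."""
    return [s for s in raw.split(";") if s][:4]

def estimate_tokens(text):
    """Rough size estimate: about one token per 4 characters of English text."""
    return max(1, len(text) // 4)

stops = parse_stop_sequences("END;STOP;###")
budget = estimate_tokens("The quick brown fox jumps over the lazy dog.")  # 44 chars
```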