| Original string | Translation |
|---|---|
| Mobile Menu | 모바일 메뉴 |
| All Departments | 전체 |
| Main Menu | 메인 메뉴 |
| Connection block | 연결 블록 |
| Restore default settings | 기본 설정 복원 |
| Can not get settings list | 설정 목록을 가져올 수 없습니다 |
| How much to penalize new tokens based on whether they appear in the text so far. Increases the model's likelihood to talk about new topics. | 새로운 토큰이 지금까지 텍스트에 등장했는지 여부에 따라 얼마나 페널티를 줄지 결정합니다. 모델이 새로운 주제에 대해 이야기할 가능성을 높입니다. |
| Presence penalty | 출석 패널티 |
| How much to penalize new tokens based on their existing frequency in the text so far. Decreases the model's likelihood to repeat the same line verbatim. | 지금까지 텍스트에서 기존 빈도에 따라 새 토큰에 얼마나 페널티를 줄지 결정합니다. 모델이 같은 줄을 그대로 반복할 가능성을 줄입니다. |
| Frequency penalty | 빈도 페널티 |
| Controls diversity via nucleus sampling: 0.5 means half of all likelihood-weighted options are considered. | 핵 샘플링을 통해 다양성을 제어합니다: 0.5는 모든 가능성 가중치 옵션의 절반이 고려됨을 의미합니다. |
| Top P | 상위 P |
| Up to four sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence. Enter sequences with ";" separator. | API가 추가 토큰 생성을 중지하는 최대 4개의 시퀀스입니다. 반환되는 텍스트에는 중지 시퀀스가 포함되지 않습니다. ";" 구분 기호로 시퀀스를 입력합니다. |
| Stop sequences | 시퀀스 중지 |
| The maximum number of tokens to "generate". Requests can use up to 2,048 or 4,000 tokens shared between prompt and completion. The exact limit varies by model. (One token is roughly 4 characters for normal English text) | "생성"할 수 있는 최대 토큰 수입니다. 요청은 프롬프트와 완료 사이에 공유되는 토큰을 최대 2,048개 또는 4,000개까지 사용할 수 있습니다. 정확한 한도는 모델에 따라 다릅니다. (토큰 1개는 일반 영어 텍스트의 경우 약 4자) |
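The parameter strings in the table describe standard OpenAI sampling settings (presence penalty, frequency penalty, top P, stop sequences, maximum tokens). For context only, the sketch below shows where those settings land in a chat completion request, assuming the official OpenAI Python SDK; the `complete` helper, the model name, and the default values are illustrative assumptions, not code from the plugin these strings belong to.

```python
# Illustrative sketch only: it shows where the sampling parameters described in
# the strings above (presence penalty, frequency penalty, top P, stop sequences,
# maximum tokens) go in an OpenAI chat completion request. The helper name,
# model, and default values are assumptions, not the plugin's actual code.
from openai import OpenAI  # official OpenAI Python SDK; expects OPENAI_API_KEY in the environment

client = OpenAI()


def complete(prompt: str, stop_sequences: str = "") -> str:
    # "Enter sequences with ';' separator": split the setting string into
    # at most four stop sequences, dropping empty entries.
    stops = [part.strip() for part in stop_sequences.split(";") if part.strip()][:4]

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",      # exact token limit (e.g. 2,048 vs 4,000) depends on the model
        messages=[{"role": "user", "content": prompt}],
        max_tokens=256,             # maximum number of tokens to "generate"
        top_p=0.5,                  # nucleus sampling: half of the likelihood-weighted options
        presence_penalty=0.6,       # raises the chance of moving to new topics
        frequency_penalty=0.6,      # lowers the chance of repeating the same line verbatim
        stop=stops or None,         # returned text will not contain the stop sequence
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(complete("Say hello.", stop_sequences="END;STOP"))
```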