Words don't matter: effects in large language model unstructured responses by minor prompt lexical changes
Abstract
Prompt engineering has become an essential skill for AI engineers and data scientists, as well-crafted prompts enable better results and optimal costs. While research has extensively studied the effects of different prompt aspects, focusing on structures, formatting, and strategies, very little work has explored the impact of minor lexical changes, such as single-character or single-word modifications. Although it is well documented that such changes affect model outputs in diverse ways, most studies compare outputs by measuring accuracy or structure. However, little research has examined how small changes affect the meaning of unstructured outputs while accounting for the stochastic outputs of large language models (LLMs). This work performs experiments to explore these effects systematically, using several examples and model sizes. The results suggest that paraphrasing or word-selection changes do not affect the answer's substance, but that special attention should be paid to typos and to the correct use of negations and affirmations.
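To make the kind of comparison described above concrete, the following is a minimal sketch (not taken from the paper) of how the semantic effect of a minor prompt edit could be measured while accounting for stochastic sampling: each prompt variant is queried several times and the responses are compared by embedding similarity. The `generate()` helper and the embedding model name are illustrative assumptions, and the `sentence-transformers` package is assumed to be available.

```python
# Hedged sketch: quantify whether a minor lexical change in a prompt alters the
# meaning of unstructured LLM responses, sampling each prompt several times to
# account for the model's stochasticity.
# Assumes the `sentence-transformers` package; `generate()` and the embedding
# model name are illustrative placeholders, not the paper's actual setup.

from sentence_transformers import SentenceTransformer, util


def generate(prompt: str, n_samples: int = 5) -> list[str]:
    """Placeholder for n stochastic LLM calls (temperature > 0)."""
    raise NotImplementedError("plug in an LLM client here")


def mean_cross_similarity(prompt_a: str, prompt_b: str, n_samples: int = 5) -> float:
    """Average pairwise cosine similarity between embeddings of responses to the
    original prompt and to its minimally edited variant."""
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
    responses_a = generate(prompt_a, n_samples)
    responses_b = generate(prompt_b, n_samples)
    emb_a = encoder.encode(responses_a, convert_to_tensor=True)
    emb_b = encoder.encode(responses_b, convert_to_tensor=True)
    return util.cos_sim(emb_a, emb_b).mean().item()


# Usage idea: a paraphrase is expected to keep similarity high, while a flipped
# negation is expected to lower it.
# paraphrase = mean_cross_similarity("Explain why the sky is blue.",
#                                    "Explain why the sky appears blue.")
# negation   = mean_cross_similarity("Explain why the sky is blue.",
#                                    "Explain why the sky is not blue.")
```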
More information
| Journal title: | ScienceOpen Preprints |
| Publication date: | 2025 |
| Language: | English |
| URL: | https://www.scienceopen.com/hosted-document?doi=10.14293/PR2199.002109.v1 |
| DOI: | 10.14293/PR2199.002109.v1 |