The art of prompts’ formulation: limitations, potential, and practical examples in large language models
Abstract
Introduction: prompt engineering is crucial when working with AI models such as GPT-3 and GPT-4, as it helps obtain effective responses in areas such as text generation and programming; a well-crafted prompt improves the quality of the responses. The study analyzed how LLMs function, compiled advice for prompt engineering, and examined technological limitations and the impact of the user's language.
Method: the evolution of large language models is traced from recurrent neural networks (RNN) to the introduction of the Transformer architecture in 2017. Responses from ChatGPT 3.5 and 4.0 were evaluated in two case studies to analyze the effect of prompt complexity and personalization.
Results: in both case studies, adding context and specificity improved the models' responses; detailed and personalized prompts yielded greater accuracy and relevance.
Conclusion: the quality of LLM responses depends on the precision and specificity of the prompts. Personalization and appropriate technical language enhance interaction with Artificial Intelligence (AI) and increase user satisfaction. Future studies should analyze semantic fields and metrics for evaluating the quality of AI-generated responses.
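To make the abstract's central finding concrete, the following is a minimal illustrative sketch (not taken from the study itself) contrasting a vague prompt with a context-rich, personalized prompt. It assumes the OpenAI Python SDK (openai>=1.0); the model identifier, prompt wording, and output format are hypothetical choices for demonstration only.

```python
# Illustrative sketch, not from the paper: compare a vague prompt with a
# context-rich, personalized prompt of the kind the abstract reports as
# producing more accurate and relevant responses.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Vague prompt: no role, audience, scope, or output constraints.
vague = "Explain neural networks."

# Specific prompt: adds a role, an audience, a length limit, a domain
# example, and an output format.
specific = (
    "You are a lecturer in health informatics. Explain, in roughly 200 words "
    "aimed at clinicians with no programming background, what a neural network "
    "is and give one example of its use in diagnostic imaging. End with a "
    "short bulleted summary."
)

for label, prompt in [("vague", vague), ("specific", specific)]:
    reply = client.chat.completions.create(
        model="gpt-4",  # assumed model name; the study evaluated ChatGPT 3.5 and 4.0
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    print(reply.choices[0].message.content)
```

Comparing the two outputs side by side mirrors, in miniature, the kind of case-study evaluation of prompt complexity and personalization described in the Method section.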
More information
Title according to SCOPUS: | SCOPUS_ID:85205716273 (not found in local SCOPUS DB) |
Journal title: | Salud, Ciencia y Tecnologia |
Volume: | 4 |
Publication date: | 2024 |
DOI: | 10.56294/SALUDCYT2024.969 |
Notes: | SCOPUS |