Andrés Neyem

Specialty: Software engineering, mobile and cloud computing, machine learning for intelligent systems, medical and engineering education, extended reality
Andrés is a professor in the Department of Computer Science at the Pontificia Universidad Católica de Chile. He received his Ph.D. in computer science from the Universidad de Chile. His research interests include software engineering, mobile and cloud computing, machine learning for intelligent systems, engineering and medical education, and extended reality. He has published widely in conference proceedings and journals in these areas and has developed several software products for such cloud-based mobile systems.

PUBLICATIONS

Publisher: IEEE Transactions on Learning Technologies

ABSTRACT

Software assistants have significantly impacted software development for both practitioners and students, particularly in capstone projects. The effectiveness of these tools varies based on their knowledge sources; assistants with localized domain-specific knowledge may have limitations, while tools, such as ChatGPT, using broad datasets, might offer recommendations that do not always match the specific objectives of a capstone course. Addressing a gap in current educational technology, this article introduces an AI Knowledge Assistant specifically designed to overcome the limitations of existing tools by enhancing the quality and relevance of large language models (LLMs). It achieves this through the innovative integration of contextual knowledge from a local “lessons learned” database tailored to the capstone course. We conducted a study with 150 students using the assistant during their capstone course. Integrated into the Kanban project tracking system, the assistant offered recommendations using different strategies: direct searches in the lessons learned database, direct queries to a generative pretrained transformer (GPT) model, query enrichment with lessons learned before submission to GPT and Large Language Model Meta AI (LLaMA) models, and query enhancement with Stack Overflow data before GPT processing. Survey results underscored a strong preference among students for direct LLM queries and those enriched with local repository insights, highlighting the assistant's practical value. Furthermore, our linguistic analysis conclusively demonstrated that texts generated by the LLM closely mirrored the linguistic standards and topical relevance of university course requirements. This alignment not only fosters a deeper understanding of course content but also significantly enhances the material's applicability to real-world scenarios.
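The query-enrichment strategy described in the abstract is, in essence, retrieval-augmented prompting: relevant lessons learned are retrieved from the local repository and prepended to the student's question before it is sent to the LLM. The following is a minimal sketch of that idea, not the assistant's actual implementation; the function names, the keyword-overlap retriever, and the prompt wording are all hypothetical, and call_llm stands in for whatever GPT or LLaMA client is used.

def search_lessons_learned(query: str, db: list[dict], k: int = 3) -> list[str]:
    # Naive keyword-overlap search over a local lessons-learned repository.
    terms = set(query.lower().split())
    scored = sorted(db, key=lambda item: -len(terms & set(item["text"].lower().split())))
    return [item["text"] for item in scored[:k]]

def build_enriched_prompt(query: str, lessons: list[str]) -> str:
    # Combine the raw query with the retrieved lessons before submitting it to the model.
    context = "\n".join(f"- {lesson}" for lesson in lessons)
    return (
        "You are assisting a capstone software project team.\n"
        f"Relevant lessons learned from past projects:\n{context}\n\n"
        f"Question: {query}"
    )

def recommend(query: str, db: list[dict], call_llm) -> str:
    # End-to-end flow: retrieve, enrich, then query the LLM.
    lessons = search_lessons_learned(query, db)
    return call_llm(build_enriched_prompt(query, lessons))

Grounding the prompt in course-specific lessons learned is what keeps the model's recommendations aligned with the capstone course objectives, in contrast to direct queries against a broadly trained model.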

Agencia Nacional de Investigación y Desarrollo
Edificio de Innovación UC, 2nd Floor
Vicuña Mackenna 4860
Macul, Chile