
Publication details

2024, Proceedings of the 28th International Conference Information Visualisation (IV), Pages 332-337

An Exploration of Open Source Small Language Models for Automated Assessment (04b Atto di convegno in volume)

Sterbini A., Temperini M.

We explore the classification and assessment capabilities of a selection of open-source small language models on the specific task of evaluating learners' descriptions of algorithms. The algorithms are described in the context of programming assignments that learners in a Basics of Computer Programming class must complete. The task requires each learner to 1) provide a Python program solving the assigned problem, 2) submit a description of the related algorithm, and 3) participate in a formative peer-assessment session over the submitted algorithms. Can a language model, be it small or large, produce an assessment of the algorithm descriptions? Rather than using any of the best-known, huge, proprietary models, here we explore small, open-source language models, i.e. models that can run on relatively small computers and whose functions and training sources are openly disclosed. We produced a ground-truth evaluation of a large set of algorithm descriptions, taken from one year of use of the Q2A-II system, using an 8-value scale that grades the usefulness of each description in a peer-assessment session. We then tested the agreement of the models' assessments with this ground truth. We also analysed whether a pre-emptive, automated, binary classification of the descriptions (as useless/useful for a peer-assessment activity) would help the models grade the usefulness of the descriptions more accurately.
ISBN: 979-8-3503-8016-3; 979-8-3503-8017-0