Evaluation Framework for Learning Analytics (EFLA)

The Evaluation Framework for Learning Analytics (EFLA) addresses the current lack of evaluation instruments by offering a standardised way to evaluate learning analytics tools and to measure and compare the impact of learning analytics on the educational practices of learners and teachers. The EFLA draws on users' subjective assessments of learning analytics tools in order to obtain a general indication of a tool's overall quality in a quick and simple, yet thoroughly developed, validated and reliable way.

The EFLA consists of three dimensions (Data, Awareness & Reflection, and Impact) with a total of eight items. There is one version for learners and one for teachers. All items are rated on a scale from 1 for ‘strongly disagree’ to 10 for ‘strongly agree’. The EFLA score can be any number between 0 and 100. To calculate a learning analytics tool’s EFLA score, the following steps should be taken per stakeholder group (a minimal code sketch follows the list):

(1) calculate the average value for each item based on the answers given for that item,

(2) calculate the average value for each dimension based on the average of its items,

(3) calculate the dimensional scores by rounding the result of ((x - 1) / 9) * 100, where x is the average value of a dimension (this maps the 1–10 scale onto the 0–100 range), and

(4) calculate the overall EFLA score by taking the average of the three dimensional scores.
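
Since the calculation is straightforward, it can also be scripted. The following is a minimal Python sketch of the four steps above; the grouping of the eight items into the three dimensions and the sample responses are purely hypothetical placeholders, not the actual EFLA item mapping, and answers are assumed to be on the 1–10 scale described above:

# Minimal sketch of the EFLA scoring steps.
# Each response maps an item id to a rating on the 1-10 scale.

# Hypothetical grouping of the eight items into the three dimensions.
DIMENSIONS = {
    "Data": ["D1", "D2"],
    "Awareness & Reflection": ["A1", "A2", "A3", "A4"],
    "Impact": ["I1", "I2"],
}

def efla_score(responses):
    """Return the dimensional scores and the overall EFLA score (0-100)."""
    dimensional_scores = {}
    for dimension, items in DIMENSIONS.items():
        # Step 1: average value per item across all respondents.
        item_means = [sum(r[item] for r in responses) / len(responses)
                      for item in items]
        # Step 2: average value per dimension based on its item means.
        x = sum(item_means) / len(item_means)
        # Step 3: map the 1-10 average onto 0-100 and round.
        dimensional_scores[dimension] = round((x - 1) / 9 * 100)
    # Step 4: overall score as the average of the three dimensional scores.
    overall = sum(dimensional_scores.values()) / len(dimensional_scores)
    return dimensional_scores, overall

# Two made-up learner responses as a usage example:
responses = [
    {"D1": 7, "D2": 8, "A1": 6, "A2": 7, "A3": 5, "A4": 6, "I1": 4, "I2": 5},
    {"D1": 9, "D2": 7, "A1": 8, "A2": 6, "A3": 7, "A4": 7, "I1": 6, "I2": 5},
]
print(efla_score(responses))

The scores are computed separately for each stakeholder group, i.e. once over the learner responses and once over the teacher responses.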

[Figure: EFLA questionnaire template (greyscale)]

The EFLA questionnaire template pictured above is available for download here:
EFLA template in colour
EFLA template in greyscale

The two different stakeholder versions for learners and teachers can also be downloaded separately:
Learner EFLA in colour and Learner EFLA in greyscale
Teacher EFLA in colour and Teacher EFLA in greyscale

Additionally, an interactive spreadsheet can be downloaded that automatically calculates the EFLA scores and creates visualisations of them. Simply fill in the questionnaire results on the first tab; the scores are then calculated automatically on the second tab, and the visualisations appear on the third tab.

Should you have any questions, please get in touch with Maren Scheffel
maren.scheffel [@] ou.nl

Why use the EFLA?

First, the EFLA provides insights. Using the framework to evaluate learning analytics tools can provide insight into learners’ and teachers’ perceptions of and experiences with a tool. It can reveal problematic aspects and identify ways to provide students with a more adaptive and less one-size-fits-all learning experience. Once such issues are identified, they can be addressed in updated and improved versions of the tool. The evaluated tool can be anything from a single visualisation to a whole dashboard; the level of detail chosen is left to those who conduct the evaluation.

Second, the EFLA facilitates comparability. The framework can be used to compare learning analytics tools within one setting, e.g. two widgets for one course, or between different settings, e.g. widgets and dashboards from different courses or even from different educational institutions. Knowing how a tool performs along the different EFLA dimensions can help to position it in the growing collection of available tools and can stimulate further development.

Third, the EFLA supplies evidence. With the growing call to ground learning analytics tools more firmly in learning theory, the framework can be used to ascertain whether a learning analytics tool has fulfilled its intended purpose, i.e. whether it actually had an impact on learning and teaching processes and made them more efficient and more effective.

History of the development of the Evaluation Framework for Learning Analytics

The Evaluation Framework for Learning Analytics was created in an iterative process of use, evaluation and improvement. In a first step, a group concept mapping (GCM) study was conducted with experts from the field of learning analytics. After a list of 103 quality indicators had been collected from the learning analytics community, the invited experts sorted the indicators and rated them according to their importance and feasibility. Based on the experts’ aggregated input, shared patterns in the collected data were revealed using multidimensional scaling and hierarchical clustering, and the resulting visualisations were used to interpret the data. The results of the group concept mapping study were then used to construct the dimensions and items of the first version of the evaluation framework for learning analytics (EFLA-1).

EFLA-1

[Figure: EFLA-1 questionnaire (greyscale)]

The first version of the evaluation framework was turned into an applicable instrument, i.e. a questionnaire. A group of learning analytics experts then used it to evaluate a collection of learning analytics tools, thereby testing the applicability of EFLA-1 itself. The quantitative and qualitative results of this evaluation study yielded useful insights into the framework’s characteristics, which were carried over into the creation of the next version. To address the requirements established in the evaluation study and thus improve the framework, the data from the group concept mapping study was reconsidered and complemented with related literature, i.e. other evaluation instruments, frameworks and categorisations, in order to settle on the framework’s dimensions and to narrow down the choice of items. For every dimension, further literature was consulted to motivate and theoretically ground the chosen items. This resulted in the second version of the evaluation framework for learning analytics (EFLA-2), which provided a learner and a teacher section, both consisting of four dimensions with three items each.

EFLA-2

[Figure: EFLA-2 questionnaire (greyscale)]

The next iteration of the evaluation and improvement process was conducted by having students and tutors in a collaborative online learning environment evaluate a learning analytics widget using EFLA-2. The results of this widget evaluation were then employed to evaluate the framework itself: the students’ and tutors’ answers to the EFLA-2 questionnaire were statistically analysed using principal component analysis as well as reliability analysis in order to identify problematic issues with any of the EFLA-2 items. In addition to this quantitative analysis, qualitative feedback was gathered during a full-day face-to-face expert focus group, where all four EFLA-2 dimensions and their items were discussed in detail in order to determine what was needed to improve the framework. Based on the quantitative and qualitative evaluation results, the third version of the evaluation framework for learning analytics (EFLA-3) was constructed. The framework still offered a learner and a teacher section and still consisted of four dimensions. Several items, however, were refined and adapted, while others were deleted, resulting in a total of ten items for EFLA-3.

EFLA-3

[Figure: EFLA-3 questionnaire (greyscale)]

The EFLA-3 was then used to evaluate several widgets of a MOOC platform’s learning analytics dashboard in a lab study. Using the participants’ answers to the EFLA-3 questionnaire, the framework was evaluated with principal component and reliability analyses in order to determine whether the four EFLA-3 dimensions validly represented the underlying components and whether the items within each dimension reliably measured that component. After a first round of analysis, two items were eliminated from the framework. The analysis also indicated that the framework’s structure was very likely three-dimensional rather than four-dimensional, and a second round of analysis confirmed this. Based on these results, the valid and reliable fourth and final version of the evaluation framework for learning analytics (EFLA-4) was constructed. The framework has a learner and a teacher section and consists of three dimensions with a total of eight items.
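
For readers who want to run this kind of check on their own EFLA data, here is a minimal sketch of a principal component and reliability analysis in Python using numpy and scikit-learn. It illustrates the general technique only, not the exact procedure of the original study; the response matrix and the item-to-dimension assignment below are made up:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) answer matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Made-up response matrix: 50 respondents x 8 items on the 1-10 scale.
rng = np.random.default_rng(0)
responses = rng.integers(1, 11, size=(50, 8)).astype(float)

# Principal component analysis on standardised answers: the number of
# components with an eigenvalue above 1 hints at how many dimensions
# the questionnaire actually measures.
scaled = StandardScaler().fit_transform(responses)
pca = PCA().fit(scaled)
print("Eigenvalues:", pca.explained_variance_)

# Reliability analysis per dimension, e.g. treating the first two items
# as a (hypothetical) Data dimension; values above roughly 0.7 are
# conventionally taken to indicate acceptable internal consistency.
print("Alpha (Data):", cronbach_alpha(responses[:, :2]))

With real questionnaire data, items that load on an unexpected component or that lower a dimension’s alpha would be candidates for refinement or removal, which mirrors the pruning described above.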

EFLA-4

[Figure: EFLA-4 questionnaire (greyscale)]