4. Quality

The issue of quality is a problem for digital resources in general and OER in particular. Indeed, quality is one of the most frequently discussed issues surrounding OER and a common obstacle to their use. OER running on MediaWiki software pose a particular challenge in terms of ensuring quality, as they often allow unregistered users to contribute and can thus be created by practically anyone. The low quality of some resources influences attitudes towards OER as a whole, creating a certain mistrust of using them for educational purposes.

This mistrust is often unjustified. The term open educational resources denotes a rather diverse set of information and knowledge repositories on the internet. Neither these resources nor their quality can be discussed per se, as they range from personal blogs, in which unknown authors present their opinions, to high-quality materials such as peer-reviewed scientific articles. The situation is further complicated by the fact that there are no standards (criteria) of quality generally acknowledged by the creators or users of OER. It is therefore always necessary to assess each specific resource (or repository) or, better still, each material contained within it separately – this applies especially to cases in which there is no official guarantor of quality (e.g. a respected author or institution). Issues of OER quality can be broken down into four areas:

a. The environment of the given system;

b. Content; 

c. Formal and ethical attributes of quality; 

d. User aspects. 

In terms of the environment of the given system, we can define criteria concerning how its operation is ensured from a technical standpoint (e.g. how quality is ensured and indicated; who can edit the given resource; how the sustainability of the resource and the updating of materials are ensured; whether a quality guarantor, i.e. an author or reviewer, is listed – see the table below). It should be noted that a resource may fulfill the criteria for the quality of the technical environment and yet not contain high-quality texts for study. OER quality primarily means quality of content, but this question presents a significant problem. Answering it in the sense of “is the given resource of good quality?” requires a content analysis, which is demanding both in terms of time and in finding relevant experts who could carry it out, for example in the form of a peer-review process.
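Purely as an illustration, the environment criteria listed above could be expressed as a simple machine-readable checklist. The field names below are hypothetical and do not correspond to any OER metadata standard:

```python
from dataclasses import dataclass


@dataclass
class EnvironmentCriteria:
    """Hypothetical checklist for the technical environment of an OER system."""
    quality_indicated: bool    # is quality ensured and visibly indicated?
    editing_restricted: bool   # is editing limited to identified users?
    sustainability_plan: bool  # is long-term operation and updating ensured?
    guarantor_listed: bool     # is an author or reviewer named as guarantor?

    def satisfied(self) -> bool:
        """True only if every environment criterion is met."""
        return all((self.quality_indicated, self.editing_restricted,
                    self.sustainability_plan, self.guarantor_listed))


# An openly editable wiki would fail the editing_restricted criterion:
wiki = EnvironmentCriteria(True, False, True, True)
print(wiki.satisfied())  # → False
```

As the text stresses, passing such environment checks says nothing about the quality of the content itself; it only describes how the system is operated.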

During such an analysis, various aspects of quality may be considered, such as: 

  • text complexity – in the sense of including various aspects and points of view in order to provide a balanced opinion (i.e. is it sufficiently complex?), 

  • processing (has it been properly processed stylistically, grammatically and visually?), and 

  • credibility (verifiability of information via respected sources). 

Some of these aspects are subjective, not only in the sense of an expert’s subjective opinion, but also in regard to the point of view of a student using the resource (e.g. a student in his/her first year of university bachelor’s study has different complexity requirements than a doctoral student). 

Table. Levels of OER content quality evaluation, characterizing social and technological aspects (after Clements et al., 2015; see also the references cited therein).

Approach to quality evaluation – References

Peer review system / quality evaluation by users, usually using a Likert scale (1–5):
Atenas and Havemann, 2014; Larsen and Vincent-Lancrin, 2005; Schuwer et al., 2010; Windle et al., 2010; Minguillón et al., 2010; Stacey, 2007; Lefoe et al., 2009; Catteau et al., 2008; Li, 2010; Krauss and Ally, 2005; Sanz-Rodriguez et al., 2010; Sampson and Zervas, 2013; Currier et al., 2004; Zervas et al., 2014; Liddy et al., 2002; Waaijers and van der Graaf, 2011; Venturi and Bessis, 2006; Zhang et al., 2004

Tools for quality evaluation by users (e.g. LORI):
Atenas and Havemann, 2014; Clements and Pawlowski, 2012; Downes, 2007; Richter and Ehlers, 2010; Atkins et al., 2007; Sinclair et al., 2013; Vargo et al., 2003; Defude and Farhat, 2005; Kumar et al., 2005; Alharbi et al., 2011

Recommendation tools (best sources):
Manouselis et al., 2013; Atenas and Havemann, 2014; Pegler, 2012; Petrides et al., 2008; Adomavicius and Tuzhilin, 2005; Duffin and Muramatsu, 2008; Manouselis and Sampson, 2004; Manouselis et al., 2011; Li, 2010; Sanz-Rodriguez et al., 2010; Sabitha et al., 2012; Sampson and Zervas, 2013; Zervas et al., 2014

Commenting:
Minguillón et al., 2010; Catteau et al., 2008; Li, 2010; Vargo et al., 2003; Sanz-Rodriguez et al., 2010; Sampson and Zervas, 2013; Waaijers and van der Graaf, 2011

Favorites:
Minguillón et al., 2010; Sanz-Rodriguez et al., 2010; Sampson and Zervas, 2013; Zervas et al., 2014

Social tagging:
Minguillón et al., 2010; Stacey, 2007; Sampson and Zervas, 2013

Flagging (reporting broken links, unsuitable content, etc.):
Sinclair et al., 2013; Clements and Pawlowski, 2012
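Several of the approaches in the table (user ratings on a Likert scale, flagging of broken links or unsuitable content) can be combined into a crude quality indicator. The sketch below is a minimal illustration only, assuming ratings of 1–5 and a count of flag reports; the function name and threshold values are arbitrary assumptions, not taken from the cited studies:

```python
def quality_indicator(ratings, flags, min_ratings=5, max_flags=2):
    """Combine Likert ratings (1-5) and flag reports into a rough verdict.

    Returns 'insufficient data', 'flagged', or the mean rating.
    Thresholds are illustrative, not drawn from the literature.
    """
    if len(ratings) < min_ratings:
        return "insufficient data"  # too few ratings to trust the mean
    if flags > max_flags:
        return "flagged"            # repeated reports of problems
    return round(sum(ratings) / len(ratings), 2)


print(quality_indicator([5, 4, 4, 5, 3], flags=0))  # → 4.2
print(quality_indicator([5, 5], flags=0))           # → insufficient data
```

Such a mechanical score can only support, never replace, the expert content analysis discussed above, since ratings reflect the subjective views of the users who happen to respond.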

However, quality assessment may be made easier by criteria against which given resources can be evaluated. We will deal with these criteria in the following subchapters.