User guide

Last updated 12.1.2022

User guide for the Publication Forum classification 2019

The Publication Forum is a publication channel classification system implemented by the Finnish scientific community that supports the evaluation of the quality of research output. This user guide contains the recommendations of the Publication Forum Steering Group set by the Board of Directors of the Federation of Finnish Learned Societies (TSV) on the responsible use of the Publication Forum classification system to assist in the evaluation of research output.

The Publication Forum classification system was originally intended (i) for the evaluation of the average quality of the large numbers of publications produced by universities. The classification is not intended for (ii) the evaluation of the quality of smaller numbers of publications produced by units of universities or other research organisations, or of individual publications – articles or monographs – nor for (iii) the evaluation or comparison of individual researchers.

The Steering Group considers it necessary to provide instructions for the use of the Publication Forum classification system, as universities in Finland have recently also started to use it for (ii) unit-level and/or (iii) researcher-level evaluation and comparison (1). These instructions describe the underlying assumptions and limitations of the classification and provide guidelines for its use in the evaluation of research according to the principles of responsible metrics. They include, for example, the principles presented on the Publication Forum website and at various events. In the preparation of this user guide, international statements on responsible metrics have been used: the DORA declaration (2), the Leiden Manifesto for research metrics (3), and the Metric Tide report (4).

The first user guide was published by the Steering Group in 2012 (Final report of the Publication Forum project 2010–2012, appendix 1 (pdf)). The Steering Group will update the user guide as necessary. This webpage contains the version of the user guide updated in April 2019.

The user guide consists of four chapters:

1. Using the Publication Forum classification at unit and researcher level
2. Description and underlying assumptions of the Publication Forum classification
3. Background of the Publication Forum classification: funding model of universities
4. Limitations of the Publication Forum classification

1. Using the Publication Forum classification at unit and researcher level

According to the recommendations of the Leiden Manifesto for research metrics (5) and the Metric Tide report (6), the evaluation of the quality of research carried out by units of universities or other research organisations (ii) or by individual researchers (iii) must primarily be based on expert evaluation, but research metrics can be used to support it. If the Publication Forum classification is used to support the evaluation, consider the following:

  • limitations concerning the use of the Publication Forum classification system (Chapter 4);
  • also use other publication channel and/or publication-specific research metrics as diversely as possible and consider the differences and characteristics of various fields of science (7);
  • use the expertise of libraries and/or other bibliometric experts in the creation and interpretation of research metrics based on the Publication Forum classification;
  • explain to the personnel in a transparent way in which contexts and how the Publication Forum classification is used;
  • hear researchers’ views about the applicability of the Publication Forum classification for various evaluation purposes in their own field of science or research.

In addition to the considerations listed above, the following qualifications must be taken into account when using the Publication Forum classification:

(ii) Evaluations of universities and other research organisation units:

  • Evaluations using external Expert Panels. The Publication Forum classification is only suitable for reviewing the profiles and internal development of research units’ publication activities, not for comparison between scientific disciplines. In addition to other publication channel and/or publication-specific research metrics, the Expert Panels evaluating the research done in the units can also be provided with comparison data based on the Publication Forum classification.
  • Internal funding models. If funding is distributed to university units (faculties, departments, units, etc.) based on the research volume they produce, the Publication Forum classification can be used, in addition to other publication channel and/or publication-specific research metrics, as one indicator of the improvement in the average quality of a unit’s scientific publication activities.

(iii) Evaluation of individual researchers:

  • Recruiting, tenure track and individual performance. The evaluations must review the overall content of the researcher’s published output (e.g. research topics, methodologies, significance of the results) and the overall quality of the publications qualitatively on the basis of expertise, but quantitative research metrics can be used to support the evaluations. However, relying on a single publication channel classification alone, such as the Publication Forum classification, must be avoided in this situation (8). In other words, publication channel and/or publication-specific research metrics other than the Publication Forum classification must also be used, and as diversely as possible, taking into account the differences and characteristics of various fields of science. It is not recommended to set absolute quantitative publication criteria or goals for a researcher based on the Publication Forum classification (9).
  • Reward system. In addition to scientific publications, the overall reward system must consider merits related to education and societal interaction. The Publication Forum classification can be used, in addition to other publication channel and/or publication-specific research metrics, as one of the criteria for the researchers’ personal reward systems.
  • Validation of publications of a doctoral dissertation. It is not recommended to set any absolute requirements based on the Publication Forum classification for the validation of publications of a doctoral dissertation. 
  • Participation in conferences. It is not recommended to set the Publication Forum classification as a condition for compensating costs for participation and travel to a conference. The necessity of participation in conferences must be evaluated based on other criteria.

2. Description and underlying assumptions of the Publication Forum classification

In the Publication Forum (JUFO), national Expert Panels in each discipline identify and evaluate peer-reviewed international and Finnish scientific journals/series, conferences and book publishers (level 1). The Expert Panels also evaluate the publication channels, identifying those that are the most highly appreciated and the most influential within the scientific community (levels 2 and 3). In addition, JUFO lists publication channels which, according to the panels, do not (yet) meet the minimum criteria set for level 1 (level 0).

As a whole, the following general ideas prevalent among the scientific community serve as underlying assumptions of the Publication Forum classification system:

  • Scientific publication channels differ from each other on the basis of the average scientific quality, impact and significance of research (i.e. individual articles and monographs) published in them.
  • The evaluation of the scientific quality, impact and significance of publication channels is based on the average quality and impact of the articles and monographs published in these channels. An individual publication can, however, represent a higher or lower level of quality, impact or significance than the channel’s publications do on average.
  • Even though JUFO is a national system and the Expert Panels consist of researchers affiliated with Finnish universities and research institutions, the panels base their analysis of publication channels above all on the channels’ international appreciation and impact among the global scientific community, especially at the highest levels 2 and 3.
  • Publication channels publishing in Finland’s official languages can also be identified as representing the highest levels 2 and 3, especially in disciplines where, due to the Finnish context of the research subject, such channels can be seen to represent the highest international standard, or where a Finnish publication channel is recognised as particularly highly appreciated by the international scientific community as well.
  • Openness is not a criterion or indicator in the evaluation of the scientific quality of publication channels. Open access is, however, seen to improve the accessibility of publications and, consequently, their impact. For this reason, at the highest levels 2 and 3 the Expert Panels can favour a fully open access channel, or one that allows self-archiving of the peer-reviewed version of a manuscript, if that channel is considered an equal alternative in scientific quality to a channel in the same discipline that does not allow equally open access. From 2021, the funding model of universities will also reward open access.

3. Background of the Publication Forum classification: funding model of universities

One of the key purposes of the Publication Forum is its use in the funding model that the Ministry of Education and Culture (OKM) applies to the state funding of universities. In the OKM funding model, one of the indicators describing research is based on the number of publications produced by each university. The publication counts are weighted by multipliers which describe their average quality and are based on the Publication Forum level of the channels in which the publications appeared. This means that the subject of the review is (i) the university as an organisation, i.e. the average quality of the large number of publications produced by the organisation – and not (ii) the quality of smaller numbers of publications produced by units of universities or research organisations or of individual publications, nor (iii) the quality of the publications of individual researchers.
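
For illustration only, the sketch below shows in principle how such a weighted indicator can be computed from publication counts. The level weights, the monograph multiplier and the function name are hypothetical placeholders invented for this example; they are not the actual coefficients of the OKM funding model.

# Illustrative sketch only: the weights below are hypothetical placeholders,
# not the actual coefficients of the OKM funding model.
LEVEL_WEIGHTS = {0: 0.1, 1: 1.0, 2: 2.0, 3: 3.0}  # assumed JUFO level -> quality multiplier
MONOGRAPH_MULTIPLIER = 2.0                        # assumed extra weight for monographs

def weighted_publication_score(publications):
    """Sum level-based weights over a list of publications.

    Each publication is a dict with keys 'jufo_level' (0-3) and
    'is_monograph' (bool).
    """
    total = 0.0
    for pub in publications:
        weight = LEVEL_WEIGHTS[pub["jufo_level"]]
        if pub["is_monograph"]:
            weight *= MONOGRAPH_MULTIPLIER
        total += weight
    return total

# Example: three journal articles (levels 1, 2 and 3) and one level 1 monograph.
pubs = [
    {"jufo_level": 1, "is_monograph": False},
    {"jufo_level": 2, "is_monograph": False},
    {"jufo_level": 3, "is_monograph": False},
    {"jufo_level": 1, "is_monograph": True},
]
print(weighted_publication_score(pubs))  # 1.0 + 2.0 + 3.0 + (1.0 * 2.0) = 8.0

The point of the sketch is only that the indicator is a weighted sum over a university's entire publication output, not an assessment of any individual publication or researcher.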

In the OKM funding model, the review covers the more than 25,000 peer-reviewed publications produced annually. A publication-specific expert evaluation of these would constitute an unreasonable amount of work. Therefore, the expert evaluation in the JUFO process focuses on publication channels, not on individual publications. Ultimately, the purpose of the JUFO process and classification is to encourage the scientific community in Finland to strive, in addition to quantity, for quality and impact, i.e. to publish research results in publication channels which are valued by the scientific community, are demanding in terms of peer review and reach the widest critical expert audience.

Responsible research metrics also call for robust publication data, transparency, diversity and reflectivity (see, for example, the Metric Tide report). The publishing indicator of the funding model and the related JUFO classification implement the following principles of responsible research metrics:

  • Robustness (basing metrics on the best possible data in terms of accuracy and scope). The research metrics of the funding model are based on national publication data, which includes all peer-reviewed publications produced by universities. This publication data is significantly more comprehensive than the citation databases (Web of Science and Scopus), which focus on international scientific journals. The JUFO classification also increases the reliability of the publication data.
  • Transparency (those being evaluated can test and verify the results). The universities themselves produce and validate the publication data that the funding model is based on. The universities also have the opportunity to check the JUFO classifications of their own publications and those of other universities. Transparency is part of the quality assurance of the publication data. The JUFO classification is openly accessible and the scientific community participates in implementing it.
  • Diversity (accounting for variation by field, and using a range of indicators). The research metrics of the funding model take into account peer-reviewed journal articles, conference articles and articles in books, as well as monographs and edited works, irrespective of the country or language of publication. Professional and popular publications are also considered. Monographs are given a higher weighting than articles in journals, conferences and books. An effort is made to balance the JUFO classification between disciplines so that the funding model treats universities equally and provides encouragement irrespective of their disciplinary profiles. As other education and research indicators are also used in OKM’s overall funding model, publications alone do not determine the funding of universities.
  • Reflectivity (recognising and anticipating the effects of indicators, and updating them in response). The publication data enables the monitoring of changes that occur in publishing. In addition, research and surveys are carried out on the potential effects, which are also subject to public debate. The appropriateness of the indicators of the funding model is regularly assessed by broad-based working groups appointed by the ministry. The JUFO classification is regularly updated, and the Publication Forum Steering Group reviews and develops the classification.

4. Limitations of the Publication Forum classification

There are some limitations to using the JUFO classification to support research evaluation. These limitations are less problematic in assessments that concentrate on (i) the average quality of a large number of publications at the level of universities, such as OKM’s funding model, but play a bigger role in evaluations based on a smaller number of publications at unit or researcher level. If the JUFO classification is used to support (ii) the quality evaluation of a smaller number of publications produced by units of universities and research organisations or (iii) the evaluation of individual researchers, it is important to responsibly consider the following limitations:

  • Level quotas. The Expert Panels cannot classify all publication channels that are used and appreciated by the scientific community as levels 2 and 3; instead, they have to make choices within the level quotas. The purpose of the quotas is to balance the classification between different fields of science. The quotas are calculated on the basis of publishing volume, and in some cases the choice is affected by the size of a journal, i.e. the annual number of articles published in its issues.
  • Range of quality and impact within levels. The peer-reviewed publication channels are divided in the JUFO classification into three levels based on the average quality and impact of their publications. Level 1 is particularly extensive, which means that there can be a significant difference between the highest- and lowest-quality channels within this group. On the other hand, the difference in average quality is not necessarily that great between the highest-quality channels in level 1 and the channels in level 2. Even though passing the publication threshold of a level 1, 2 or 3 publication channel can in itself be considered an indication of the scientific value and significance of an article or book, there may be significant differences in the quality and impact of individual publications even within these channels.
  • Differences between fields of science. Since the JUFO classification aims to consider all fields of science in a balanced way, it does not fully correspond to an ideal classification that could have been made based on the starting points or special characteristics of each individual field of science or research. The selection of level 2 and 3 channels is based on the overall evaluation of large fields of science, which means that more specialised publication channels do not necessarily end up at the higher levels in all subfields. On average, only one third of peer-reviewed articles, monographs and edited works in a large field of science are placed at the higher levels, and only one in ten at level 3. However, there are differences in the distribution between the fields of science (Table 1). In addition, due to differences in publishing practices, research questions and methods, the number of publications produced by individual researchers in higher-level channels varies both between and within fields of science.
Table 1. Peer-reviewed publications by JUFO level, 2015–2017

Field of science                Level 0  Level 1  Level 2  Level 3
1 Natural sciences                   7%      56%      27%      11%
2 Engineering and technology        16%      59%      20%       5%
3 Medical and health sciences        3%      66%      21%       9%
4 Agricultural sciences              7%      59%      22%      12%
5 Social sciences                   14%      52%      24%      11%
6 Humanities                        16%      46%      26%      12%
Total                               10%      57%      24%      10%
  • Relationship with impact factors. In many fields of science, in which journals indexed in international citation databases cover the great majority of scientific publishing, the impact and prestige of channels are usually measured by the Journal Impact Factor (JIF). In JUFO, the task of the Expert Panels is to balance the classification between various fields of science and research. Because JIF values vary between fields, there are many cases in which the classification of journals at levels 2 and 3 does not follow the ranking order based on impact factors. The panels have also favoured journals publishing original research at the highest levels at the expense of review journals, irrespective of the latter’s high JIF values.
  • Level 0 ambiguity. Publication series, conferences and book publishers that, at the time of the evaluation, did not meet the level 1 requirements concerning the editorial board and peer review of a scientific publication channel are placed at level 0. Some level 0 channels may, however, meet these requirements. For example, channels that are just starting their operation can initially be placed at level 0 until the panels are better equipped to evaluate their publishing. At the same time, the JUFO Expert Panels can also place peer-reviewed channels that are considered marginal for Finnish research or poor in quality at level 0 (for example, so-called “predatory journals”). Peer-reviewed channels published by universities and research institutions have also been placed at level 0 if they mainly serve the needs of researchers in their own organisation. Drawing the line between academic/scholarly channels and those intended for professional and general audiences is not always clear either. In other words, level 0 publication channels may publish scientific articles and books which have been duly peer-reviewed and which would deserve to be acknowledged, for example, in evaluations concerning individual researchers.
  • Changes in the classification. JUFO levels 2 and 3 are updated once every four years, but in exceptional cases minor changes can also be made in the intervening years. Changes concerning levels 0 and 1 can take place annually. The evaluation must take into consideration the fact that a JUFO level may change during the publication process or evaluation period without the researcher being able to anticipate this. Since levels 2 and 3 are mainly updated every four years, changes in the appreciation of publication channels are reflected in the JUFO classification with some delay. At the time of the evaluation, it might therefore not have been possible to recognise the value of certain important publication channels that are gaining in appreciation.

References

1 Wahlfors, L. & Pölönen, J. (2018). Julkaisufoorumi-luokituksen käyttö yliopistoissa [Use of the Publication Forum classification in universities], Hallinnon Tutkimus 37(1), 7–21.

2 San Francisco Declaration on Research Assessment: DORA declaration

3 Hicks, D., Wouters, P. F., Waltman, L., de Rijcke, S., & Rafols, I. (2015). The Leiden Manifesto for research metrics, Nature 520, 429–431. doi:10.1038/520429a.

4 Wilsdon, J. et al. (2015). The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management, HEFCE. doi:10.13140/RG.2.1.4929.1363.

5 “Quantitative evaluation should support qualitative, expert assessment. Quantitative metrics can challenge tendencies toward bias in peer review and facilitate deliberation. This should strengthen peer review, because judging colleagues is difficult without relevant information. However, assessors must not be tempted to let the numbers make the decisions. Indicators must not substitute for informed judgement. Everyone remains responsible for their own assessments”: Leiden Manifesto

6 “Quantitative assessment can support qualitative assessment, but not replace it”: the Metric Tide report.

7 Other research metrics based on the publication channel include, for example, impact indicators such as the Journal Impact Factor (JIF), Source Normalized Impact per Paper (SNIP) and SCImago Journal Rank (SJR); the national classification levels in Norway and Denmark, which are based on expert evaluations; and discipline-specific publication channel classifications, such as the Association of Business Schools Academic Journal Quality Guide (ABS) and the Nature Index. Citations to articles and books based on Web of Science, Scopus and Google Scholar data represent publication-specific research metrics, and the attention received by publications in social media is reflected in the number of downloads and mentions (altmetrics).

8 The recommendation of the DORA declaration is: “Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate indicator of the quality of individual research articles, to evaluate an individual scientist’s contributions, or when making decisions related to hiring, promotion, or funding.” DORA declaration

9 If indicative criteria or goals related to the Publication Forum classification are set, it is recommended to apply the level that the publication channel had at the time these criteria became public knowledge (for example, when the tenure-track recruitment took place or an agreement was made).