A New Scale for Universities
A new, innovative method of university ranking was presented in Moscow at the international conference ‘Lessons of multi-dimension ranking of Russian universities: from trial to practice’, which took place on April 25-26, 2013. It is a new-generation ‘university compass’: multi-faceted, detailed, and more objective. It was created alongside a new European method of multi-dimensional global university ranking, U-multirank.
Gero Federkeil, leading expert at the CHE Centre for Higher Education Development, Germany, and expert at IREG (International Observatory on Academic Ranking and Excellence), participated in developing the methodology of the new Russian ranking and spoke to us about these new Russian and European university rankings.
— So, the Russian rankings are based on some principles of IREG, such as research, teaching...
— Knowledge transfer, international orientation and regional engagement. I think this Russian initiative started with really very good, thorough and serious analysis of the existing rankings and indicators. It was very impressive to see this process and I think this project matches international standards very closely.
— There are a lot of indicators in the Russian ranking.
— Yes, there are a lot of indicators; maybe a few too many. But I think we needed to start with a broader view of indicators, and then narrow them down to the core indicators that are most relevant.
— Do you think this ranking pays attention to the uniqueness of the Russian university system? Universities are so different in Russia.
— Yes. My impression is it takes into account the particular structure and situation of Russia. But, nevertheless, of course, there are some shared global issues for higher education in all countries. And these are also included in the system. So I think we’ve reached a good balance between global issues of higher education and national specifics. For example, different types of universities and institutions that have to be compared, and ensuring that we are comparing “like with like” within these groups.
— Do you think it is possible to make an objective ranking? There are so many discussions about criteria, audiences, and so on.
— I think there is no one single objective ranking. Every ranking, through its selection of indicators and their weighting, reflects the views of those who produce it. I think no one model is totally right, but some are useful. And I think this is the same for rankings. Some rankings can give useful information to particular users like students, the institutions themselves, or the government. They (those who produce the rankings) have to bear in mind that they do not tell the one and only truth.
— Do you think a wider range of Russian universities will have a chance to be included in global rankings such as THE (Times Higher Education), QS (Quacquarelli Symonds World University Rankings), the Shanghai ranking (ARWU – Academic Ranking of World Universities), etc.?
— With regard to these three existing systems, I don’t think that many Russian institutions will show up in the top hundred within the next, let’s say, ten or twenty years. But this is one of the limitations of those rankings. They cover about three per cent of all universities from around the world. And they focus on research universities. But we have many other universities in a huge number of countries. Some of them are very good at doing other things. Some are really excellent at teaching, or excellent with regard to employability, or to knowledge transfer. But these institutions never show up in these rankings. So, this is why we have developed another approach, with U-multirank, which is broader, and which wants to be more comprehensive and open to other types of institutions.
— And the audience is huge – students, academic staff, policy makers, businessmen, employers.
— Yes, there’s a huge audience. And you need another methodology. You have to look, as the Russian system does, at teaching, knowledge transfer, and the original role of universities.
— And focusing on a “like with like” comparison (universities of one type, for example), not comparing “apples with oranges”?
— Yes. Comparing “like with like”. Global rankings do actually compare “like with like”. Because they only include one type of institution – internationally oriented research universities. As soon as you want to look at original teaching institutions, or specialized establishments, like music, or art schools, you need a different methodology.
— Is there some disciplinary distortion in rankings with regard to the humanities?
— Yes. And I think this is one of the problems with the existing global rankings: universities which are top in terms of research in the humanities but not so strong in the natural sciences do not have a chance to show up at the top of the rankings, due to the indicators used.
— I'd like to ask about U-multirank. Will this include humanities in the future?
— In the future, yes. That means that we have to think about the indicators that are suitable for the humanities. Of course, we cannot look at the publications in Science and Nature in this case, or Nobel Prize winners. We have to develop different indicators. And we'll do this in close discussion and consultation with the stakeholders in these areas.
— And what parameters are crucial for U-multirank?
— We try to use a broad range of indicators. At the moment U-multirank includes about 25 to 30 indicators. But we look at the set of indicators that are relevant to particular users. One indicator may be relevant, and another not so relevant...
— So, they are focused.
— This is why we, as creators of the system, do not decide which indicators are the most relevant. And this means not giving weighting to the different indicators. This system is flexible. It is the user who decides what is more relevant and what is less relevant.
Olga Sobolevskaya, specially for HSE News Service