Excellence in Research Evaluation

Seminar on Excellence in Research Evaluation
Canberra, 8–9 September 2009

Impacts on scholarship, research funding and publication

University researchers must achieve the highest levels of excellence if their work is to be of maximum benefit to Australian society: so everyone readily agrees. But how, in practical terms, is research excellence ever to be measured and evaluated? What are its key signs and indicators? How can we confidently identify the most brilliant research in Australian universities, and know that it equals or surpasses that undertaken anywhere in the world?

Over the past four years, two proposed systems of research assessment have been the subject of spirited debate in Australia: the now-abandoned Research Quality Framework (RQF) initiated by the Howard government, and the newly conceived Excellence in Research for Australia (ERA) scheme of the present government, now at a critical stage of early trialling by the Australian Research Council (ARC) and destined for inclusion within the government’s larger program for Sustainable Research Excellence in Universities and other components of its research support.

Australia has not as yet had practical experience of any system of research assessment, however, and speculation, expectation and anxiety have accordingly been rising in almost equal measure over recent months. How accurate a system of measurement, it is nervously asked, is ERA likely to provide? What are its probable strengths, and what are its possible flaws and shortcomings? How is its introduction likely to affect the behaviour of Australian researchers and publishers? What financial rewards and difficulties for Australian universities may ensue? What impact (to use a much-used and contested term from the exercise itself) is ERA likely to have on the lives of all Australian researchers?

The National Academies Forum, the peak body of Australia’s four learned academies, organised a two-day seminar in Canberra to look at such questions. It invited participants from each of the four academies, and from the Australian Publishers’ Association (a co-sponsor of the event), the National Health and Medical Research Council, the ARC, the Australian universities group of Deputy Vice-Chancellors (Research), the Council for Humanities, Arts, and Social Sciences, and the Federation of Australian Scientific and Technological Societies, together with a number of speakers from countries which have recently undergone, or are currently bracing themselves for, some kind of national research assessment.

David Sweeney, Director of Research, Innovation, and Skills at the UK’s Higher Education Funding Council, and Professor Richard Hartley, Director of the Institute for Humanities and Social Science Research at Manchester Metropolitan University, reported on the aims and effects of the UK’s (now superseded) Research Assessment Exercise (the RAE) and its successor, the Research Excellence Framework (REF). Professor Hans-Dieter Daniel from the University of Zurich described current attempts to assess research performance in Switzerland and Germany.

From such a diverse group, pondering the qualities and likely consequences of Australia’s new assessment system at such an early stage of its initial testing, differences of opinion were naturally, and at times energetically, voiced: on the value of bibliometrics, for example, of journal ratings, of esteem factors and other commonly employed indices of research excellence. Surprisingly, however (given this diversity of composition), the meeting also showed a high degree of convergence and consensus about the value of the new system, expressing strong confidence in the work of the ARC’s ERA team, and in the central proposition vigorously argued by Senator Kim Carr, Minister for Innovation, Industry, Science, and Research: that ERA was ‘a genuinely transformative initiative – one that will change the way we think about university research’, and – potentially, at least – the national university system itself. Supporting this transformation, the minister reminded the group, was the large injection of new funding the government had provided for university research: an additional $703 million in this year’s budget, over the four years of forward estimates.

The meeting welcomed the willingness of the ARC to consult widely and openly, to discuss all issues frankly, to improve the evaluation process and to ensure clarity about its objectives. As Minister Carr said, ‘the sector has to behave collegially and constructively’, and also quickly. The ARC issued its paper inviting consultation on ERA on September 7, for responses by September 25. Trials of the system are already under way in the physical sciences and the humanities, and the aim is to implement ERA fully next year.

It was noted with interest that the government would use the assessments to drive decisions on research funding under the new Sustainable Research Excellence (SRE) program. The meeting agreed that the ERA should be uncompromising in its defence of the very best research, and warmly welcomed the additional funding which the government had made available. The government’s paper on Sustainable Research Excellence in Universities observes that the performance targets for two-thirds of this SRE funding, $200 million per annum, ‘will eventually be based on ERA data’. If ERA data prove sufficiently robust and defensible, they will be used from 2012 onwards in budgetary formulations for the allocation of funds. Given that the ERA data will be based on research activity for the years 2003 to 2008, there will be a need for regular updates, especially since some of the most interesting and significant research initiatives will involve new collaborations and ideas. The government will also use ERA assessments when constructing and assessing the compacts that it will enter into with each university, to define roles, strengths and ambitions in the future.

Several speakers emphasised that ‘one size did not fit all’ with respect to disciplines, universities and collaborations. The meeting welcomed the ARC’s indication that the Research Evaluation Committees (RECs) would have the power to operate with flexibility, and urged that nomination of members of the RECs should be sought widely, with a view to ensuring a variety of experience and perspective. The ARC indicated that committees would be empowered to seek extra advice from specialist reviewers, including international experts, particularly to comment upon interdisciplinary overviews and international collaborations.

Some participants already had doubts about the accuracy and impact of bibliometrics, which were seen to provide quite crude data of limited value. Reports from those involved in the recent UK assessment exercise served to intensify such doubts. The ARC representatives said that the metrics would be available to the experts on the RECs, but assured the meeting that there would be no formulaic application of metrics, either with respect to publications or in terms of ‘adding up scores’. As the minister put it, ‘bibliometric indicators have a place, but cannot tell the whole story on their own.’

The inclusion of peer review in the assessment process was welcomed by a wide range of participants from different disciplinary backgrounds.

It was recognised that the application of research could deliver benefits to government, industry, business, the wider community, and the environment; and that such public uptake was an important indicator of the excellence of research conducted in Australia’s institutions.

Perhaps the most striking (and refreshing) aspect of the seminar was the frank recognition of the need for consultation between the minister, the ARC, and the academic community, particularly those who are committed to research at the highest level. Members of the ARC declared their readiness to discuss frankly problems that might arise with respect to the inclusion of collaborators who are not university staff, the recruitment of suitable panel members, the measuring of interdisciplinary and international collaboration, and the involvement of those people working in (for instance) industry, hospitals, medical research institutes, CSIRO and DSTO. There was acknowledgement that ERA should also value the transfer of knowledge from the universities to the benefit of a wider community.

Certain common issues would require further discussion: how early-career researchers might best be supported within the system, for example, and how new disciplines that do not fit well into the existing two-digit and four-digit categories might be appropriately dealt with. The meeting gave a clear welcome nevertheless to the new assessment process, which, it was agreed – in its finally tested and fine-tuned state – should serve accurately to identify and reward Australian research excellence, to the immense cultural, social, economic and physical benefit of the nation as a whole.