Westminster Higher Education Forum – 31 October 2016
The forum focused on two themes: lessons learned from REF 2014 and plans for REF 2020/21 following the Stern Review. It was stressed several times that peer review would remain at the centre of the process, so that the sector would continue to own it while also engaging with metrics as appropriate. Areas for improvement include incentivising collaboration between universities, greater recognition of staff development (protecting early-career researchers' freedom to follow their own research paths), removing constraints on interdisciplinary research, and limiting game-playing (easier said than done). There should also be more formal mechanisms across sub-panels to respond appropriately to inter-, multi-, cross- and transdisciplinary research. Consultation will start at the end of November 2016, with initial decisions to be published in mid-2017. The three main topics of discussion were 100 per cent submission, the number of outputs, and impact.
- 100 per cent submission
The forum cautioned against unravelling the equality infrastructure and called for a proper equality assessment before any decisions are made. It is also unclear how ‘research active’ staff will be defined. It has been suggested that HESA returns be used to identify staff on research-only or teaching-and-research contracts, but it may be necessary to mitigate against unforeseen circumstances. To maintain the volume of outputs, the number of staff submitted will be multiplied by two to produce the number of outputs to be submitted (e.g. 12 staff = 24 outputs). This raises the question of how large or small a contribution to a submission is appropriate for an individual member of staff. Also, if all research-active staff are submitted, would this mean that each university contributes to every UoA for which it has staff, rather than submitting to selected UoAs? This is currently the biggest challenge: the idea is good, but it is unclear how it will work in practice.
- Number of outputs
It is expected that the requirement will be two outputs per FTE, spread across each unit, decoupling individuals from outputs. Individual circumstances will not be taken into account, since units would be selecting outputs rather than people. Concern was expressed about ending output portability: this would provide equality of opportunity, but it would also risk ECRs (and more experienced academics) losing their key bargaining chips in the form of REF outputs. It may also suggest that universities own outputs, which would be demoralising for academics receiving little or no support from their universities. It is also unclear what ‘demonstrably generated’ means in reference to research done for a publication at an institution, whether the definition will have a negative impact on career progression, people on fixed-term contracts, etc., and, if so, how this will be mitigated.
Institutional-level impact case studies for impact arising from collaborative and inter-institutional work will be required, or strongly encouraged, to support the case for environment and impact. The questions arising are how many will be required, how this will affect the overall number of case studies, what makes a case study institutional, and who will decide this and how. A pilot is required.
Impact will continue to play an important role in the assessment. The main challenge for impact evidence gathering is that much of the information remains in silos and is sometimes not known at all.
Moving forward with impact – key principles:
– Capture impact in real time: readily identify, capture and report impact through a systematic approach. As in REF 2014, a 20-year period will be allowed for the underpinning research and 4–5 years for impact case studies; it has not been confirmed whether previous case studies can be reused. The starting point for research will be brought forward (to 1998?).
– Structured data for robust analysis: using ORCID, etc. in a more systematic way.
– Collaboration: universities, technology, funders, researchers.
Impact might include work done outside the sector (in industry, the civil service, etc.) to encourage hiring from outside the sector. Impact will also be broadened to include impact on teaching at the institution. There is a potential risk of contracts being converted to teaching-only, but departments showing a spike in teaching-only contracts just before the REF might be penalised (somehow). The proposal is to decouple the number of case studies from the number of staff submitted per UoA (minimum one), link impact to research activity and a body of work, and include institutional case studies and impact on teaching (capacity building).
While the case study approach is preferable to a metrics-based one, it is not yet clear how the value of case studies will be measured. This may be done through unobtrusive assessment (using research data management systems) in a discipline-agnostic way, to recognise the contribution of many disciplines to a single case study.