NEW DELHI: The Union Education Ministry, earlier in August, released the National Institutional Ranking Framework (NIRF)-based all-India university and college rankings 2024. This year, IIT Madras retained its top position, ahead of other IITs and IIMs as well as other government and private educational institutions.
Since 2016, the National Institutional Ranking Framework (NIRF) has been a key tool for evaluating higher education institutions (HEIs) in India, aiming to promote transparency and accountability. However, a recent research paper by former IIT-Delhi director and current vice chancellor of the Birla Institute of Technology and Science (BITS), V Ramgopal Rao, along with Abhishek Singh, highlights significant inconsistencies in the ranking process.
Published in the Current Science journal, the study points out that inadequate transparency in methodology and an over-reliance on self-reported data are among the issues that raise concerns about the reliability of the rankings.
These discrepancies suggest that, despite the NIRF’s objective to provide a fair assessment, the methodology may not fully capture the true performance of institutions, indicating the need for a closer and more thorough examination.
Understanding NIRF Ranking Methodology
The NIRF rankings are based on five key parameters: Teaching, Learning, and Resources (TLR); Research and Professional Practice (RP); Graduation Outcome (GO); Outreach and Inclusivity (OI); and Perception (PER). Each of these parameters is weighted differently, and the final ranking is determined by combining the scores. TLR evaluates factors such as the faculty-student ratio, research publications, and infrastructure. RP focuses on research output, patents, and industry collaborations. GO measures student success rates, placements, and alumni achievements. OI assesses diversity, outreach programmes, and social impact. Lastly, PER considers the perception of the institution among peers, employers, and students.
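In essence, this is a weighted-sum aggregation: each parameter score is multiplied by its weight and the results are added to give a composite score. The sketch below illustrates the mechanism only; the weights and parameter scores used here are illustrative assumptions, not the official NIRF values.

```python
# Illustrative sketch of a weighted-sum composite score, as used in
# NIRF-style rankings. The weights below are ASSUMED for illustration
# and are not the official NIRF weights.
WEIGHTS = {"TLR": 0.30, "RP": 0.30, "GO": 0.20, "OI": 0.10, "PER": 0.10}

def composite_score(scores: dict[str, float]) -> float:
    """Combine per-parameter scores (each out of 100) into one total."""
    return sum(WEIGHTS[p] * scores[p] for p in WEIGHTS)

# Hypothetical institution's parameter scores (invented for the example)
example = {"TLR": 80.0, "RP": 70.0, "GO": 90.0, "OI": 60.0, "PER": 50.0}
print(round(composite_score(example), 2))  # prints 74.0
```

Under this scheme, a small change to any weight can reorder institutions whose composite scores are close, which is one reason transparency about the weighting matters.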
Although this approach offers a structured evaluation, certain limitations challenge the accuracy and fairness of the results. In their detailed study of the NIRF institutional rankings 2024 and of how data is used to evaluate the various scoring criteria, the authors identified several challenges and limitations.
Challenges in NIRF’s Approach to Research Evaluation
One of the main issues with the NIRF rankings is their heavy reliance on bibliometrics, which measures the impact of research based on the number of publications and citations. While numbers can offer some insight into research activity, they don’t fully capture the importance, innovation, or social impact of the work.
This focus on bibliometrics can lead to a skewed assessment of an institution’s contribution to research. This is because it doesn’t account for other valuable outcomes beyond traditional publications. Furthermore, the NIRF rankings depend heavily on data from commercial databases, which might miss out on unique research contributions that don’t fit into traditional categories.
Perception versus Reality in NIRF Rankings
Another challenge with the NIRF rankings is the inclusion of a ‘Perception’ factor, which is somewhat subjective. This factor relies on opinions from academics and employers, but these opinions can be influenced by things like an institution’s reputation or publicity, rather than its current performance. This reliance on perception can lead to inconsistencies, where institutions with a strong history might be favoured over newer ones that are doing equally well or even better. Additionally, some institutions may not get a fair assessment because of lower participation in the surveys used to gather perception data.
Issues with Regional Diversity in NIRF Rankings
The NIRF rankings also consider regional diversity by looking at the percentage of students from other states or countries. However, this can be problematic for institutions located in states with large populations, as it may create a bias against them. To address this, a more balanced approach, similar to international rankings, could be used to adjust for population differences. This would ensure that institutions in larger states are not unfairly penalised.
Limited Assessment of Online Education
The NIRF rankings include an assessment of online education, focusing on the number of courses developed and available on the Swayam platform, an initiative by the Government of India. However, the rankings often overlook the quality of these online programmes. As a result, institutions that have developed extensive online education programmes might not receive the recognition they deserve, especially if the rankings don’t adequately consider the completion of online courses and exams.
Overlooking Teaching Quality
While research is important, the primary goal of educational institutions is to provide high-quality education. The NIRF rankings do not have specific measures to directly assess teaching quality, such as classroom observations or feedback from students and alumni.
This omission leads to an incomplete picture of how well an institution educates its students. Additionally, the rankings do not place enough emphasis on practical skills and hands-on learning, which are crucial in many fields. Consequently, institutions that focus on experiential learning might be undervalued in the rankings.
Concerns about Data Integrity
The effectiveness of the NIRF rankings depends on the data provided by the participating institutions. However, since institutions self-report this data, there can be inconsistencies in how they interpret and present it. Without a standardised approach to data reporting, there is a risk that the rankings might favour institutions that are better at presenting data rather than those that are truly excelling. This lack of consistency and accuracy in the data can undermine the credibility of the rankings.