
An Automated Framework for CO-PO Attainment Calculation in Outcome-based Education

DOI: 10.17577/IJERTV14IS120186

Dr. E. Bhuvaneshwari, Associate Professor

Department of Artificial Intelligence & Data Science, Panimalar Engineering College, Poonamallee, Chennai-600123

Rafael Zieganpalg, Praveen K, Santhosh Kumar V S, Murali Krishnaa K, Harish V

UG Scholar

Department of Artificial Intelligence & Data Science, Panimalar Engineering College, Poonamallee, Chennai-600123

Abstract – Outcome-Based Education (OBE) emphasizes assessing student performance through well-defined Course Outcomes (COs) and their alignment with Program Outcomes (POs). To achieve this, Bloom's Taxonomy is adopted as the foundation for framing COs across cognitive levels, ranging from knowledge and comprehension to application, analysis, synthesis, and evaluation. Accurate measurement of CO-PO attainment is critical for continuous improvement and accreditation requirements. However, conventional manual approaches are time-intensive, error-prone, and inconsistent. This paper proposes a systematic framework for CO-PO attainment evaluation that integrates Bloom's Taxonomy with a structured calculation model. Direct attainment is computed from student performance, while indirect attainment is gathered through surveys, ensuring a holistic measure of learning achievement. Weighted mapping of COs to POs is carried out to derive final attainment levels. To improve efficiency, the process is automated using tools such as Excel and Python, providing accuracy, transparency, and reproducibility. A sample dataset illustrates how this framework simplifies evaluation and supports evidence-based curriculum enhancement. The proposed approach not only reduces faculty workload but also ensures a scalable and reliable solution adaptable to any academic program, thereby strengthening the implementation of OBE.

Keywords – Outcome-Based Education (OBE), Course Outcomes (CO), Program Outcomes (PO), CO-PO Attainment, Bloom's Taxonomy, Accreditation, Direct and Indirect Assessment, Automated Evaluation, Curriculum Enhancement, Educational Analytics.

1 INTRODUCTION

Outcome-Based Education (OBE) has become a globally recognized approach to enhancing the quality of teaching and learning by shifting the emphasis from traditional input-based methods to measurable student outcomes. In this framework, the focus is not only on what instructors teach, but more importantly on what learners are expected to achieve by the end of a course or program. The central elements of OBE are Course Outcomes (COs), which define the specific competencies students should acquire at the completion of a subject, and Program Outcomes (POs), which represent the broader skills and attributes expected of graduates at the program level. Establishing a strong alignment between COs and POs is therefore critical for ensuring that the curriculum remains relevant, industry-focused, and consistent with accreditation standards.

Despite the conceptual clarity of OBE, one of the most persistent challenges institutions face is the accurate measurement and evaluation of CO-PO attainment. In most academic settings, manual methods are still widely employed, involving tedious processes such as mapping questions to COs, aggregating student marks, and generating attainment levels. These conventional approaches are time-consuming, error-prone, and lack transparency. Moreover, when dealing with large student cohorts across multiple courses, the process becomes unmanageable and inconsistent. The absence of standardized tools and automated systems also makes it difficult for faculty to generate reproducible results, thereby affecting institutional accountability and accreditation readiness.

To address these limitations, structured frameworks supported by automation have been increasingly recommended. Among the most effective tools in this context is Bloom's Taxonomy, a hierarchical model that categorizes learning objectives into six levels: Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation. By integrating Bloom's Taxonomy into the attainment process, educators can systematically design assessments that target various cognitive levels, from basic recall of facts to higher-order problem-solving and critical thinking. This ensures that student learning is measured holistically, covering both fundamental concepts and advanced application skills.

In the context of CO-PO attainment, Bloom's Taxonomy enables institutions to move beyond numerical scores and examine the depth of cognitive achievement among students. For instance, a question mapped to the Application level may indicate a student's ability to apply theoretical concepts, while one at the Evaluation level demonstrates critical judgment and decision-making skills. Such mapping provides greater insights into the actual competencies acquired, making the evaluation process more meaningful and aligned with program objectives.

The proposed work builds on these insights by introducing an automated framework for CO-PO attainment calculation. The framework integrates direct attainment (derived from student performance in internal assessments, assignments, laboratory work, and university examinations) with indirect attainment (gathered through surveys and feedback mechanisms). Weighted mapping is employed to consolidate these measures into a unified attainment matrix. Unlike manual approaches, the automated system leverages tools such as Microsoft Excel and Python to compute attainment scores, ensuring consistency, transparency, and scalability.

By reducing faculty workload and minimizing human error, this framework enables academic institutions to adopt a data-driven approach to curriculum improvement. It provides educators with actionable insights into student performance at both individual and cohort levels, allowing them to refine teaching methods, adjust assessment strategies, and strengthen curriculum design. Furthermore, the system supports accreditation processes by providing verifiable evidence of outcome achievement, thereby ensuring compliance with standards set by bodies such as NBA and NAAC.

2 LITERATURE REVIEW

Existing Works

Outcome-Based Education (OBE) has been the subject of extensive research and practice over the past three decades, primarily due to its transformative potential in linking academic learning to measurable competencies. The earliest conceptual foundations were established by Spady (1994), who described OBE as a learner-centered paradigm focused on achieving clearly defined outcomes rather than adhering to rigid content delivery. This model emphasized that all aspects of teaching, learning, and assessment should converge toward ensuring that students successfully demonstrate the intended learning outcomes. Building on this foundation, Biggs and Tang (2007) proposed the principle of constructive alignment, in which teaching methods, learning activities, and assessments are coherently designed to reinforce intended outcomes. Their work remains influential in guiding institutions toward aligning curriculum and evaluation with outcome expectations.

Professional accreditation bodies such as ABET (Accreditation Board for Engineering and Technology), NBA (National Board of Accreditation), and NAAC (National Assessment and Accreditation Council) have further reinforced the adoption of OBE frameworks in higher education. These organizations require institutions to demonstrate the mapping of Course Outcomes (COs) to Program Outcomes (POs) and the systematic measurement of attainment levels. The global emphasis on accreditation has therefore accelerated research on frameworks, tools, and methodologies that enhance the reliability and transparency of CO-PO attainment calculations.

One significant theme in the literature is the role of industry involvement in outcome design and assessment. Besterfield-Sacre, Shuman, and Wolfe (2000) highlighted that incorporating professional practitioners in curriculum planning, project evaluations, and assessment activities bridges the gap between academic learning and workplace expectations. Similarly, Prados, Peterson, and Lattuca (2005) argued that engaging industry stakeholders ensures curricula remain current with technological trends and professional requirements. Such collaboration enhances both CO attainment and employability of graduates, thereby validating the effectiveness of OBE frameworks.

Another area of focus has been the integration of ethical, professional, and lifelong learning components into OBE. Felder and Brent (2003) suggested that embedding ethics and professionalism within outcome frameworks prepares students for responsible practice in engineering, medicine, and other professions. This perspective aligns with the increasing emphasis on holistic education, where program outcomes extend beyond technical proficiency to include social responsibility, teamwork, and adaptability.

Several researchers have highlighted the limitations of manual attainment tracking methods. Conventional approaches typically involve faculty members mapping exam questions to COs, aggregating results, and calculating attainment percentages using spreadsheets. While feasible for small cohorts, this process becomes inefficient, error-prone, and inconsistent when applied to large classes or multiple programs. As Rajasekaran et al. (2019) noted, fragmented reporting and subjective evaluations undermine the reliability of attainment measurement. The literature increasingly supports the adoption of automated systems to overcome these limitations, offering improved efficiency, accuracy, and transparency.

The use of Bloom's Taxonomy as a framework for assessment design has been widely endorsed. By categorizing learning objectives into six levels (Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation), Bloom's model provides a systematic approach to designing and mapping assessments. Studies indicate that embedding Bloom's levels into CO-PO attainment ensures balanced evaluation of both lower-order and higher-order cognitive skills. For instance, questions targeting application and analysis levels validate students' ability to apply theoretical concepts, while synthesis and evaluation tasks measure critical thinking and judgment skills. This taxonomy-based mapping leads to a more holistic view of student competencies.

Recent research has also explored the role of technology and visualization in enhancing attainment analysis. Srinivasan and Kumar (2020) emphasized that dashboards and graphical tools improve the interpretability of attainment data for educators and administrators. Similarly, Narayan and Pillai (2018) demonstrated that visualization of CO-PO mapping supports quick identification of performance gaps, enabling timely interventions. Such tools are particularly valuable during accreditation audits, where institutions must present evidence of continuous improvement and outcome alignment.

Another dimension addressed in the literature is the scalability and adaptability of attainment systems. Shetty and Rao (2021) observed that traditional methods struggle to accommodate diverse student populations and evolving curricula. Thomas et al. (2022) proposed modular automated platforms capable of handling large datasets and dynamically adjusting to curricular changes. These scalable frameworks ensure long-term sustainability and institutional compliance with accreditation requirements.

The role of continuous monitoring and feedback has also been emphasized. Rajasekaran et al. (2019) argued that evaluating student outcomes at multiple points during a course provides a more accurate picture of learning progress than relying on a single examination. Automated attainment systems support this by aggregating data from multiple assessments, thereby reducing subjectivity and enabling early identification of learning gaps. Remedial measures, such as bridge courses and mentoring sessions, can then be introduced to strengthen student competencies.

In summary, existing literature strongly supports the integration of Bloom's Taxonomy, industry collaboration, ethical considerations, and technology-enabled tools in CO-PO attainment systems. Manual approaches, while foundational, are inadequate for large-scale application. Automated frameworks, supported by digital platforms, not only reduce errors but also enhance transparency and reproducibility. By incorporating real-time analytics, visualization, and scalable architectures, these systems provide actionable insights for faculty, improve student learning outcomes, and support accreditation processes. The review of prior works therefore establishes the need for a structured and automated CO-PO attainment framework that aligns with both academic and industry expectations, laying the foundation for the methodology proposed in this work.

3 METHODOLOGY

The proposed system introduces an automated framework for CO-PO attainment calculation within an Outcome-Based Education (OBE) paradigm. The methodology is structured into layered modules that collectively enable data collection, mapping, analysis, and reporting of attainment levels. By integrating Bloom's Taxonomy with automated workflows, the framework ensures transparency, scalability, and alignment with accreditation standards.

    1. SYSTEM OVERVIEW

      The system operates as a modular, role-based framework comprising faculty dashboards, administrative control panels, and backend computation engines. Faculty members upload assessment data (internal exams, assignments, lab results, and university examinations), while the system maps each question to corresponding Course Outcomes (COs). These COs are subsequently aligned to Program Outcomes (POs) using predefined mappings.

      The framework consolidates direct attainment (derived from assessments) and indirect attainment (captured from student feedback and surveys) into a unified attainment matrix. Automated calculations are performed using weighted formulas, eliminating manual inconsistencies and providing a standardized approach across departments.
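As a concrete illustration, the consolidation step can be expressed in a few lines of Python. This is a minimal sketch: the 80:20 split between direct and indirect attainment is an assumed weighting chosen for illustration, not a value specified by the framework.

# Minimal sketch of consolidating direct and indirect attainment for one CO.
# The 80:20 weighting is an illustrative assumption; institutions configure
# their own split between assessment-based and survey-based measures.
W_DIRECT = 0.8    # assumed weight for assessment-based (direct) attainment
W_INDIRECT = 0.2  # assumed weight for survey-based (indirect) attainment

def overall_attainment(direct, indirect):
    """Combine direct and indirect attainment (both on a 0-3 level scale)."""
    return W_DIRECT * direct + W_INDIRECT * indirect

# Example: a CO attained at level 3 directly but level 2 in exit surveys.
print(overall_attainment(3.0, 2.0))  # 2.8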

Figure 1: System Workflow for the Automated CO-PO Attainment Framework

    2. ARCHITECTURE AND MODULES

      The methodology incorporates five core modules:

1. User Access and Authentication: Faculty and administrators are assigned role-based credentials to ensure secure login. Faculty access course-level data, while administrators oversee institution-wide attainment records.

2. Faculty Dashboard: A centralized dashboard enables faculty to manage student sections, upload assessment marks, and monitor attainment progress. Graphical tools highlight trends, gaps, and CO achievement levels in real time.

3. Assessment and Mark Entry Module:

        • Manual Entry: Faculty enter marks directly into structured forms.

• Bulk Upload: Predefined Excel templates allow batch uploads of marks, with each question mapped to COs according to Bloom's Taxonomy levels.

4. Admin Dashboard: Administrators oversee subject allocation, workload distribution, and program-level attainment. This module consolidates attainment data across multiple sections and generates institutional performance reports.

5. Attainment Engine: The backend computation unit applies standardized formulas to calculate CO attainment, aggregate COs into PO attainment, and compute overall program performance; a sketch of this aggregation step follows the list.
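The sketch below illustrates the CO-to-PO aggregation performed by the attainment engine. The mapping strengths (1 = low, 2 = medium, 3 = high) and all sample values are assumptions for demonstration; the framework uses predefined weighted mappings, but the exact scale is institution-specific.

# Illustrative sketch of the attainment engine's CO-to-PO aggregation.
# Mapping strengths (1 = low, 2 = medium, 3 = high) and the sample values
# below are assumptions, not data from the paper.
co_attainment = {"CO1": 2.8, "CO2": 2.4, "CO3": 1.9}  # consolidated CO levels

# CO-PO mapping matrix: strength of each CO's contribution to each PO.
co_po_map = {
    "PO1": {"CO1": 3, "CO2": 2},
    "PO2": {"CO2": 1, "CO3": 3},
}

def po_attainment(po):
    """Weighted average of the attainment of all COs mapped to this PO."""
    weights = co_po_map[po]
    total = sum(co_attainment[co] * w for co, w in weights.items())
    return total / sum(weights.values())

for po in co_po_map:
    print(po, round(po_attainment(po), 2))  # PO1 2.64, PO2 2.03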

Figure 2: CO Analysis of Students

3. CO-PO MAPPING AND ATTAINMENT CALCULATION

The core of the system lies in the CO-PO mapping mechanism. Each assessment question is pre-linked to a specific CO, which in turn is mapped to one or more POs. Direct attainment is calculated based on thresholds (e.g., 85% for Level 3, 75% for Level 2, 60% for Level 1). Indirect attainment is derived from course exit surveys and feedback, capturing student perception of learning outcomes.
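The threshold logic just stated can be captured directly in code. In this sketch, the input is assumed to be the percentage of students who met the CO target, with levels taken from the thresholds above.

# Direct CO attainment levelling using the thresholds stated above
# (>= 85% -> Level 3, >= 75% -> Level 2, >= 60% -> Level 1, else Level 0).
def co_level(percent_meeting_target):
    """Map the percentage of students attaining a CO to a level 0-3."""
    if percent_meeting_target >= 85:
        return 3
    if percent_meeting_target >= 75:
        return 2
    if percent_meeting_target >= 60:
        return 1
    return 0

print(co_level(88))  # 3
print(co_level(70))  # 1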

    4. AUTOMATION AND TOOLS

      The framework employs Excel macros and Python scripts to automate the calculation process. Once marks are uploaded, the system automatically:

      • Maps questions to COs and corresponding POs.

      • Calculates direct attainment percentages for each CO.

      • Integrates survey results for indirect attainment.

• Generates consolidated CO-PO matrices and visual dashboards.

This automation ensures reproducibility, eliminates manual errors, and reduces faculty workload; a condensed sketch of the pipeline is shown below.
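The following sketch condenses the bulk-upload path of this pipeline. The file name, sheet layout (one row per student, one column per question), question-to-CO map, maximum marks, and the 60% per-question target are illustrative assumptions about the upload template rather than fixed details of the system.

# Minimal sketch of the automated marks-to-CO pipeline using pandas.
# All template details below (file name, columns, targets) are assumptions.
import pandas as pd

QUESTION_TO_CO = {"Q1": "CO1", "Q2": "CO1", "Q3": "CO2"}  # from the template
MAX_MARKS = {"Q1": 10, "Q2": 10, "Q3": 15}
TARGET = 0.6  # assumed: a student attains a question at >= 60% of its marks

marks = pd.read_excel("marks_upload.xlsx")  # hypothetical bulk-upload file

per_co = {}
for q, co in QUESTION_TO_CO.items():
    attained = (marks[q] / MAX_MARKS[q]) >= TARGET
    per_co.setdefault(co, []).append(attained.mean() * 100)

# Direct attainment per CO = average % of students meeting the target
# across all questions mapped to that CO.
co_percent = {co: sum(v) / len(v) for co, v in per_co.items()}
print(co_percent)  # e.g. {'CO1': 82.5, 'CO2': 74.0}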

    5. REPORTING AND VISUALIZATION

      The system produces section-wise, course-wise, and program-wise attainment reports, which are accessible through dashboards and downloadable in tabular or graphical formats. Key features include:

      • CO-wise performance heatmaps.

      • Comparative attainment charts across sections.

      • Consolidated PO attainment reports for accreditation audits.

These reports provide actionable insights for faculty to refine teaching methods, redesign assessments, and strengthen curriculum delivery; the sketch below illustrates one such report.
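As one example of such a report, this sketch renders a CO-wise heatmap with matplotlib. The section-wise attainment values and the choice of plotting library are illustrative assumptions; the paper does not prescribe a specific visualization toolkit.

# Illustrative CO-wise heatmap report; all values are hypothetical.
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical section-wise CO attainment levels (0-3 scale).
data = pd.DataFrame(
    {"CO1": [2.8, 2.5], "CO2": [2.1, 2.4], "CO3": [1.9, 2.2]},
    index=["Section A", "Section B"],
)

fig, ax = plt.subplots()
im = ax.imshow(data.values, cmap="RdYlGn", vmin=0, vmax=3)
ax.set_xticks(range(len(data.columns)))
ax.set_xticklabels(data.columns)
ax.set_yticks(range(len(data.index)))
ax.set_yticklabels(data.index)
fig.colorbar(im, label="Attainment level")
ax.set_title("CO attainment by section")
fig.savefig("co_heatmap.png")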

Figure 3: Overall student performance overview

Figure 4: Comparison table

    6. CONTINUOUS IMPROVEMENT CYCLE

The final step of the methodology ensures compliance with continuous improvement (CI) standards mandated by accreditation bodies. Attainment results are analyzed to identify underperforming COs, which are then addressed through targeted interventions such as remedial classes, curriculum redesign, or enhanced assessment strategies. The improved outcomes are fed back into the next academic cycle, creating a feedback loop for progressive enhancement of teaching-learning quality.

4 RESULTS AND DISCUSSION

The proposed automated framework for CO-PO attainment was implemented and tested across multiple courses to evaluate its effectiveness in streamlining assessment processes, reducing faculty workload, and improving transparency in Outcome-Based Education (OBE). The system was deployed in a real academic setting, where faculty members uploaded assessment marks, mapped them to Course Outcomes (COs), and obtained Program Outcome (PO) attainment levels through automated computation. The results were analyzed in terms of accuracy, efficiency, and their ability to provide actionable insights for continuous curriculum improvement.

    1. Attainment Levels Across Courses

      The system computed attainment levels for each CO by aggregating direct and indirect assessment data. Direct attainment was derived from internal assessments, semester examinations, laboratory performance, and assignments, while indirect attainment was gathered from course exit surveys. Results showed that a majority of students achieved Level 3 attainment (85%) in foundational courses, indicating strong conceptual understanding. For higher-order courses requiring analytical and problem-solving skills, attainment was distributed between Level 2 (75%) and Level 3, reflecting the challenges students faced when applying knowledge in complex contexts.

This outcome validates the effectiveness of Bloom's Taxonomy-based question mapping, where assessments were designed to span cognitive levels ranging from comprehension to evaluation. The results demonstrated that students performed consistently well at knowledge and application levels but showed variability in synthesis and evaluation tasks. These insights are valuable for faculty in adjusting teaching strategies, emphasizing problem-based learning, and strengthening critical thinking exercises.

Table 1: Attainment levels and corresponding percentages

Level    Attainment (%)
3        85
2        75
1        65

    2. Section-Wise Analysis

      One of the most significant contributions of the framework is its ability to generate section-wise attainment reports. In traditional manual systems, section-level differences are often overlooked due to the burden of aggregating large datasets. However, the automated system identified variations between sections, highlighting differences in student engagement and performance.

For example, in one subject, Section A demonstrated higher attainment levels compared to Section B, particularly in COs related to analytical problem-solving. This discrepancy was attributed to differences in teaching methodologies, frequency of formative assessments, and levels of classroom interaction. By surfacing such comparisons, the system provided faculty with evidence to harmonize instructional approaches and ensure uniform learning experiences across sections.

Figure 5: Section-wise CO attainment

    3. Visualization of Attainment Data

      The inclusion of graphical dashboards and visual analytics significantly improved the interpretability of attainment results. Bar graphs and heatmaps enabled faculty to quickly identify underperforming COs and correlate them with specific assessments. Unlike tabular formats, these visualizations offered a holistic view of both course-level and program-level outcomes.

      For administrators, the consolidated PO attainment charts provided an overview of how courses collectively contributed to program objectives. This feature is particularly valuable during accreditation audits, where institutions must present verifiable evidence of alignment between COs and POs. The visual reports also reduced the time spent on manual documentation, allowing faculty to focus on pedagogy rather than clerical tasks.

Figure 6: Comparative study

    4. Direct vs. Indirect Attainment

The integration of indirect attainment (from surveys) alongside direct assessment results offered deeper insights into the student learning experience. In some cases, survey responses revealed lower confidence levels in specific COs despite satisfactory exam performance. This discrepancy suggested that while students could reproduce knowledge in assessments, they did not feel adequately prepared to apply these skills in real-world contexts.

      Such findings underscore the importance of considering both objective measures (marks) and subjective perceptions (surveys) in outcome evaluation. By combining these dimensions, the system provided a balanced representation of learning achievement and highlighted areas where curriculum delivery needed to bridge the gap between academic performance and student confidence.

    5. Faculty and Administrative Benefits

Feedback from faculty indicated that the automated framework significantly reduced the workload associated with CO-PO calculations. Tasks that previously required several hours of manual effort, including mapping, computation, and report generation, were completed within minutes. The standardized templates for mark entry minimized errors, while automated calculations ensured consistency across courses and programs.

Administrators benefited from the ability to monitor attainment at an institutional level. By consolidating data across departments, the system provided program-wide insights into strengths and areas requiring improvement. For instance, the analysis revealed that while most POs related to technical competencies were consistently achieved, POs focusing on lifelong learning and ethical responsibility showed relatively lower attainment levels. These observations prompted curriculum committees to integrate more activities addressing professional skills, ethics, and self-directed learning.

    6. Insights for Continuous Improvement

Perhaps the most impactful contribution of the results is their role in enabling continuous improvement cycles. The framework identified underperforming COs, allowing faculty to plan remedial actions such as bridge courses, additional tutorials, or project-based activities. These interventions not only enhanced student performance but also strengthened institutional readiness for accreditation.

Moreover, the ability to track attainment trends across multiple semesters offered longitudinal insights. For example, improvements in certain POs over successive years validated the effectiveness of pedagogical reforms, while stagnation in others highlighted the need for further interventions. This longitudinal view ensures that continuous improvement is not a one-time activity but an ongoing institutional practice.

    7. Comparison with Manual Methods

When compared to traditional manual calculations, the automated system demonstrated clear advantages in accuracy, scalability, and reproducibility. Manual processes often introduced inconsistencies due to subjective interpretation of mapping rules and human error during aggregation. In contrast, the automated framework applied standardized rules across all courses, ensuring fairness and transparency. Furthermore, scalability was achieved as the system could handle large datasets spanning multiple departments without compromising accuracy or efficiency.

5 CHALLENGES

While the proposed automated framework for CO-PO attainment significantly improves transparency and efficiency in Outcome-Based Education (OBE), its practical implementation also encounters several challenges that must be acknowledged. These challenges span technical, institutional, and pedagogical dimensions, influencing both adoption and long-term sustainability.

    1. Data Quality and Consistency

One of the foremost challenges lies in ensuring data accuracy and consistency. The framework depends heavily on the quality of assessment data uploaded by faculty. Errors in mapping questions to Course Outcomes (COs), inconsistencies in mark entry, or incomplete survey responses can lead to incorrect attainment calculations. Establishing strict validation protocols and faculty training becomes essential to maintain data integrity.

    2. Faculty Adaptability and Training

Transitioning from manual methods to an automated framework requires faculty to adapt to new workflows, dashboards, and digital tools. While automation reduces workload in the long run, initial resistance may arise due to unfamiliarity with templates, mapping procedures, and analytical dashboards. Continuous professional development and hands-on training workshops are necessary to build confidence and ensure faculty fully utilize the system's potential.

    3. Integration with Existing Systems

Many institutions already use Learning Management Systems (LMS) or Examination Management Software. Integrating the proposed attainment framework with these systems is often complex, requiring compatibility in data formats and secure data exchange mechanisms. Without seamless integration, duplication of work or fragmented data silos may undermine the efficiency gains of automation.

    4. Addressing Subjectivity in Indirect Attainment

      Indirect attainment, derived from student surveys and feedback, introduces a degree of subjectivity into the evaluation process. Students may overestimate or underestimate their competencies, leading to discrepancies between direct and indirect attainment results. While combining both measures ensures a balanced approach, refining survey design and encouraging honest feedback remain ongoing challenges.

    5. Scalability and Maintenance

Although the system is designed to handle large datasets, scalability across departments and institutions requires robust infrastructure and continuous technical support. As the volume of courses and assessments increases, ensuring smooth performance, timely updates, and regular maintenance becomes critical. Resource constraints, particularly in smaller institutions, may limit the ability to sustain long-term deployment.

    6. Change Management and Continuous Improvement

    Perhaps the most complex challenge is embedding the framework into a culture of continuous improvement. Institutions may implement the system to meet accreditation requirements but fail to actively use insights for curriculum refinement. To address this, academic leaders must foster an institutional mindset where attainment analysis is not seen as a compliance activity but as a tool for pedagogical innovation and student success.

6 FUTURE WORK

The proposed automated framework for CO-PO attainment has demonstrated its potential to streamline assessment processes and improve transparency in Outcome-Based Education (OBE). However, several opportunities exist to extend the system's scope and enhance its capabilities in the future.

One promising direction is the integration of artificial intelligence (AI) and machine learning (ML) for predictive analytics. By analyzing historical attainment data, AI models can forecast student performance trends, identify at-risk learners early, and recommend targeted interventions such as bridge courses, remedial sessions, or personalized assignments. Such predictive insights can strengthen the continuous improvement cycle by shifting from reactive analysis to proactive curriculum planning.
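To make this direction concrete, the sketch below trains a simple classifier on hypothetical historical CO scores to flag at-risk students. The features, labels, model choice, and 50% risk cut-off are all illustrative assumptions; the paper proposes the direction without prescribing a specific model.

# Hedged sketch of at-risk prediction from historical CO scores.
# All data, the model, and the cut-off are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical history: per-student internal CO scores (0-3 scale) and
# whether the student ultimately failed to attain the course outcomes.
X = np.array([[2.8, 2.5, 2.9], [1.2, 1.0, 1.5],
              [2.1, 1.8, 2.0], [0.9, 1.1, 0.8]])
y = np.array([0, 1, 0, 1])  # 1 = did not attain outcomes

model = LogisticRegression().fit(X, y)

# Flag current students whose predicted risk exceeds an assumed 50% cut-off.
current = np.array([[1.4, 1.2, 1.6], [2.6, 2.7, 2.4]])
at_risk = model.predict_proba(current)[:, 1] > 0.5
print(at_risk)  # e.g. [ True False]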

    Another area of advancement lies in the development of adaptive dashboards and visualization tools. Current dashboards primarily present aggregated attainment results, but future iterations could allow interactive exploration, enabling faculty to drill down into specific COs, assessments, or student cohorts. Real-time analytics with customizable visualizations would further empower educators and administrators to make data-driven decisions with greater precision.

Additionally, the system can be extended to support integration with Learning Management Systems (LMS) and institutional databases. Seamless interoperability would eliminate duplicate data entry, synchronize assessments across platforms, and facilitate automated report generation for accreditation purposes.

Expanding the scope of indirect attainment measurement is another promising avenue. Incorporating alumni feedback, employer surveys, and internship evaluations could provide a more comprehensive picture of how program outcomes translate into professional competencies. This extension would align the attainment framework more closely with industry needs and graduate employability.

Finally, future work could focus on ensuring scalability across institutions by deploying the system as a cloud-based solution. This would reduce infrastructure costs, enhance accessibility, and allow benchmarking across universities. Collectively, these advancements would transform the framework into a more intelligent, adaptive, and industry-aligned solution for modern OBE.


7 CONCLUSION

This work presented an automated framework for calculating Course Outcome (CO) and Program Outcome (PO) attainment in the context of Outcome-Based Education (OBE). By integrating Bloom's Taxonomy with both direct and indirect assessment measures, the system ensures a structured, transparent, and reliable approach to evaluating student learning outcomes. The framework effectively addresses the limitations of manual methods, which are often time-consuming, error-prone, and inconsistent.

    The results demonstrate that automation not only reduces faculty workload but also enhances accuracy and reproducibility in attainment calculations. The use of dashboards and visual analytics provides clear insights into attainment trends, enabling faculty to identify strengths and weaknesses at both course and program levels. Section-wise analysis further highlights variations in teaching effectiveness and student engagement, offering opportunities for targeted interventions. The inclusion of indirect attainment ensures that student perceptions are considered alongside objective measures, resulting in a balanced evaluation of outcomes.

Beyond immediate efficiency gains, the system strengthens institutional accountability by supporting accreditation requirements and providing verifiable evidence of continuous improvement. By identifying underperforming COs and POs, the framework encourages data-driven curriculum refinement and fosters a culture of ongoing enhancement in teaching and learning practices.

In conclusion, the proposed framework represents a significant step toward aligning academic assessment with industry expectations and accreditation standards. With future extensions such as AI-driven predictive analytics, LMS integration, and expanded indirect feedback mechanisms, the system holds promise as a scalable and sustainable solution for advancing the goals of modern OBE.

8 REFERENCES

1. A. Mulla, "A Case Study on Course Outcome & Program Outcome Mapping Levels Based on Competency Performance Indicators," Journal of Engineering Education Transformations, vol. 36, no. 4, pp. 59-65, 2023. DOI: 10.16920/jeet/2023/v36i4/2607

2. S. J. Suji Prasad, "Assessment of Program Outcomes in Co-Curricular Activities," Journal of Engineering Education Transformations, vol. 36, no. 4, pp. 59-65, 2023. DOI: 10.16920/jeet/2023/v36i4/2607

3. Sumathi, R., Savithramma, R. M., & B P, A. (2025). A systematic framework for designing and implementing outcome-based curriculum in engineering education: A comprehensive approach. Journal of Engineering Education Transformations, 37, 216-224.

  4. Ali, Q. I. (2024). Towards more effective summative assessment in OBE. Springer Journal, 2024.

5. Hu, A. (2023). Engineering curriculum reform based on outcome-based education: A case study. Sustainability, 15(11), 8915.

6. Nguyen Huu, C., & Tran, T. (2024). Implementing outcome-based education in higher education: A case study. Vietnam Journal of Education, 27(6), 1839-1846.

  7. Alsabhan, A. H. (2023). Complementary effect of curricula modifications, OBE, and industry collaboration in engineering education. Journal of Engineering Education Transformations, 36(4), 1951.

8. Khormazard, H. N., & Smith, J. (2023). Development of outcome-based assessment metric for civil engineering programs. Proceedings of PCEEA, 2023.

9. Prasad, S. J. P., & Mohanta, S. (2023). Assessment of program outcomes in outcome-based education: A case study. Journal of Engineering Education Transformations, 36(4), 59-65.

  10. Ali, Q. I. (2024). Towards more effective summative assessment in OBE. Springer Journal, 2024.

11. Vaidya, S. R., & Chitre, P. D. (2013). Industry-academia partnership: Enhancing employability. Procedia – Social and Behavioral Sciences, 93, 276-284.

  12. Ssebuwufu, J., Ludwick, T., & Béland, M. (2012). Strengthening university-industry linkages in Africa: A study of institutional capacities and gaps. Association of African Universities, 2012.

  13. Marshall, S. J. (2018). Shaping the university of the future: Using technology to catalyse change in university learning and teaching. Springer, 2018.

  14. El-Sayed, A. M. (2012). The role of professional societies in the development and accreditation of engineering programs. Proceedings of IEEE EDUCON, Marrakesh, Morocco, 2012.

  15. Johnson, M. A. (2005). Curriculum development and assessment in collaboration with industry and professional societies. Proceedings of IEEE Frontiers in Education Conference, Indianapolis, IN, 2005.

16. Kolmos, A. Y., & Krogh, L. (2013). Enhancing students' self-directed learning skills: The role of industry collaboration and professional society involvement. European Journal of Engineering Education, 38(5), 521-531.

  17. Nnamdi, M. C., Tamo, J. B., Shi, W., & Wang, M. D. (2025). Advancing problem-based learning in biomedical engineering in the era of generative AI. arXiv preprint arXiv:2503.16558.

18. Fernandes, F., & Werner, C. (2023). Towards a blockchain-based software engineering education. arXiv preprint arXiv:2304.04549.

  19. Susanta, V. A. (2025). Outcome-based education in the 21st century: Innovations and challenges. IISTR Journals, 2025.

  20. Mahrishi, M., Ramakrishna, S., Hosseini, S., & Abbas, A. (2025). A systematic literature review of the global trends of outcome-based education (OBE) in higher education with an SDG perspective related to engineering education. Sustainable Development, 6, 620.

21. Melsa, J. A. (2003). The future of engineering education: The role of the professional societies. IEEE Transactions on Education, 46(4), 527-531.

22. Davis, C. A., Beyerlein, M., & Davis, F. T. (2009). Deriving design course outcomes from a professional society's body of knowledge. Journal of Engineering Education, 98(3), 227-239.

23. Bourne, J. R., Harris, D., & Mayadas, F. (2005). Online engineering education: Learning anywhere, anytime. Journal of Engineering Education, 94(1), 131-146.

  24. Johnson, M. A. (2005). Curriculum development and assessment in collaboration with industry and professional societies. Proceedings of IEEE Frontiers in Education Conference, Indianapolis, IN, 2005.

  25. El-Sayed, A. M. (2012). The role of professional societies in the development and accreditation of engineering programs. Proceedings of IEEE EDUCON, Marrakesh, Morocco, 2012.

26. Kolmos, A. Y., & Krogh, L. (2013). Enhancing students' self-directed learning skills: The role of industry collaboration and professional society involvement. European Journal of Engineering Education, 38(5), 521-531.

  27. Kavitha, K., & Ramesh, S. (2023). Implementation challenges and opportunities in the outcome-based education (OBE) for teaching engineering courses: A case study. International Journal of Engineering and Advanced Technology, 12(5), 125.

28. Biswal, D. K., Moharana, B. R., & Muduli, K. (2025). Development of a Framework for Assessing the Degree of Course and Program Outcome Attainment utilizing Outcome-Based Education Framework. Journal of Engineering Education Transformations, 38(3), 158-170.

  29. Sharma, A. (2025). Direct and Indirect Evaluation Strategies for Course Outcome (CO)-Program Outcome (PO) Attainment of Engineering Physics Course. Journal of Research in Education and Pedagogy, 2(2), 309-323.

  30. Popli, N., & Singh, R. P. (2024). Enhancing Academic Outcomes through Industry Collaboration: Our Experience with Integrating Real-World Projects into Engineering Courses. Discover Education, 3(1), 217.

  31. Developing Industry Ready Graduates in Partnership with Industry and Other Stakeholders. (2024). In Perspective and Strategies on Newage Education and Creative Learning (ICON BITS 2023), Springer.

32. Pacher, C., Woschank, M., Zunk, B. M., & Gruber, E. (2024). Competence-Based Education in Industrial Engineering and Management: A Systematic Review. Production & Manufacturing Research, 12(1).

  33. Six Cs of Successful Higher Education-Industry Collaboration in Engineering Education: A Systematic Literature Review. (2024). European Journal of Engineering Education.

  34. Chan, C. K., Cheung, H., Ko, M., Chui, C. K., & Yang, L. (2024, June). Preparing Students for Successful Industrial Collaborations in Engineering (Work in Progress). ASEE Annual Conference & Exposition.

  35. Goswami, S., & Natarajan, R. (2024). Outcome-Based Attainment Analysis of Engineering Courses using Direct and Indirect Assessment Tools. Journal of Engineering Education Transformations, 38(2), 75-84.

  36. Thomas, M., & George, J. (2024). Mapping Course Outcomes to Program Outcomes through Rubric-Based Assessment in Capstone Projects. International Journal of Engineering Pedagogy (iJEP), 14(1), 88-101.

  37. Kumar, R., & Mehta, P. (2024). A Quantitative Study on Program Outcome Attainment through Direct Mark-Based Assessments and Indirect Surveys. Journal of Technical Education and Training (JTET), 16(2), 45-59.