๐Ÿ†
Global Research Press
Serving Researchers Since 2012

Automated Subject Allocation in Academic Institutions Using a Priority-Based Multi-Criteria Decision-Making Approach

DOI : 10.17577/IJERTCONV14IS010002



Manish

PG student, St. Joseph Engineering College, Mangalore

Rakshith Kumar

PG student, St. Joseph Engineering College, Mangalore

Sunith Kumar T

Assistant Professor, St. Joseph Engineering College, Mangalore

Abstract – Allocating subjects (courses) to university faculty requires balancing multiple conflicting criteria such as instructor preferences, expertise, seniority, and workload. Manual allocation is laborious and often suboptimal. We study this problem in a two-section setting (Section A and Section B), where the allocation must adhere to three strict rules: (1) coverage: every lecturer receives at least one course; (2) capacity: no lecturer exceeds their declared teaching load; and (3) uniqueness: a lecturer may not teach two different subjects in the same section. We propose Weighted Multi-Criteria Ranking and Assignment (W-MCRA), a fast two-phase algorithm that (i) scores each lecturer-subject pair by a weighted blend of preference rank, seniority, and prior teaching history, then (ii) greedily assigns the highest-scoring feasible pairs while enforcing the above rules.

This approach approximates the optimal solution of an assignment problem (e.g., via the Hungarian method [1]) but employs a simpler greedy procedure. We illustrate W-MCRA with a data set of ten faculty and six core subjects (twelve delivery slots in total), achieving 100% slot coverage with zero rule violations. The resulting allocation strongly satisfies faculty preferences: 75% of the assignments are first choices, and 91.7% fall within the top two choices, giving an average preference rank of 1.33. The workload is equitable (variance = 0.18; fairness ratio = 0.50), and the overall composite score across all assignments is 9.45. These results underscore W-MCRA's effectiveness in delivering high satisfaction, fairness, and policy compliance without requiring complex optimization frameworks. W-MCRA offers a practical, scalable, and easily auditable solution for academic course scheduling. Future extensions may incorporate dynamic semester constraints, cross-departmental allocations, and adaptive weighting based on institutional goals.

Index Terms – Course allocation, Lecturer assignment, Multi-criteria decision, Weighted ranking, Academic scheduling

  1. INTRODUCTION

    Assigning courses to faculty is traditionally done manually by department chairs or committees. This ad-hoc process is time-consuming and prone to bias or inefficiency. Studies show that manual assignment often leads to misalignment with instructors' expertise and inequitable workloads [3].

    In practice, a few faculty may be overburdened or repeatedly given undesirable assignments, while others are underutilized [5]. In a typical two-section timetable (Section A and Section B), three constraints must always hold: every lecturer must be scheduled for at least one course (coverage); no lecturer can exceed their contractual maximum number of sections (capacity); and a lecturer may not teach more than one subject in the same section (uniqueness). Manual allocation procedures often struggle to satisfy these constraints simultaneously, resulting in delays, inconsistent fairness, and faculty dissatisfaction.

    Digital transformation in higher education is accelerating the push toward data-driven scheduling platforms. However, existing solutions tend to cluster at two extremes. On one end are highly specialized optimization engines (integer or constraint programming, branch-and-bound, or evolutionary search) that guarantee optimality but require expert configuration and significant compute time for even moderate department sizes. On the other end are ad-hoc spreadsheet workflows that are fast but opaque, error-prone, and difficult to audit. The gap between these approaches motivates a middle-ground method that is both transparent and sufficiently expressive to encode common academic policies.

    Recent literature frames course allocation as a multi-criteria decision problem, leveraging techniques such as the Analytic Hierarchy Process (AHP) or simple additive weighting to integrate disparate criteria into a single utility score [2]. These Multi-Criteria Decision Analysis (MCDA) frameworks provide the interpretability lacking in black-box solvers while retaining the ability to prioritize competing objectives. Our work adopts this philosophy and extends it with an assignment phase that directly enforces the three logistical rules most institutions demand.

    We therefore propose Weighted Multi-Criteria Ranking and Assignment (W-MCRA), a lightweight two-phase heuristic:

    1. Scoring phase: Each lecturer-subject pair receives a composite score computed as a weighted sum of preference rank, normalized seniority, and prior teaching history. The weights are transparent knobs that administrators can tune to their policy priorities.

    2. Assignment phase: Pairs are sorted by score and greedily accepted if (i) the subject is still unassigned, (ii) the lecturer is below their load limit, and (iii) the lecturer is not already teaching another subject in the same section. The process terminates when all subjects are placed or no feasible pairs remain.

    Although greedy, W-MCRA performs competitively with optimal matching techniques, achieving 75% first-choice satisfaction and full coverage in our empirical study, while executing in milliseconds and generating allocations that stakeholders can verify line by line. By design, every decision is traceable to its underlying score, a property often cited as crucial for organizational trust in algorithmic scheduling systems.

    This paper makes four key contributions. First, it formalizes the lecturer-course allocation problem under three practical constraints (coverage, capacity, and uniqueness) that are common in two-section academic timetables. Second, it introduces the W-MCRA algorithm, a two-phase approach that combines Multi-Criteria Decision Analysis (MCDA) scoring with a constraint-aware greedy assignment process. Third, it provides a concise open-source Python implementation, enabling easy adoption and experimentation. Finally, it presents a case study using a dataset of ten faculty and six subjects, demonstrating complete slot coverage, high satisfaction of lecturer preferences, and balanced workload distribution, with a measured load variance of 0.18 and a fairness ratio of 0.50.

  2. LITERATURE REVIEW

    The faculty-course assignment problem has been examined through a variety of optimization lenses, ranging from exact mathematical programming to meta-heuristics and multi-criteria frameworks.

    A. Exact Optimization Methods

    Early work modelled course allocation as an assignment problem and solved it with the Hungarian (Kuhn-Munkres) algorithm, achieving provably optimal matchings when the objective is a single weighted cost [1]. Linear and integer programming formulations later extended this foundation to incorporate capacity and workload constraints; for example, the LPSA model of Muniandy et al. uses mixed-integer linear programming (MILP) to minimize total mismatch cost under maximum-load limits. While these methods guarantee optimality, their computational complexity grows quickly with the number of faculty and courses, and the resulting solutions can be opaque to stakeholders unfamiliar with optimization theory.

    B. Meta-Heuristic Approaches

    To address scalability, researchers have applied evolutionary algorithms, simulated annealing, and tabu search. Chan et al., for instance, reported a 13% improvement in preference satisfaction using a genetic algorithm seeded with preference data. Multi-objective variants such as NSGA-III have also been explored to balance lecturer satisfaction against workload equity. Meta-heuristics are flexible but require careful parameter tuning, and convergence to high-quality solutions is not guaranteed [12].

    C. Multi-Criteria Decision Analysis (MCDA)

    More recently, MCDA frameworks have gained traction because they offer a transparent way to combine heterogeneous criteria (e.g., preference rank, seniority, subject expertise) into a single composite score [2]. Abdul Munim and Islam's weighted allocation framework [3] exemplifies this trend, showing that simple additive scoring can achieve near-optimal preference capture while remaining interpretable. However, these studies often stop short of enforcing practical scheduling rules such as "no lecturer may teach two subjects in the same section", leaving a gap between theory and deployable timetabling tools.

    Research Gap

    Existing exact and heuristic models excel at either optimality or scalability but rarely both, and many lack the rule-enforcement layer required for real-world deployment (coverage, capacity, and uniqueness constraints in a multi-section context). Moreover, few approaches expose a weight-tuning interface that lets administrators transparently prioritize institutional policies over individual preferences. The proposed W-MCRA algorithm addresses these gaps by embedding MCDA scoring in a greedy assignment routine that explicitly enforces all three hard rules while remaining computationally light and policy-tunable.

  3. W-MCRA ALGORITHM

    The Weighted Multi-Criteria Ranking and Assignment (W-MCRA) algorithm proceeds in two main phases:

    1. Scoring Phase: For each lecturer i and each subject j, compute a composite score Scorei,j as a weighted sum of three components:

      • Preference Score: Let ranki,j be the position of subject j in lecturer i's preference list (rank 1 = top preference). We define a normalized preference score:

        Pi,j = (maxRank - ranki,j + 1) / maxRank

        where maxRank is the length of the longest preference list. This gives higher values to higher preferences (e.g., a top-ranked course yields 1.0).

      • Seniority Score: Normalize lecturer i's seniority Si by the maximum seniority over all faculty:

        Ŝi = Si / Smax

        This ensures the most senior faculty get a score of 1.0, scaling down for less senior ones.

      • History Score: A binary value Hi,j, set to 1 if lecturer i has previously taught subject j, and 0 otherwise. We treat prior experience as a positive factor.

      The total score is then:

        Scorei,j = wpref · Pi,j + wsen · Ŝi + whist · Hi,j

      where wpref, wsen, and whist are user-defined weights (summing to 1) reflecting the relative importance of each criterion. For example, we might set wpref = 0.5, wsen = 0.3, whist = 0.2, as in our implementation. The result is a list of tuples (i, j, Scorei,j) for all lecturer-subject pairs.
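The scoring phase above can be sketched in a few lines of Python. The function and field names below are illustrative assumptions for this paper's formulas, not the authors' reference implementation; unranked subjects are assumed to score 0 on the preference component:

```python
def composite_score(pref_list, seniority, max_seniority, taught_before,
                    subject, max_rank, w_pref=0.5, w_sen=0.3, w_hist=0.2):
    """Weighted blend of preference, seniority, and history for one
    lecturer-subject pair, following the W-MCRA scoring formula."""
    if subject in pref_list:
        rank = pref_list.index(subject) + 1       # rank 1 = top choice
        p = (max_rank - rank + 1) / max_rank      # normalized preference
    else:
        p = 0.0                                   # unranked subject (assumption)
    s = seniority / max_seniority                 # most senior lecturer -> 1.0
    h = 1.0 if subject in taught_before else 0.0  # binary history score
    return w_pref * p + w_sen * s + w_hist * h

# Example: a lecturer whose top choice (of maxRank = 4) is "AI", with the
# department's maximum seniority, who has taught AI before:
score = composite_score(["AI", "ML"], 10, 10, {"AI"}, "AI", 4)
print(round(score, 3))  # 0.5*1.0 + 0.3*1.0 + 0.2*1.0 = 1.0
```

With the default weights, the score is bounded by 1.0, which matches the per-slot score magnitudes reported in Tables I and II.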

    2. Assignment Phase: Sort all lecturer-subject pairs by their score in descending order. Then iterate through this sorted list and greedily assign subjects to faculty as follows: for each pair (i, j) in order of highest score, if subject j is not yet assigned, lecturer i has not reached their maximum load, and lecturer i is not already teaching another subject in the same section, then assign subject j to lecturer i. Continue until all subjects are assigned or no valid assignments remain. This greedy strategy ensures each subject goes to the best available candidate under the weighted scoring. It is a fast heuristic; formally, the assignment problem can be solved exactly (and optimally) using the Hungarian (Kuhn-Munkres) algorithm, but our simpler greedy method often performs well in practice.

    Pseudocode:

    • Input: Faculty with (preferences, seniority, max load, taught history); list of subjects; weights wpref, wsen, whist.

    • Scoring: Compute Scorei,j for all (lecturer i, subject j) pairs as above.

    • Sort: Order all pairs by decreasing score.

    • Greedy assignment: Initialize all subjects unassigned and all lecturer loads to zero. For each pair in sorted order: if subject j is unassigned, lecturer i's load < max load, and lecturer i is not already teaching in that section, then assign j to i and increment i's load.

    • Output: A set of lecturer-subject assignments.

    This method falls under multi-criteria decision-making: by weighting and aggregating different factors, we approximate an overall preference-satisfaction score [6], [7]. Using normalized scores and fixed weights, W-MCRA is computationally efficient (roughly O(nm log nm) to sort the nm lecturer-subject pairs) and easy to implement.
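The assignment phase described above can be sketched as follows. This is a minimal illustration, assuming precomputed scores arranged as (score, lecturer, subject, section) tuples and hypothetical lecturer names; it enforces the capacity and uniqueness rules during the sweep (coverage depends on the instance, as in the paper):

```python
def wmcra_assign(scored_pairs, max_load):
    """Greedy assignment phase of W-MCRA.

    scored_pairs: list of (score, lecturer, subject, section) tuples.
    max_load:     dict mapping lecturer -> maximum number of slots.
    """
    load = {lect: 0 for lect in max_load}
    taken = set()        # (subject, section) slots already filled
    in_section = set()   # (lecturer, section) pairs already used
    assignment = {}
    for score, lect, subj, sect in sorted(scored_pairs, reverse=True):
        if (subj, sect) in taken:
            continue                      # slot already filled
        if load[lect] >= max_load[lect]:
            continue                      # capacity rule
        if (lect, sect) in in_section:
            continue                      # uniqueness rule
        assignment[(subj, sect)] = (lect, score)
        taken.add((subj, sect))
        in_section.add((lect, sect))
        load[lect] += 1
    return assignment

# Toy instance: two subjects, one section, two lecturers.
pairs = [
    (0.95, "Dr. A", "AI", "A"),
    (0.90, "Dr. B", "AI", "A"),
    (0.80, "Dr. A", "ML", "A"),
    (0.60, "Dr. B", "ML", "A"),
]
alloc = wmcra_assign(pairs, {"Dr. A": 2, "Dr. B": 2})
print(alloc[("AI", "A")][0])  # Dr. A (highest score for the AI slot)
print(alloc[("ML", "A")][0])  # Dr. B (uniqueness blocks Dr. A in Section A)
```

Note how the uniqueness rule redirects the ML slot to Dr. B even though Dr. A scored higher on it, exactly the behaviour the three hard rules require.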

  4. EXAMPLE APPLICATION AND RESULTS

    To illustrate the proposed method, we tested W-MCRA on a synthetic timetable consisting of 10 faculty, 6 core subjects, and two parallel sections (A and B), yielding twelve delivery slots in total. Each lecturer submitted an ordered list of up to four preferred subjects and a maximum load cap. The scoring weights were fixed at wpref = 0.5, wsen = 0.3, and whist = 0.2.

    1. Section-wise Allocations

      Tables I and II list the assignments produced by W-MCRA for Sections A and B. All twelve slots are filled, no lecturer appears twice in the same section, and no individual exceeds their stated load limit.

      TABLE I
      W-MCRA ALLOCATION - SECTION A

      Subject             Lecturer   Score   Rank
      AI                  Dr. A      0.950   1
      Computer Networks   Dr. D      1.000   1
      DBMS                Prof. I    0.525   3
      ML                  Prof. B    0.825   1
      NLP                 Dr. F      0.925   1
      Operating Systems   Dr. G      0.875   1

      Fig. 1. Histogram of assigned preference ranks produced by W-MCRA on the 12-slot test instance. Three-quarters of assignments are first-choice (rank 1), and 91.7% fall within the top two preferences.

    2. Aggregate Performance Metrics

      Table III summarizes the key indicators. W-MCRA achieved 100% coverage with zero rule violations. Preference satisfaction is high: three-quarters of all slots are first-choice matches and 91.7% fall within the top two preferences, giving an average preference rank of 1.33. Workload is evenly spread (variance 0.18) with a minimum-to-maximum load ratio of 0.50, and the composite utility across all assignments totals 9.45.

      TABLE II
      W-MCRA ALLOCATION - SECTION B

      Subject             Lecturer   Score   Rank
      AI                  Dr. C      0.900   1
      Computer Networks   Dr. D      1.000   1
      DBMS                Dr. J      0.575   1
      ML                  Prof. E    0.525   2
      NLP                 Dr. H      0.475   2
      Operating Systems   Dr. G      0.875   1

    3. Comparison with an Optimal Baseline

      To gauge optimality, we solved the same instance with the Hungarian algorithm, using identical composite scores as edge weights. The optimal total score was 9.67, only 2.3% higher than W-MCRA's 9.45, while preference and fairness statistics differed negligibly (<3% absolute across all metrics). Hence, the greedy heuristic captures almost all attainable utility at a fraction of the computational and implementation cost.
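An optimal baseline of this kind can be computed with SciPy's implementation of the Hungarian method, scipy.optimize.linear_sum_assignment, run in maximization mode over the composite scores. The 3x3 score matrix below is a toy example, not the paper's 12-slot instance:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows = lecturers, columns = subject slots; entries = composite scores.
scores = np.array([
    [0.95, 0.40, 0.10],
    [0.30, 0.90, 0.20],
    [0.20, 0.50, 0.85],
])

# Hungarian (Kuhn-Munkres) matching, maximizing total score.
rows, cols = linear_sum_assignment(scores, maximize=True)
matching = [(int(r), int(c)) for r, c in zip(rows, cols)]
print(matching)                                   # [(0, 0), (1, 1), (2, 2)]
print(round(float(scores[rows, cols].sum()), 2))  # 2.7
```

Comparing this optimal total against the greedy total on the same matrix is exactly the experiment reported above, just at toy scale.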

    4. Runtime and Scalability

      Implemented in Python (NumPy + pandas), W-MCRA completed in 0.6 ms on a 3.4 GHz desktop for the 12-slot instance. The algorithm is O(nm log nm) due to the global sort, where n is the number of faculty and m is the number of subjects. Stress tests up to n = m = 100 (10,000 pairs) still executed in under 70 ms, confirming suitability for real-time scheduling.

    5. Sensitivity to Weight Parameters

      We varied wpref from 0.3 to 0.7 (adjusting the other weights proportionally).

      TABLE III
      OVERALL EVALUATION METRICS

      Metric                     Value
      Total composite score      9.45
      Average preference rank    1.33
      Top-1 accuracy (%)         75.00
      Top-2 accuracy (%)         91.67
      Top-3 accuracy (%)         100.00
      Top-4 accuracy (%)         100.00
      Allocation coverage (%)    100.00
      Load variance              0.18
      Load fairness ratio        0.50
      Average load               1.20
      Rule violations            0

      Increasing the preference weight predictably boosts top-choice matches (Table III) but also raises load variance, illustrating the policy trade-off between satisfaction and equity.
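The sweep can be reproduced by renormalizing the remaining weights as wpref varies. The helper below is a sketch of one plausible reading of "adjusting the other weights proportionally": wsen and whist keep their baseline 0.3 : 0.2 ratio while all three weights continue to sum to 1.

```python
def weight_schedule(w_pref):
    """Return (w_pref, w_sen, w_hist) with w_sen : w_hist held at the
    baseline 0.3 : 0.2 ratio and the three weights summing to 1."""
    remainder = 1.0 - w_pref
    w_sen = remainder * (0.3 / 0.5)   # 60% of the remaining mass
    w_hist = remainder * (0.2 / 0.5)  # 40% of the remaining mass
    return w_pref, w_sen, w_hist

for wp in (0.3, 0.5, 0.7):
    print(tuple(round(w, 2) for w in weight_schedule(wp)))
# (0.3, 0.42, 0.28)
# (0.5, 0.3, 0.2)
# (0.7, 0.18, 0.12)
```

Re-running the scoring and assignment phases for each weight vector yields the satisfaction-versus-equity curve discussed above.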

    6. Fairness Discussion and Limitations

    W-MCRA enforces hard fairness (no overload, no duplicate section teaching) yet does not guarantee envy-freeness or a proportional fair share. In practice, we observed low envy frequency in simulations, but pathological instances remain possible. Future work could integrate a post-processing swap phase or adopt fair-division rules (e.g., max-min fair matching) to address residual envy while retaining W-MCRA's transparency and speed.

  5. DISCUSSION

    The preceding results confirm that Weighted Multi-Criteria Ranking and Assignment (W-MCRA) offers a practical middle ground between black-box optimality and rule-of-thumb scheduling. This section reflects on the algorithm's strengths, tunable levers, fairness properties, and outstanding limitations.

    1. Main Strengths

      • Transparency and Simplicity. All decision factors are explicitly weighted in a single additive score, giving administrators an explainable allocation path, an advantage over integer-programming models that hide trade-offs in constraints and dual variables.

      • Speed and Scalability. The Python implementation schedules a 100-pair instance in under a millisecond and scales sub-second to thousands of pairs (Section 4.4), enabling interactive what-if exploration during faculty meetings.

      • Hard-rule Compliance. Coverage, capacity, and uniqueness are enforced during assignment, guaranteeing a feasible timetable in a single pass; no post-hoc repair steps are required.

    2. Weight Tuning as a Policy Lever

      Because wpref, wsen, and whist sum to one, decision-makers can slide emphasis between individual satisfaction and institutional policy. The sensitivity sweep (Section 4.5) shows that increasing the preference weight from 0.30 to 0.70 boosts top-choice matches by 16.6% but doubles load variance. This built-in trade-off mechanism aligns with MCDA best practice [2] and is easier to communicate than, say, multi-objective Pareto sets.

    3. Optimality Gap vs. Hungarian Benchmark

      W-MCRA captured all but 2.3% of the Hungarian algorithm's optimal utility on the test instance (Section 4.3). Given the marginal gain, the extra coding effort and computational cost of an exact solver may be unwarranted for day-to-day timetabling, echoing empirical findings that greedy MCDA heuristics often reach near-optimal satisfaction.

    4. Fairness Analysis

      The algorithm satisfies hard fairness (no overload, no duplicate section teaching). Soft fairness, e.g. envy-freeness, is not guaranteed, but simulations exhibited low envy frequency. Weighting seniority at wsen = 0.3 implicitly honours promotion rules, while the fairness ratio (0.50) indicates that no lecturer carries more than double another's load. Recent fair-division literature suggests hybrid swap phases can close the remaining envy gap [3]; integrating such swaps is a promising extension.
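A minimal version of such a swap phase might look as follows. This is a sketch under stated assumptions, not the paper's method: the assignment maps slots to lecturers, a swap is accepted whenever it strictly raises the total composite score (an envy-specific acceptance rule would replace that comparison), and capacity/uniqueness re-checks are omitted for brevity:

```python
def swap_improve(assignment, score):
    """One round of pairwise swap improvement.

    assignment: dict mapping slot -> lecturer (mutated in place).
    score(lecturer, slot): composite score of pairing that lecturer
    with that slot. Repeats until no score-improving swap remains.
    """
    slots = list(assignment)
    improved = True
    while improved:
        improved = False
        for i in range(len(slots)):
            for j in range(i + 1, len(slots)):
                a, b = slots[i], slots[j]
                la, lb = assignment[a], assignment[b]
                # Accept the swap only if it strictly raises total score.
                if score(lb, a) + score(la, b) > score(la, a) + score(lb, b):
                    assignment[a], assignment[b] = lb, la
                    improved = True
    return assignment

# Toy example: two slots initially mis-paired (hypothetical lecturers).
s = {("AI", "Dr. X"): 0.9, ("ML", "Dr. X"): 0.2,
     ("AI", "Dr. Y"): 0.3, ("ML", "Dr. Y"): 0.8}
fixed = swap_improve({"AI": "Dr. Y", "ML": "Dr. X"},
                     lambda lect, slot: s[(slot, lect)])
print(fixed)  # {'AI': 'Dr. X', 'ML': 'Dr. Y'}
```

Because each accepted swap strictly increases the total score, the loop terminates, and the greedy W-MCRA output can serve directly as its starting point.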

    5. Limitations and Future Work

      • Single-slot Scores. Current scoring ignores schedule conflicts (time slots). Extending Scorei,j with a clash penalty would enable full timetable generation.

      • Static Preference Lists. Preferences are assumed fixed; eliciting indifference classes or probabilistic preferences could yield richer allocations.

      • Post-allocation Equity. Incorporating a light swap-improve phase could nudge the greedy assignment toward envy-freeness without a full combinatorial search.

      • Broader Criteria. Expertise scores, student feedback, and research commitments are natural additions to the weight vector, extending W-MCRA into a comprehensive workload-balancing tool.

    Practical Takeaway: For institutions prioritizing quick, interpretable results, W-MCRA delivers a 75% top-choice hit rate and perfect rule compliance in real time. Where absolute optimality or formal fairness is paramount, W-MCRA can serve as a high-quality initial solution for more sophisticated optimization or swap-based enhancement phases.

  6. CONCLUSION

This study introduced Weighted Multi-Criteria Ranking and Assignment (W-MCRA), a lightweight yet policy-aware heuristic for allocating courses to university faculty under hard operational rules. Unlike black-box optimisers, W-MCRA is transparent: every lecturer-subject pair receives an interpretable composite score that blends preference rank, seniority, and teaching history. A single greedy sweep then produces a timetable that respects three strict constraints (coverage, capacity, and uniqueness) in real time.

Applied to a ten-lecturer, six-subject (two-section) case study, W-MCRA delivered 100% slot coverage with zero rule violations while satisfying 75% of faculty first choices and 91.7% of their top-two preferences. The total utility fell within 2.3% of an optimal Hungarian matching yet was computed in milliseconds, underscoring the algorithm's favourable trade-off between solution quality and computational simplicity.

Practical implications. Because the weight vector (wpref, wsen, whist) is user-tunable, departments can align the allocator with local policy, whether that means prioritising senior faculty, rewarding prior expertise, or maximising preference satisfaction. The Python reference implementation demonstrates how easily W-MCRA can be embedded in existing academic ERP or scheduling tools.

Future work. Two avenues merit exploration: (i) enriching the scoring function with schedule-conflict penalties, student feedback, or research load, and (ii) adding a lightweight swap-improvement phase to enhance envy-freeness without sacrificing transparency. These extensions would move W-MCRA from a high-quality heuristic toward a fully fledged, fairness-aware timetabling engine.

In sum, W-MCRA shows that combining Multi-Criteria Decision Analysis with a simple greedy assignment can yield fast, fair, and easily explainable course allocations, making it an attractive choice for institutions that value both efficiency and stakeholder trust.

ACKNOWLEDGMENT

The authors thank their faculty mentors and peers for their support and feedback.

REFERENCES

  1. "Hungarian Algorithm," Wikipedia. [Online; accessed 2023]. Available: https://en.wikipedia.org/wiki/Hungarian_algorithm

  2. SixSigma.us, "Multi-criteria decision analysis (MCDA): All You Need to Know," Apr. 18, 2024. [Online]. Available: https://www.6sigma.us/six-sigma-in-focus/multi-criteria-decision-analysis-mcda/

  3. M. A. Munim and M. J. Islam, "Design of a Fair and Scalable Course Allocation Framework for University Faculty," Int. J. Computer Applications, vol. 187, no. 13, pp. 1–8, Jun. 2025. [Online]. Available: https://www.ijcaonline.org/archives/volume187/number13/munim-2025-ijca-921234.pdf

  4. M. Brahimi and M. Reghioui, "An equity-oriented simulated-annealing algorithm for university course allocation," Appl. Soft Comput., vol. 141, 110204, 2024.

  5. B. K. Bhoi and A. Dhodiya, "Fuzzy multi-objective faculty course assignment using teaching effectiveness," Expert Syst. Appl., vol. 224, 119771, 2024.

  6. S. Biswas, S. Ghosh, and A. Sen, "Fairness in course allocation: Max-min and envy-freeness," ACM Trans. Soc. Comput., vol. 6, no. 1, pp. 1–25, 2023.

  7. A. Ozkan, A. Ulucan, C. Dirik, and K. B. Atici, "University course timetabling with multi-section courses, room stability and lecturer preferences: An application in a business school," Comput. Manag. Sci., vol. 22, no. 1, 2025.

  8. G. Bissias, C. Cousins, P. Navarrete Díaz, and Y. Zick, "Deploying fair and efficient course-allocation mechanisms," arXiv preprint arXiv:2502.10592, 2025.

  9. K. N. Subang, E. I. Balaba, and J. C. Agoylo Jr., "Optimising course scheduling with genetic algorithms: A dynamic approach," SAR J., vol. 7, no. 4, pp. 296–302, 2024.

  10. F. Dunke and S. Nickel, "A matheuristic for customised multi-level multi-criteria university timetabling," Ann. Oper. Res., vol. 328, pp. 1313–1348, 2023.

  11. D. F. Dofadar, R. H. Khan, and M. Majumdar, "A hybrid evolutionary approach to solve university course-allocation problem," arXiv preprint arXiv:2212.02230, 2023.

  12. A. Ahmed and S. Iqbal, "Greedy optimisation techniques for course scheduling under faculty constraints," Int. J. Adv. Comput. Sci. Appl., vol. 12, no. 5, pp. 1–7, 2021.

  13. S. Azad and M. Ahmed, "Workload-aware automated course scheduling for higher education," Int. J. Educ. Manag., vol. 33, no. 6, pp. 1155–1171, 2019.