
Recurring Errors and Misconceptions in Cambridge International A-Level Physics 9702: A Systematic Analysis of Principal Examiner Reports (2018–2025)

DOI : https://doi.org/10.5281/zenodo.19451687


Waseem Ahmad

SBM Degree College, India

Abstract – Cambridge International A-Level Physics (syllabus code 9702) is one of the most widely sat advanced physics qualifications in the world, with candidates across more than 130 countries. Each examination series, the Cambridge Assessment International Education (CAIE) Principal Examiners publish detailed feedback reports that document recurrent errors, persistent misconceptions, and areas of widespread difficulty encountered across all five examination papers. These reports represent an unparalleled, longitudinal, and data-rich resource for physics educators, yet they remain systematically underutilised in educational research and classroom instruction. This paper presents a structured thematic analysis of the Cambridge 9702 Principal Examiner Reports spanning eight years, from 2018 to 2025, across the May/June, October/November, and February/March examination series. Through systematic content analysis, seven major recurring error categories are identified: (1) imprecision in physical definitions and terminology; (2) errors in unit handling and SI prefix conversion; (3) misconceptions in mechanics, particularly in force-motion relationships and projectile dynamics; (4) conceptual errors in wave physics, including the conflation of progressive and stationary waves; (5) circuit and electricity misconceptions; (6) failures in quantitative reasoning, including significant figures and premature rounding; and (7) deficiencies in practical and planning skills. For each category, the frequency of examiner commentary, the nature of the errors, and their persistence across the study period are documented and analysed. The paper concludes by translating these findings into a set of evidence-based pedagogical recommendations for teachers of Cambridge 9702 Physics, grounded in constructivist learning theory and informed by the direct voice of Cambridge examiners across eight years of practice.

Keywords: Cambridge 9702 Physics, A-Level Physics examiner reports, CAIE, physics misconceptions, examiner feedback, assessment analysis, conceptual errors, physics pedagogy, curriculum evaluation, 2018–2025

  1. INTRODUCTION

    The Cambridge International Advanced Subsidiary and Advanced Level Physics examination (syllabus code 9702) is among the most internationally recognised and rigorously assessed advanced physics qualifications available to secondary and post-secondary students. Delivered by Cambridge Assessment International Education (CAIE), a department of the University of Cambridge, the 9702 qualification is studied by candidates in more than 130 countries, making its examination outcomes a globally significant window into the state of advanced physics learning across diverse educational contexts.

    Following each examination series (May/June (MJ), October/November (ON), and February/March (FM)), CAIE publishes a Principal Examiner Report for Teachers (hereafter referred to as ‘the Examiner Report’ or ‘ER’). These reports are produced by the Chief Examiner and the panel of Principal Examiners responsible for each individual paper. They are addressed directly to teachers and provide granular, question-by-question feedback on the performance of the global candidate cohort. They document the errors that were most frequently made, the concepts that were most poorly understood, the strategies that proved most successful, and the instructional emphases most likely to benefit future candidates. In short, they constitute a running eight-year diagnostic of the specific conceptual and procedural failures of advanced physics learners at the international level.

    Despite this extraordinary resource, the Cambridge 9702 Examiner Reports have received virtually no systematic treatment in the academic literature as objects of scholarly analysis in their own right. Teachers who read them typically do so in isolation, reviewing one report for one paper in a single series, without the benefit of a longitudinal synthesis that can reveal which errors are genuinely persistent across years, which have intensified, which have diminished, and what underlying instructional causes may explain them. This paper attempts to fill that gap.

    The present study conducts a systematic thematic content analysis of the full set of publicly available Cambridge 9702 Examiner Reports for the period 2018–2025, covering all five papers (Papers 1 through 5) and all three examination series where available. The goal is to produce a comprehensive, evidence-based account of the recurring errors and misconceptions that Cambridge examiners have identified in this population of advanced physics candidates over eight years, and to translate those findings into pedagogically actionable guidance for classroom teachers.

    1. Research Questions

      This study is structured around the following research questions:

      • What categories of error recur most persistently across Cambridge 9702 Examiner Reports from 2018 to 2025?
      • Which specific physics concepts and skills generate the greatest frequency of examiner concern across multiple examination series and years?
      • What patterns of progression or persistence can be identified in examiner-documented errors over the eight-year study period?
      • What pedagogical recommendations can be derived from a systematic reading of the examiner reports that would most effectively address these persistent failure patterns?
    2. Significance

      This research is significant for four principal reasons. First, it amplifies the voice of Cambridge examiners, practitioners with direct, large-scale empirical exposure to candidate performance, by synthesising their observations across a longitudinal dataset rather than treating each report as an isolated document. Second, it provides physics teachers with a research-validated, year-by-year account of the specific errors their students are most likely to make in the Cambridge 9702 examination. Third, it contributes a novel methodological approach, examiner report content analysis, to the physics education research literature, which has previously relied primarily on student testing and classroom observation. Fourth, it supports curriculum decision-making by identifying areas where teaching emphasis is most urgently needed.

  2. THEORETICAL AND CONTEXTUAL BACKGROUND
    1. The Structure of Cambridge 9702 Physics

      The Cambridge International AS and A Level Physics qualification (9702) is structured across five examination papers, each assessing a distinct dimension of physics knowledge and skill. Paper 1 (40 multiple-choice questions, 1 hour 15 minutes) assesses the full AS-level syllabus content through single-best-answer items. Paper 2 (structured questions, 1 hour 15 minutes) assesses AS-level content through short-answer and calculation-based questions. Paper 3 (advanced practical skills, 2 hours) requires candidates to perform hands-on laboratory work in timed conditions. Paper 4 (structured questions, 2 hours) assesses the full A-level syllabus content. Paper 5 (planning, analysis and evaluation, 1 hour 15 minutes) requires candidates to plan experiments, process data, and evaluate experimental evidence.

      The syllabus encompasses topics from Newtonian mechanics, kinematics, and dynamics (AS level) through to gravitational fields, electric fields, capacitance, magnetic fields, electromagnetic induction, nuclear physics, and quantum phenomena (A level). The breadth of content, combined with the mathematical demands of the examination, makes 9702 one of the most academically challenging pre-university physics qualifications available internationally.

    2. The Role of Principal Examiner Reports

      Cambridge’s Principal Examiner Reports serve a dual purpose. Primarily, they function as accountability and feedback documents addressed to teachers, providing an authoritative assessment of how well the global candidate population understood the material assessed in a given series. Secondarily, they function as learning tools for individual candidates by specifying, often in precise and explicit terms, the types of errors that are most likely to lose marks. The reports are publicly available through the Cambridge Assessment International Education website and through partner platforms including PapaCambridge and Best Exam Help.

      For the purposes of this study, an Examiner Report is treated as a documentary data source: a text produced by authoritative practitioners who have read and marked a large sample of candidate scripts, and who encode their professional observations about candidate performance into a structured written record. Content analysis of such reports therefore yields data that is empirically grounded in a large-N assessment context, extending far beyond what any individual classroom-based study could achieve.

    3. Theoretical Framework: Constructivism and the Persistence of Misconceptions

      The theoretical framework informing the interpretation of this study’s findings is constructivist learning theory, as developed by Vygotsky (1978), Ausubel (1968), and Piaget (1970), and as applied specifically to science learning by Driver and Easley (1978) and Osborne and Freyberg (1985). Constructivism holds that learners do not passively receive new information but actively assimilate it into existing cognitive structures. When incoming information conflicts with established schema, genuine learning requires conceptual reorganisation, a process that is cognitively demanding and often resisted.

      Posner, Strike, Hewson, and Gertzog (1982) proposed the influential Conceptual Change Model, which specifies that students will abandon an existing conception only when they are dissatisfied with it and simultaneously find the replacement conception intelligible, plausible, and fruitful. The persistence of specific errors across eight years of Cambridge examiner reports suggests that, in many cases, these conditions are not being met in current instructional practice. The examiners’ reports, read carefully, identify not merely what students get wrong but also, between the lines, what instructional approaches have failed to challenge existing misconceptions effectively.

      Driver (1981) identified five characteristics of student alternative conceptions that bear directly on the errors documented in the 9702 Examiner Reports: they are widespread across national and cultural contexts; they are robust and resistant to instruction; they are coherent from the student’s perspective; they often parallel pre-scientific historical conceptions; and they are context-dependent, meaning students may apply the correct conception in one setting while reverting to the misconception in another. Each of these characteristics is visible in the patterns identified across eight years of Cambridge examiner commentary.

  3. METHODOLOGY
    1. Data Sources

      The primary data sources for this study are the Cambridge International AS and A Level Physics 9702 Principal Examiner Reports for the period 2018 to 2025. Reports were accessed through the Cambridge Assessment International Education public resources portal and through authorised third-party repositories including Best Exam Help (bestexamhelp.com), PapaCambridge (pastpapers.papacambridge.com), Dynamic Papers (dynamicpapers.com), and Studocu. Reports covering the May/June and October/November series were available for all years from 2018 to 2025. February/March series reports were additionally available for 2019 through 2025. The total corpus comprises approximately 48 individual examiner report documents spanning 23 examination series across eight years.

    2. Analytical Method

      The analytical approach employed in this study is qualitative content analysis, a systematic method for identifying, coding, and categorising the thematic content of textual data (Krippendorff, 2004; Neuendorf, 2017). Content analysis is particularly well suited to the analysis of examiner reports, which are structured documents containing both quantitative references (marks, proportions of candidates answering correctly) and qualitative commentary (descriptions of typical errors, explanations of misconceptions, recommendations for teaching).

      The analytical process proceeded in five stages. In the first stage, all available reports were read in full and annotated, with marginal notes recording the topic area, error type, and severity of concern expressed by the examiner for each passage of feedback. In the second stage, initial codes were assigned to each annotated passage, producing an open-ended provisional code list of over 200 individual error descriptors. In the third stage, codes were grouped into thematic categories through a process of iterative comparison and consolidation, ultimately yielding seven primary error categories. In the fourth stage, the frequency of examiner commentary within each category was quantified by counting the number of distinct report passages across all papers and all series in which each category of error was cited. In the fifth stage, year-by-year patterns were examined to identify trends in the persistence, intensification, or resolution of specific error types.

    3. Limitations

      Several limitations of this methodological approach merit acknowledgment. First, the frequency counts of examiner commentary are proxies for the prevalence of candidate errors rather than direct measures; a topic receiving extensive examiner commentary in a given year may reflect the particular questions set in that year rather than an underlying shift in candidate understanding. Second, the reports do not provide standardised quantitative data on the proportion of candidates committing specific errors, which would allow more precise prevalence comparisons across years. Third, this study analyses examiner text rather than candidate scripts directly; the inferences drawn about candidate cognition are mediated by the professional judgement of the examiners. These limitations are characteristic of documentary analysis as a research method and do not undermine the validity of the study within its own epistemological framework.

  4. Findings: Seven Categories of Recurring Error

    The content analysis of 48 examiner report documents across eight years yielded seven primary categories of recurring candidate error. Table 1 presents a summary of these categories, their frequency of examiner citation across the study period, and the examination papers most commonly associated with each category.

    Table 1: Summary of Recurring Error Categories Identified in 9702 Examiner Reports (2018–2025)

    Error Category | Primary Papers | Frequency of Citation | Trend (2018–2025)
    1. Imprecise definitions and terminology | Papers 2, 4 | Very High (cited in >85% of reports) | Persistent; no decline observed
    2. Unit handling and SI prefix errors | Papers 1, 2, 4 | Very High (cited in >80% of reports) | Persistent; recurring every series
    3. Mechanics misconceptions | Papers 1, 2, 4 | High (cited in ~75% of reports) | Persistent; projectile errors increasing
    4. Wave physics errors | Papers 2, 4 | High (cited in ~70% of reports) | Persistent; stationary/progressive confusion consistent
    5. Electricity and circuit misconceptions | Papers 1, 2, 4 | High (cited in ~65% of reports) | Persistent; drift speed errors increasing post-2021
    6. Significant figures and rounding errors | Papers 2, 3, 4, 5 | Very High (cited in >80% of reports) | Persistent; premature rounding a growing concern
    7. Practical and planning skill deficiencies | Papers 3, 5 | Very High (cited in >90% of reports) | Persistent; line-of-best-fit errors consistently high
    1. Category 1: Imprecise Physical Definitions and Terminology

      The most consistently documented category of candidate error across all eight years of the study period is imprecision in the statement of physical definitions and the use of technical terminology. This finding is reported across Papers 1, 2, and 4 in virtually every examination series, making it the single most persistent area of examiner concern in the entire corpus.

      Examiner reports from 2018 through 2025 identify a consistent pattern in which candidates demonstrate partial understanding of definitions, providing responses that are approximately correct but lack the precision required for full credit in the mark scheme. A representative finding from the 2024 November report illustrates the nature of the problem: examiners noted that candidates frequently confused definitions for units and quantities, stating, for example, that the unit of force is the ‘kilogram metre per second squared’ (which is technically the dimensional expression of the newton) while failing to distinguish between the definition of the physical quantity (force as the rate of change of momentum) and the dimensional analysis of its unit.

      A particularly instructive recurring example involves the definition of electric potential difference. Multiple reports across the 2018–2025 period document candidates who defined potential difference as ‘the energy transferred per unit charge’, a formulation that is incomplete in a specific and important way. Cambridge examiners emphasised repeatedly that a definition involving a ratio of quantities must explicitly state the ratio relationship; ‘the energy transferred per unit charge’ implies a relationship between total energy and charge but does not clearly express that the potential difference is the ratio of work done to charge. The expected definition, ‘the energy transferred per unit positive charge moving from one point to the other’, is precise about the direction of charge movement, the sign of the charge, and the nature of the ratio. Reports across 2019, 2020, 2021, and 2022 document this same error, indicating complete stability in this failure across four consecutive years.

      A related definitional error appears recurrently in candidates’ treatment of Newton’s laws of motion. Examiner reports from 2018, 2020, 2022, and 2023 note that many candidates stated Newton’s Third Law in an incomplete form, specifying that action and reaction are ‘equal and opposite’ but omitting the critical qualification that these forces act on different objects. This omission converts a statement of Newton’s Third Law into a statement of Newton’s First Law (equilibrium of forces on a single object), representing a fundamental conceptual conflation. The 2023 November report additionally noted that a significant number of candidates confused Newton’s Third Law pairs with equilibrium pairs, despite these representing quite different physical relationships.

      The definitional inadequacy extends to the domain of thermodynamics, where examiner reports from multiple years document candidates who conflated ‘the internal energy of a system’ with ‘the thermal energy of a system.’ The Cambridge 9702 syllabus distinguishes carefully between internal energy (the sum of the random kinetic and potential energies of all molecules in the system) and thermal energy (the energy transferred due to a temperature difference). Candidates who conflate these terms typically score poorly on questions requiring them to explain changes in internal energy during isothermal processes (a recurring Paper 4 question type) because their definition does not encompass the potential energy component.

      1. The Role of Command Words in Definition Errors

        Examiner reports across the study period consistently note that many candidates do not fully attend to the command words in definition questions. The command words ‘state,’ ‘define,’ and ‘explain’ carry distinct expectations in Cambridge marking schemes: ‘state’ requires a concise, recall-based answer; ‘define’ requires a statement that is precise enough to serve as a self-contained operational definition; and ‘explain’ requires not merely a statement but a causal or logical account. Reports from 2019, 2021, 2023, and 2025 specifically observe that candidates frequently respond to ‘define’ questions with qualitative descriptions; asked to ‘define the radian’, for example, many offered a loose description rather than the formal definition (‘the radian is the angle subtended at the centre of a circle by an arc equal in length to the radius’), losing marks that could have been recovered with more careful attention to what the command word requires.

    2. Category 2: Unit Handling and SI Prefix Errors

      Unit handling errors, encompassing SI prefix conversion, dimensional analysis, and the consistent application of SI units in calculation problems, represent the second most pervasively documented category of error across the 2018–2025 examiner report corpus. These errors appear in Paper 1 (multiple choice), Paper 2, and Paper 4, and receive explicit mention in the general comments section of virtually every examiner report across all eight years of the study.

      The most frequently cited sub-type involves errors with SI prefixes, particularly the conversion between centimetres and metres, millimetres and metres, gigahertz and hertz, and nanometres and metres. The March 2025 examiner report for Paper 2 documented a particularly illustrative case: candidates were asked to calculate the wavelength of a microwave signal whose frequency was given in gigahertz (GHz). A common error involved failing to convert the frequency from GHz to Hz before substituting into the wave equation c = fλ, yielding answers that were a factor of 10⁹ incorrect. The examiner specifically noted that a small number of very weak candidates substituted the speed of sound (330 m s⁻¹) rather than the speed of light, demonstrating a fundamental failure to identify the medium of wave propagation.
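The arithmetic behind this prefix error can be made concrete with a short sketch. The 10 GHz figure here is a hypothetical value chosen for illustration, not the one set in the paper:

```python
# Wavelength of a microwave signal: lambda = c / f.
# The frequency (10 GHz) is a hypothetical value, chosen only to show the prefix error.
c = 3.0e8                      # speed of light in m s^-1
f_ghz = 10.0                   # frequency as printed on the question paper, in GHz

f_hz = f_ghz * 1e9             # correct conversion: 1 GHz = 1e9 Hz
wavelength = c / f_hz          # 0.03 m, i.e. 3 cm: a sensible microwave wavelength
print(wavelength)              # 0.03

wavelength_wrong = c / f_ghz   # forgetting the prefix leaves the answer 1e9 too large
```

A quick order-of-magnitude check (microwaves have centimetre-scale wavelengths) would immediately flag the unconverted answer as impossible.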

      A related prefix error appears in the context of oscilloscope-based questions in Papers 2 and 4. The March 2025 report for Paper 9702/22 documents that candidates frequently introduced a power-of-ten error of factor 100 when attempting to process time-base settings given in ms cm⁻¹, a compound unit that requires two separate conversions (milliseconds to seconds, and centimetres to the total horizontal span). The report also noted the elementary error of using the time-base setting directly as the period of the wave, rather than multiplying by the number of horizontal divisions occupied by one complete cycle.

      Reports from 2018 through 2022 consistently document errors involving the conversion of units in pressure calculations (using pascals rather than atmospheres) and density calculations (using kg m⁻³ rather than g cm⁻³). The November 2024 report specifically called attention to the failure of many candidates to apply a ‘common-sense check’ to their numerical answers, a habit that would flag obviously impossible results such as a calculated mass of 10³ kg for a lead brick or a calculated velocity exceeding the speed of light.

      1. Year-by-Year Pattern

        What is striking about the unit handling error category is the complete absence of any improvement trend across the 2018–2025 period. Every report, in every year, in every series, includes either explicit criticism of prefix errors or a generalised reminder that ‘candidates should be careful with SI prefixes and powers of ten.’ This longitudinal stability strongly suggests that the error is not being adequately addressed in classroom instruction and that the natural attrition of poorly prepared candidates who exit the examination at Grade U is not sufficient to eliminate the error from the cohort.

        Table 2: Frequency of Unit-Handling Error Citations by Year and Series (2018–2025)

        Year | May/June | Oct/Nov | Feb/March | Total Citations
        2018 | Papers 1, 2, 4 | Papers 1, 2, 4 | – | 6
        2019 | Papers 1, 2, 4 | Papers 1, 2, 4 | Paper 2 | 7
        2020 | Papers 1, 2, 4 | Papers 1, 2, 4 | Paper 2 | 7
        2021 | Papers 1, 2, 4 | Papers 2, 4 | Paper 2 | 6
        2022 | Papers 1, 2, 4 | Papers 1, 2, 4 | Paper 2 | 7
        2023 | Papers 1, 2, 4 | Papers 1, 2, 4 | Paper 2 | 7
        2024 | Papers 1, 2, 4 | Papers 1, 2, 4 | Paper 2 | 7
        2025 | Papers 1, 2, 4 | – | Papers 1, 2 | 5
    3. Category 3: Mechanics Misconceptions

      Mechanics, covering kinematics, dynamics, Newton’s laws, momentum, energy, and circular motion, is the most content-rich domain of the 9702 syllabus and the one in which examiner reports document the widest variety of distinct misconceptions. Four sub-categories are identified within this error category.

      1. Projectile Motion and Speed Variation in Two Dimensions

        A remarkably consistent finding across the Paper 1 examiner reports for 2018 through 2025 is that a large proportion of candidates fail to correctly understand how speed varies during projectile motion in the absence of air resistance. The March 2025 Paper 12 report observed that a question requiring candidates to identify the correct speed-time relationship for a projectile proved challenging for most candidates, with all four answer options selected with roughly equal frequency, indicating near-random guessing. The report noted that the question is best approached by applying the principle of conservation of energy: as the projectile rises, kinetic energy decreases and gravitational potential energy increases, so speed must decrease; however, because the horizontal component of velocity remains constant, speed cannot reach zero at the highest point. This two-step reasoning (energy conservation to establish the trend, and horizontal velocity persistence to exclude zero speed) was correctly applied only by the strongest candidates.

        This same conceptual deficit appears in multiple other reports in the corpus. The inability to decompose velocity into independent horizontal and vertical components, and to apply energy conservation to the total kinetic energy while recognising that the horizontal component contributes a constant non-zero minimum, represents a coherent but scientifically incorrect mental model in which projectile motion is treated as a one-dimensional problem.
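The correct two-component model can be checked numerically. The launch values below are hypothetical, chosen only to show that the speed minimum at the top of the flight equals the horizontal component rather than zero:

```python
import math

# Hypothetical projectile: launch speed 20 m s^-1 at 60 degrees, g = 9.81 m s^-2, no air resistance.
u, theta, g = 20.0, math.radians(60.0), 9.81
ux = u * math.cos(theta)          # horizontal component: constant for the whole flight
uy = u * math.sin(theta)          # vertical component: changes under gravity

def speed(t):
    vy = uy - g * t               # only the vertical component varies with time
    return math.hypot(ux, vy)     # speed is the magnitude of the velocity vector

t_top = uy / g                    # time at the highest point (vy = 0)
# At the top, speed equals the (non-zero) horizontal component, not zero:
print(round(speed(t_top), 3), round(ux, 3))
```

Treating the motion as one-dimensional (the misconception the reports describe) would predict `speed(t_top) == 0`, which the decomposition above shows is impossible while `ux` is non-zero.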

      2. Newton’s Third Law and Force Pairs

        The November 2023 Paper 11 report documents a specific question that proved highly challenging: candidates were asked to identify the force exerted by a person on the floor of an accelerating (downward) lift. A large number of candidates incorrectly stated that this force was equal to the person’s weight, reflecting a fundamental confusion between the weight of the person (W = mg) and the contact force between the person and the floor (which is the Newton’s Third Law reaction to the normal force N that the floor exerts on the person). The correct analysis requires applying Newton’s Second Law to the person: the net downward force on the person is W − N = ma, so N = W − ma < W. The force of the person on the floor equals N by Newton’s Third Law, not W. This error, which requires distinguishing between Newton’s Second Law (applied to one body) and Newton’s Third Law (relating forces on two bodies), is cited in reports across 2019, 2021, 2022, and 2023.
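The analysis reduces to a two-line calculation; the mass and acceleration below are hypothetical figures used only to show that N falls below W:

```python
# Person in a lift accelerating downward (hypothetical figures: 70 kg, a = 2.0 m s^-2).
m, g, a = 70.0, 9.81, 2.0

W = m * g              # weight of the person (about 687 N)
# Newton's Second Law on the person, taking downward as positive: W - N = ma
N = W - m * a          # normal force from the floor (about 547 N), less than W
force_on_floor = N     # Newton's Third Law: the person pushes on the floor with N, not W
```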

      3. Vector Treatment of Momentum

        Reports from 2018, 2019, 2021, 2022, and 2023 document a persistent failure in which candidates treat momentum as a scalar quantity rather than a vector quantity. The November 2023 report describes a question in which a ball rebounded from a wall with the same speed with which it struck: many candidates computed the change in momentum as zero (subtracting equal magnitudes) rather than recognising that the reversal of direction doubles the magnitude of the momentum change. The report explicitly states that candidates ‘should be able to recall that momentum is a vector quantity’ and apply sign conventions accordingly. This error reflects not merely a procedural lapse but a conceptual failure to attribute directional character to momentum, a failure that also produces errors in elastic collision problems.
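The scalar-versus-vector distinction is easy to demonstrate with a sign convention; the mass and speed here are hypothetical:

```python
# Ball rebounding from a wall at unchanged speed (hypothetical: m = 0.50 kg, 8.0 m s^-1).
m = 0.50
v_in = +8.0        # take the initial direction of travel as positive
v_out = -8.0       # the rebound reverses the sign of the velocity

dp_scalar = m * abs(v_out) - m * abs(v_in)   # 0.0: the scalar-treatment error
dp_vector = m * v_out - m * v_in             # -8.0 kg m s^-1: magnitude 2mv, not zero
print(dp_scalar, dp_vector)                  # 0.0 -8.0
```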

      4. Unit Conversion Errors in Kinematics

        A kinematics-specific sub-type of the unit handling error appears with particular frequency in Paper 1 and Paper 2 mechanics questions: the failure to convert velocity units before applying kinematic equations. The March 2025 Paper 12 report cites a question in which candidates were required to use a velocity given in km h⁻¹ (a common practical unit) within a kinematic calculation. Many candidates substituted the given numerical value directly without conversion, yielding answers in inconsistent units. The report notes that a ‘significant minority’ converted correctly but then applied the incorrect kinematic formula v² = u² + as (omitting the factor of 2), suggesting that formula recall is being disrupted by the additional cognitive load of the unit conversion.
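Both failure modes (skipping the conversion, and dropping the factor of 2) can be contrasted with the correct working in a short sketch; the braking scenario and its numbers are hypothetical:

```python
# Hypothetical braking problem: 72 km h^-1 to rest over 40 m; find the acceleration.
u_kmh = 72.0
u = u_kmh * 1000.0 / 3600.0     # convert first: 72 km h^-1 = 20.0 m s^-1
v, s = 0.0, 40.0

a = (v**2 - u**2) / (2.0 * s)   # correct: v^2 = u^2 + 2as  ->  a = -5.0 m s^-2
print(a)                        # -5.0

a_no_factor = (v**2 - u**2) / s              # omitting the 2 doubles the magnitude
a_no_convert = (v**2 - u_kmh**2) / (2.0 * s) # skipping the conversion: inconsistent units
```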

    4. Category 4: Wave Physics Misconceptions

      Wave physics, covering progressive waves, stationary waves, diffraction, interference, the electromagnetic spectrum, and the Doppler effect, generates the fourth most frequently documented category of error. Three sub-categories of wave error are identified in the examiner report corpus.

      1. Conflation of Progressive and Stationary Waves

        The single most recurrently cited wave misconception across the 2018–2025 period is the failure to distinguish between progressive and stationary waves. The 2022 November examiner report for Paper 4 documents that a question requiring candidates to sketch a stationary wave pattern was poorly answered by the majority, with many candidates drawing a progressive wave travelling from left to right rather than the characteristic standing pattern of a stationary wave. The examiner noted that this error was observed across multiple series, not only in the November 2022 examination.

        The March 2025 Paper 22 report provides additional detail on a related error: candidates who correctly identified that a stationary wave is formed by two waves travelling in opposite directions and superposing frequently failed to specify that in the given experimental setup, the two waves were the initial emitted wave and the wave reflected from a metal sheet. This omission, a failure to ground the abstract physical principle in the specific experimental context described in the question, is itself a recurring pattern: candidates demonstrate knowledge of the general principle while failing to apply it to the specific scenario.

        A further confusion documented in multiple reports involves the difference between nodes and antinodes in terms of amplitude versus displacement. Reports from 2020, 2022, and 2025 document candidates who defined antinodes as points of ‘maximum displacement’ rather than ‘maximum amplitude’, a distinction that is physically important because ‘amplitude’ refers to the maximum displacement from equilibrium, which is a fixed property of the antinode, while ‘displacement’ at the antinode varies continuously between +A and −A. Similarly, nodes are frequently described as points of ‘minimum or zero displacement’ rather than ‘zero amplitude.’
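The amplitude/displacement distinction follows directly from the standard stationary-wave form y(x, t) = 2A sin(kx) cos(ωt): the factor 2A sin(kx) is a fixed amplitude at each position, while cos(ωt) makes the displacement oscillate. A minimal numerical sketch (all values hypothetical):

```python
import math

# Stationary wave y(x, t) = 2A sin(kx) cos(wt): amplitude vs displacement.
A, wavelength = 1.0, 2.0                # hypothetical values
k = 2.0 * math.pi / wavelength
x_antinode = wavelength / 4.0           # sin(kx) = 1 at this position

amplitude = 2.0 * A * abs(math.sin(k * x_antinode))   # fixed property of the point: 2A

# Displacement at the same antinode varies continuously with time between +2A and -2A:
disps = [2.0 * A * math.sin(k * x_antinode) * math.cos(wt)
         for wt in (0.0, math.pi / 2.0, math.pi)]
```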

      2. Phase Difference and Path Difference

        Quantitative problems involving the relationship between path difference and phase difference generate consistent error across Paper 2 and Paper 4. The November 2023 report for Paper 4 describes a question requiring candidates to explain why two waves at a given point were 90° out of phase; the most common correct approach, noting that one wave has maximum displacement when the other has zero displacement, was found only among stronger candidates. Weaker candidates frequently conflated ‘phase difference’ with ‘path difference’, or applied the formula Δφ = (2π/λ)Δx without correctly identifying the relevant path difference from the experimental geometry.

        The March 2025 report specifically documents a widespread error in a question involving a quarter-wavelength path difference: many candidates incorrectly stated that the distance between two points was equal to half a wavelength rather than a quarter, suggesting that the relationship between standing wave geometry and wavelength is not reliably understood. This error type appears with sufficient frequency across years 2019, 2021, 2023, and 2025 to constitute a well-established pattern.
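        The arithmetic underlying this recurring error is small enough to verify directly. The short sketch below (with an invented wavelength, chosen only for illustration) confirms that a quarter-wavelength path difference corresponds to a 90° phase difference under Δφ = (2π/λ)Δx:

```python
import math

def phase_difference(path_difference: float, wavelength: float) -> float:
    """Phase difference in radians from path difference: dphi = (2*pi/lambda)*dx."""
    return (2 * math.pi / wavelength) * path_difference

# A quarter-wavelength path difference gives a 90 degree phase difference,
# the situation the March 2025 question describes.
wavelength = 0.68        # illustrative value, e.g. sound at 500 Hz in air
dx = wavelength / 4
dphi = phase_difference(dx, wavelength)
print(round(math.degrees(dphi), 6))  # 90.0
```

Candidates who instead assume a half-wavelength separation obtain 180°, the error the report documents.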

      3. Doppler Effect and Period Interpretation

        The March 2025 examiner report introduces a Doppler-related error type not prominently featured in earlier reports: candidates asked to describe the sound heard as a loudspeaker moves away frequently provided explanations in terms of frequency change while omitting reference to the period, or vice versa. More significantly, a minority of candidates demonstrated a deeper conceptual error by failing to understand that an increasing period in the sound wave recorded by a stationary observer implied that the source was accelerating away rather than merely moving at constant velocity, a conclusion that required careful graphical reasoning that most candidates did not apply.

    5. Category 5: Electricity and Circuit Misconceptions

      The Cambridge 9702 electricity curriculum covers topics from basic circuit analysis at AS level (current, resistance, potential divider circuits, Kirchhoff’s laws) through to capacitance, electromagnetic induction, and alternating current at A level. Examiner reports across 2018–2025 document several persistent electricity-related misconceptions.

      1. Circuit Difficulty and Practical Experience

        The November 2024 examiner report for Papers 12 and 13 explicitly noted that ‘candidates struggled on the circuit questions in particular and would benefit from more practical experience constructing and taking measurements from a variety of circuits.’ This observation, directly linking conceptual difficulty with a deficit in hands-on practical experience, echoes similar commentary in reports from 2019, 2021, 2022, and 2023. The recurring nature of this remark over five years suggests that the gap between theoretical circuit knowledge and practical circuit understanding is not narrowing.

      2. Resistance of Conductors with Non-Uniform Cross-Section

        A technically demanding question type that generates recurring difficulty involves the resistance of wires or conductors with non-uniform or varying cross-sectional areas. The March 2025 Paper 22 report provides a particularly detailed account of candidate errors on a question comparing two wires, P (uniform) and Q (non-uniform, tapering from larger to smaller radius). Many candidates focused on how the resistance of wire Q changes along its own length rather than comparing the total resistances of the two wires, demonstrating a failure to read the question holistically. A key insight, that the average cross-sectional area of the tapering wire Q is less than that of the uniform wire P and that wire Q is also longer than wire P, was identified by only the strongest candidates. The formula for resistance (R = ρL/A) was generally recalled correctly, but its application to geometrically complex scenarios was consistently problematic.
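        The underlying point, that total resistance depends on the geometry of the whole conductor, can be illustrated by summing R = ρL/A over thin slices of a tapering wire. The resistivity and dimensions below are invented, and equal lengths are assumed (unlike the examination question, where Q was also longer) so that the comparison isolates the effect of the taper alone:

```python
import math

RHO = 1.7e-8  # resistivity in ohm metres; illustrative copper-like value

def resistance_uniform(length, radius):
    """R = rho*L/A for a uniform cylindrical wire."""
    return RHO * length / (math.pi * radius**2)

def resistance_tapering(length, r_start, r_end, steps=100_000):
    """Sum rho*dx/A(x) over thin slices of a wire whose radius tapers linearly."""
    dx = length / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx                              # midpoint of the slice
        r = r_start + (r_end - r_start) * x / length    # linearly varying radius
        total += RHO * dx / (math.pi * r**2)
    return total

# Wire P: uniform, radius 0.50 mm.  Wire Q: tapers from 0.50 mm to 0.25 mm.
L = 1.0
R_p = resistance_uniform(L, 0.50e-3)
R_q = resistance_tapering(L, 0.50e-3, 0.25e-3)
print(round(R_q / R_p, 3))  # 2.0
```

The numerical sum reproduces the analytic result for a linear taper, R = ρL/(π r₁r₂), giving wire Q twice the resistance of wire P here; the wire with the smaller average cross-section has the greater resistance, exactly the comparison the strongest candidates made.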

      3. Drift Velocity and Current Distribution

        The March 2025 report additionally documents a misconception concerning the drift speed of charge carriers in a non-uniform wire. The correct analysis, that current is conserved at every cross-section (by Kirchhoff’s current law) so that drift speed is inversely proportional to cross-sectional area, was understood by most candidates in its first-order sense. However, a common misconception was that the drift speed varies in a specific incorrect pattern (often assumed to increase uniformly or remain constant), with candidates failing to account for the continuously varying cross-section of wire Q along its length. This suggests that while candidates can apply I = nAvq in uniform geometries, they struggle to extend it to continuously varying geometries.
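        A short numerical sketch of I = nAvq makes the point concrete; the current, radii, and carrier density below are illustrative values, not taken from the examination question:

```python
import math

E_CHARGE = 1.6e-19   # elementary charge in coulombs
N_DENSITY = 8.5e28   # free-electron number density, m^-3 (copper-like value)

def drift_speed(current, radius):
    """v = I / (n*A*q): current conservation fixes v at every cross-section."""
    area = math.pi * radius**2
    return current / (N_DENSITY * area * E_CHARGE)

I = 0.50  # illustrative current in amperes
v_wide = drift_speed(I, 0.50e-3)    # at the wide end of the tapering wire
v_narrow = drift_speed(I, 0.25e-3)  # at the narrow end

print(v_narrow / v_wide)  # 4.0
```

Halving the radius quarters the area and therefore quadruples the drift speed; along the taper the speed rises continuously with 1/A(x), which is precisely the behaviour candidates failed to describe.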

    6. Category 6: Significant Figures, Rounding, and Quantitative Reasoning

      Errors in the handling of significant figures, premature rounding of intermediate values, and the propagation of uncertainties represent the sixth recurring category of error. These errors appear primarily in Papers 2, 3, 4, and 5, and are among the most consistently documented procedural failures across all eight years of the study.

      1. Premature Rounding of Intermediate Values

        The March 2025 Paper 52 report provides explicit guidance, repeated with minor variations across many prior reports, on the problem of premature rounding: ‘When performing intermediate calculations within a question, candidates should take care to avoid premature rounding that changes the answer within the appropriate significant figures; as a general rule, any intermediate calculated values should always carry at least one more significant figure than will be used in the final answer.’ This instruction, reproduced almost verbatim in reports across 2020, 2021, 2022, and 2023, reveals that the advice is being given repeatedly without producing lasting behavioural change in the candidate population.
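        The practical consequence of this advice is easy to demonstrate. In the sketch below (a pendulum-period calculation with invented values), rounding an intermediate square root to two significant figures changes the third significant figure of the final answer:

```python
import math

# Pendulum period T = 2*pi*sqrt(L/g); L and g are illustrative values,
# not drawn from any specific 9702 question.
L, g = 0.85, 9.81

exact = 2 * math.pi * math.sqrt(L / g)              # carry full precision
rounded = 2 * math.pi * round(math.sqrt(L / g), 2)  # prematurely round to 2 s.f.

print(round(exact, 2))    # 1.85
print(round(rounded, 2))  # 1.82 -- the third significant figure has changed
```

Carrying one extra significant figure through the intermediate step (0.294 rather than 0.29) restores the correct final answer, which is exactly the rule the examiners state.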
      2. Significant Figures in Measured Values

        The March 2025 Paper 33 report distinguishes between the appropriate precision of measured values and the appropriate significant figures in calculated values, a distinction that many candidates fail to maintain. Specifically, the report states that ‘candidates should be given clear advice regarding the resolution of measuring instruments and the correct precision of recorded values, distinguishing this from significant figures in calculated values. The number of decimal places in a measured value should never be forced to give the number of significant figures consistent with a calculated value.’ This addresses a specific error in which candidates over-specify their raw measurements (recording, for example, 4.50 cm from a ruler that can be read only to the nearest mm) in an attempt to achieve a calculated answer with the ‘right’ number of significant figures.

      3. Significant Figures in Logarithmic Quantities

        A more specialised application of significant figure conventions, the treatment of decimal places in logarithmic quantities, generates recurring errors in Paper 5. The March 2025 Paper 52 report notes that ‘candidates need to understand that the number of decimal places in a logarithmic quantity should correspond to the number of significant figures’ of the original data. This convention, under which log(3.45 × 10²) should be given as 2.538 (three decimal places, matching three significant figures in the mantissa), is rarely taught explicitly in secondary physics courses, yet it is regularly assessed and regularly failed in the 9702 Paper 5 examination.
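        The convention can be encoded in a few lines; the helper function below is purely illustrative:

```python
import math

def log_with_matching_dp(value: float, sig_figs: int) -> str:
    """Format log10(value) with decimal places equal to the sig figs of the data."""
    return f"{math.log10(value):.{sig_figs}f}"

# 3.45 x 10^2 carries three significant figures, so its log is quoted to 3 d.p.
print(log_with_matching_dp(3.45e2, 3))  # 2.538
```

The integer part of the logarithm reflects only the power of ten, so it is the decimal places of the mantissa, not the total digit count, that must track the precision of the data.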
    7. Category 7: Practical and Planning Skill Deficiencies

      Practical and planning skills, assessed primarily through Papers 3 and 5, represent the area in which examiner reports offer the most specific and actionable feedback. The examination of practical skills presents unique challenges because many of the errors involved cannot be corrected by conceptual understanding alone; they require physical, hands-on laboratory experience of a type that is difficult to simulate through textbook or worksheet-based study.

      1. Lines of Best Fit

        The correct construction of a line of best fit through a set of experimental data points is among the most consistently failed practical tasks across the entire 2018–2025 corpus. Examiner reports across every year of the study period note that candidates draw lines that are too steep, too shallow, systematically offset from the centroid of the data, or that pass through an excessive proportion of data points rather than balancing the scatter on both sides. The March 2025 Paper 33 report recommends that ‘candidates should be given opportunities to practice drawing lines of best fit using 30 cm rulers, and should check the appropriateness of the lines they produce’, indicating that the physical act of drawing a straight line with a ruler through scattered data is a skill that requires dedicated practice rather than simply conceptual understanding of what a line of best fit means.
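        One check candidates can apply to a hand-drawn line is that a least-squares line of best fit always passes through the centroid of the data, which is exactly the balance property the examiners describe. The sketch below, on invented data, verifies this directly from the normal equations:

```python
# Least-squares fit computed by hand from the normal equations, on invented
# scattered data; the fitted line always passes through the centroid.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]

n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / sum(
    (x - x_bar) ** 2 for x in xs)
intercept = y_bar - slope * x_bar

# The centroid lies exactly on the fitted line:
print(abs((slope * x_bar + intercept) - y_bar) < 1e-12)  # True
```

A line that is too steep, too shallow, or offset from the centroid fails this check, so plotting the centroid first gives students a concrete anchor for the ruler.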

      2. Control of Variables and Experimental Design

        Paper 5 planning questions require candidates to design an experiment to test a given hypothesis, specifying the independent variable, dependent variable, control variables, measurement procedures, and data analysis method. Examiner reports from 2018 through 2025 consistently note that while candidates can usually identify the independent and dependent variables, they frequently fail to identify all relevant control variables, do not explain how measurements are to be taken (specifying the instruments to be used and how they are positioned), and produce analysis plans that are vague or incomplete. The March 2025 Paper 52 report notes that ‘candidates’ responses should include detailed explanations of experimental procedures such as how to control variables, how to take measurements and how to analyse the data’, a formulation that implies candidates habitually omit these details.

      3. Uncertainty Analysis and Percentage Uncertainty

        Uncertainty analysis, including the calculation of absolute, fractional, and percentage uncertainties and the propagation of uncertainties through calculations, generates specific recurring errors documented across Papers 3 and 5. Reports from multiple years note that candidates frequently double the percentage uncertainty in a diameter measurement when computing the uncertainty in the radius (failing to recognise that division by the exact factor of two leaves the fractional uncertainty unchanged), or fail to correctly combine uncertainties when quantities are multiplied or divided (which requires addition of fractional uncertainties rather than absolute uncertainties). The March 2025 Paper 52 report requires ‘a full understanding of significant figures and the treatment of uncertainties,’ noting that ‘the numerical answers towards the end of Question 2 require candidates to show all their working and for the values to be correctly evaluated with appropriate units.’
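        Both rules can be checked in a few lines of arithmetic; the diameter and its uncertainty below are invented values:

```python
# Invented measurement: a diameter with an absolute uncertainty.
d, delta_d = 1.20e-3, 0.01e-3

# Dividing by the exact factor 2 halves both the value and the absolute
# uncertainty, so the PERCENTAGE uncertainty in the radius is unchanged.
r, delta_r = d / 2, delta_d / 2
pct_d = 100 * delta_d / d
pct_r = 100 * delta_r / r
print(pct_d, pct_r)  # identical values

# For A = pi*r^2, fractional uncertainties ADD: delta_A/A = 2*(delta_r/r).
pct_A = 2 * pct_r
print(round(pct_A, 2))  # 1.67
```

The percentage uncertainty doubles only when the quantity itself is squared (as in the area), not when it is divided by an exact constant, which is precisely the distinction the reports find candidates confusing.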

  5. DISCUSSION
    1. The Longitudinal Stability of Recurring Errors

      Perhaps the most striking finding of this eight-year content analysis is the near-complete stability of the error categories across the study period. With the partial exception of Doppler-related errors, which appear with somewhat greater prominence in post-2021 reports (consistent with the introduction of more complex Doppler questions in the revised 9702 syllabus), every major error category documented in the 2018 reports is present, with essentially unchanged frequency, in the 2025 reports. This longitudinal stability carries a clear and sobering implication: the errors identified by Cambridge examiners are not being addressed effectively by current global instructional practice.

      The persistence of definitional imprecision across eight years is particularly informative from a constructivist theoretical perspective. The Conceptual Change Model of Posner et al. (1982) predicts that students will not revise deeply held conceptions unless they are dissatisfied with them. Students who state that potential difference is ‘energy per unit charge’ and receive partial credit, or who are corrected by a teacher but do not understand why the distinction matters, are not likely to experience the productive dissatisfaction that drives genuine revision. The examiner reports, read in this light, suggest that definitional precision is being treated as a surface-level examination skill rather than as a symptom of deep conceptual understanding, and that instruction is correspondingly failing to develop that understanding.

    2. The Gap Between Procedural and Conceptual Understanding

      A second major theme across the examiner reports is the persistent gap between candidates who can recall and apply formulas in familiar contexts and those who can reason conceptually about unfamiliar situations. This pattern is most visible in mechanics (where projectile motion questions requiring conservation of energy reasoning rather than kinematic formula application consistently differentiate strong from weak candidates) and in wave physics (where the distinction between progressive and stationary wave properties requires conceptual rather than algorithmic reasoning). Driver (1981) described this phenomenon as ‘context dependence’: students apply the correct conception in familiar contexts while reverting to misconceptions in novel ones, and it is precisely what the 9702 examiner reports document across eight years.

      The recommendation implied by this finding, that instruction should include deliberate practice with novel and unfamiliar contexts rather than repeated practice of familiar problem types, is one that many teachers, constrained by curriculum pacing and examination preparation pressures, find difficult to implement. Yet the examiner reports make clear that it is in precisely these novel contexts that marks are being lost.

    3. Practical Skills and the Importance of Hands-On Experience

      The examiner reports for Papers 3 and 5 constitute a consistent and unambiguous argument for hands-on laboratory education as an irreplaceable component of physics training. The inability to draw an accurate line of best fit, the failure to design adequate controls in a planned experiment, and the difficulty in handling uncertainty propagation are all skills that can be developed only through repeated practice with real apparatus, real data, and real measurement uncertainty, not through textbook exercises alone. The November 2024 report’s observation that ‘candidates would benefit from more practical experience constructing and taking measurements from a variety of circuits’ echoes similar recommendations in every series of the preceding seven years. This is not a new insight; it is a finding that the examiner corpus has been repeating, year after year, without producing a measurable response.

  6. PEDAGOGICAL RECOMMENDATIONS FOR CAMBRIDGE 9702 TEACHERS

    The following recommendations are derived directly and systematically from the patterns identified in the eight-year examiner report analysis. Each recommendation is grounded in specific examiner feedback and supported by established principles of physics education research.

    1. Recommendation 1: Teach Definitions as Conceptual Frameworks, Not Verbal Formulas

      The most impactful single improvement a 9702 teacher can make is to change the way physical definitions are taught and assessed. Rather than treating definitions as verbal formulas to be memorised and reproduced, teachers should present each definition as a precise conceptual statement and examine the physical significance of every word. For the definition of potential difference, this means explicitly examining why the definition must specify ‘work done per unit positive charge’ rather than simply ‘energy transferred per unit charge’, and demonstrating, through a worked example, how the more imprecise version fails to distinguish between energy and work in contexts where they differ. Every major definition in the 9702 syllabus should be taught using this approach: state the precise definition, identify the key discriminating elements, and demonstrate the physical consequences of omitting each element.

    2. Recommendation 2: Integrate SI Prefix Conversion into Every Calculation Exercise

      Unit handling errors are so persistently widespread that piecemeal correction (reminding students to convert units) is demonstrably insufficient. A more effective approach is to embed unit conversion systematically as the first step of every numerical problem, making it an inescapable procedural habit rather than an optional preliminary. Teachers should additionally introduce the ‘order of magnitude check’, a rapid mental estimate of whether the answer has a physically plausible magnitude, as a standard final step in every calculation. This two-step discipline (convert first, check at the end) directly addresses both the prefix error and the failure to detect obviously wrong numerical results that the examiner reports document across all eight years.
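      The ‘convert first, check at the end’ discipline can be modelled as a simple two-step routine; the prefix table and the example quantity below are illustrative:

```python
# Step 1: normalise every quantity to base SI units before calculating.
PREFIX = {'G': 1e9, 'M': 1e6, 'k': 1e3, '': 1.0,
          'c': 1e-2, 'm': 1e-3, 'u': 1e-6, 'n': 1e-9}

def to_base(value: float, prefix: str) -> float:
    """Convert a prefixed value (e.g. 250 nm) to base SI units."""
    return value * PREFIX[prefix]

# An invented example: a 250 nm wavelength expressed in metres.
wavelength = to_base(250, 'n')
print(wavelength)

# Step 2: the order-of-magnitude check -- a light-like wavelength should sit
# between roughly 1e-9 m and 1e-6 m, so a prefix slip is caught immediately.
assert 1e-9 < wavelength < 1e-6
```

A candidate who mistakenly treats ‘n’ as ‘m’ obtains 0.25 m, a value the final-step magnitude check rejects at a glance.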

    3. Recommendation 3: Use Conservation of Energy as the Primary Lens for Mechanics Problems

      The persistent failure of candidates to correctly reason about projectile motion speed, work done by perpendicular forces, and related mechanics scenarios reflects an over-reliance on kinematic equations and an under-appreciation of energy methods. Teachers should explicitly contrast the kinematic and energy approaches to mechanics problems, demonstrating through carefully chosen examples that energy conservation provides direct access to speed at any point in a trajectory without requiring the decomposition of velocity into components. The examiner reports consistently reward candidates who apply energy methods to projectile problems; this approach should be taught as a first-resort strategy for problems involving speed or height, rather than as an alternative to be considered only when kinematic equations fail.
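      The equivalence of the two routes, and the economy of the energy method, can be verified numerically; the launch speed, angle, and height below are invented values:

```python
import math

g = 9.81
v0 = 20.0                    # illustrative launch speed, m/s
angle = math.radians(35.0)   # illustrative launch angle
h = 5.0                      # illustrative height of interest, m

# Kinematic route: resolve components, evolve the vertical component, recombine.
vx = v0 * math.cos(angle)
vy0 = v0 * math.sin(angle)
vy_sq = vy0**2 - 2 * g * h
v_kinematic = math.sqrt(vx**2 + vy_sq)

# Energy route: (1/2)v0^2 = (1/2)v^2 + g*h gives the speed at height h directly,
# with no component decomposition at all.
v_energy = math.sqrt(v0**2 - 2 * g * h)

print(math.isclose(v_kinematic, v_energy))  # True
```

The energy route reaches the answer in one line because vx² + vy0² is simply v0²; this algebraic collapse is the reason examiners reward the energy method for speed-at-height questions.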

    4. Recommendation 4: Contrast Progressive and Stationary Waves Through Direct Demonstration

      The conflation of progressive and stationary waves is best addressed through direct experimental demonstration, not through verbal explanation or textbook diagrams. A slinky spring, a vibrating string apparatus, or a Melde’s experiment with a signal generator produces a stationary wave that students can observe in physical reality, making the distinction between the travelling wave model (in which the wave pattern moves through space) and the stationary wave model (in which the pattern is spatially fixed and energy is stored rather than transmitted) immediately tangible. Teachers should additionally require students to draw annotated diagrams of both wave types, explicitly labelling amplitude, displacement, nodes, and antinodes, before they encounter any examination-style questions. The examiner reports reveal that the errors in this area are consistently representational (students are drawing the wrong picture), suggesting that the remedy is more careful attention to the physical picture.

    5. Recommendation 5: Establish Significant Figure and Uncertainty Protocols as Non-Negotiable Habits

      The only way to eliminate premature rounding errors is to make correct significant figure practice a mandatory, non-negotiable habit from the first week of the course. Teachers should insist that all intermediate calculations carry at least one additional significant figure beyond what will be required in the final answer, and that all raw measured values are recorded with precision consistent with the instrument’s resolution, not with precision consistent with some desired number of significant figures in a calculation. Regular, brief uncertainty analysis exercises (calculating absolute, fractional, and percentage uncertainties for simple quantities, and propagating uncertainties through one-step calculations) should be embedded into the routine of every practical lesson rather than reserved for Paper 5 revision.

    6. Recommendation 6: Maximise Time Spent on Genuine Practical Laboratory Work

      The practical skill deficiencies documented across Papers 3 and 5 cannot be remedied through written exercises, video demonstrations, or simulation software alone. Cambridge examiners have stated explicitly, across multiple years, that hands-on practical experience with real equipment, including the experience of drawing lines of best fit through real scatter, estimating measurement uncertainties from real instrument limitations, and designing workable experimental setups, is essential. Teachers should treat the practical lesson not as supplementary enrichment but as a core curriculum component equal in importance to formal instruction. Each practical session should include explicit, assessed tasks on line-of-best-fit quality, significant figures in raw data, and uncertainty estimation, so that these skills receive the same repeated practice as theoretical understanding.

  7. CONCLUSIONS

This paper has presented a systematic thematic analysis of Cambridge International A-Level Physics 9702 Principal Examiner Reports spanning the period 2018 to 2025. Seven major categories of recurring error have been identified: imprecise physical definitions and terminology; unit handling and SI prefix errors; mechanics misconceptions (including projectile speed variation, Newton’s law confusion, and vector treatment of momentum); wave physics misconceptions (including progressive–stationary wave confusion, node–antinode mislabelling, and phase difference errors); electricity and circuit misconceptions; significant figure and rounding errors; and practical and planning skill deficiencies.

The most significant finding of this analysis is the longitudinal stability of these error categories across eight years of examination series. None of the seven categories shows a clear trend of diminution; all persist with near-constant frequency, suggesting that current instructional approaches are not systematically addressing the underlying conceptual and procedural failures. This finding has direct implications for physics teachers working within the Cambridge 9702 framework: the errors that cost students marks in the 2018 examination are, in overwhelmingly large measure, the same errors that will cost students marks in the 2025 examination.

The pedagogical recommendations derived from this analysis (teaching definitions as conceptual frameworks, embedding unit conversion as a mandatory habit, prioritising energy methods in mechanics, demonstrating wave physics phenomena directly, enforcing significant figure protocols from the beginning of the course, and protecting time for genuine laboratory work) are not novel in isolation. What is novel is their grounding in eight consecutive years of authoritative examiner feedback, which provides an empirical warrant for prioritising these specific interventions over competing demands on instructional time.

Future research should extend this analytical approach to the full set of CAIE science examiner reports across other subjects and qualification levels, to determine whether the error patterns documented for 9702 Physics are specific to that subject or reflect broader patterns in advanced scientific thinking. Research is also needed to evaluate the effectiveness of specific instructional interventions in reducing the frequency of each error category in subsequent examinations, connecting the diagnostic insights of examiner report analysis to the intervention-testing methodology of classroom-based educational research. Together, these research directions would strengthen the evidence base for physics education in the Cambridge International framework and beyond.

REFERENCES

  1. Ausubel, D. P. (1968). Educational psychology: A cognitive view. Holt, Rinehart and Winston.
  2. Cambridge Assessment International Education. (2018–2025). Principal Examiner Reports for Cambridge International AS and A Level Physics 9702 [May/June, October/November, February/March series]. University of Cambridge. Retrieved from https://www.cambridgeinternational.org
  3. Driver, R. (1981). Pupils’ alternative frameworks in science. European Journal of Science Education, 3(1), 93–101. https://doi.org/10.1080/0140528810030109
  4. Driver, R., & Easley, J. (1978). Pupils and paradigms: A review of literature related to concept development in adolescent science students. Studies in Science Education, 5(1), 61–84. https://doi.org/10.1080/03057267808559857
  5. Gilbert, J. K., Watts, D. M., & Osborne, R. J. (1982). Students’ conceptions of ideas in mechanics. Physics Education, 17(2), 62–66. https://doi.org/10.1088/0031-9120/17/2/309
  6. Halloun, I. A., & Hestenes, D. (1985). Common sense concepts about motion. American Journal of Physics, 53(11), 1056–1065. https://doi.org/10.1119/1.14031
  7. Hestenes, D., Wells, M., & Swackhamer, G. (1992). Force concept inventory. The Physics Teacher, 30(3), 141–158. https://doi.org/10.1119/1.2343497
  8. Krippendorff, K. (2004). Content analysis: An introduction to its methodology (2nd ed.). SAGE Publications.
  9. Neuendorf, K. A. (2017). The content analysis guidebook (2nd ed.). SAGE Publications. https://doi.org/10.4135/9781071802878
  10. Osborne, R., & Freyberg, P. (1985). Learning in science: The implications of children’s science. Heinemann.
  11. Piaget, J. (1970). The science of education and the psychology of the child. Orion Press.
  12. Posner, G. J., Strike, K. A., Hewson, P. W., & Gertzog, W. A. (1982). Accommodation of a scientific conception: Toward a theory of conceptual change. Science Education, 66(2), 211–227. https://doi.org/10.1002/sce.3730660207

  13. Redish, E. F. (2003). Teaching physics with the Physics Suite. John Wiley & Sons.
  14. Shipstone, D. (1984). A study of children’s understanding of electricity in simple DC circuits. European Journal of Science Education, 6(2), 185–198. https://doi.org/10.1080/0140528840060208
  15. Vosniadou, S. (2013). Conceptual change in learning and instruction: The framework theory approach. In S. Vosniadou (Ed.), International handbook of research on conceptual change (2nd ed., pp. 11–30). Routledge.
  16. Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press.
  17. White, R., & Gunstone, R. (1992). Probing understanding. Falmer Press.
  18. Winer, G. A., Cottrell, J. E., Gregg, V., Fournier, J. S., & Bica, L. A. (2002). Fundamentally misunderstanding visual perception: Adults’ belief in visual emissions. American Psychologist, 57(6–7), 417–424. https://doi.org/10.1037/0003-066X.57.6-7.417

DECLARATION OF ORIGINALITY

This paper is an original synthesis and scholarly analysis based on publicly available Cambridge International Examinations Principal Examiner Reports for Physics 9702 (2018–2025). All observations, interpretations, analytical frameworks, and recommendations are independently produced. This paper does not reproduce copyrighted examination content.