
A Variational Framework for Budgeted Information-Preserving Subgraph Selection on Weighted Relational Structures

DOI : 10.17577/IJERTV15IS040545



Existence, Stability, and Structural Properties of Optimal Budgeted Subgraphs

Chandrasekhar Gokavarapu (1)

ORCID ID: https://orcid.org/0009-0006-5306-371X

Dr Duggirala V N S R Murthy (2) Dr Ch Srinivasulu (3)

(1,2,3) Lecturer in Mathematics, Government College (Autonomous), Rajahmundry, Andhra Pradesh, India, PIN: 533105

Abstract – We develop a finite variational framework for budgeted information-preserving subgraph selection on weighted relational structures. For a fixed ambient structure G, an admissible substructure H is evaluated by an information-retention functional I(H; G) under a complexity budget constraint C(H) ≤ β. Within this framework we prove discrete existence of optimizers, a relaxed compact existence theorem, existence of inclusion-minimal optimal representatives, perturbation stability of the optimal value under uniform continuity hypotheses, and local constancy of a unique optimizer under an isolation-gap condition. We also formulate a quantitative relaxation-and-recovery principle and prove two obstruction results showing that value stability does not imply optimizer stability and that ambient connectedness does not force the existence of a connected optimizer. Finally, we verify the abstract axioms in additive edge-potential and coverage-type submodular model classes. The emphasis throughout is on exact hypothesis tracking, clear theorem boundaries, and a proof-secure finite theory rather than on heuristic graph-pruning procedures.

  1. INTRODUCTION

    ‌Budget-constrained subgraph selection appears, in one form or another, across several established parts of graph theory and discrete optimization. Foundational work on weighted graph structure, Laplacians, connectivity, and isoperimetric phenomena provides a rich language for measuring how much of an ambient graph is retained by a smaller one [1, 2, 3, 4]. Closely related optimization principles underlie ratio-cut, normalized-cut, and spectral partitioning methods [5, 6, 7, 8], while local clustering methods show that one can often extract a small subgraph carrying a substantial part of the local expansion or diffusion profile of the ambient graph [9, 10, 11, 12]. At the same time, community-detection and network-decomposition literatures treat the problem of selecting informative subnetworks from a large weighted structure from a more algorithmic or heuristic perspective [13, 14].

A second body of work concerns graph reduction, sparsification, and coarsening. Spectral sparsification constructs sparse surrogates that preserve quadratic-form or cut information of the original graph in a provable sense [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26]. Related lines of research study vertex sparsifiers, cut sparsifiers, directed sparsifiers, hypergraph sparsifiers, and reweighted Cheeger-type principles [27, 28, 29, 30, 31]. Other reduction frameworks seek smaller graphs with controlled spectral, resistive, or structural distortion [32, 33, 34, 35, 36, 37, 38]. A third neighboring literature comes from graph-based data representation and graph signal processing, where one attempts to retain the relevant information of a weighted relational object under transformation or compression [39, 40, 41, 42]. Finally, whenever the retained information is encoded by a monotone or submodular set functional, one enters the classical theory of budgeted submodular maximization and knapsack-constrained selection [43, 44, 45, 46, 47].

The present paper does not repackage any one of these theories. It isolates a different mathematical question. Existing literatures provide important preservation mechanisms, approximation paradigms, and optimization principles, but they do not by themselves yield a unified variational theory of budgeted information-preserving subgraph selection on finite weighted relational structures. In particular, they do not furnish, in a single theorem-secure framework, an axiomatic treatment that simultaneously formalizes

    1. an information-retention functional,

    2. a complexity or budget functional,

    3. existence of optimizers over a discrete nonconvex family of substructures,

    4. structural properties of optimal subgraphs,

    5. perturbation stability of optimal values and near-optimizers, and

    6. obstruction principles showing where such a theory must fail.

That missing variational core is the object of this article. Let G = (V, R, wt) be a finite weighted relational structure. We write Sub(G) for the family of admissible weighted substructures of G. A feasible substructure H ∈ Sub(G) is required to satisfy the budget bound

C(H) ≤ β,

where C is a prescribed complexity functional. The retained information of H relative to G is measured by a functional I(H; G), and the corresponding loss is

L(H; G) := I(G; G) − I(H; G).

The associated optimization problem is

sup { I(H; G) : H ∈ Sub(G), C(H) ≤ β }.
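The optimization problem above can be made concrete on a toy instance. The following sketch is ours, not the paper's: it assumes the edge-budget model C(H) = |R(H)| over spanning subgraphs, and a purely illustrative retention functional (total retained weight), then computes the optimal value by brute-force enumeration.

```python
from itertools import combinations

# Toy ambient structure G = (V, R, wt): a weighted triangle plus a pendant edge.
# Sub(G) is taken to be all spanning subgraphs (edge subsets), with the
# edge-budget model C(H) = |R(H)|.  As a hypothetical retention functional we
# take total retained weight, I(H; G) = sum of wt over the kept edges.
wt = {("a", "b"): 3.0, ("b", "c"): 2.0, ("a", "c"): 1.0, ("c", "d"): 5.0}

def I(edges):           # retained information (illustrative choice only)
    return sum(wt[e] for e in edges)

def C(edges):           # complexity functional: number of kept edges
    return len(edges)

def opt(beta):
    """Opt(G, beta) = max { I(H; G) : H in Sub(G), C(H) <= beta }."""
    feasible = [frozenset(s)
                for k in range(len(wt) + 1)
                for s in combinations(wt, k)
                if k <= beta]
    best = max(feasible, key=I)
    return I(best), best

value, maximizer = opt(beta=2)
print(value, sorted(maximizer))   # → 8.0 [('a', 'b'), ('c', 'd')]
```

Enumeration is exponential in |R| and serves only to fix ideas; the paper's point is precisely that finiteness of Sub(G) makes existence trivial while structure and stability remain delicate.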

At this level, the problem is elementary to state but not mathematically automatic. The family Sub(G) is finite for each fixed G, but the structure of optimizers is delicate because the admissible family is discrete, the budget restriction is combinatorial, and the retention functional may interact with weights, connectivity, and support constraints in a highly nonlocal way. Moreover, without a disciplined hypothesis package, the formulation quickly becomes either vacuous or overfitted to a single preservation mechanism.

    The aim of the paper is therefore not to propose a new heuristic pruning rule. The aim is to formulate and study an axiomatic variational problem on finite weighted relational structures. The central question is the following.

Question I.1. For which classes of information-retention functionals I and complexity functionals C does the budgeted optimization problem

sup { I(H; G) : H ∈ Sub(G), C(H) ≤ β }

admit (a) optimizers, (b) structural descriptions of optimizers, (c) perturbation stability of optimal values and near-optimizers, and (d) approximation principles under suitable relaxations?

This formulation is motivated by, but not reducible to, several earlier themes. Spectral partitioning studies cut and conductance objectives [5, 7, 3, 8]. Local clustering studies localized diffusion and expansion surrogates [9, 10, 11, 12]. Sparsification theory studies preservation of cuts, resistances, or quadratic forms under edge reduction [15, 16, 17, 19, 21, 26]. Graph reduction and coarsening study smaller surrogates with controlled distortion [32, 33, 34, 35, 36, 37]. Submodular optimization studies budgeted maximization at the level of abstract set functions [43, 44, 47]. None of these theories, however, directly gives the present object: an ambient-graph-dependent, axiomatic optimization theory on subgraphs themselves, with simultaneous attention to existence, structure, perturbation, and failure modes. That is the gap filled here.

    The paper is organized around four theorem-bearing layers.

    1. an axiomatic definition of admissible information- retention and complexity functionals;

    2. an existence theory for optimal budgeted subgraphs;

3. structural results describing how optimal solutions interact with monotonicity, saturation, connectivity, modularity, or curvature-type hypotheses;

    4. a stability and approximation layer controlling the dependence of optimal values and near-optimizers on perturbations of the ambient weighted structure.

    The article also includes an obstruction layer. This is necessary. A theory of this type is not credible unless it records precisely which natural hypotheses cannot be removed without destroying existence or stability.

    Position of the paper in the literature

The paper lies at the intersection of finite variational theory, weighted graph theory, and discrete optimization on structured families of subgraphs. Its novelty claim is modest and specific. It does not claim a complete classification of all information-preserving subgraph problems. It does not claim to subsume the full sparsification literature, the full graph reduction literature, or the full submodular optimization literature. Instead, it isolates a proved core that is not presently available in those theories taken separately.

The graph-theoretic side of the background comes from weighted connectivity, Laplacian structure, directed Laplacians, spectral embeddings, and cut principles [1, 2, 3, 4, 39, 7, 5, 6, 8]. These works show that weighted substructures may preserve mathematically meaningful information, but they do not formulate a general budgeted optimization theory over abstract subgraph families. The local partitioning and community-detection literature gives further evidence that small, informative subgraphs are natural objects [9, 10, 11, 12, 13, 14], but again the main emphasis there is algorithmic extraction, not axiomatic variational structure.

The preservation side of the background comes from sparsification, cut preservation, resistance preservation, streaming subgraph sketches, vertex sparsifiers, directed sparsifiers, and hypergraph sparsifiers [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 29, 30, 31, 26]. These works prove strong preservation theorems for specific objectives. They are indispensable background. But their objects are usually sparse surrogates preserving a designated analytic quantity, not optimizers of a general ambient-dependent information-retention functional under a separate complexity budget. Our framework is therefore adjacent to this literature, but not a restatement of it.

The reduction side of the background comes from Kron reduction, spectral simplification, spectral reduction, graph coarsening, and recent graph-reduction surveys [32, 33, 34, 35, 36, 37, 38]. These works show that one can replace a large graph by a smaller one while controlling selected distortions. They do not, however, by themselves provide an abstract existence, structure, and stability theory for optimal subgraphs defined by a retention functional and a budget functional on a general weighted relational structure.

The optimization side of the background comes from submodular maximization and knapsack-constrained selection [43, 44, 45, 46, 47]. This literature supplies an important comparison point. If I(·; G) reduces, after encoding, to a monotone submodular set function on edges or vertices, then approximation phenomena become accessible. But the present paper does not assume submodularity a priori, and several of the structural and stability questions considered here do not belong to classical submodular maximization in that form. Submodularity, when it appears, is treated as one sufficient mechanism among others, not as the definition of the subject.

    A further neighboring background comes from graph signal processing and graph-based representation theory [40, 41, 42]. This literature reinforces the idea that a weighted relational structure carries information that may or may not survive reduction. What it does not supply is a finite variational theory of optimal retained subgraphs under an explicit complexity budget.

Accordingly, the paper should be read as a foundational article. Its contribution is to define a mathematically controlled class of budgeted information-preserving subgraph problems and to prove a first layer of general theorems for that class. The scope is intentionally narrower than a universal theory and broader than a single preservation mechanism. That scope control is deliberate.

    Main provisional contributions

    The following list is stated in referee-facing form. Each item is to be retained only at the level fully supported by the proofs in the final manuscript.

    (C1) A precise axiomatic class of information-retention functionals I and budget functionals C on finite weighted relational structures, formulated so that the optimization problem is mathematically nontrivial yet theorem-accessible.

    (C2) A proved existence theorem for optimal budgeted subgraphs under a transparent hypothesis package. In the finite setting, existence is not the issue by itself; the issue is the correct formulation of the admissible family and retention functional so that the optimization problem is nondegenerate and structurally meaningful.

    (C3) A structure theorem under additional assumptions identifying when optimizers may be chosen inside a distinguished subclass, for example connected, inclusion-minimal, support-saturated, or otherwise canonical representatives.

    (C4) A perturbation theorem for optimal values, together with a near-optimizer stability statement when the hypothesis package is strong enough to justify such a conclusion. The theorem is intended to make explicit which continuity or semicontinuity assumptions are actually used.

(C5) An approximation or relaxation theorem connecting the discrete budgeted problem to a relaxed variational problem under additional assumptions such as monotonicity, modularity, or submodular-type control, in dialogue with classical budgeted set-function optimization [43, 44, 47].

(C6) An obstruction theorem showing that without an appropriate monotonicity, continuity, or admissibility hypothesis, existence, canonical structure, or stability can fail. This negative result is part of the mathematical contribution, not a peripheral remark.

(C7) Verification results in one or two substantial natural model classes. This is needed to prevent the axiomatic framework from becoming formally correct but mathematically detached.

    Methodological remarks

The paper is written as a pure theoretical mathematics article. It contains no empirical section, no simulation-based validation, and no claims of practical superiority over existing graph-pruning methods. Algorithmic material, when included, is admitted only at the level of formal approximation mechanisms attached directly to proved statements. The article also avoids inflated language. Terms such as optimal, sharp, minimal, or classification are used only where the corresponding proof justifies them.

    The basic strategy is to separate three layers throughout:

    1. proved results and corollaries,

2. interpretive remarks explaining their relation to existing theories, and

    3. open problems recording what the present framework does not yet settle.

    This separation is essential for theorem integrity.

    Road map

Section II fixes notation for weighted relational structures, substructures, admissible families, and the underlying order-theoretic language. Section III defines the budgeted optimization problem and isolates the admissible hypothesis packages for I and C. Section IV proves the first existence theorem and identifies the exact role of finiteness, admissibility, and upper semicontinuity. Section V studies structural properties of optimizers under additional monotonicity, saturation, and canonical-representative hypotheses. Section VI proves perturbation results for optimal values and, where justified, for near-optimizers. Section VII introduces relaxed problems and approximation principles. Section VIII records failure modes and separation phenomena showing that the main hypotheses are not formal decoration. Section IX verifies the abstract axioms in natural model classes informed by preservation and reduction mechanisms from the surrounding literature [16, 17, 32, 36]. The final section separates proved conclusions from open problems.

2. PRELIMINARIES AND NOTATION

This section fixes the discrete framework used throughout the paper. The purpose is not merely terminological. The later existence, structure, stability, and approximation theorems depend on a careful separation between the ambient weighted object, the admissible family of substructures, the complexity budget, and the information-retention functional. We therefore fix the notation once and keep it stable.

The graph-theoretic background is standard at the level of weighted graphs, Laplacians, connectivity, and cut-based structure; see, for example, [1, 2, 3, 4, 8]. Several neighboring literatures motivate particular choices of retention functionals or admissible substructures, including local graph partitioning [9, 10, 11, 12], spectral sparsification and cut preservation [15, 16, 17, 19, 21, 26], graph reduction and coarsening [32, 33, 34, 35, 36, 37], and budgeted set-function optimization [43, 44, 47]. The present section does not import the conclusions of those theories wholesale. It uses them only as background for reasonable hypothesis packages.

1. Ambient weighted structures and admissible substructures

Definition II.1 (Weighted relational structure). A finite weighted relational structure is a triple

G = (V, R, wt),

where V is a finite nonempty vertex set, R ⊆ V × V is a finite relation set, and

wt : R → [0, ∞)

is a weight function. If (u, v) ∈ R, the quantity wt(u, v) is called the weight of the relation (u, v).

When no ambiguity arises, we write V(G), R(G), and wt_G for the vertex set, relation set, and weight function of G.

Remark II.2. The formulation above includes finite weighted directed graphs as a special case. If R is symmetric, then G may be regarded as a weighted undirected graph. This flexibility is deliberate. Directed Laplacian methods and directed Cheeger-type phenomena show that some natural retention principles are genuinely asymmetric [4, 14, 42, 31]. On the other hand, many of the prototype examples in later sections arise from undirected weighted graphs, in the spirit of [1, 3, 16, 36].

Definition II.3 (Weighted substructure). Let G = (V, R, wt) be a finite weighted relational structure. A weighted substructure of G is a triple

H = (U, S, wt_H)

such that

U ⊆ V,  S ⊆ R ∩ (U × U),

and

wt_H = wt|_S.

We write H ⊑ G if H is a weighted substructure of G.

Remark II.4. The requirement wt_H = wt|_S means that substructures inherit the ambient weights rather than re-optimizing them. This choice matches the intended interpretation of H as a retained part of G, not as an independently reweighted surrogate. In contrast, some sparsification and reduction frameworks allow changed edge weights to preserve spectral or cut quantities [17, 19, 32, 34, 35]. Those weighted-reduction models will appear later only as comparison mechanisms or as natural classes for abstract verification, not as the default meaning of substructure.

Notation II.5 (Substructures and feasibility). For a fixed ambient structure G, let Sub(G) denote a prescribed admissible class of weighted substructures of G. The class Sub(G) must be fixed before the first main theorem. Typical choices include:

1. all weighted spanning subgraphs of G;

2. all vertex-induced weighted subgraphs of G;

3. all relation-induced weighted substructures of G;

4. all connected weighted substructures of one of the preceding types;

5. all substructures satisfying an additional closure or support condition.

Whenever H₁, H₂ ∈ Sub(G), the notation

H₁ ⊑ H₂

means that H₁ is a weighted substructure of H₂ in the sense of Definition II.3.

Remark II.6. The choice of Sub(G) is part of the mathematical data. It is not a cosmetic convenience. If one works with all spanning subgraphs, then edge deletion is the primitive operation. If one works with vertex-induced subgraphs, then vertex selection is primary. If one imposes connectivity or saturation, the feasible family is no longer downward closed. These distinctions affect existence, canonical-representative results, and approximation behavior. The local partitioning, sparsification, coarsening, and community-detection literatures each privilege different admissible families [9, 10, 17, 13, 36, 38].
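The downward-closure failure mentioned above can be checked mechanically. A minimal sketch (the helper `connected` and the toy path graph are ours): with a connectivity requirement on the kept edge set, a substructure of an admissible substructure need not itself be admissible.

```python
# Remark II.6 in miniature: with a connectivity requirement, the admissible
# family is not downward closed.  Toy check on a path a-b-c-d, using edge
# subsets as substructures and connectivity of the kept edge set.

def connected(edges):
    """True if the graph spanned by `edges` has a single connected component."""
    if not edges:
        return True
    verts = {v for e in edges for v in e}
    seen, stack = set(), [next(iter(verts))]
    while stack:              # depth-first search over incident edges
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack += [u for e in edges if v in e for u in e if u != v]
    return seen == verts

path = {("a", "b"), ("b", "c"), ("c", "d")}
sub  = {("a", "b"), ("c", "d")}          # a substructure of `path`

print(connected(path))   # True: the full path is admissible
print(connected(sub))    # False: a subset of its edges is not
```

So any existence or canonical-representative argument that relies on deleting elements from a feasible substructure must be re-examined once connectivity-type constraints enter.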

2. Complexity, retention, loss, and optimal value

Definition II.7 (Complexity functional). Let G be fixed. A complexity functional on Sub(G) is a map

C : Sub(G) → [0, ∞).

For a budget level β ≥ 0, the associated feasible family is

F_β(G) := { H ∈ Sub(G) : C(H) ≤ β }.

Remark II.8. The functional C may encode different mathematically meaningful notions of budget. Prototypical examples are:

1. edge budget: C(H) = |R(H)|;

2. vertex budget: C(H) = |V(H)|;

3. weighted support budget: C(H) = Σ_{e ∈ R(H)} φ(wt(e)) for a fixed nonnegative function φ;

4. description-length surrogate: C(H) = α|V(H)| + γ|R(H)| with fixed constants α, γ ≥ 0.

In later sections, the choice of C will be treated axiomatically unless a theorem requires a specific form. Budget monotonicity is natural for most such examples, but it must be stated rather than assumed implicitly.

Definition II.9 (Information-retention functional). Let G be fixed. An information-retention functional is a map

I(·; G) : Sub(G) → ℝ,

where I(H; G) is interpreted as the amount of ambient information retained by the candidate substructure H.

Definition II.10 (Information-loss functional). Given an information-retention functional I(·; G), define the associated information-loss functional by

L(H; G) := I(G; G) − I(H; G),  H ∈ Sub(G).

Remark II.11. The framework deliberately leaves I abstract at the foundational stage. This is necessary because different neighboring theories privilege different notions of retained information. Depending on the model class, one may take I(H; G) to measure retained cut information, spectral information, local diffusion information, resistance-type information, signal energy, or a monotone set-function surrogate; compare [7, 16, 17, 40, 41, 42, 43, 47]. The present paper does not assume from the outset that these examples belong to one common analytic class. One of the aims is to isolate which common axioms suffice for theorem-level conclusions.

Definition II.12 (Optimal value and maximizers). For fixed (G, β), define the optimal value

Opt(G, β) := sup { I(H; G) : H ∈ F_β(G) }.

The set of maximizers is

argmax(G, β) := { H ∈ F_β(G) : I(H; G) = Opt(G, β) }.

Remark II.13. If Sub(G) is finite and F_β(G) ≠ ∅, then the supremum in Definition II.12 is automatically a maximum. That elementary fact will be used, but it is not the end of the theory. The real issues are:

1. whether the optimization problem is formulated on a natural admissible family;

2. whether the maximizing set admits structural description;

3. whether the value function and the optimizer set are stable under perturbation of the ambient weights;

4. whether relaxed formulations approximate the discrete problem;

5. whether the chosen axioms are verified in natural classes.

Finite-choice existence alone does not address these questions. This distinction is important in view of the richer stability and reduction phenomena studied in [17, 19, 32, 36, 37].

3. Order structure and basic set-valued notation

Definition II.14 (Inclusion order). Let G be fixed. The admissible family Sub(G) is partially ordered by the relation ⊑ from Notation II.5. If H₁, H₂ ∈ Sub(G) and H₁ ⊑ H₂, we say that H₁ is contained in H₂.

Definition II.15 (Minimal and maximal feasible elements). Let G and β be fixed.

1. A feasible substructure H ∈ F_β(G) is called inclusion-minimal feasible if whenever H′ ∈ F_β(G) and H′ ⊑ H, one has H′ = H.

2. A feasible substructure H ∈ F_β(G) is called inclusion-maximal feasible if whenever H′ ∈ F_β(G) and H ⊑ H′, one has H′ = H.

Remark II.16. Minimality or maximality in the inclusion order is unrelated, a priori, to optimality for I(·; G). Later structure theorems will identify additional assumptions under which maximizers may be chosen from distinguished subclasses, such as inclusion-minimal maximizers, connected maximizers, or support-saturated maximizers. One should not assume such conclusions for free.

Definition II.17 (Near-optimizers). For ε ≥ 0, define the ε-near-optimizer set by

argmax_ε(G, β) := { H ∈ F_β(G) : I(H; G) ≥ Opt(G, β) − ε }.

Remark II.18. The set argmax_ε(G, β) is useful even in the finite setting. Under perturbation, true maximizers may change discontinuously even when optimal values vary mildly. Near-optimizer sets are therefore the natural set-valued objects in stability statements, just as approximation and perturbation theory in neighboring graph-reduction and submodular settings often tracks approximate rather than exact solutions [44, 47, 36, 38].

4. Metrics and perturbations

Definition II.19 (Ambient perturbation metric). Fix a finite vertex set V and a relation set R ⊆ V × V. If

G = (V, R, wt)  and  G′ = (V, R, wt′)

are weighted relational structures on the same support, define

d(G, G′) := max_{e ∈ R} |wt(e) − wt′(e)|.

More generally, if one wishes to vary supports as well, one must specify a support-compatibility rule together with a metric or pseudometric on the resulting class. In the present paper, the default perturbative setting keeps the underlying support fixed and perturbs only the weights.

Remark II.20. The fixed-support perturbation model is the cleanest one for a first general theory. It isolates the dependence on weights and avoids conflating weight instability with support instability. This choice is also consistent with several spectral and signal-processing frameworks in which the combinatorial scaffold is fixed while weighted operators vary [39, 40, 41, 42]. Later sections may indicate how the framework could be enlarged to support-changing perturbations, but such an enlargement is not part of the present base theorem package.

Definition II.21 (Weight continuity of functionals). A complexity functional C and an information-retention functional I are said to be weight-continuous on a model class if for every fixed support (V, R) there exist moduli of continuity

ω_C, ω_I : [0, ∞) → [0, ∞),  lim_{δ→0} ω_C(δ) = lim_{δ→0} ω_I(δ) = 0,

such that for all weighted structures G = (V, R, wt) and G′ = (V, R, wt′) in the class and all admissible substructures H ∈ Sub(G) ∩ Sub(G′),

|C_G(H) − C_{G′}(H)| ≤ ω_C(d(G, G′))

and

|I(H; G) − I(H; G′)| ≤ ω_I(d(G, G′)).

Remark II.22. The continuity notion above is intentionally uniform over admissible substructures on fixed support. This is stronger than pointwise continuity, but it is the form naturally used in perturbation estimates for optimal values. Similar uniform-control ideas appear, in more specialized settings, in spectral sparsification, effective-resistance preservation, and graph reduction [16, 26, 34, 36].

5. Hypothesis packages

We now record the basic axioms that will be invoked later. They are deliberately separated because different theorems use different subsets of them.

(H1) Feasibility finiteness. For each fixed pair (G, β), the feasible family F_β(G) is nonempty and finite.

(H2) Monotonicity of information. If H₁, H₂ ∈ Sub(G) and H₁ ⊑ H₂, then

I(H₁; G) ≤ I(H₂; G).

(H3) Budget monotonicity. If H₁, H₂ ∈ Sub(G) and H₁ ⊑ H₂, then

C(H₁) ≤ C(H₂).

(H4) Weight continuity. The functionals I and C are weight-continuous in the sense of Definition II.21 on the model class under consideration.

(H5) Supplementary structural axiom. A further condition is imposed when needed. Depending on the theorem, this may be a modularity, submodularity, curvature-type, connectivity-preservation, or saturation axiom.

Remark II.23. These hypotheses are not declared globally necessary. Their status will be specified theorem by theorem. In particular:

1. (H1) is proved sufficient for the elementary existence statement but is not presented as conceptually minimal.

2. (H2) and (H3) are natural order-theoretic assumptions that support canonical-representative arguments; they are not automatic in all model classes.

3. (H4) is the basic perturbative hypothesis for stability of optimal values and near-optimizers.

4. (H5) is a placeholder for extra structure used only where needed. In examples inspired by budgeted set-function optimization, this may take a submodularity form [43, 44, 45, 46, 47]. In examples inspired by spectral or reduction theory, it may take a support-saturation or operator-control form [17, 19, 32, 36].

Necessity questions are left open unless explicitly settled by an obstruction theorem later in the paper.

6. Prototype model classes

The axiomatic framework is abstract, but it is useful to keep several prototype classes in view.

Example II.24 (Edge-budget spanning-subgraph model). Let Sub(G) be the class of all spanning weighted subgraphs of G, and define

C(H) := |R(H)|.

This is the basic edge-budget model. It is the closest discrete analogue of graph sparsification, though the present theory does not assume that I is given by a quadratic-form or cut-preservation objective [15, 16, 17, 19].

Example II.25 (Vertex-induced budget model). Let Sub(G) be the class of all vertex-induced weighted subgraphs G[U] with U ⊆ V(G), and let

C(G[U]) = |U|.

This model is close in spirit to local graph partitioning and community-extraction settings, where the selected object is a small vertex set together with its induced internal structure [9, 10, 11, 13, 14].

Example II.26 (Additive surrogate model). Suppose

I(H; G) = Σ_{e ∈ R(H)} a(e),  C(H) = Σ_{e ∈ R(H)} c(e),

for nonnegative edge scores a(e) and c(e). Then the optimization problem reduces to a weighted discrete selection problem on edges. This model is too simple to represent the full intended theory, but it serves as a sanity check and as a comparison point for later obstruction results.

Remark II.27. The natural-class verification section later in the paper will examine whether specific retention mechanisms arising from spectral, cut, reduction, or submodular settings satisfy the abstract hypotheses recorded here. At the preliminary stage we only need the formal framework, not the final verification.

Notation table

The following notation will remain fixed.

G = (V, R, wt)   finite weighted relational structure
H                admissible weighted substructure of G
Sub(G)           admissible class of substructures of G
C(H)             complexity of H
β                complexity budget
F_β(G)           feasible family under budget β
I(H; G)          information retained by H relative to G
L(H; G)          information loss of H relative to G
Opt(G, β)        optimal retained information under budget β
argmax(G, β)     set of maximizers of I(·; G) on F_β(G)
argmax_ε(G, β)   ε-near-optimizer set
d(G, G′)         sup-norm distance between weight functions on fixed support
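The additive surrogate model of Example II.26 is, after encoding, a 0/1 knapsack problem over edges. A minimal sketch with illustrative scores (the edge names, the scores, and the functions `exact` and `greedy` are ours; greedy by benefit-to-cost ratio is a standard heuristic for such problems, not a method of this paper, and it is not optimal in general):

```python
from itertools import combinations

# Additive surrogate model (Example II.26): I(H) = sum of a(e) and
# C(H) = sum of c(e) over kept edges, with budget beta.  Exact optimization is
# a 0/1 knapsack over the edge set; the scores below are illustrative only.
a = {"e1": 6.0, "e2": 5.0, "e3": 4.0}   # information scores a(e)
c = {"e1": 4.0, "e2": 3.0, "e3": 2.0}   # complexity costs c(e)

def exact(beta):
    """Brute-force Opt(G, beta) for the additive model."""
    feasible = [set(s) for k in range(len(a) + 1)
                for s in combinations(a, k)
                if sum(c[e] for e in s) <= beta]
    return max(sum(a[e] for e in H) for H in feasible)

def greedy(beta):
    """Greedy by ratio a(e)/c(e): a standard heuristic, not always optimal."""
    chosen, spent = set(), 0.0
    for e in sorted(a, key=lambda e: a[e] / c[e], reverse=True):
        if spent + c[e] <= beta:
            chosen.add(e)
            spent += c[e]
    return sum(a[e] for e in chosen)

print(exact(5.0), greedy(5.0))   # → 9.0 9.0
```

On this instance greedy happens to match the optimum; the later obstruction results use precisely such additive models as the comparison point where heuristics and exact optimizers can be separated.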

Remark II.28. This section fixes only the base framework. It does not yet impose the final hypothesis package for any flagship theorem. The next section defines the optimization problem in its theorem-bearing form and classifies the admissibility assumptions more sharply.

3. THE VARIATIONAL PROBLEM AND ADMISSIBLE AXIOMS

This section converts the optimization problem from the introduction into a precise discrete variational framework. The guiding point is simple. One should not speak about optimal information-preserving subgraphs until the ambient class of admissible substructures, the budget functional, the retention functional, and the perturbation regime have all been fixed. In neighboring literatures these ingredients are often present implicitly but in different combinations: sparsification emphasizes preservation of quadratic forms, cuts, or resistances [15, 16, 17, 19, 21, 26]; graph reduction and coarsening emphasize smaller surrogates with controlled distortion [32, 33, 34, 35, 36, 37]; local partitioning emphasizes small subgraphs with strong local conductance or diffusion content [9, 10, 11, 12]; submodular optimization emphasizes budgeted maximization of set functions [43, 44, 45, 46, 47]. The present paper does not identify any one of these existing objectives with the definition of the subject. Instead, it isolates a common axiomatic variational core.

1. Admissible systems

Definition III.1 (Admissible information-retention system). An admissible information-retention system assigns to each finite weighted relational structure

G = (V, R, wt)

the following data:

(A1) a family Sub(G) of admissible weighted substructures of G;

(A2) a complexity functional

C_G : Sub(G) → [0, ∞);

(A3) an information-retention functional

I_G : Sub(G) → R;

(A4) a perturbation metric or pseudometric d_Rel on ambient weighted structures of the same combinatorial type.

These data are subject to additional axioms imposed according to the depth of the theorem under consideration.

Remark III.2. The system-level formulation is necessary because later statements compare optimization problems across different ambient weighted structures. In particular, perturbation results do not concern a single pair (G, β) in isolation; they concern the behavior of the assignment

G ↦ (Sub(G), C_G, I_G, d_Rel).

This point is routine in more specialized graph-theoretic settings where the ambient operator or weighted structure varies [3, 4, 40, 41, 42], but it must be stated explicitly here because the present framework is more abstract.

Remark III.3. Once an admissible system has been chosen, the symbols Sub(G), C_G, and I_G are no longer free notation. They are part of the mathematical model. Different admissible systems may yield different optimization problems even on the same ambient weighted structure G. For example, one may keep the same admissible family and change the retention functional from a cut-based quantity to a spectral surrogate, or keep the same retention functional and change from edge-budget to vertex-budget complexity. Such changes alter the variational problem and should never be treated as harmless notational variants.

2. Primal optimization and loss minimization

Definition III.4 (Feasible family). Let S be an admissible information-retention system, let G be a finite weighted relational structure, and let β ≥ 0. The associated feasible family is

F_β(G) := { H ∈ Sub(G) : C_G(H) ≤ β }.

Definition III.5 (Budgeted primal problem). Let S, G, and β be as above. The budgeted primal variational problem is

Opt(G, β) := sup { I_G(H) : H ∈ F_β(G) }.

Its maximizer set is

argmax(G, β) := { H ∈ F_β(G) : I_G(H) = Opt(G, β) }.

Definition III.6 (Loss minimization form). Assume that I_G(G) is finite. Define the associated loss functional

L_G(H) := I_G(G) − I_G(H),   H ∈ Sub(G).

Then the budgeted maximization problem may equally be regarded as a loss-minimization problem over the same feasible family.

Remark III.7. The loss formulation is conceptually useful for two reasons. First, it normalizes the ambient object as the zero-loss reference point. Second, many preservation theories are naturally phrased in terms of controlled distortion, error, or deficit rather than retained value; compare cut distortion, spectral distortion, resistance error, and approximation ratios in sparsification and reduction theory [16, 17, 19, 32, 36]. The present paper does not identify L_G with any one of those notions, but the loss language is structurally aligned with them.

Proposition III.8 (Equivalent primal forms). Suppose I_G(G) is finite. Then

argmax { I_G(H) : H ∈ F_β(G) } = argmin { L_G(H) : H ∈ F_β(G) }.

In particular,

sup { I_G(H) : H ∈ F_β(G) } = I_G(G) − inf { L_G(H) : H ∈ F_β(G) }.

Proof. For every H ∈ F_β(G) one has L_G(H) = I_G(G) − I_G(H), so the map t ↦ I_G(G) − t is strictly decreasing and therefore converts maximizers of I_G on F_β(G) exactly into minimizers of L_G on F_β(G).

Remark III.9. Proposition III.8 is only an order-preserving reformulation. It is not a duality theorem, and the paper should not use dual language unless a later section introduces a genuine dual problem with a proved relation. The distinction matters because much of the surrounding optimization literature employs primal-dual or convex-analytic terminology in settings where the present discrete subgraph problem has not yet been relaxed [44, 47].
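Proposition III.8 can be checked mechanically on a small finite instance. The sketch below assumes an additive (modular) retention functional on edge subsets; the instance, the budget, and all names are invented for illustration.

```python
from itertools import combinations

# Invented modular model: retention is the sum of retained edge weights.
wt = {(1, 2): 2.0, (2, 3): 1.5, (3, 4): 0.5, (1, 4): 1.0}
beta = 2  # edge budget

def retention(H):
    return sum(wt[e] for e in H)

I_ambient = retention(frozenset(wt))          # I_G(G), zero-loss reference

def loss(H):
    return I_ambient - retention(H)           # L_G(H) = I_G(G) - I_G(H)

edges = list(wt)
feasible = [frozenset(c) for r in range(beta + 1)
            for c in combinations(edges, r)]  # F_beta(G)

opt = max(retention(H) for H in feasible)     # Opt(G, beta)
maximizers = {H for H in feasible if retention(H) == opt}
min_loss = min(loss(H) for H in feasible)
minimizers = {H for H in feasible if loss(H) == min_loss}

# Proposition III.8: maximizers of I coincide with minimizers of L, and
# Opt(G, beta) = I_G(G) - inf L.
assert maximizers == minimizers
assert opt == I_ambient - min_loss
print(opt)  # -> 3.5
```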

3. Admissible axioms

We now isolate the axiom packages used later. These are not declared globally necessary. They are recorded as theorem-dependent hypotheses.

Definition III.10 (Base admissibility axioms). Let S be an admissible information-retention system. We say that S satisfies:

(B1) Feasible nonemptiness and finiteness if for every ambient weighted structure G and every budget β ≥ 0 under consideration, the family F_β(G) is nonempty and finite.

(B2) Information monotonicity if for every fixed G and all H_1, H_2 ∈ Sub(G) with H_1 ⊆ H_2,

I_G(H_1) ≤ I_G(H_2).

(B3) Budget monotonicity if for every fixed G and all H_1, H_2 ∈ Sub(G) with H_1 ⊆ H_2,

C_G(H_1) ≤ C_G(H_2).

(B4) Normalization if for every G,

I_G(H) ≤ I_G(G) for all H ∈ Sub(G).

(B5) Weight continuity if for every fixed combinatorial type there exists a modulus of continuity controlling the change of I_G(H) and C_G(H) under perturbation of the ambient weights, uniformly over admissible H on that support.

(B6) Supplementary structural control if one imposes, where needed, a further axiom such as modularity, submodularity, curvature-type control, connectivity inheritance, saturation, or a canonical-representative property.

Remark III.11. The normalization axiom is convenient but should not be confused with monotonicity. If G ∈ Sub(G) and information monotonicity holds, then normalization is automatic. However, the framework allows admissible families that do not contain every intermediate enlargement, or that impose structural side conditions. In such settings it is cleaner to isolate normalization as a separate statement. This is consistent with the general discipline of the paper: hypotheses that are logically distinct are recorded separately rather than bundled together without comment.

Remark III.12. The supplementary structural axiom is deliberately open-ended at this stage. In examples motivated by submodular maximization, one may require a diminishing-returns condition after encoding the chosen substructure by an edge or vertex set [43, 44, 45, 46, 47]. In examples motivated by sparsification or reduction, one may instead require a support-saturation or operator-comparison condition [16, 17, 19, 32, 34, 36]. The current section does not fix one universal choice because later theorems use different structural mechanisms.

Definition III.13 (Strong admissibility package). An admissible information-retention system S is said to satisfy the strong admissibility package on a model class if it satisfies all axioms in Definition III.10, together with the requirement that the perturbation metric d_Rel be complete on each fixed finite support class under consideration.

Remark III.14. The completeness clause in Definition III.13 is irrelevant for the elementary finite-choice existence theorem. It becomes potentially useful only when relaxed feasible regions, parameterized families, or limiting perturbation arguments enter. It is therefore included as a forward-looking strengthening, not as part of the minimal base package.

4. Value function and budget dependence

Definition III.15 (Value function). For a fixed admissible system S and a fixed ambient weighted structure G, define the value function

β ↦ Opt(G, β)

on the set of admissible budgets by

Opt(G, β) := sup { I_G(H) : H ∈ Sub(G), C_G(H) ≤ β }.

Proposition III.16 (Monotonicity in the budget). For every fixed G, the value function β ↦ Opt(G, β) is nondecreasing on its domain.

Proof. If 0 ≤ β_1 ≤ β_2, then F_{β_1}(G) ⊆ F_{β_2}(G) by definition. Taking suprema of the same objective over nested feasible sets gives

Opt(G, β_1) ≤ Opt(G, β_2).

Remark III.17. Proposition III.16 is elementary, but it identifies the first variational regularity property of the model. Later sections ask for stronger control, such as plateau structure, canonical breakpoints, or stability of near-optimizers as the ambient weights vary. Such finer questions are not automatic even in finite discrete problems.

Proposition III.18 (Upper bound under normalization). Assume the normalization axiom of Definition III.10. Then for every feasible budget β,

Opt(G, β) ≤ I_G(G).

Equivalently,

inf { L_G(H) : H ∈ F_β(G) } ≥ 0.

Proof. By normalization, I_G(H) ≤ I_G(G) for all H ∈ Sub(G), hence in particular for all H ∈ F_β(G). Taking the supremum over F_β(G) gives the first inequality. The second follows from Definition III.6.

5. Relaxed feasible regions

Definition III.19 (Relaxed feasible region). Suppose that admissible substructures in a chosen model class admit an encoding

ι : Sub(G) → E

into a finite-dimensional real vector space E. A relaxed feasible region for the budget level β is a set

F_β^rel(G) ⊆ E

such that

ι(F_β(G)) ⊆ F_β^rel(G).

If convex-analytic language is used later, the ambient linear space E, the embedding ι, and the precise definition of F_β^rel(G) must be fixed before any such argument is invoked.

Remark III.20. The relaxed layer is optional and should not be introduced merely for stylistic breadth. It is justified only if the paper proves a theorem that genuinely uses it. This caution is important because the surrounding literature contains several different relaxation mechanisms (spectral, semidefinite, convex-hull, diffusion-based, and set-function multilinear extensions), and these are not interchangeable [7, 8, 44, 47, 36]. The present article will use relaxation only at the exact level required by proved statements.

Remark III.21. Typical encodings include edge-incidence vectors, vertex-incidence vectors, weighted adjacency tensors, or block-structured operator encodings. In sparsification and reduction theory, weighted subgraphs are often encoded by matrices or Laplacian operators [17, 19, 21, 32]. In submodular optimization they are encoded by subsets of a ground set [43, 47]. The present framework accommodates both viewpoints, provided the encoding is fixed before theorems are stated.

6. Parameter dependence and perturbative admissibility

Definition III.22 (Same combinatorial type). Two weighted relational structures

G = (V, R, wt),   G' = (V', R', wt')

are said to have the same combinatorial type if there is a prescribed identification of V with V' and of R with R' under which the only varying datum is the weight function.

Definition III.23 (Perturbatively admissible family). Let 𝒢 be a class of weighted relational structures of the same combinatorial type. An admissible information-retention system is perturbatively admissible on 𝒢 if:

1. the admissible family Sub(G) is canonically identified across all G ∈ 𝒢;

2. the metric d_Rel is defined on 𝒢;

3. the functionals C_G and I_G satisfy the weight-continuity axiom on 𝒢.

Remark III.24. Definition III.23 is the correct framework for value-stability results in later sections. Without canonical identification of admissible substructures across nearby ambient structures, even the statement of near-optimizer stability becomes ambiguous. This is one reason the paper fixes perturbation of weights on constant support as the default base regime.

7. Hypothesis-status paragraph for the next section

Hypothesis-status note III.25. For the basic existence theory in the next section, feasible nonemptiness and finiteness are sufficient. No continuity hypothesis is needed for that elementary result. Information monotonicity, budget monotonicity, and supplementary structural control are likewise unnecessary at the first existence level. Weight continuity becomes relevant only when one passes to perturbative statements or to relaxed feasible regions. Completeness of the ambient perturbation metric is not used in the finite-choice existence argument. No necessity claim is made here. Any necessity statement must be justified later by an explicit obstruction theorem or separation result.

Remark III.26. The purpose of the present section is not to overstate the framework. It is to separate clearly the discrete primal problem, the loss formulation, the admissibility packages, and the optional relaxation layer. With these ingredients fixed, the next section can prove the first existence theorem at the exact level supported by the hypotheses.

4. EXISTENCE OF OPTIMAL BUDGETED SUBGRAPHS

This section records the first existence results for the budgeted optimization problem. The first theorem is elementary but necessary. It isolates the exact role of finiteness of the feasible family and avoids overstating the argument as a compactness theorem. The second theorem is a genuine compactness statement for an optional relaxed model. It is included separately because its proof uses different hypotheses and belongs to a different level of generality. This distinction is important. In discrete optimization, finite-choice existence is immediate once the feasible class is nonempty and finite. By contrast, relaxed existence requires a topological model, a compact feasible region, and an upper semicontinuity hypothesis for the objective. The two arguments should not be conflated [43, 44, 47, 36].

Hypothesis-status note IV.1. For Theorem IV.2, the only substantive hypothesis is that the feasible family F_β(G) is nonempty and finite. No compactness argument is used. No continuity, upper semicontinuity, monotonicity, modularity, or perturbation hypothesis is needed. The proof is a finite-attainment argument and nothing more.

For Theorem IV.5, the proof uses compactness of the relaxed feasible region and upper semicontinuity of the relaxed information-retention functional. The Hausdorff assumption is used only to ensure the standard closed-set separation framework in which compactness behaves as expected; it is not part of the discrete theorem. Any stronger convexity or linearity assumption is merely convenient unless explicitly invoked later. No necessity claim is made in this section. If necessity is to be discussed, it must be justified later by an obstruction theorem.

1. Discrete existence

We begin with the finite feasible class. This is the natural first theorem because the entire framework is built on finite weighted relational structures. The result is mathematically correct, but its force lies mainly in fixing the baseline from which deeper structural and perturbative results depart.

Theorem IV.2 (Existence in the discrete feasible class). Let G be a finite weighted relational structure, let β ≥ 0, and assume that F_β(G) is nonempty and finite. Then there exists at least one optimal substructure H* ∈ F_β(G) such that

I_G(H*) = Opt(G, β).

Equivalently, there exists a minimizer of L_G on F_β(G).

Proof. Since F_β(G) is nonempty and finite, the set

{ I_G(H) : H ∈ F_β(G) }

is a nonempty finite subset of R. Hence it attains its supremum. Therefore there exists H* ∈ F_β(G) such that

I_G(H*) = sup { I_G(H) : H ∈ F_β(G) } = Opt(G, β).

The loss-minimization statement is equivalent by Proposition III.8.

Remark IV.3. Theorem IV.2 is correct but not deep. It should not be presented as the main mathematical contribution of the paper. The real content begins when one studies the geometry of optimizer sets, canonical representatives, perturbation stability, or relaxed comparison principles. The present theorem serves only as the correct baseline statement. This distinction is standard in finite discrete optimization, where attainment may be immediate while structure and approximation remain nontrivial [43, 44, 45, 47].

Remark IV.4. Theorem IV.2 does not require information monotonicity, budget monotonicity, or normalization. Those hypotheses may later sharpen the description of maximizers, but they are irrelevant for bare existence. This is precisely why the paper separates hypothesis packages rather than bundling them into one undifferentiated assumption list.
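The finite-attainment argument of Theorem IV.2 is literally exhaustive search over the feasible family, and the budget monotonicity of Proposition III.16 can be observed on the same toy instance. The model below (additive retention under an edge budget) is invented for illustration; all names are hypothetical.

```python
from itertools import combinations

# Invented finite model: additive retention under an edge budget.
wt = {(1, 2): 2.0, (2, 3): 1.5, (3, 4): 0.5, (1, 4): 1.0}

def value_function(beta):
    """Opt(G, beta) computed exactly as in the proof of Theorem IV.2:
    enumerate the (nonempty, finite) feasible family and take the maximum."""
    edges = list(wt)
    feasible = [c for r in range(min(beta, len(edges)) + 1)
                for c in combinations(edges, r)]
    assert feasible  # nonemptiness: the empty subgraph is always feasible here
    return max(sum(wt[e] for e in H) for H in feasible)

values = [value_function(beta) for beta in range(len(wt) + 1)]
# Proposition III.16: the value function is nondecreasing in the budget.
assert all(a <= b for a, b in zip(values, values[1:]))
print(values)  # -> [0, 2.0, 3.5, 4.5, 5.0]
```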

2. Relaxed existence

We now state the relaxed analogue. This theorem belongs to a different layer of the theory. It is included only because later approximation arguments may require an ambient compact relaxation. The statement below should be used only after the relaxed model from Definition III.19 has been fixed concretely for the class under study.

Theorem IV.5 (Existence in a relaxed compact model). Let G be a finite weighted relational structure and let β ≥ 0. Assume that:

(i) the feasible set F_β(G) admits a relaxation F_β^rel(G) in a Hausdorff topological vector space E;

(ii) the relaxed feasible set F_β^rel(G) is nonempty and compact;

(iii) there exists a relaxed information-retention functional

I_G^rel : F_β^rel(G) → R

which extends the discrete objective along the chosen encoding and is upper semicontinuous on F_β^rel(G).

Then the relaxed budgeted problem

sup { I_G^rel(x) : x ∈ F_β^rel(G) }

admits at least one optimizer.

Proof. Because F_β^rel(G) is nonempty and compact, it suffices to show that an upper semicontinuous real-valued function on this set attains its supremum. Let

α := sup { I_G^rel(x) : x ∈ F_β^rel(G) }.

Choose a sequence (x_n) in F_β^rel(G) such that

I_G^rel(x_n) → α.

By compactness, there exists a subnet (x_{n_j}) converging to some x* ∈ F_β^rel(G). Since I_G^rel is upper semicontinuous,

I_G^rel(x*) ≥ lim sup_j I_G^rel(x_{n_j}) = α.

By definition of α, one also has I_G^rel(x*) ≤ α. Hence

I_G^rel(x*) = α,

so x* is a relaxed optimizer.

Remark IV.6. Theorem IV.5 is a genuine compactness theorem, but it should not be read as stronger than it is. It proves existence only in the relaxed model. It says nothing yet about uniqueness, extremal structure, integral realizability, or recovery of a discrete optimizer. Those require additional arguments. Similar distinctions arise throughout relaxed combinatorial optimization, spectral approximation, and coarsening theory, where one first proves attainment in a compact surrogate model and only later studies discrete recovery or approximation quality [7, 8, 32, 36].

Remark IV.7. Upper semicontinuity is the exact continuity hypothesis needed in Theorem IV.5. Full continuity is sufficient but unnecessary. Lower semicontinuity would not suffice for maximization. This is why the theorem is stated with upper semicontinuity and not with a stronger generic continuity assumption.

3. Recovery from the relaxed problem

The relaxed theorem becomes relevant to the discrete theory only when one has a recovery mechanism. We therefore isolate the precise hypothesis before stating the corollary.

Definition IV.8 (Integral recovery map). Let the hypotheses of Theorem IV.5 hold. An integral recovery map is a function

Rec : F_β^rel(G) → F_β(G)

such that for every relaxed feasible point x ∈ F_β^rel(G),

I_G(Rec(x)) ≤ I_G^rel(x).

If equality holds for all relaxed optimizers, we say that the recovery map is value-preserving on relaxed optimizers.

Remark IV.9. A recovery map is an additional structure, not a formal consequence of relaxation. In many combinatorial settings one has only approximate rounding, not exact value preservation. Exact recovery should therefore be stated only when it is actually proved. This caution is standard in approximation and relaxation theory [44, 47, 36].

Corollary IV.10 (Discrete existence via relaxed recovery). Assume the hypotheses of Theorem IV.5. Assume in addition that there exists an integral recovery map

Rec : F_β^rel(G) → F_β(G)

which is value-preserving on relaxed optimizers. Then the discrete budgeted problem admits an optimizer obtained from the relaxed problem.

Proof. Let x* ∈ F_β^rel(G) be a relaxed optimizer given by Theorem IV.5. Set

H* := Rec(x*) ∈ F_β(G).

By value preservation on relaxed optimizers,

I_G(H*) = I_G^rel(x*).

Since I_G^rel(x*) is the relaxed optimal value and the discrete feasible set embeds into the relaxed feasible set, no discrete feasible point can have strictly larger value. Hence H* is a discrete optimizer.

Remark IV.11. Corollary IV.10 is intentionally narrow. It requires exact value-preserving recovery on relaxed optimizers. If later sections only prove approximate recovery, then the conclusion must be weakened accordingly. The paper should not state an exact discrete consequence from a merely approximate rounding principle.
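The exact-recovery hypothesis of Corollary IV.10 is restrictive, but it does hold in simple modular models where the relaxed optimizer is already integral. The sketch below, with an invented linear objective over a box-with-budget relaxation, is one such toy case; it is illustrative only and is not the paper's general mechanism.

```python
# Invented modular instance: edge subsets encoded as 0/1 incidence vectors,
# relaxed to the compact region {x in [0,1]^n : sum(x) <= beta}.
gains = [2.0, 1.5, 0.5, 1.0]   # linear objective along the encoding
beta = 2

def relaxed_optimizer(g, b):
    """Maximize sum(g[i] * x[i]) over {x in [0,1]^n : sum(x) <= b} for an
    integer budget b: put mass 1 on the b largest positive gains."""
    order = sorted(range(len(g)), key=lambda i: g[i], reverse=True)
    x = [0.0] * len(g)
    for i in order[:b]:
        if g[i] > 0:
            x[i] = 1.0
    return x

def recover(x):
    """Integral recovery map Rec: threshold the coordinates.  Here the
    relaxed optimizer is already integral, so recovery loses no value."""
    return frozenset(i for i, xi in enumerate(x) if xi >= 0.5)

x_star = relaxed_optimizer(gains, beta)
H_star = recover(x_star)
relaxed_value = sum(g * xi for g, xi in zip(gains, x_star))
discrete_value = sum(gains[i] for i in H_star)
assert discrete_value == relaxed_value  # value preservation on this optimizer
print(sorted(H_star), discrete_value)  # -> [0, 1] 3.5
```

In general the relaxed optimum can exceed every integral value, in which case only approximate rounding is available; that is exactly the caution recorded in Remark IV.9.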

    4. First consequences for the optimizer set

      The existence statements above already justify the optimizer

      1. An optimizer argmax(, ) is called support- saturated if every admissible strengthening Sub() with

        , ,

        either violates the budget constraint

        C () > ,

        notation from Section III. We record this explicitly.

        Corollary IV.12 (Nonemptiness of the maximizer set). Un- der the hypotheses f Theorem IV.2, one has

        argmax(, ) .

        Under the hypotheses of Theorem IV.5, the relaxed optimizer set is nonempty.

        Proof. The first assertion is exactly the content of The- orem IV.2. The second follows directly from Theo- rem IV.5.

        ‌Remark IV.13. At this point the theory has only established attainment. The next section must begin the genuinely structural part of the paper: when optimizers can be chosen inside canonical subclasses, how inclusion interacts with optimality, and which hypotheses are sufficient for such structure. Without that next step, the paper would remain formally correct but mathematically thin.‌

  5. STRUCTURAL PROPERTIES OF OPTIMIZERS

    This section begins the genuinely structural part of the paper. The aim is limited. We do not attempt a classification of all optimizers. We isolate a small number of structural conclusions that are actually provable from transparent ax- ioms. This restraint is necessary. In neighboring literatures, strong structural conclusions often require highly specific objectives, such as cut-type functionals, spectral surrogates, or submodular set functions [7, 8, 16, 17, 43, 44, 47]. The present paper works at a more abstract level, so only those structural statements that genuinely follow from the stated axioms should be claimed.

    Hypothesis-status note V.1. The first theorem below uses only nonemptiness of the optimizer set together with finite- ness of the admissible family, or more generally the exis- tence of a complexity-minimizing optimizer. It does not

    or fails to belong to the admissible class under con- sideration.

    1. An optimizer argmax(, ) is called inclusion- minimal optimal if there is no proper admissible substructure argmax(, ) with

    , .

    Remark V.3. The two notions are independent in general. An optimizer may be inclusion-minimal among optimal objects without exhausting the available budget, and it may be support-saturated without being inclusion-minimal inside the optimizer set. The first notion is order-theoretic; the second is budget-theoretic. This distinction is useful because later structural arguments may produce one type of canonical optimizer but not the other.

    Remark V.4. The search for minimal optimal representa- tives is standard in finite discrete optimization and com- binatorial structure theory: one often cannot classify all optimizers, but one can still select representatives that are minimal with respect to inclusion, support, or cost. In the present framework this is the correct first structural target. It is substantially weaker than a characterization theorem and should be presented as such.

    ‌2. Existence of inclusion-minimal optimizers

    The first structural theorem is purely order-theoretic.

    Theorem V.5 (Existence of an inclusion-minimal optimizer). Assume that argmax(, ) is nonempty. Assume further that Sub() is finite. Then there exists an inclusion-minimal optimizer in argmax(, ).

    In particular, the conclusion holds whenever the feasible family F () is finite.

    Proof. Because argmax(, ) Sub() and Sub()

    use monotonicity of I, budget monotonicity, continuity, submodularity, or any curvature-type inequality.

    The connected-representative proposition below uses a distinct hypothesis package. In particular, it uses: (i) an admissible family stable under component amalgamation,

    (ii) a component-amalgamation monotonicity property of the retention functional, and (iii) a nonexpansive complexity condition under amalgamation. It is not a classification theorem. It proves only existence of an optimizer inside a distinguished subclass. No broader claim should be inferred.

    1. Minimal and saturated optimizers

    ‌We begin by fixing two basic structural notions.

    Definition V.2 (Support-saturated and inclusion-minimal optimizers). Let be a finite weighted relational structure and let 0.

    is finite, the set argmax(, ) is finite. Choose

    argmax(, ) such that

    | () | + | () |

    is minimal among all elements of argmax(, ). We claim that is inclusion-minimal optimal.

    Suppose not. Then there exists argmax(, ) such that

    , .

    Since is a proper weighted substructure of , one has either () () or () (). Hence

    | () | + | () | < | () | + | () |,

    which contradicts the choice of . Therefore no such

    exists, and is inclusion-minimal optimal.

    Remark V.6. Theorem V.5 is modest but useful. It gives a canonical subclass inside the optimizer set without requiring any monotonicity of the objective. The proof uses only finiteness. It does not identify all minimal optimizers, and it does not imply uniqueness. Those stronger conclusions generally require additional structure and should not be suggested here.

    Corollary V.7 (Minimal optimal representatives under fi-

    Remark V.10. Proposition V.9 should be read carefully. Strict budget monotonicity alone selects an optimizer that is maximal inside the optimizer set with respect to complex- ity. To conclude full support saturation against all feasible strengthenings, one needs an additional principle ruling out feasible nonoptimal strengthenings that remain admissible under the same budget and yet preserve the structural role of the optimizer. In many applications such a principle

    nite feasibility). If F

    optimizer set argmax

    () is nonempty and finite, then the contains an inclusion-minimal

    ‌is unavailable. One should therefore avoid overstating the conclusion.

    element.

    (, )

    4. Connected representatives under component amalga-

    Proof. By Theorem IV.2, the hypotheses imply that argmax(, ) is nonempty. Since argmax(, ) F () and F () is finite, Theorem V.5 applies.

    ‌Remark V.8. The original intuition behind Theorem V.5 may be phrased in terms of nested optimizer chains, but in the finite setting no separate chain-closure axiom is needed. Finiteness alone is enough. If one later enlarges the theory to infinite admissible families or compact relaxed models, then chain-closure or Zorn-type arguments may become relevant, but those belong to a different theorem package.

    3. Support saturation under strict budget monotonicity We next record a second basic structural observation. Unlike the previous theorem, this one uses a genuine budget axiom.

    Proposition V.9 (Support saturation of budget-maximal optimizers). Assume that argmax(, ) is nonempty and that Sub() is finite. Assume also that the complexity functional is strictly budget-monotone on admissible proper strengthenings, in the sense that whenever

    , , , Sub(),

    one has

    C () < C ().

    Then there exists an optimizer argmax(, ) which is support-saturated.

    Proof. Because argmax(, ) is finite, there exists an opti- mizer argmax(, ) such that

    C () = max {C () : argmax(, )} .

    We claim that is support-saturated.

    Suppose not. Then there exists an admissible strengthen- ing Sub() with

    , , C () .

    If argmax(, ), then strict budget monotonicity gives

    C () < C (),

    contradicting the maximality of C () among optimizers. Thus no proper admissible strengthening of can remain both feasible and optimal. In particular, among the optimizer set, is saturated with respect to feasible optimal strength- enings. If the admissible system has the additional property that every feasible strengthening of an optimizer is again optimal only when it preserves the optimal value, then this is exactly support saturation in the sense of Definition V.2.

    mation

    We now turn to connectedness. This is precisely the place where overstatement is dangerous. Many natural objectives do not force conneced optimizers. Cut-based and local objectives may prefer disconnected support, while some additive or coverage-type objectives can split mass across components. The literature strongly suggests caution here [7, 13, 14, 36]. Accordingly, we state only a conditional representative theorem.

    We first isolate the exact hypothesis.

    Definition V.11 (Component amalgamation property). As- sume that Sub() is a class of spanning weighted subgraphs of a fixed weighted graph . We say that the admissible system has the component amalgamation property if for every disconnected admissible subgraph

    = 1 · · · , 2,

    there exists a connected admissible subgraph Sub() such that:

    (a) is obtained from by adjoining only relations of the ambient graph that connect distinct components of ;

    (b)

    C (-) C ();

    (c)

    I (-) I ().

    Remark V.12. Definition V.11 is strong. It should not be assumed casually. It packages the entire content needed to replace a disconnected feasible optimizer by a connected one without worsening value or budget. Some objectives with strong monotonicity under edge addition may satisfy it, especially when the connecting edges carry negligible or controlled complexity cost. Many others do not. This is why the proposition below is only conditional.

Proposition V.13 (Connected representative under component amalgamation). Assume that admissible substructures are spanning weighted subgraphs of a fixed weighted graph G. Assume that argmax(β, G) is nonempty. Assume further that the admissible information-retention system has the component amalgamation property from Definition V.11. Then argmax(β, G) contains a connected representative.

Proof. Let H ∈ argmax(β, G) be any optimizer. If H is connected, there is nothing to prove.

Assume that H is disconnected. By the component amalgamation property, there exists a connected admissible subgraph H′ ∈ Sub(G) such that

C(H′) ≤ C(H)

and

I(H′) ≥ I(H).

Thus H′ ∈ F_β(G). Since H is optimal, one has

I(H) = Opt(β, G).

Therefore

I(H′) ≥ Opt(β, G).

By definition of the optimal value, the reverse inequality

I(H′) ≤ Opt(β, G)

also holds. Hence

I(H′) = Opt(β, G),

so H′ ∈ argmax(β, G) and H′ is connected.

Remark V.14. Proposition V.13 does not classify all optimizers, and it does not say that every optimizer is connected. It proves only that the optimizer set contains at least one connected representative under a strong amalgamation hypothesis. That is the exact level of claim justified by the proof.

Remark V.15. The preceding proposition should not be interpreted as a generic fact about budgeted graph optimization. In many natural models disconnected optimizers exist and cannot be replaced by connected ones without either increasing complexity or decreasing retained information. If the present paper later treats such models, the corresponding failure examples belong in the obstruction section rather than being hidden by overgeneralized positive statements.

1. Consequences and limits

The results of this section establish two kinds of structural control:

1. an order-theoretic minimality statement that requires only finiteness;

2. a representative theorem for connectedness under an explicit amalgamation hypothesis.

This is the correct level of generality at the present stage. More ambitious conclusions such as uniqueness, lattice structure of optimizer sets, full connectedness criteria, or canonical decomposition formulas would require significantly stronger assumptions, often tied to special objectives from spectral, cut, or submodular optimization theory [16, 17, 7, 43, 47]. Those statements should not be anticipated unless later sections genuinely prove them.

Remark V.16. With existence and first structural representatives now established, the next natural layer is perturbation theory: how the optimal value and the near-optimizer set change when the ambient weights are perturbed. That question requires continuity hypotheses absent from the present section, and it is treated separately for exactly that reason.

6. PERTURBATION STABILITY OF VALUES AND OPTIMIZERS

This section treats perturbation stability. The theory naturally separates into two layers. The first concerns stability of the optimal value. This is the easier part, because it depends only on uniform control of the objective under perturbation of the ambient weights. The second concerns stability of the optimizer set. This is harder, because optimizer identities can change discontinuously even when optimal values vary mildly. For this reason the section treats value stability first and optimizer stability second.

The distinction is standard across neighboring literatures. In spectral approximation, sparsification, graph reduction, and graph signal processing, one often obtains quantitative control of energies, cuts, or objective values under perturbation long before one can guarantee invariance of the minimizing or maximizing object itself [16, 17, 19, 32, 34, 36, 40, 41, 42]. A similar separation appears in combinatorial and submodular optimization, where objective stability and solution stability generally require different hypothesis packages [43, 44, 45, 47]. The present section adopts exactly that discipline.

1. Perturbation regime

Throughout this section we work on a fixed combinatorial support. Thus only the ambient weights vary.

Definition VI.1 (Weight perturbation metric). Fix a finite vertex set V and relation set R ⊆ V × V. For two weighted relational structures

G = (V, R, wt), G′ = (V, R, wt′),

define

Rel(G, G′) := max_{r ∈ R} |wt(r) − wt′(r)|.

Remark VI.2. The fixed-support regime is the correct first perturbative model. It isolates the dependence of the variational problem on ambient weights and avoids conflating weight instability with support instability. This is consistent with several surrounding theories where the combinatorial scaffold is fixed while operators, edge weights, or signals vary [3, 4, 40, 41, 42]. A support-changing theory would require additional identifications and is not part of the present base package.
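In an implementation, the metric of Definition VI.1 is simply a max-norm over a shared relation set. A minimal Python sketch, with dictionaries standing in for the weight functions (our encoding, not one prescribed by the paper):

```python
def rel(wt, wt_prime):
    # Rel(G, G') = max over the shared relation set of |wt(r) - wt'(r)|.
    assert wt.keys() == wt_prime.keys(), "fixed combinatorial support required"
    return max(abs(wt[r] - wt_prime[r]) for r in wt)

wt       = {("a", "b"): 1.0, ("b", "c"): 2.0}
wt_prime = {("a", "b"): 1.1, ("b", "c"): 1.7}
print(rel(wt, wt_prime))  # ~0.3, up to floating-point rounding
```

The assertion enforces the fixed-support regime of Remark VI.2: the metric is undefined across structures with different combinatorial type.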

Notation VI.3 (Canonical identification of feasible families). Fix a combinatorial support (V, R) and a budget rule that depends only on the chosen admissible substructure and the ambient weight function through the prescribed admissible system. Whenever two ambient structures G = (V, R, wt) and G′ = (V, R, wt′) have the same combinatorial type, we identify Sub(G) and Sub(G′) through the common underlying support. In particular, a weighted substructure H is viewed as the same combinatorial candidate in both ambient structures, with objective and complexity values evaluated relative to G or G′ as needed.

Remark VI.4. Without the identification in Notation VI.3, stability of the optimizer set is not even well posed. One cannot compare maximizers across nearby ambient structures unless the admissible candidates themselves are canonically matched. This is why Section III isolated perturbative admissibility as a separate requirement.

Hypothesis-status note VI.5. For value stability, it is sufficient to assume a uniform modulus of continuity for the information-retention functional over the common feasible family, together with budget invariance of that feasible family under the perturbation under consideration. A stronger Lipschitz estimate is optional and should not be imposed unless later quantitative applications require it.

For optimizer stability, a stronger hypothesis is needed. A positive isolation gap between the optimal value and the best competing feasible value is sufficient for local constancy of the unique optimizer. Uniqueness by itself is not enough unless it is accompanied by a positive gap, which in the finite setting is automatic once uniqueness holds. No necessity claim is made here.

2. Value stability

We begin with the optimal value. The statement is simplest when the feasible family is unchanged under perturbation.

Theorem VI.6 (Stability of the optimal value). Fix a combinatorial support (V, R) and a budget level β ≥ 0. Let

G = (V, R, wt), G′ = (V, R, wt′)

be two ambient weighted relational structures of the same combinatorial type. Assume that:

1. the feasible families coincide under the canonical identification,

F_β(G) = F_β(G′);

2. there exists a modulus of continuity ω : [0, ∞) → [0, ∞) with ω(s) → 0 as s → 0 such that for every admissible H ∈ F_β(G) = F_β(G′), one has

|I_G(H) − I_G′(H)| ≤ ω(Rel(G, G′)).

Then

|Opt(β, G) − Opt(β, G′)| ≤ ω(Rel(G, G′)).

Proof. Let H ∈ argmax(β, G) and H′ ∈ argmax(β, G′), whose existence is guaranteed by Theorem IV.2 because the common feasible family is finite and nonempty.

Since H ∈ F_β(G′), optimality of H′ for G′ gives

I_G′(H) ≤ I_G′(H′) = Opt(β, G′).

Hence

Opt(β, G) − Opt(β, G′) = I_G(H) − I_G′(H′) ≤ I_G(H) − I_G′(H).

By the modulus-of-continuity hypothesis,

Opt(β, G) − Opt(β, G′) ≤ |I_G(H) − I_G′(H)| ≤ ω(Rel(G, G′)).

Exchanging the roles of G and G′ yields

Opt(β, G′) − Opt(β, G) ≤ ω(Rel(G, G′)).

Combining the two inequalities gives

|Opt(β, G) − Opt(β, G′)| ≤ ω(Rel(G, G′)).

Remark VI.7. Theorem VI.6 is the correct first stability theorem. It controls the optimal value without making any claim about the optimizer set. This separation is essential. In many discrete problems the identity of the optimizer can jump under arbitrarily small perturbations even though the optimal value is stable. That phenomenon is familiar in combinatorial optimization, spectral approximation, and graph reduction [16, 32, 36, 47].

Corollary VI.8 (Lipschitz value stability). Under the hypotheses of Theorem VI.6, if

|I_G(H) − I_G′(H)| ≤ L · Rel(G, G′) for all H ∈ F_β(G) = F_β(G′)

for some constant L ≥ 0, then

|Opt(β, G) − Opt(β, G′)| ≤ L · Rel(G, G′).

Proof. Apply Theorem VI.6 with ω(s) = L s.

Remark VI.9. Theorem VI.6 assumes that the feasible family remains unchanged. This is automatic, for example, when the budget functional depends only on the combinatorial size of the chosen substructure. If the budget functional itself depends on ambient weights, then one needs additional hypotheses ensuring that the feasible family is stable under perturbation, or else one must work with a common comparison family contained in both feasible sets. The present theorem is intentionally stated at the cleanest level first.

We now record a slightly more flexible version that allows the budget functional to vary, provided this variation is controlled and there is a common comparison family. The conclusion is correspondingly weaker in form but often more applicable.

Proposition VI.10 (One-sided value comparison under common feasibility). Fix β ≥ 0. Let G and G′ have the same combinatorial type. Assume there is a subset

C ⊆ F_β(G) ∩ F_β(G′)

such that:

(i) for every ε > 0 there exists H ∈ C with

I_G(H) ≥ Opt(β, G) − ε;

(ii) for all H ∈ C,

|I_G(H) − I_G′(H)| ≤ ω(Rel(G, G′)).

Then

Opt(β, G) ≤ Opt(β, G′) + ω(Rel(G, G′)).

Proof. Let ε > 0 and choose H ∈ C as in (i). Since H ∈ F_β(G′), one has

I_G′(H) ≤ Opt(β, G′).

Therefore

Opt(β, G) − ε ≤ I_G(H) ≤ I_G′(H) + ω(Rel(G, G′)) ≤ Opt(β, G′) + ω(Rel(G, G′)).

Letting ε → 0 gives the claim.

Remark VI.11. Proposition VI.10 is useful when exact perturbative invariance of the feasible family is unavailable, but one still has a sufficiently rich common comparison class. This type of argument is common in approximation frameworks where one compares objectives on overlapping admissible sets rather than on identical feasible regions [44, 47, 36].
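For the additive edge-weight model with a purely combinatorial budget (so that the feasible family is weight-independent, as in Remark VI.9), the value-stability bound can be checked by brute force. The sketch below uses illustrative weights of our choosing and verifies that the change in the optimal value never exceeds the uniform deviation of the objective over the feasible family; in this model the modulus can be taken linear, in the spirit of Corollary VI.8.

```python
from itertools import combinations

def opt(weights, budget):
    # Opt(beta, G) in the additive model: I(H) = total selected weight,
    # C(H) = number of selected edges; brute force over the feasible family.
    edges = list(weights)
    best = 0.0  # the empty subgraph is always feasible
    for k in range(budget + 1):
        for H in combinations(edges, k):
            best = max(best, sum(weights[e] for e in H))
    return best

wt       = {"e1": 2.0, "e2": 1.5, "e3": 1.0}
wt_prime = {"e1": 1.9, "e2": 1.8, "e3": 0.7}
beta = 2  # combinatorial budget: the feasible family is weight-independent

# sup over the common feasible family of |I(H) - I'(H)|
edges = list(wt)
sup_dev = max(
    abs(sum(wt[e] for e in H) - sum(wt_prime[e] for e in H))
    for k in range(beta + 1) for H in combinations(edges, k)
)
# Value stability: |Opt - Opt'| is bounded by the uniform deviation.
assert abs(opt(wt, beta) - opt(wt_prime, beta)) <= sup_dev
print(opt(wt, beta), opt(wt_prime, beta), sup_dev)
```

Note that the optimizer itself may still switch under such a perturbation; the assertion only certifies the value bound, which is exactly the separation the section insists on.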

3. Optimizer stability under a gap condition

We now turn to the optimizer set. The correct first theorem is a local constancy result under an isolation gap.

Theorem VI.12 (Local constancy under an isolation gap). Fix a combinatorial support (V, R) and a budget level β ≥ 0. Assume that the feasible family is finite and unchanged under perturbation:

F_β(G) = F_β(G′).

Assume that H* ∈ argmax(β, G) is the unique optimizer for G, and define the value gap

δ := I_G(H*) − sup {I_G(H) : H ∈ F_β(G), H ≠ H*}.

Then δ > 0. If, in addition,

sup_{H ∈ F_β(G)} |I_G(H) − I_G′(H)| < δ/2,

then H* remains the unique optimizer for the perturbed structure G′.

Proof. Because F_β(G) is finite and H* is the unique optimizer, the set

{I_G(H) : H ∈ F_β(G), H ≠ H*}

is finite and has maximum strictly smaller than I_G(H*). Hence δ > 0.

Let

ε := sup_{H ∈ F_β(G)} |I_G(H) − I_G′(H)|,

so that ε < δ/2 by hypothesis.

For the distinguished optimizer H*, one has

I_G′(H*) ≥ I_G(H*) − ε.

For every competing feasible H ≠ H*, one has

I_G′(H) ≤ I_G(H) + ε ≤ I_G(H*) − δ + ε.

Therefore

I_G′(H*) − I_G′(H) ≥ (I_G(H*) − ε) − (I_G(H*) − δ + ε) = δ − 2ε > 0.

Thus

I_G′(H*) > I_G′(H) for every H ∈ F_β(G′), H ≠ H*.

Hence H* is the unique optimizer for G′.

Remark VI.13. Theorem VI.12 uses a gap condition, not merely uniqueness in an abstract sense. In the finite setting uniqueness implies a positive gap automatically, but the proof works through the gap because that is the quantitatively stable object. This formulation is preferable for perturbation theory.

Corollary VI.14 (Persistence under Lipschitz perturbations). Under the hypotheses of Theorem VI.12, assume further that there exists L ≥ 0 such that

sup_{H ∈ F_β(G)} |I_G(H) − I_G′(H)| ≤ L · Rel(G, G′).

If

Rel(G, G′) < δ/(2L),

with the convention that the condition is vacuous when L = 0, then H* remains the unique optimizer for G′.

Proof. The hypothesis implies

sup_{H ∈ F_β(G)} |I_G(H) − I_G′(H)| < δ/2.

The conclusion therefore follows from Theorem VI.12.

Remark VI.15. Without an isolation gap, optimizer stability can fail even under arbitrarily small perturbations. If two distinct feasible candidates tie at the optimum, then an arbitrarily small perturbation may select one over the other. This is exactly why the paper separates value stability from optimizer stability and avoids claiming continuity of the optimizer map under mere continuity of the objective.

4. Near-optimizer stability

Exact optimizer stability is often too strong. Near-optimizer stability is usually the natural substitute.

Proposition VI.16 (Transport of near-optimizers). Assume the hypotheses of Theorem VI.6, and write

η := ω(Rel(G, G′)).

Then for every ε ≥ 0,

argmax_ε(β, G) ⊆ argmax_{ε+2η}(β, G′).

Proof. Let H ∈ argmax_ε(β, G). Then

I_G(H) ≥ Opt(β, G) − ε.

By the continuity hypothesis,

I_G′(H) ≥ I_G(H) − η.

By Theorem VI.6,

Opt(β, G) ≥ Opt(β, G′) − η.

Hence

I_G′(H) ≥ Opt(β, G) − ε − η ≥ Opt(β, G′) − ε − 2η.

Therefore

H ∈ argmax_{ε+2η}(β, G′).

Corollary VI.17 (Symmetric near-optimizer comparison). Under the hypotheses of Theorem VI.6, one has for every ε ≥ 0

argmax_ε(β, G) ⊆ argmax_{ε+2η}(β, G′) and argmax_ε(β, G′) ⊆ argmax_{ε+2η}(β, G),

where

η := ω(Rel(G, G′)).

Proof. Apply Proposition VI.16 twice, once from G to G′ and once from G′ to G.

Remark VI.18. Proposition VI.16 is often the more realistic perturbative statement. In many optimization problems exact maximizers are unstable, while near-optimality is robust. This is well aligned with the role of approximate preservation in sparsification, graph reduction, and approximation algorithms [17, 19, 32, 36, 44, 47].

5. Limits of the present section

The results proved here establish three levels of perturbative control:

1. modulus-of-continuity control of the optimal value;

2. local constancy of the unique optimizer under a strict isolation gap;

3. transport of near-optimizers under the same value-stability hypothesis.

This is the correct first stability package. The section does not prove continuity of the full optimizer correspondence in any Hausdorff or Kuratowski sense, and it does not treat support-changing perturbations. Those stronger directions may be mathematically interesting, but they lie beyond the present base theory and should not be suggested as already established.

Remark VI.19. With existence, structural representatives, and perturbative value control in place, the next natural step is approximation or relaxation. That layer requires new hypotheses and should therefore be treated separately, not folded into the present perturbative statements.

7. APPROXIMATION, RELAXATION, AND RECOVERY

This section is optional in the logic of the paper. It should be retained only if it carries genuine theorem-level content. The preceding sections already establish a discrete variational theory with existence, structural representatives, and perturbation stability. Approximation and relaxation become relevant only if one can prove a mathematically controlled passage from the discrete feasible family to a relaxed model and then back to discrete substructures. This is the point at which many surrounding literatures become informative: spectral and cut relaxations in graph partitioning [7, 8], sparsification and graph reduction via operator surrogates [17, 19, 32, 36], and approximation frameworks in knapsack-constrained or submodular optimization [43, 44, 45, 46, 47]. The present section, however, does not import any one of those frameworks automatically. It records only the theorem-secure consequences of an abstract relaxation-and-recovery mechanism.

1. Relaxed feasible models

We begin by fixing the level of abstraction. A relaxation is meaningful only after the ambient space, the embedding of discrete substructures, the relaxed feasible region, and the relaxed objective are all specified.

Definition VII.1 (Relaxed weighted substructure). Let G be a finite weighted relational structure and let β ≥ 0. A relaxed weighted substructure is an element

x ∈ F_β^rel(G)

of a prescribed compact admissible set contained in a finite-dimensional real vector space X, together with a fixed embedding

ι : F_β(G) → F_β^rel(G).

The intended interpretation is that the coordinates of x represent fractional participation of edges, vertices, or higher-order relations. The exact ambient space and encoding must be fixed before any theorem in this section is invoked.

Remark VII.2. There is no canonical relaxed space in the abstract theory. Depending on the model class, one may work with incidence vectors, weighted adjacency variables, operator encodings, or other finite-dimensional representations. In graph partitioning and graph signal methods, relaxed objects often arise through real-valued embeddings or operator surrogates [7, 8, 40, 41]. In submodular optimization, one often passes to a convex or multilinear surrogate over a hypercube or matroid polytope [44, 47]. The present paper does not commit to one universal model.

Definition VII.3 (Relaxed objective and relaxed value). Assume that F_β^rel(G) is fixed. A relaxed information-retention functional is a map

I_rel : F_β^rel(G) → R

such that

I_rel(ι(H)) = I(H) for all H ∈ F_β(G).

The associated relaxed optimal value is

Opt_rel(β, G) := sup_{x ∈ F_β^rel(G)} I_rel(x).

The discrete optimal value is written

Opt_disc(β, G) := sup_{H ∈ F_β(G)} I(H).

Remark VII.4. The extension requirement in Definition VII.3 is essential. Without it, the relaxed problem is simply a different optimization problem, not a relaxation of the discrete one. This point is frequently hidden in informal algorithmic discussions but must be explicit in a referee-facing theoretical paper.
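One concrete instance of Definitions VII.1 and VII.3, chosen by us purely for illustration, is a fractional-participation relaxation of additive edge selection: each coordinate x_e ∈ [0, 1], the complexity of a relaxed point is Σ x_e, and the relaxed objective is linear, so it extends the discrete one on indicator vectors. The sketch below shows the discrete value never exceeding the relaxed value, with a nonzero difference at a fractional budget.

```python
def opt_disc(weights, budget):
    # Discrete model: each edge has unit complexity, so at most
    # floor(budget) edges may be selected; take the heaviest ones.
    k = int(budget)
    return sum(sorted(weights.values(), reverse=True)[:k])

def opt_rel(weights, budget):
    # Relaxed model: fractional participation x_e in [0, 1] with
    # sum x_e <= budget; a linear objective is maximised greedily.
    remaining, value = budget, 0.0
    for w in sorted(weights.values(), reverse=True):
        take = min(1.0, remaining)
        value += take * w
        remaining -= take
        if remaining <= 0:
            break
    return value

wt, beta = {"e1": 3.0, "e2": 2.0, "e3": 1.0}, 1.5
assert opt_disc(wt, beta) <= opt_rel(wt, beta)  # discrete never exceeds relaxed
gap = opt_rel(wt, beta) - opt_disc(wt, beta)    # difference between the two values
print(opt_disc(wt, beta), opt_rel(wt, beta), gap)  # 3.0 4.0 1.0
```

At an integer budget the two values coincide in this model; the fractional budget 1.5 is what produces the strict inequality here.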

1. Relaxed upper bounds

The first statement is elementary but necessary.

Proposition VII.5 (Relaxed upper bound). If the discrete feasible family embeds into the relaxed feasible region and the relaxed objective extends the discrete objective, then

Opt_disc(β, G) ≤ Opt_rel(β, G).

Proof. For each H ∈ F_β(G), the embedding gives

ι(H) ∈ F_β^rel(G),

and the extension property yields

I(H) = I_rel(ι(H)).

Therefore

I(H) ≤ sup_{x ∈ F_β^rel(G)} I_rel(x) = Opt_rel(β, G).

Taking the supremum over all H ∈ F_β(G) gives

Opt_disc(β, G) ≤ Opt_rel(β, G).

Remark VII.6. Proposition VII.5 says only that the relaxation gives an upper bound for the discrete problem. It does not imply tightness, integrality, or even near-tightness. Those stronger conclusions require a recovery theorem, an integrality theorem, or a quantitative gap estimate.

Corollary VII.7 (Exact equality under value-preserving recovery). Assume the hypotheses of Proposition VII.5. Assume further that every relaxed optimizer x* admits a discrete recovery

H = R(x*) ∈ F_β(G)

such that

I_rel(x*) ≤ I(H).

Then

Opt_disc(β, G) = Opt_rel(β, G).

Proof. By Proposition VII.5,

Opt_disc(β, G) ≤ Opt_rel(β, G).

Let x* be a relaxed optimizer. Then

I_rel(x*) = Opt_rel(β, G).

By the recovery hypothesis,

I(R(x*)) ≥ I_rel(x*) = Opt_rel(β, G).

Since R(x*) ∈ F_β(G), one has

I(R(x*)) ≤ Opt_disc(β, G).

Hence

Opt_rel(β, G) ≤ Opt_disc(β, G).

Combining both inequalities gives equality.

Remark VII.8. Corollary VII.7 is strong and should not be assumed in applications without proof. Exact value-preserving recovery is rare in broad combinatorial settings. More often one has approximate recovery, budget blow-up, or rounding loss, which is the correct next level of theorem.

2. Approximate recovery

We now record the quantitative recovery statement that is typically available in relaxed optimization settings.

Definition VII.9 (Recovery operator with quantitative control). Let α, κ ≥ 0. An (α, κ)-recovery operator for the relaxed problem is a map

R : F_β^rel(G) → F_{β+κ}(G)

such that for every relaxed feasible point

x ∈ F_β^rel(G),

one has

I(R(x)) ≥ I_rel(x) − α.

Remark VII.10. Definition VII.9 separates two different sources of approximation error:

(i) value loss α, which measures the deterioration of retained information under recovery;

(ii) budget blow-up κ, which measures how far the recovered discrete object may exceed the original budget.

This separation is mathematically preferable to hiding both effects inside a single approximation ratio, especially in a paper whose central object is budgeted information retention.

Theorem VII.11 (Approximate recovery from relaxed solutions). Assume that:

1. the relaxed feasible region F_β^rel(G) and relaxed objective I_rel are defined as in Definition VII.3;

2. the relaxed problem admits an optimizer

x* ∈ F_β^rel(G);

3. there exists an (α, κ)-recovery operator

R : F_β^rel(G) → F_{β+κ}(G).

Then the recovered discrete substructure

H := R(x*)

is an (α, κ)-near optimizer in the sense that

H ∈ F_{β+κ}(G)

and

I(H) ≥ Opt_rel(β, G) − α.

In particular, by Proposition VII.5,

I(H) ≥ Opt_disc(β, G) − α.

Proof. Because x* is a relaxed optimizer,

I_rel(x*) = Opt_rel(β, G).

Since R is an (α, κ)-recovery operator, one has

H = R(x*) ∈ F_{β+κ}(G)

and

I(H) ≥ I_rel(x*) − α = Opt_rel(β, G) − α.

The final inequality

I(H) ≥ Opt_disc(β, G) − α

follows from Proposition VII.5, which gives

Opt_disc(β, G) ≤ Opt_rel(β, G).

Remark VII.12. Theorem VII.11 is the correct theorem-level content of the approximation layer. It proves neither exact recovery nor a purely budget-feasible approximation unless κ = 0. It also does not prove a multiplicative approximation ratio. It gives an additive value guarantee together with a possible additive budget violation, exactly as stated.

Corollary VII.13 (Feasible approximate recovery without budget blow-up). Under the hypotheses of Theorem VII.11, if κ = 0, then the recovered discrete substructure H belongs to F_β(G) and satisfies

I(H) ≥ Opt_disc(β, G) − α.

Thus H is an α-near optimizer for the original discrete budgeted problem.

Proof. If κ = 0, then H ∈ F_β(G) by definition of the recovery operator. The value estimate is exactly the last inequality in Theorem VII.11.

3. Approximation gaps and integrality defects

The next notion isolates how far the relaxed model may sit above the discrete one.

Definition VII.14 (Relaxation gap). Assume the relaxed model is defined. The relaxation gap at (β, G) is

Gap_rel(β, G) := Opt_rel(β, G) − Opt_disc(β, G).

Proposition VII.15 (Gap control from approximate recovery). Assume the hypotheses of Theorem VII.11. If, in addition, κ = 0, then

0 ≤ Gap_rel(β, G) ≤ α.

Proof. The lower bound follows from Proposition VII.5. For the upper bound, let x* be a relaxed optimizer and let

H = R(x*) ∈ F_β(G).

Then

Opt_rel(β, G) = I_rel(x*) ≤ I(H) + α ≤ Opt_disc(β, G) + α.

Hence

Gap_rel(β, G) = Opt_rel(β, G) − Opt_disc(β, G) ≤ α.

Remark VII.16. Proposition VII.15 may be read as an additive integrality-gap estimate, but the paper should use that language only if the relaxation truly has an integrality interpretation in the chosen model. In some operator or fractional-participation frameworks, "relaxation gap" is safer and more precise.

5. Limits and retention criterion for this section

The results above justify retaining this section only when the paper really provides a concrete relaxed model together with a genuine recovery theorem. Without such content, the section should be omitted. A vague discussion of relaxation would weaken the paper rather than strengthen it. This is especially important in a Q1-facing article, where breadth language without theorem content is usually transparent to the referee.

Remark VII.17. This section belongs in the final paper only if one of the following is actually proved:

1. a concrete relaxed compact model together with an exact recovery theorem;

2. a concrete relaxed compact model together with a quantitative (α, κ)-recovery theorem;

3. an explicit bound on the relaxation gap in a natural model class.

If none of these is established, the correct editorial decision is to remove the section entirely.

    ‌Remark VII.18. Once approximation and recovery are formulated at the correct level, the next necessary layer is negative theory: failure of existence, stability, or recovery when natural hypotheses are dropped. That belongs to the obstruction section and should be kept logically separate from the present positive results.‌
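The (α, κ)-recovery mechanism of Definition VII.9 can be illustrated on the fractional-participation model: rounding the single fractional coordinate of a greedy relaxed solution up to 1 loses no value (α = 0) but may exceed the budget by one unit (κ = 1). The model, the weights, and the rounding rule below are our illustrative assumptions, not a recovery theorem for any general class.

```python
def recover(x):
    # R(x): keep every edge with positive participation, i.e. round the
    # one fractional coordinate of a greedy relaxed solution up to 1.
    return [e for e, xe in x.items() if xe > 0]

wt, beta = {"e1": 3.0, "e2": 2.0, "e3": 1.0}, 1.5
# Greedy fractional optimizer of the linear relaxation at budget 1.5.
x_star = {"e1": 1.0, "e2": 0.5, "e3": 0.0}
value_rel = sum(xe * wt[e] for e, xe in x_star.items())  # relaxed value 4.0

H = recover(x_star)
value_H = sum(wt[e] for e in H)  # 5.0: rounding up loses no value here
cost_H = len(H)                  # 2 selected edges

assert value_H >= value_rel - 0.0   # value loss alpha = 0 for this rounding
assert cost_H <= beta + 1           # budget blow-up kappa = 1
print(value_rel, value_H, cost_H)   # 4.0 5.0 2
```

This matches the bookkeeping of Remark VII.10: the two error sources are tracked separately rather than folded into one ratio.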

8. OBSTRUCTIONS, SEPARATION RESULTS, AND FAILURE MODES

A positive theory of budgeted information-preserving subgraphs is not credible unless it also records the limits of its hypotheses. This section provides that negative layer. Its role is not rhetorical. It shows that the axioms used in the preceding sections are not decorative. In particular, it shows:

1. value stability does not by itself imply optimizer stability;

2. connected representative results cannot be expected without an explicit amalgamation or replacement principle.

These distinctions are well aligned with the surrounding literature. In graph optimization, partitioning, sparsification, reduction, and approximation theory, objective values often vary regularly while the identities of optimal or near-optimal structures change abruptly [7, 8, 17, 32, 36, 44, 47]. Likewise, connectivity of optimal objects is typically a model-dependent feature rather than a free consequence of ambient connectedness [13, 14, 36]. The present section formulates these facts inside the abstract framework of the paper.

      Remark VIII.1. This section is not ancillary. It marks the boundary of the positive theory. If a statement from Sections V or VI fails in the generality first imagined, then the corrected positive statement should remain there and the counterexample should appear here. That separation is important for theorem integrity.

      1. Failure of optimizer stability without a gap

        We begin with the perturbative side. The next theorem shows that continuity of the optimal value does not force

        1. if = 0, then

          I0

          (1) = I

          0

          (2) = 1,

          continuity, constancy, or any other regular behavior of the optimizer set.

          Theorem VIII.2 (Failure of optimizer stability without an isolation gap). There exists a one-parameter family of finite weighted graphs { } R and a budget level such that:

          1. the optimal value Opt( , ) varies continuously with ;

          2. the optimizer set argmax( , ) jumps discontinu- ously at a threshold parameter 0.

        In particular, value stability does not imply optimizer stabil- ity in the absence of an isolation gap.

        Proof. Let the common vertex set be

        = {1, 2, 3, 4},

        and let the ambient graph have edge set

        = {1, 2}, 1 = {1, 2}, 2 = {3, 4}.

        For each parameter R, define the weighted graph

        = ( , , wt )

        by

        wt (1) = 1 + , wt (2) = 1 .

        Let the admissible family Sub( ) consist of all spanning subgraphs of , and let the complexity functional be

        C () := | () |.

        Fix the budget

        = 1.

        Then the feasible family consists exactly of the three sub- graphs

        , 1, 2,

        where has no edges, 1 contains only 1, and 2 contains only 2.

        Define the information-retention functional by

        so both 1 and 2 are optimizers.

        Thus the optimizer set changes from {1} to {1, 2} to

        {2} as crosses 0. In particular, the optimizer correspon- dence is not locally constant at

        0 = 0.

        This proves the claim.

        Remark VIII.3. Theorem VIII.2 is the exact reason Sec- tion VI separated value stability from optimizer stability and imposed an isolation-gp hypothesis for local constancy. The counterexample is elementary, but its message is funda- mental: continuous variation of objective values does not prevent discrete optimizer switching.

        Corollary VIII.4 (Necessity of a separation hypothesis for local constancy). Any theorem asserting local constancy of the optimizer set under perturbation must impose a hypothesis that excludes ties at the optimum, such as a positive isolation gap or an equivalent strict separation condition.

        ‌Proof. If no such exclusion is imposed, Theorem VIII.2 provides a counterexample.

      2. Failure of connected-optimizer reduction

        We next show that connectedness of the ambient graph does not force connectedness of an optimizer.

        Proposition VIII.5 (Failure of monotonicity-based reduc- tion). There exist:

        1. a connected finite weighted graph ,

        2. a complexity functional C,

        3. an information-retention functional I,

        4. and a budget level ,

          such that an optimizer exists, but no connected optimizer exists.

          Proof. Let the ambient graph have vertex set

          Thus

          I () :=

          ( )

          wt ().

          and edge set

          = {1, 2, 1, 2, }

          I () = 0, I (1) = 1 + , I (2) = 1 .

          Hence

          Opt( , ) = max{0, 1 + , 1 } = 1 + | |.

          Therefore the optimal value varies continuously with . Now analyze the optimizer set:

          1. if > 0, then 1 + > 1 , so 1 is the unique optimizer;

          2. if < 0, then 1 > 1 + , so 2 is the unique optimizer;

            = {12, 12, 2, 1}.

            This graph is connected. Assign weights by

            wt(12) = 10, wt(12) = 10, wt(2) = 1, wt(1) = 1.

            Let Sub() be the class of all spanning subgraphs of . Define the complexity functional by

            C () := | () |

            and fix the budget

            = 2.

            Thus any feasible subgraph may contain at most two edges.

            Define the information-retention functional by Proof. Apply Theorem IV.2 and Theorem V.5 to the problem

            I ()

            :=

            ( )

            wt().

            constructed in Proposition VIII.5. Existence and inclusion- minimality follow from finiteness, while absence of con- nected optimizers is already proved there.

That is, retained information is the total weight of the selected edges.

Consider the disconnected subgraph H consisting of all five vertices together with the two heavy edges. It is feasible, since

C(H) = 2 ≤ β,

and its value is

I(H) = 10 + 10 = 20.

Now let H′ be any connected feasible subgraph. Since G is a tree on five vertices, every connected spanning subgraph must contain all four edges. In particular, no connected spanning subgraph is feasible under the budget β = 2.

If one instead interprets connected admissible substructures as connected subgraphs on arbitrary vertex subsets, then any connected feasible subgraph with at most two edges can contain:

1. one heavy edge and one light edge, with value at most 11; or

2. two light edges, with value at most 2; or

3. one heavy edge alone, with value 10.

In all cases its value is strictly less than 20. Hence H is an optimizer, and no connected feasible subgraph is optimal. Therefore no connected optimizer exists, even though the ambient graph is connected.

Remark VIII.6. Proposition VIII.5 shows that connected-representative results require a genuine replacement principle, such as the component-amalgamation hypothesis from Section V. Ambient connectedness alone is irrelevant. The positive structural proposition there is therefore correctly conditional rather than universal.

Corollary VIII.7 (Ambient connectedness is not a sufficient structural axiom). Any theorem asserting existence of a connected optimizer must use more than the assumption that the ambient graph is connected.

Proof. This follows immediately from Proposition VIII.5.

3. A separation between existence and structural regularity

The next observation is not deep, but it is conceptually useful.

Proposition VIII.8 (Existence does not imply structural regularity). There exist budgeted optimization problems in the present framework for which:

1. the discrete optimizer set is nonempty;

2. inclusion-minimal optimizers exist;

3. but no connected optimizer exists.

Remark VIII.9. Proposition VIII.8 clarifies the architecture of the paper. Existence, minimality, connected representation, and perturbative stability are genuinely different layers. None should be smuggled into another.

1. Boundary of the positive theory

The examples above show precisely what the earlier sections do and do not prove.

1. Section IV proves attainment, not stability.

2. Section V proves minimal representatives in full generality, but connected representatives only under a strong replacement hypothesis.

3. Section VI proves value stability under a modulus of continuity, but optimizer stability only under strict separation.

This is the correct theorem boundary.

Remark VIII.10. For a paper of the present type, one honest negative section is mathematically healthy. It prevents inflated generality, clarifies which hypotheses are genuinely used, and helps the referee see that the framework has been stress-tested rather than merely decorated with formal axioms.
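The obstruction instance above can be checked mechanically. The sketch below brute-forces all subgraphs with at most two edges on one concrete tree consistent with the case analysis in the proof: a weighted path on five vertices whose end edges are heavy (weight 10) and whose middle edges are light (weight 1). The vertex labels and the path layout are illustrative, not taken verbatim from the paper's instance.

```python
from itertools import combinations

# One concrete tree realizing the case analysis above: a weighted path
# v1-v2-v3-v4-v5 whose end edges are heavy (weight 10) and whose middle
# edges are light (weight 1). Labels are illustrative.
edges = {("v1", "v2"): 10, ("v2", "v3"): 1, ("v3", "v4"): 1, ("v4", "v5"): 10}

def is_connected(edge_set):
    """True if the subgraph induced by the selected edges is connected."""
    if not edge_set:
        return True
    verts = {v for e in edge_set for v in e}
    seen, stack = set(), [next(iter(verts))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack += [u for e in edge_set if v in e for u in e if u != v]
    return seen == verts

budget = 2
feasible = [s for r in range(budget + 1) for s in combinations(edges, r)]
value = lambda s: sum(edges[e] for e in s)

best = max(feasible, key=value)                               # unrestricted optimum
best_conn = max((s for s in feasible if is_connected(s)), key=value)

print(value(best), is_connected(best))   # 20 False: optimal but disconnected
print(value(best_conn))                  # 11: the best any connected choice attains
```

The unrestricted optimum selects the two non-adjacent heavy edges and is disconnected, while the best connected feasible value is 11, matching the three cases enumerated in the proof.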

  9. ‌VERIFICATION OF THE AXIOMS IN NATURAL MODEL CLASSES‌

    The preceding sections introduced an abstract hypothesis package. Such a package is mathematically useful only if it is verified in concrete natural classes. The present section provides two such classes. The first is additive edge-weight retention, which is elementary but robust and gives a clean verification of monotonicity and perturbative continuity. The second is a coverage-type class, closer in spirit to structured neighborhood retention and set-function optimization, where monotonicity and submodularity arise from diminishing-return structure. These two model classes do not exhaust the theory, but they show that the axioms are not empty and that the earlier existence, stability, and approximation results apply in nontrivial settings. Compare the surrounding perspectives in weighted graph methods, graph signal processing, local clustering, and submodular optimization [3, 8, 9, 10, 40, 41, 43, 44, 47].

    Remark IX.1. This section is not meant to prove that every interesting information-retention functional belongs to one of the classes below. Its purpose is narrower. It shows that the abstract framework contains at least two substantial families in which the axioms can be checked directly and the earlier theorem package becomes operational.

    1. Additive edge-weight retention

      ‌We begin with the simplest natural class. Here the retained information is obtained by summing edgewise contributions.

Definition IX.2 (Additive edge-potential model). Let G = (V, E, wt) be a finite weighted graph or weighted relational structure, and let Sub(G) be the class of admissible spanning weighted subgraphs of G. For each edge e ∈ E, let

φ_e : [0, ∞) → [0, ∞)

be a prescribed function. Define the information-retention functional by

I(H) := Σ_{e ∈ E(H)} φ_e(wt(e)),  H ∈ Sub(G).

Typical complexity functionals in this model are:

C(H) = |E(H)|,  or  C(H) = Σ_{e ∈ E(H)} ψ_e(wt(e)),

where each ψ_e : [0, ∞) → [0, ∞) is nondecreasing.

Remark IX.3. The model in Definition IX.2 is elementary, but it is not vacuous. It includes weighted edge-importance selection, certain score-based sparsification surrogates, and several coverage-like linearizations. It also serves as the cleanest baseline for perturbative analysis because the objective decomposes into edgewise contributions. In surrounding literatures, additive and nearly additive edge scores frequently appear as tractable surrogates for more global graph quantities [15, 16, 20, 23, 36].

Proposition IX.4 (Verification for additive edge-weight retention). Assume the setting of Definition IX.2. Suppose each

φ_e : [0, ∞) → [0, ∞)

is continuous and nondecreasing. Then:

1. the induced information-retention functional I is monotone under edge inclusion;

2. if C(H) = |E(H)|, then the feasible family F_β(G) is finite for every β ≥ 0;

3. if, in addition, each φ_e is Lipschitz with constant at most λ, then for ambient weighted structures G, G′ on the same support and every common admissible subgraph H,

|I_G(H) − I_G′(H)| ≤ λ |E(H)| d_Rel(G, G′);

4. consequently, if the budget is edge-cardinality constrained so that

C(H) = |E(H)| ≤ β

on the feasible family, then

|Opt(G, β) − Opt(G′, β)| ≤ λ β d_Rel(G, G′).

In particular, since |E(H)| ≤ |E| for all H,

|Opt(G, β) − Opt(G′, β)| ≤ λ |E| d_Rel(G, G′).

Proof. (i) Let H1, H2 ∈ Sub(G) with H1 ⊆ H2. Then E(H1) ⊆ E(H2), so

I(H2) − I(H1) = Σ_{e ∈ E(H2) \ E(H1)} φ_e(wt(e)).

Since each φ_e takes values in [0, ∞), every summand on the right-hand side is nonnegative. Hence

I(H1) ≤ I(H2).

Thus I is monotone under edge inclusion.

(ii) If C(H) = |E(H)|, then

F_β(G) = {H ∈ Sub(G) : |E(H)| ≤ β}.

Because E is finite, there are only finitely many subsets of E, hence only finitely many admissible spanning subgraphs. Therefore F_β(G) is finite.

(iii) Let H ∈ Sub(G) ∩ Sub(G′). Then

I_G(H) − I_G′(H) = Σ_{e ∈ E(H)} [φ_e(wt_G(e)) − φ_e(wt_G′(e))].

Taking absolute values and using the triangle inequality,

|I_G(H) − I_G′(H)| ≤ Σ_{e ∈ E(H)} |φ_e(wt_G(e)) − φ_e(wt_G′(e))|.

Since each φ_e is Lipschitz with constant at most λ,

|φ_e(wt_G(e)) − φ_e(wt_G′(e))| ≤ λ |wt_G(e) − wt_G′(e)|.

Hence

|I_G(H) − I_G′(H)| ≤ λ Σ_{e ∈ E(H)} |wt_G(e) − wt_G′(e)| ≤ λ |E(H)| d_Rel(G, G′).

(iv) Suppose now that the feasible family is defined by the edge-budget condition |E(H)| ≤ β. Then every feasible H satisfies

|E(H)| ≤ β.

By part (iii),

|I_G(H) − I_G′(H)| ≤ λ β d_Rel(G, G′)

for every common feasible H. The value-stability theorem from Section VI therefore yields

|Opt(G, β) − Opt(G′, β)| ≤ λ β d_Rel(G, G′).

Since β ≤ |E| on any nontrivial feasible family, the coarser estimate

|Opt(G, β) − Opt(G′, β)| ≤ λ |E| d_Rel(G, G′)

also follows.

Remark IX.5. Proposition IX.4 verifies the main axioms used earlier: finite feasibility, monotonicity, and perturbative continuity. Thus the abstract existence theorem, the minimal-optimizer theorem, and the value-stability theorem apply immediately in this model class. What it does not provide automatically is any connected-representative theorem, because additivity alone does not imply the component-amalgamation property from Section V.

Corollary IX.6 (Immediate applicability of the base theory in the additive model). In the additive edge-potential model with edge-cardinality budget, the following hold:

1. the discrete budgeted problem admits an optimizer;

2. the optimizer set contains an inclusion-minimal element;

3. the optimal value is Lipschitz in the ambient sup-metric whenever the edge potentials are uniformly Lipschitz.

Proof. Part (i) follows from finiteness of the feasible family and Theorem IV.2. Part (ii) follows from Corollary V.7. Part (iii) is exactly the last assertion of Proposition IX.4.
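Proposition IX.4(iv) can be exercised numerically. The sketch below instantiates the additive model with identity potentials φ_e(t) = t (so the Lipschitz constant λ is 1) on a hypothetical four-edge support, computes Opt(G, β) by brute force, and checks the stability bound against a perturbed weight vector. All edge labels and weights are illustrative.

```python
from itertools import combinations

# Additive edge-potential model on a hypothetical four-edge support with
# identity potentials phi_e(t) = t, so the common Lipschitz constant is lam = 1.
wt  = {"a": 3.0, "b": 5.0, "c": 2.0, "d": 4.0}   # ambient weights (illustrative)
wt2 = {"a": 3.2, "b": 4.9, "c": 2.1, "d": 4.3}   # perturbed weights, same support
lam, budget = 1.0, 2

def opt(weights):
    """Opt(G, beta): brute-force best value over subgraphs with <= budget edges."""
    return max(sum(weights[e] for e in s)
               for r in range(budget + 1)
               for s in combinations(weights, r))

d_rel = max(abs(wt[e] - wt2[e]) for e in wt)     # sup-distance between the weights
gap = abs(opt(wt) - opt(wt2))

# Proposition IX.4(iv): |Opt(G, beta) - Opt(G', beta)| <= lam * beta * d_rel.
assert gap <= lam * budget * d_rel
print(gap, "<=", lam * budget * d_rel)
```

Here the two optima are attained on the same pair of heavy edges, so the value gap is far below the worst-case bound λ β d_Rel; the bound is tight only when a single near-optimal subgraph absorbs the maximal perturbation on every selected edge.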

2. Submodular neighborhood-coverage retention

We next consider a more structured model. Here retained information is not purely additive over edges. It is built from a coverage principle on ambient features. This places the model closer to neighborhood retention, motif retention, and coverage-based set-function optimization.

Definition IX.7 (Coverage-based retention model). Let G = (V, E, wt) be a finite weighted graph or relational structure, and let U be a finite ambient feature set associated with G. The elements of U may represent, depending on the model, vertices, weighted neighborhoods, motifs, relation patterns, or other local structures extracted from G.

For each admissible substructure H ∈ Sub(G), let

Cov(H) ⊆ U

denote the set of features covered by H. Let

μ : 2^U → [0, ∞)

be a set function. Define

I(H) := μ(Cov(H)).

Remark IX.8. Definition IX.7 is intentionally broad. It includes classical coverage functions, weighted coverage, and more general monotone submodular feature scores. In graph-based settings, the covered feature set may encode which neighborhoods, motifs, or relation patterns remain visible after pruning. This type of model is close in spirit to information-coverage objectives used in structured subset selection and influence-style optimization [45, 46, 47].

Definition IX.9 (Monotone submodular coverage system). The coverage model from Definition IX.7 is called monotone submodular if:

1. the coverage assignment is monotone:

H1 ⊆ H2 ⟹ Cov(H1) ⊆ Cov(H2);

2. the set function μ is normalized,

μ(∅) = 0;

3. the set function μ is monotone:

A ⊆ B ⟹ μ(A) ≤ μ(B);

4. the set function μ is submodular:

μ(A) + μ(B) ≥ μ(A ∪ B) + μ(A ∩ B)

for all A, B ⊆ U.

Proposition IX.10 (Verification for coverage-based retention). Assume the monotone submodular coverage system from Definition IX.9. Then:

1. the induced information-retention functional I is monotone on Sub(G);

2. if the map H ↦ Cov(H) preserves unions and intersections in the sense that

Cov(H1 ∪ H2) = Cov(H1) ∪ Cov(H2),  Cov(H1 ∩ H2) = Cov(H1) ∩ Cov(H2),

then I is submodular on the admissible family:

I(H1) + I(H2) ≥ I(H1 ∪ H2) + I(H1 ∩ H2);

3. if, in addition, the budget is edge-cardinality or vertex-cardinality based, then the feasible family is finite.

Proof. (i) Let H1, H2 ∈ Sub(G) with H1 ⊆ H2. By monotonicity of the coverage assignment,

Cov(H1) ⊆ Cov(H2).

Since μ is monotone,

I(H1) = μ(Cov(H1)) ≤ μ(Cov(H2)) = I(H2).

Thus I is monotone.

(ii) Assume now that coverage preserves unions and intersections in the stated sense. Then

I(H1) + I(H2) = μ(Cov(H1)) + μ(Cov(H2)).

By submodularity of μ,

μ(Cov(H1)) + μ(Cov(H2)) ≥ μ(Cov(H1) ∪ Cov(H2)) + μ(Cov(H1) ∩ Cov(H2)).

Using the union/intersection preservation property,

μ(Cov(H1) ∪ Cov(H2)) = μ(Cov(H1 ∪ H2)) = I(H1 ∪ H2),

and similarly,

μ(Cov(H1) ∩ Cov(H2)) = μ(Cov(H1 ∩ H2)) = I(H1 ∩ H2).

Hence

I(H1) + I(H2) ≥ I(H1 ∪ H2) + I(H1 ∩ H2).

(iii) If the budget is based on edge or vertex cardinality, then the admissible family consists of subsets of a finite ground set, hence is finite. Therefore the feasible family under any fixed budget is finite as well.

Remark IX.11. Proposition IX.10 is the main reason to keep this second model class. It shows that the abstract supplementary structural axiom from earlier sections is not merely formal. In this class, submodularity is genuine and mathematically checkable. This places the framework in real contact with classical submodular optimization while keeping the object of study at the level of subgraphs or structured substructures rather than arbitrary subsets [43, 44, 45, 46, 47].

Corollary IX.12 (Existence and minimality in the coverage model). Under the hypotheses of Proposition IX.10, if the budget is cardinality-based, then:

1. the discrete budgeted problem admits an optimizer;

2. the optimizer set contains an inclusion-minimal element;

3. the information-retention functional satisfies the monotonicity hypothesis used in the structural section.

Proof. Finiteness of the feasible family is given by Proposition IX.10(iii), so existence follows from Theorem IV.2 and inclusion-minimality follows from Corollary V.7. Monotonicity is exactly Proposition IX.10(i).
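Both conclusions of Proposition IX.10 can be verified exhaustively on small instances. The sketch below builds a toy coverage system in which Cov(H) is the union of per-edge feature sets and μ is weighted coverage, then checks monotonicity and the submodular inequality over every pair of subgraphs. The edge labels, features, and weights are illustrative.

```python
from itertools import combinations

# Toy coverage system: Cov(H) is the union of per-edge feature sets and mu is
# weighted coverage, a classical monotone submodular set function.
covers = {"e1": {"u1", "u2"}, "e2": {"u2", "u3"}, "e3": {"u3", "u4", "u5"}}
feature_wt = {"u1": 2, "u2": 1, "u3": 3, "u4": 1, "u5": 2}

def I(edge_set):
    """I(H) = mu(Cov(H)) with mu = weighted coverage."""
    covered = set().union(*(covers[e] for e in edge_set)) if edge_set else set()
    return sum(feature_wt[u] for u in covered)

subgraphs = [frozenset(s) for r in range(len(covers) + 1)
             for s in combinations(covers, r)]

# Monotonicity: H1 subset of H2 implies I(H1) <= I(H2).
mono = all(I(h1) <= I(h2) for h1 in subgraphs for h2 in subgraphs if h1 <= h2)

# Submodularity: I(H1) + I(H2) >= I(H1 union H2) + I(H1 intersect H2).
sub = all(I(h1) + I(h2) >= I(h1 | h2) + I(h1 & h2)
          for h1 in subgraphs for h2 in subgraphs)

print(mono, sub)   # True True on this instance
```

In this construction Cov preserves unions automatically, while intersection preservation can fail (Cov(H1 ∩ H2) may be strictly smaller than Cov(H1) ∩ Cov(H2)); the inequality nonetheless holds here because weighted coverage is modular on feature sets, which illustrates that the preservation hypothesis of Proposition IX.10(ii) is sufficient rather than necessary.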

    3. Comparison of the two model classes

    The two classes above serve different roles.

Remark IX.13. The additive edge-potential class is analytically cleaner and is the better class for perturbative continuity estimates. The coverage-based class is structurally richer and is the better class for verifying submodularity-type hypotheses. Neither class subsumes the other in full generality. Together, however, they demonstrate that the axioms used in the paper cover both:

    1. edgewise quantitative models close to weighted graph reduction and sparsification surrogates [15, 16, 17, 36], and

2. structured coverage models close to monotone submodular selection principles [43, 44, 47].

    This is enough to anchor the abstract theory mathematically.
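As a pointer to the classical theory invoked in [43, 44, 47], the sketch below runs the standard greedy rule for monotone submodular maximization under a cardinality budget on a small coverage instance. It is included purely as an illustrative baseline; the present paper proves structural theorems, not algorithmic guarantees, and every label below is hypothetical.

```python
# Classical greedy rule for monotone submodular maximization under a
# cardinality budget (Nemhauser-Wolsey-Fisher [43]), shown on an
# illustrative coverage instance.
covers = {"e1": {"u1", "u2"}, "e2": {"u2", "u3"}, "e3": {"u4"}, "e4": {"u1", "u4"}}
wt = {"u1": 2, "u2": 1, "u3": 3, "u4": 1}

def I(sel):
    """Weighted coverage of the selected edges."""
    covered = set().union(*(covers[e] for e in sel)) if sel else set()
    return sum(wt[u] for u in covered)

def greedy(budget):
    """Repeatedly add the edge with the largest marginal coverage gain."""
    sel = []
    for _ in range(budget):
        gains = {e: I(sel + [e]) - I(sel) for e in covers if e not in sel}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        sel.append(best)
    return sel

sel = greedy(2)
print(sel, I(sel))   # ['e2', 'e4'] 7
```

For monotone submodular objectives, this greedy rule attains at least a (1 − 1/e) fraction of the optimal value under a cardinality constraint [43]; on this instance it happens to find the exact optimum.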

Remark IX.14. The present section does not claim that all natural information-retention systems satisfy all earlier hypotheses. In particular, connected-representative principles and exact recovery theorems remain genuinely model-dependent. The point is only that the abstract framework is populated by substantial classes, not that every desirable theorem holds in every class.

  10. ‌CONCLUDING REMARKS AND OPEN PROBLEMS

This final section is divided strictly into three layers. The first records only what has actually been proved. The second clarifies the exact methodological scope of the paper. The third isolates open problems that emerge naturally from the proved theory. This separation is deliberate. It prevents the conclusion from blurring theorem-level achievements with directions that remain open.

Proved conclusions

The paper develops a finite variational framework for budgeted information-preserving subgraph selection on weighted relational structures. Within that framework, the following conclusions have been established.

  1. Discrete existence. For every finite weighted relational structure G and every budget level β, if the feasible family F_β(G) is nonempty and finite, then the budgeted optimization problem admits at least one optimizer. Equivalently, the associated loss functional admits a minimizer on the feasible family. This is the basic attainment theorem for the discrete model.

  2. Relaxed existence. In the presence of a compact relaxed feasible region and an upper semicontinuous relaxed objective, the relaxed budgeted problem admits an optimizer. This statement is distinct from the discrete finite-attainment theorem and belongs to a different hypothesis package.

  3. Order-theoretic optimizer structure. Whenever the optimizer set is nonempty and the admissible family is finite, there exists an inclusion-minimal optimizer. Thus the theory yields canonical minimal representatives without requiring monotonicity of the objective.

  4. Conditional connected representative theorem. Under an explicit component-amalgamation hypothesis, the optimizer set contains a connected representative. This is a representative theorem only. It is not a classification of all optimizers.

  5. Value stability. Under a uniform modulus-of-continuity assumption for the information-retention functional on a common feasible family, the optimal value is stable under perturbation of the ambient weights. In the Lipschitz case, one obtains a quantitative Lipschitz bound.

  6. Optimizer stability under separation. If the optimizer is unique and separated from all competitors by a positive isolation gap, then sufficiently small perturbations of the ambient weights preserve that optimizer uniquely. Thus local constancy of the optimizer requires a nondegeneracy condition.

  7. Near-optimizer transport. Even when exact optimizer stability is too strong, near-optimizer sets remain controllable under perturbation, with an explicit loss in the near-optimality threshold.

  8. Approximation and recovery. In the presence of a relaxed model together with a quantitative recovery operator, every relaxed optimizer yields a discrete near-optimizer, possibly with controlled loss of value and controlled budget blow-up. Exact equality of relaxed and discrete values follows only under exact value-preserving recovery.

  9. Negative results. Two obstruction principles have been proved. First, continuity of the optimal value does not imply stability of the optimizer set in the absence of an isolation gap. Second, connectedness of the ambient graph does not imply existence of a connected optimizer. These negative results show that the positive hypotheses used earlier are not formal decoration.

  10. Verification in natural classes. The abstract hypothesis package has been verified in at least two substantial model classes: additive edge-potential retention models and monotone submodular coverage-type retention models. Hence the framework is not empty and is mathematically anchored in concrete families related to weighted graph optimization, graph reduction surrogates, and submodular selection [3, 8, 15, 16, 17, 32, 36, 43, 44, 47].

    Remark X.1. No stronger claim is made here. In particular, the paper does not prove a complete characterization of optimizer sets, does not establish generic uniqueness, and does not prove that connected representatives exist in all natural classes. Those matters remain outside the proved core.
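The first obstruction principle recorded in item 9 already appears in a two-edge instance with a tied optimum. The sketch below (all labels illustrative) shows the optimal value moving by at most ε under a perturbation of size ε, while the optimizer set jumps discontinuously; this is exactly why the isolation-gap hypothesis cannot be dropped from the optimizer-stability theorem.

```python
from itertools import combinations

# Tied weights: two feasible singletons with equal value, i.e. no isolation
# gap. The optimal value varies continuously with the weights, but the
# argmax set collapses as soon as the tie is broken.
def argmax_and_value(wt, budget=1):
    feas = [s for r in range(budget + 1) for s in combinations(wt, r)]
    val = lambda s: sum(wt[e] for e in s)
    best = max(val(s) for s in feas)
    return {s for s in feas if val(s) == best}, best

opts0, v0 = argmax_and_value({"e1": 5.0, "e2": 5.0})        # tied optimum
eps = 1e-9
opts1, v1 = argmax_and_value({"e1": 5.0, "e2": 5.0 + eps})  # tiny perturbation

print(opts0)          # two optimizers: ('e1',) and ('e2',)
print(opts1)          # the optimizer set collapses to ('e2',)
print(abs(v1 - v0))   # the optimal value moves only by about eps
```

The value map is 1-Lipschitz in the weights here, confirming value stability, yet no neighborhood of the tied instance has a locally constant optimizer set.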

    Methodological remarks

    The present framework has a deliberately restricted scope.

    1. Finite setting. The theory is developed for finite weighted relational structures. This restriction is not accidental. It is the level at which the discrete variational problem, the optimizer set, and the perturbation layer can all be formulated cleanly without measure-theoretic or compactness complications beyond those explicitly introduced in the relaxed model.

    2. Discrete and relaxed layers are separated. The paper distinguishes sharply between the discrete finite-attainment theorem and the relaxed compact-existence theorem. The former uses finiteness only. The latter uses compactness and upper semicontinuity. These arguments are not merged.

    3. Perturbation theory is weight-based and support-fixed. The perturbation layer varies ambient weights on a fixed combinatorial support. The paper does not treat support-changing perturbations, graph edit perturbations, or structural instability under changing admissible families.

    4. Structure theory is intentionally modest. The structural section proves existence of minimal representatives and, under additional hypotheses, connected representatives. It does not attempt a full optimizer classification, a lattice description, or a uniqueness theorem.

    5. Stability of values and stability of optimizers are different phenomena. The paper proves value stability under a modulus-of-continuity assumption. Optimizer stability requires more. In the finite setting, local constancy is derived only under a positive isolation gap. This separation is essential and is confirmed by the obstruction section.

    6. Approximation is conditional. The relaxation-and-recovery layer is retained only at the level justified by theorem content. No generic approximation claim is made without a concrete relaxed model and a proved recovery mechanism.

    7. Natural-class verification is partial rather than universal. The paper verifies the axioms in additive and coverage-type classes. It does not claim that every retention functional arising in graph theory, sparsification, reduction, clustering, or signal processing belongs to these classes or satisfies the full positive theorem package [7, 9, 10, 40, 41, 42].

Remark X.2. Accordingly, the mathematical contribution of the paper is best described as a first finite variational theory with theorem-level existence, selected structural representatives, perturbative value control, a conditional relaxation layer, and explicit negative boundary results. It is not a universal theory of all graph pruning or graph reduction mechanisms.

Open problems

The proved theory leaves several natural directions unresolved.

Question X.3 (Abstract characterization of optimizer sets). Under what natural intrinsic conditions on I and C can one characterize the optimizer set without passing to case-by-case combinatorial analysis?

Remark X.4. Theorems proved here give existence and selected representatives, but not an intrinsic characterization of all optimizers. A satisfactory answer may require a deeper order-theoretic, variational, or submodular structure than what has been used in the present paper.

Question X.5 (Extension to higher-order structures). Can the present finite variational theory be extended to hypergraphs, simplicial structures, or higher-order relational systems without losing the perturbation layer?

Remark X.6. This question is mathematically natural because the formal language of weighted relational structures already suggests higher-order variants. However, the perturbative framework, the admissible-family formalism, and the recovery layer may all become more delicate in that setting. Related higher-order and generalized graph frameworks suggest that such an extension is plausible but nontrivial [30, 31, 29].

Question X.7 (Exact verification theorems beyond the model classes treated here). Which abstract hypothesis packages admit exact verification theorems in natural graph classes beyond additive or submodular models?

Remark X.8. The additive and coverage-based classes treated in Section IX are only first anchors. It remains open to determine which stronger positive hypotheses (for example connected-representative principles, exact recovery mechanisms, or sharper perturbative regularity) can be verified in graph classes tied more closely to spectral sparsification, graph reduction, or directed and higher-order cut structures [16, 17, 19, 32, 36, 28, 30].

Question X.9 (Sharp perturbative regularity of optimizer correspondences). Beyond local constancy under an isolation gap, what is the correct general regularity theory for the optimizer correspondence? In particular, can one prove quantitative set-valued continuity for near-optimizer families under natural nondegeneracy hypotheses?

Remark X.10. The present paper proves transport of near-optimizers but does not develop a full continuity theory for the optimizer correspondence. A sharper theory would likely require a more systematic metric or variational geometry on families of admissible substructures.

Question X.11 (Sharp relaxation gaps and recovery principles). For which natural relaxed models can one compute or bound the relaxation gap explicitly, and under what structural assumptions does one obtain exact or asymptotically exact recovery?

Remark X.12. The approximation section proves only conditional theorems. To make that layer decisive, one would need concrete relaxed models in which the recovery operator is explicit and the resulting loss parameters can be estimated sharply. This problem lies at the interface of discrete variational theory, approximation, and graph reduction [7, 44, 47, 36].

Remark X.13. These open problems are not appended for breadth. They are the precise directions left unresolved by the proved core of the paper. Any later extension should preserve the same discipline used throughout the present article: exact hypotheses, explicit theorem boundaries, and no inflation of claims beyond what the proofs support.

ACKNOWLEDGEMENTS

The authors thank Dr. Ramachandra R. K., Principal, Government College (Autonomous), Rajahmundry, for institutional support and encouragement. No funding was received for this work. The authors declare no conflict of interest. This article is purely theoretical. No datasets were generated, analysed, or used in this study. It contains no studies involving human participants, animals, patient data, or personal data. Ethical approval was therefore not required.

REFERENCES

  1. Miroslav Fiedler. Algebraic connectivity of graphs. Czechoslovak Mathematical Journal, 23(2):298–305, 1973.

  2. Russell Merris. Laplacian matrices of graphs: A survey. Linear Algebra and its Applications, 197:143–176, 1994.

  3. Fan R. K. Chung. Spectral Graph Theory, volume 92 of CBMS Regional Conference Series in Mathematics. American Mathematical Society, 1997.

  4. Fan Chung. Laplacians and the Cheeger inequality for directed graphs. Annals of Combinatorics, 9(1):1–19, 2005.

  5. Lars Hagen and Andrew B. Kahng. New spectral methods for ratio cut partitioning and clustering. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 11(9):1074–1085, 1992.

  6. Ingemar J. Cox, Satish B. Rao, and Yu Zhong. "Ratio regions": A technique for image segmentation. In Proceedings of the 13th International Conference on Pattern Recognition, 1996.

  7. Jianbo Shi and Jitendra Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000.

  8. Ulrike von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17:395–416, 2007.

  9. Reid Andersen, Fan Chung, and Kevin Lang. Local graph partitioning using PageRank vectors. In 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 475–486, 2006.

  10. Reid Andersen, Fan Chung, and Kevin Lang. Using PageRank to locally partition a graph. Internet Mathematics, 4(1), 2007.

  11. Daniel A. Spielman and Shang-Hua Teng. A local clustering algorithm for massive graphs and its application to nearly-linear time graph partitioning. SIAM Journal on Computing, 42(1):1–26, 2013.

  12. Fan Chung and Olivia Simpson. Computing heat kernel pagerank and a local clustering algorithm. In Combinatorial Algorithms (IWOCA 2014, LNCS 8986), 2015.

  13. Santo Fortunato. Community detection in graphs. Physics Reports, 486(3–5):75–174, 2010.

  14. Fragkiskos D. Malliaros and Michalis Vazirgiannis. Clustering and community detection in directed networks: A survey. Physics Reports, 533(4):95–142, 2013.

  15. András A. Benczúr and David R. Karger. Randomized approximation schemes for cuts and flows in capacitated graphs. SIAM Journal on Computing, 44(2):290–319, 2015.

  16. Daniel A. Spielman and Nikhil Srivastava. Graph sparsification by effective resistances. SIAM Journal on Computing, 40(6):1913–1926, 2011.

  17. Daniel A. Spielman and Shang-Hua Teng. Spectral sparsification of graphs. SIAM Journal on Computing, 40(4):981–1025, 2011.

  18. Joshua Batson, Daniel A. Spielman, and Nikhil Srivastava. Spectral sparsification of graphs: Theory and algorithms. Communications of the ACM, 56(8):87–94, 2013.

  19. Joshua Batson, Daniel A. Spielman, and Nikhil Srivastava. Twice-Ramanujan sparsifiers. SIAM Journal on Computing, 43(6):1704–1721, 2014.

  20. Ioannis Koutis, Alex Levin, and Richard Peng. Faster spectral sparsification and numerical algorithms for SDD matrices. ACM Transactions on Algorithms, 12(2):17:1–17:16, 2015.

  21. Yin Tat Lee and He Sun. Constructing linear-sized spectral sparsification in almost-linear time. SIAM Journal on Computing, 47(6):2315–2336, 2018.

  22. Ioannis Koutis and Shen Chen Xu. Simple parallel and distributed algorithms for spectral graph sparsification. ACM Transactions on Parallel Computing, 3(2):8:1–8:14, 2016.

  23. Michael Kapralov and David P. Woodruff. Spanners and sparsifiers in dynamic streams. In Proceedings of the 2014 ACM Symposium on Principles of Distributed Computing (PODC), pages 272–281, 2014.

  24. Michael Dinitz, Robert Krauthgamer, and Tal Wagner. Towards resistance sparsifiers. In APPROX/RANDOM 2015, 2015.

  25. Kook Jin Ahn, Sudipto Guha, and Andrew McGregor. Graph sketches: Sparsification, spanners, and subgraphs. In Proceedings of the 31st ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems (PODS), pages 5–14, 2012.

  26. Timothy Chu, Yu Gao, Richard Peng, Sushant Sachdeva, Saurabh Sawlani, and Junxing Wang. Graph sparsification, spectral sketches, and faster resistance computation via short cycle decompositions. SIAM Journal on Computing, 49(6):FOCS18-185–FOCS18-220, 2020.

  27. Gramoz Goranci, Monika Henzinger, and Pan Peng. The power of vertex sparsifiers in dynamic graph algorithms. In 25th Annual European Symposium on Algorithms (ESA 2017), pages 45:1–45:14, 2017.

  28. Zhiyang He, Jason Li, and Magnus Wahlström. Near-linear-time, optimal vertex cut sparsifiers in directed acyclic graphs. In 29th Annual European Symposium on Algorithms (ESA 2021), pages 52:1–52:14, 2021.

  29. Yu Chen, Sanjeev Khanna, and Ansh Nagda. Near-linear size hypergraph cut sparsifiers. In 61st IEEE Annual Symposium on Foundations of Computer Science (FOCS 2020), pages 61–72, 2020.

  30. Tsz Chiu Kwok, Lap Chi Lau, and Kam Chuen Tung. Cheeger inequalities for vertex expansion and reweighted eigenvalues. In 63rd IEEE Annual Symposium on Foundations of Computer Science (FOCS 2022), pages 366–377, 2022.

  31. Lap Chi Lau, Kam Chuen Tung, and Robert Wang. Cheeger inequalities for directed graphs and hypergraphs using reweighted eigenvalues. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing (STOC 2023), 2023.

  32. Florian Dörfler and Francesco Bullo. Kron reduction of graphs with applications to electrical networks. IEEE Transactions on Circuits and Systems I: Regular Papers, 60(1):150–163, 2013.

  33. Huaijun Qiu and Edwin R. Hancock. Spectral simplification of graphs. In Computer Vision – ECCV 2004, volume 3024 of Lecture Notes in Computer Science, pages 114–126, 2004.

  34. Zhiqiang Zhao and Zhuo Feng. Effective-resistance preserving spectral reduction of graphs. In Proceedings of the 56th Annual Design Automation Conference (DAC 2019), 2019.

  35. Zhuo Feng. GRASS: Graph spectral sparsification leveraging scalable spectral perturbation analysis. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 39(12):4944–4957, 2020.

  36. Andreas Loukas. Graph reduction with spectral and cut guarantees. Journal of Machine Learning Research, 20(116):1–42, 2019.

  37. Yu Jin, Andreas Loukas, and Joseph F. JaJa. Graph coarsening with preserved spectral properties. In Proceedings of The 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), volume 108 of Proceedings of Machine Learning Research, pages 4452–4462, 2020.

  38. Mohammad Hashemi, Shiyang Zaman, Seyed Mehran Kazemi, Sriraam Narayanan, Caiwen Cai, Rose Yu, Neil Shah, and Reihaneh Rabbany. A comprehensive survey on graph reduction. arXiv preprint, 2024.

  39. Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003.

  40. David I. Shuman, Sunil K. Narang, Pascal Frossard, Antonio Ortega, and Pierre Vandergheynst. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Processing Magazine, 30(3):83–98, 2013.

  41. Aliaksei Sandryhaila and José M. F. Moura. Discrete signal processing on graphs. IEEE Transactions on Signal Processing, 61(7):1644–1656, 2013.

  42. Stefania Sardellitti, Sergio Barbarossa, and Paolo Di Lorenzo. On the graph Fourier transform for directed graphs. IEEE Journal of Selected Topics in Signal Processing, 11(6):796–811, 2017.

  43. George L. Nemhauser, Laurence A. Wolsey, and Marshall L. Fisher. An analysis of approximations for maximizing submodular set functions – I. Mathematical Programming, 14:265–294, 1978.

  44. Maxim Sviridenko. A note on maximizing a submodular set function subject to a knapsack constraint. Operations Research Letters, 32(1):41–43, 2004.

  45. Andreas Krause, Ajit Singh, and Carlos Guestrin. Near-optimal sensor placements in Gaussian processes. In Proceedings of the 22nd International Conference on Machine Learning (ICML 2005), 2005.

  46. Andreas Krause, Ajit Singh, and Carlos Guestrin. Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms, and empirical studies. Journal of Machine Learning Research, 9:235–284, 2008.

  47. Andreas Krause and Daniel Golovin. Submodular function maximization. In Tractability: Practical Approaches to Hard Problems. Cambridge University Press, 2014.