12  Exchange Factor Transformation Analysis

12.1 Convergence Properties

This section presents the essential convergence guarantees of the proposed transformation through two fundamental theorems which characterize the steady state path matrix.

Theorem 12.1 Let \mathbf{F} be the row stochastic, irreducible exchange factor matrix, let \mathbf{B} be the column constant single reflection-scattering matrix, and let \mathbf{K}=\mathbf{F}\circ\mathbf{B} be the single reflection-scattering interaction matrix. The steady state path matrix is defined as:

\mathbf{S}_\infty = \lim_{N \to \infty} \sum_{i=0}^N \mathbf{K}^i \mathbf{F} = (\mathbf{I}-\mathbf{K})^{-1} \mathbf{F} \tag{12.1}

Then \mathbf{S}_\infty exists, is finite and irreducible if and only if the spectral radius \rho(\mathbf{K}) < 1. Moreover, the spectral radius satisfies:

0 \leq \min_j \mathbf{B}_j \leq \min_i\sum_j\mathbf{K}_{ij} \leq \rho(\mathbf{K})\leq\max_i\sum_j\mathbf{K}_{ij} \leq \max_j \mathbf{B}_j < 1 \tag{12.2}

Proof. Each entry \mathbf{F}_{ij} represents the probability that a ray emitted by element i interacts with element j. The accumulated probability after N reflections or scatterings is given by the finite sum:

\mathbf{S}_N = \mathbf{F} + \mathbf{K}\mathbf{F} + \mathbf{K}^2\mathbf{F} + \cdots + \mathbf{K}^N\mathbf{F} = \left(\sum_{i=0}^N \mathbf{K}^i \right) \mathbf{F}

By the Neumann series theorem, \sum_{i=0}^\infty \mathbf{K}^i = (\mathbf{I}-\mathbf{K})^{-1} converges if and only if \rho(\mathbf{K}) < 1 (Horn and Johnson 2013).

The middle bounds on \rho(\mathbf{K}) follow from the standard row-sum bounds on the spectral radius of the non-negative matrix \mathbf{K} (Horn and Johnson 2013), where \mathbf{B} represents the column constant matrix of reflection-scattering coefficients. The outer bounds follow from bounding the row sums of \mathbf{K} using \mathbf{B}_j<1 for all j:

0 \leq \min_j \mathbf{B}_j \leq \sum_j\mathbf{K}_{ij} \leq \max_j \mathbf{B}_j < 1 \quad \text{for all } i

Since the Neumann series for the inverse (\mathbf{I}-\mathbf{K})^{-1} contains the identity matrix as its first term, the series for (\mathbf{I}-\mathbf{K})^{-1}\mathbf{F} contains \mathbf{F}; since the remaining terms in the series are non-negative, the connectivity in the domain can only increase for \mathbf{B}\geq\mathbf{0}, hence the irreducibility of \mathbf{F} is preserved in \mathbf{S}_\infty.

The physical interpretation ensures convergence: as long as at least one element with a reflection-scattering coefficient strictly less than unity is visible from all elements, infinite reflection-scattering paths are prevented.
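
To make the convergence statement concrete, the following minimal NumPy sketch generates a random row-stochastic \mathbf{F} and column constant \mathbf{B}, checks the bound chain of Eq. 12.2, and compares the closed form of Eq. 12.1 with a truncated Neumann series. The system size, random seed and coefficient range are illustrative assumptions, not part of the transformation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                        # illustrative system size
F = rng.random((n, n))
F /= F.sum(axis=1, keepdims=True)            # row-stochastic exchange factor matrix F
b = rng.uniform(0.2, 0.8, size=n)            # illustrative column coefficients B_j < 1
K = F * b                                    # K = F o B with column-constant B_ij = B_j

rho_K = max(abs(np.linalg.eigvals(K)))
row_sums = K.sum(axis=1)
tol = 1e-12
# Bound chain of Eq. (12.2)
assert 0 <= b.min() <= row_sums.min() + tol
assert row_sums.min() <= rho_K + tol and rho_K <= row_sums.max() + tol
assert row_sums.max() <= b.max() + tol and b.max() < 1

# Closed form of Eq. (12.1) versus a truncated Neumann series
S_inf = np.linalg.solve(np.eye(n) - K, F)
S_N, K_pow = np.zeros_like(F), np.eye(n)
for _ in range(200):                         # 200 terms suffice here since rho(K) <= 0.8
    S_N += K_pow @ F
    K_pow = K_pow @ K
print(np.allclose(S_inf, S_N))               # True
```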

Theorem 12.2 Let \mathbf{I}-\mathbf{K} be the matrix to be inverted in the calculation of the steady state path matrix. Then the determinant of \mathbf{I}-\mathbf{K} satisfies:

\prod_{i=1}^{m+n} \left( 1-\sum_j\mathbf{K}_{ij} \right) \leq \det(\mathbf{I}-\mathbf{K}) \leq \prod_{i=1}^{m+n} \left( 1-\mathbf{K}_{ii} + \sum_{j\neq i}\mathbf{K}_{ij} \right) \tag{12.3}

where m+n is the matrix dimension. For uniform reflection-scattering coefficient b, numerical feasibility requires:

\epsilon \leq (1-b)^{m+n} \implies b \leq 1-\epsilon^{1/(m+n)} \implies m+n \leq \left\lfloor\frac{\ln\epsilon}{\ln(1-b)}\right\rfloor \tag{12.4}

where \epsilon is the machine precision.

Proof. Gershgorin’s circle theorem is applied to \mathbf{I}-\mathbf{K}. The diagonal and off-diagonal elements are: (\mathbf{I}-\mathbf{K})_{ii} = 1-\mathbf{K}_{ii}, \quad (\mathbf{I}-\mathbf{K})_{ij} = -\mathbf{K}_{ij} \ \text{for } i \neq j

Each eigenvalue \lambda_i lies within at least one Gershgorin disk (Horn and Johnson 2013):

|\lambda_i - (1-\mathbf{K}_{ii})| \leq \sum_{j\neq i} \mathbf{K}_{ij}

Disk i is centered at 1-\mathbf{K}_{ii} with radius \sum_{j\neq i} \mathbf{K}_{ij}, giving leftmost point at 1-\sum_j \mathbf{K}_{ij} and rightmost at 1-\mathbf{K}_{ii} + \sum_{j\neq i}\mathbf{K}_{ij}.

Since \det(\mathbf{I}-\mathbf{K}) = \prod_i \lambda_i, the bounds follow from taking the product over extreme eigenvalue positions.

For uniform b and row-stochastic \mathbf{F}, the lower bound becomes (1-b)^{m+n}. Requiring that this exceeds machine precision \epsilon gives the numerical feasibility conditions.
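
The feasibility limit of Eq. 12.4 can be evaluated directly; the short sketch below does so for double precision and a few illustrative uniform coefficients b.

```python
import numpy as np

eps = np.finfo(np.float64).eps               # machine precision for double precision
for b in (0.5, 0.9, 0.99):                   # illustrative uniform coefficients
    max_dim = int(np.floor(np.log(eps) / np.log(1.0 - b)))
    print(f"b = {b:4.2f}: (1 - b)^(m+n) >= eps only for m + n <= {max_dim}")
```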

12.2 Matrix Properties

This section explores the properties of the matrices of the proposed exchange factor transformation. These properties are subsequently used to establish conditions which guarantee physical correctness.

12.2.1 Spectral Radii of the Absorption and Reflection-Scattering Matrices

This section determines bounds on the spectral radii of \mathbf{A} and \mathbf{R}.

Theorem 12.3 Let \mathbf{F} be the row stochastic and irreducible exchange factor matrix, let \mathbf{A} and \mathbf{R} be the absorption and multiple reflection-scattering matrices defined by the exchange factor transformation, and let \mathbf{P} be the diagonal matrix with entries \mathbf{P}_{ii} = 1-\mathbf{b}_i, where \mathbf{b}_i are the reflection-scattering coefficients. Then:

  • The sum \mathbf{A} + \mathbf{R} can be expressed as

\mathbf{A} + \mathbf{R} = \mathbf{P}(\mathbf{I} - \mathbf{K})^{-1}\mathbf{F} \tag{12.5}

  • The spectral radius satisfies \rho(\mathbf{A} + \mathbf{R}) = 1
  • The vector \mathbf{v} = (\mathbf{1} - \mathbf{b}) is the Perron eigenvector corresponding to the positive real eigenvalue \lambda = 1

Proof. The result is established through a sequence of key relationships.

Step 1: The fundamental vector relationship is established:

[\mathbf{F}(\mathbf{1} - \mathbf{b})]_i = \sum_j \mathbf{F}_{ij}(1 - \mathbf{b}_j) = \sum_j \mathbf{F}_{ij} - \sum_j \mathbf{F}_{ij} \mathbf{b}_j = 1 - \sum_j \mathbf{K}_{ij}

Step 2: The corresponding property for (\mathbf{I} - \mathbf{K}) is established:

[(\mathbf{I} - \mathbf{K})\mathbf{1}]_i = \sum_j (\delta_{ij} - \mathbf{K}_{ij}) = 1 - \sum_j \mathbf{K}_{ij}

Step 3: Steps 1 and 2 are combined to obtain the key identity:

\mathbf{F}(\mathbf{1} - \mathbf{b}) = (\mathbf{I} - \mathbf{K})\mathbf{1} \tag{12.6}

Step 4: Left-multiplying equation by the inverse (\mathbf{I} - \mathbf{K})^{-1}:

(\mathbf{I} - \mathbf{K})^{-1}\mathbf{F}(\mathbf{1} - \mathbf{b}) = \mathbf{1}

Step 5: Left-multiplication by the diagonal matrix \mathbf{P}:

\mathbf{P}(\mathbf{I} - \mathbf{K})^{-1}\mathbf{F}(\mathbf{1} - \mathbf{b}) = \mathbf{P}\mathbf{1}

Step 6: Simplification of the right-hand side of Step 5:

\mathbf{P}\mathbf{1} = (\mathbf{1} - \mathbf{b})

Step 7: Simplification of the left-hand side of Step 5:

\begin{aligned} \mathbf{A} + \mathbf{R} &= (\mathbf{1}-\mathbf{B})^T\circ\mathbf{S}_\infty\circ (\mathbf{1}-\mathbf{B})+(\mathbf{1}-\mathbf{B})^T\circ\mathbf{S}_\infty\circ \mathbf{B} \\ &= (\mathbf{1}-\mathbf{B})^T\circ\mathbf{S}_\infty \\ &= \mathbf{P}(\mathbf{I} - \mathbf{K})^{-1}\mathbf{F} \end{aligned}

Step 8: Conclude the eigenvalue relation by substitution:

(\mathbf{A} + \mathbf{R})(\mathbf{1} - \mathbf{b}) = (\mathbf{1} - \mathbf{b})

Therefore, \mathbf{v} = (\mathbf{1} - \mathbf{b}) is an eigenvector with eigenvalue \lambda = 1. Since all components of \mathbf{v} are positive (assuming \mathbf{b}_i < 1 for convergence), this is the Perron eigenvector. By the Perron-Frobenius theorem, the corresponding eigenvalue equals the spectral radius (Horn and Johnson 2013), hence \rho(\mathbf{A} + \mathbf{R}) = 1.
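
The identities of Theorem 12.3 are straightforward to verify numerically. In the sketch below, the Hadamard definitions of \mathbf{A} and \mathbf{R} from Step 7 are implemented as diagonal scalings of \mathbf{S}_\infty; the random instance is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
F = rng.random((n, n)); F /= F.sum(axis=1, keepdims=True)
b = rng.uniform(0.2, 0.8, size=n)
K = F * b
S_inf = np.linalg.solve(np.eye(n) - K, F)
P = np.diag(1.0 - b)

A = P @ S_inf @ np.diag(1.0 - b)             # (1-B)^T o S_inf o (1-B) as diagonal scalings
R = P @ S_inf @ np.diag(b)                   # (1-B)^T o S_inf o B
ApR = A + R

print(np.allclose(ApR, P @ S_inf))                           # Eq. (12.5)
print(np.isclose(max(abs(np.linalg.eigvals(ApR))), 1.0))     # rho(A + R) = 1
print(np.allclose(ApR @ (1.0 - b), 1.0 - b))                 # Perron eigenvector (1 - b), eigenvalue 1
```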

Theorem 12.4 Let \mathbf{A} be the absorption matrix of the exchange factor transformation and \mathbf{P} be the diagonal matrix with entries \mathbf{P}_{ii} = 1-\mathbf{b}_i. Then:

  • The absorption matrix can be expressed as:

\mathbf{A} = (\mathbf{A}+\mathbf{R})\mathbf{P} \tag{12.7}

  • The row sums of \mathbf{A} satisfy:

[\mathbf{A}\mathbf{1}]_i = 1-\mathbf{b}_i

  • The spectral radius is bounded by:

1-\max_i \mathbf{b}_i \leq \rho(\mathbf{A}) \leq 1-\min_i \mathbf{b}_i \tag{12.8}

In the limiting case \mathbf{b}_i = 0, this recovers the row-stochastic spectral radius of unity for \mathbf{F}.

Proof. Step 1: The matrix decomposition is established.

Since \mathbf{B} is column constant, the Hadamard product \mathbf{A} = (\mathbf{A}+\mathbf{R})\circ (\mathbf{1}-\mathbf{B}) is equivalent to right matrix multiplication by the diagonal matrix \mathbf{P}:

\mathbf{A} = (\mathbf{A}+\mathbf{R})\mathbf{P}

Step 2: The row sums are computed using the eigenvalue relationship.

From Theorem 12.3, it is known that (\mathbf{A}+\mathbf{R})(\mathbf{1} - \mathbf{b}) = (\mathbf{1} - \mathbf{b}) with \rho(\mathbf{A}+\mathbf{R}) = 1. Therefore: \begin{aligned} [\mathbf{A}\mathbf{1}]_i &= [(\mathbf{A}+\mathbf{R})\mathbf{P}\mathbf{1}]_i \\ &= [(\mathbf{A}+\mathbf{R})(\mathbf{1}-\mathbf{b})]_i \\ &= \rho(\mathbf{A}+\mathbf{R})(1-\mathbf{b}_i) \\ &= 1-\mathbf{b}_i \end{aligned}

Step 3: The spectral radius bounds are derived.

Since [\mathbf{A}\mathbf{1}]_i = 1-\mathbf{b}_i, the row sums range from 1-\max_i \mathbf{b}_i to 1-\min_i \mathbf{b}_i. By the Gershgorin circle theorem and properties of non-negative matrices, the spectral radius satisfies (Horn and Johnson 2013):

1-\max_i \mathbf{b}_i \leq \rho(\mathbf{A}) \leq 1-\min_i \mathbf{b}_i

When \mathbf{b}_i = 0 for all i, the exchange factor transformation yields \mathbf{A} = \mathbf{F}, and the bounds collapse to \rho(\mathbf{A}) = 1, consistent with the row-stochastic property of the exchange factor matrix.
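
A corresponding numerical check of the row sums and the spectral radius bounds of Theorem 12.4, again on an illustrative random instance:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
F = rng.random((n, n)); F /= F.sum(axis=1, keepdims=True)
b = rng.uniform(0.2, 0.8, size=n)
K = F * b
S_inf = np.linalg.solve(np.eye(n) - K, F)
P = np.diag(1.0 - b)
A = P @ S_inf @ P                            # A = (A + R) P

print(np.allclose(A.sum(axis=1), 1.0 - b))   # row sums [A 1]_i = 1 - b_i
rho_A = max(abs(np.linalg.eigvals(A)))
print(1.0 - b.max() <= rho_A + 1e-12 and rho_A <= 1.0 - b.min() + 1e-12)  # Eq. (12.8)
```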

Theorem 12.5 The spectral radius of the multiple reflection-scattering matrix \mathbf{R} of the exchange factor transformation satisfies:

\rho(\mathbf{R}) \leq (1-\min_i \mathbf{b}_i)\frac{\|\mathbf{K}\|_\infty}{1-\|\mathbf{K}\|_\infty} \tag{12.9}

For physical realizability, the bound on \rho(\mathbf{R}) must be less than unity, requiring: \|\mathbf{K}\|_\infty < \frac{1}{2-\min_i \mathbf{b}_i} \tag{12.10}

Proof. From the eigenvalue equation (\mathbf{A} + \mathbf{R})(\mathbf{1} - \mathbf{b}) = \mathbf{1} - \mathbf{b} and using \mathbf{A}\mathbf{1} = \mathbf{1} - \mathbf{b}:

\begin{aligned} (\mathbf{A} + \mathbf{R})(\mathbf{1} - \mathbf{b}) &= \mathbf{1} - \mathbf{b} \\ \mathbf{A}\mathbf{1} + \mathbf{R}\mathbf{1} - \mathbf{A}\mathbf{b} - \mathbf{R}\mathbf{b} &= \mathbf{1} - \mathbf{b} \\ \mathbf{R}\mathbf{1} - \mathbf{A}\mathbf{b} - \mathbf{R}\mathbf{b} &= \mathbf{0} \\ \mathbf{R}\mathbf{1} &= (\mathbf{A} + \mathbf{R})\mathbf{b} \end{aligned}

Substituting \mathbf{A} + \mathbf{R} = \mathbf{P}(\mathbf{I}-\mathbf{K})^{-1}\mathbf{F} and applying submultiplicativity: \begin{aligned} \mathbf{R}\mathbf{1} &= \mathbf{P}(\mathbf{I}-\mathbf{K})^{-1}\mathbf{F}\mathbf{b} \\ \|\mathbf{R}\|_\infty &\leq \|\mathbf{P}\|_\infty \|(\mathbf{I}-\mathbf{K})^{-1}\|_\infty \|\mathbf{K}\|_\infty \\ &\leq \max_i(1-\mathbf{b}_i) \frac{1}{1-\|\mathbf{K}\|_\infty} \|\mathbf{K}\|_\infty \end{aligned}

Since \rho(\mathbf{R}) \leq \|\mathbf{R}\|_\infty, the bound follows. The physical constraint comes from requiring that \mathbf{D}=\mathbf{I}-\mathbf{R}^T is invertible, which is necessary for the mixed boundary matrix \mathbf{M} of Eq. 9.5 to be invertible. By the Neumann series theorem, the inverse \mathbf{D}^{-1}=(\mathbf{I}-\mathbf{R}^T)^{-1} exists when \rho(\mathbf{R})<1 (Horn and Johnson 2013).
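
The bound of Eq. 12.9 and the condition of Eq. 12.10 can be checked in the same way; the coefficient range in the sketch below is deliberately moderate so that the condition is satisfied.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
F = rng.random((n, n)); F /= F.sum(axis=1, keepdims=True)
b = rng.uniform(0.1, 0.4, size=n)            # moderate coefficients so that Eq. (12.10) holds
K = F * b
S_inf = np.linalg.solve(np.eye(n) - K, F)
R = np.diag(1.0 - b) @ S_inf @ np.diag(b)

K_inf = np.abs(K).sum(axis=1).max()          # ||K||_inf
bound = (1.0 - b.min()) * K_inf / (1.0 - K_inf)
rho_R = max(abs(np.linalg.eigvals(R)))

print(rho_R <= bound + 1e-12)                          # Eq. (12.9)
print(K_inf < 1.0 / (2.0 - b.min()) and bound < 1.0)   # Eq. (12.10)
```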

Theorem 12.6 When the reflection-scattering coefficients are uniform, \mathbf{b}_i = \gamma for all i where \gamma \in [0,1), then:

  • The vector \mathbf{b} = \gamma\mathbf{1} is an eigenvector of \mathbf{R} with eigenvalue \gamma
  • The spectral radius satisfies \rho(\mathbf{R}) = \gamma

Proof. For uniform \gamma, it holds that \mathbf{P} = (1-\gamma)\mathbf{I} and \mathbf{K} = \gamma\mathbf{F}.

Step 1: The key relationship is developed:

(\mathbf{I}-\mathbf{K})\mathbf{b} = \gamma\mathbf{1} - \gamma\mathbf{F}\mathbf{b} = \gamma\mathbf{1} - \gamma^2\mathbf{F}\mathbf{1} = \gamma(1-\gamma)\mathbf{1} = (1-\gamma)\mathbf{b}

Step 2: The eigenvalue property is established.

From Step 1: (\mathbf{I}-\mathbf{K})^{-1}\mathbf{b} = \frac{\mathbf{b}}{1-\gamma}

Computing \mathbf{R}\mathbf{b}:

\begin{aligned} (\mathbf{A}+\mathbf{R})\mathbf{b} &= \mathbf{P}(\mathbf{I}-\mathbf{K})^{-1}\mathbf{F}\mathbf{b} \\ &= (1-\gamma)\frac{\gamma}{1-\gamma}\mathbf{1} = \gamma\mathbf{1} = \mathbf{b} \end{aligned}

Since \mathbf{A}\mathbf{b} = \gamma\mathbf{A}\mathbf{1} = \gamma(1-\gamma)\mathbf{1}:

\mathbf{R}\mathbf{b} = (\mathbf{A}+\mathbf{R})\mathbf{b} - \mathbf{A}\mathbf{b} = \gamma\mathbf{1} - \gamma(1-\gamma)\mathbf{1} = \gamma^2\mathbf{1} = \gamma\mathbf{b}

Step 3: The spectral radius is established.

The upper bound from Theorem 12.5 gives:

\rho(\mathbf{R}) \leq \frac{(1-\gamma)\gamma}{1-\gamma} = \gamma

Combined with \rho(\mathbf{R}) \geq \gamma (since \gamma is an eigenvalue), it is concluded that \rho(\mathbf{R}) = \gamma.
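
For a uniform coefficient, the eigenvector relation and \rho(\mathbf{R}) = \gamma of Theorem 12.6 can be verified directly; the value of \gamma and the random instance below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, gamma = 6, 0.7                            # illustrative size and uniform coefficient
F = rng.random((n, n)); F /= F.sum(axis=1, keepdims=True)
K = gamma * F
S_inf = np.linalg.solve(np.eye(n) - K, F)
R = (1.0 - gamma) * S_inf * gamma            # R = P S_inf diag(b) with P = (1-gamma) I, b = gamma 1

b = gamma * np.ones(n)
print(np.allclose(R @ b, gamma * b))                        # b is an eigenvector with eigenvalue gamma
print(np.isclose(max(abs(np.linalg.eigvals(R))), gamma))    # rho(R) = gamma
```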

Theorem 12.7 For uniform reflection-scattering coefficients \mathbf{b}_i = \gamma, the spectral radii of the absorption and multiple reflection-scattering matrices satisfy:

\rho(\mathbf{A}) + \rho(\mathbf{R}) = \rho(\mathbf{A}+\mathbf{R}) = 1 \tag{12.11}

For uniform reflection-scattering coefficients, all eigenvalues corresponding to the spectral radii are real.

Proof. From Theorems 12.3, 12.4 and 12.6:

  • \rho(\mathbf{A}) = 1-\gamma (uniform case)
  • \rho(\mathbf{R}) = \gamma (uniform case)
  • \rho(\mathbf{A}+\mathbf{R}) = 1

Therefore: \rho(\mathbf{A}) + \rho(\mathbf{R}) = (1-\gamma) + \gamma = 1 = \rho(\mathbf{A}+\mathbf{R}). Since \gamma=\mathbf{b}_i is real, all eigenvalues corresponding to the spectral radii are real.

12.2.2 Non-negativity of the Inverse of the Mixed Boundary Matrix

To ensure non-negative radiation for the proposed transformation, the inverse of the mixed boundary matrix \mathbf{M} of Eq. 9.5 should be non-negative. This requires establishing that both component matrices \mathbf{C} and \mathbf{D} are M-matrices and that \mathbf{M} is non-singular.

Theorem 12.8 The matrix \mathbf{C} = \mathbf{I} - \mathbf{A}^T - \mathbf{R}^T is a singular M-matrix with the following properties:

  • \mathbf{C} has zero as the eigenvalue with the smallest real part, with left eigenvector (\mathbf{1} - \mathbf{b})^T
  • The diagonal entries satisfy \mathbf{C}_{ii} \geq 0 for all i
  • The off-diagonal entries satisfy \mathbf{C}_{ij} \leq 0 for all i \neq j

Proof. Step 1: Zero is established as the eigenvalue with the smallest real part.

From the row sum properties established in previous theorems:

\begin{aligned} \mathbf{A}\mathbf{1} + \mathbf{R}\mathbf{1} &= (\mathbf{1} - \mathbf{b}) + (\mathbf{A}+\mathbf{R})\mathbf{b} \\ (\mathbf{A}+\mathbf{R})(\mathbf{1} - \mathbf{b}) &= \mathbf{1} - \mathbf{b} \\ \mathbf{C}^T(\mathbf{1} - \mathbf{b}) &= \mathbf{0} \end{aligned}

Therefore, (\mathbf{1} - \mathbf{b})^T is a left eigenvector of \mathbf{C} with eigenvalue zero. Since \mathbf{C} = \mathbf{I} - \mathbf{A}^T - \mathbf{R}^T, if \lambda is an eigenvalue of \mathbf{A}+\mathbf{R}, then 1-\lambda is an eigenvalue of \mathbf{C}. From Theorem 12.3, \mathbf{1} - \mathbf{b} is the Perron eigenvector of \mathbf{A}+\mathbf{R}, whose real eigenvalue of unity equals the spectral radius. Every eigenvalue \lambda of \mathbf{A}+\mathbf{R} therefore satisfies \text{Re}(\lambda) \leq |\lambda| \leq 1, so every eigenvalue of \mathbf{C} has non-negative real part, and zero is the eigenvalue of \mathbf{C} with the smallest real part.

Step 2: The non-negative diagonal entries are established.

From the eigenvalue equation \mathbf{C}^T(\mathbf{1} - \mathbf{b}) = \mathbf{0}:

\begin{aligned} 0 &= [\mathbf{C}^T(\mathbf{1} - \mathbf{b})]_i \\ &= (1 - (\mathbf{A}+\mathbf{R})_{ii})(1 - \mathbf{b}_i) - \sum_{j \neq i} (\mathbf{A}+\mathbf{R})_{ij}(1 - \mathbf{b}_j) \end{aligned}

Solving for the diagonal:

\mathbf{C}_{ii} = 1 - (\mathbf{A}+\mathbf{R})_{ii} = \sum_{j \neq i} (\mathbf{A}+\mathbf{R})_{ij} \frac{1 - \mathbf{b}_j}{1 - \mathbf{b}_i} \geq 0

since all terms are non-negative.

Step 3: Off-diagonal entries are non-positive by construction since \mathbf{C}_{ij} = -(\mathbf{A}^T+\mathbf{R}^T)_{ij} \leq 0 for i \neq j.
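
The sign pattern, the left null vector and the location of the zero eigenvalue asserted in Theorem 12.8 can be checked numerically, as in the illustrative sketch below.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
F = rng.random((n, n)); F /= F.sum(axis=1, keepdims=True)
b = rng.uniform(0.2, 0.8, size=n)
K = F * b
S_inf = np.linalg.solve(np.eye(n) - K, F)
P = np.diag(1.0 - b)
A = P @ S_inf @ P
R = P @ S_inf @ np.diag(b)
C = np.eye(n) - A.T - R.T

off_diag = C - np.diag(np.diag(C))
print(np.all(np.diag(C) >= 0) and np.all(off_diag <= 0))             # M-matrix sign pattern
print(np.allclose((1.0 - b) @ C, 0.0))                               # left null vector (1 - b)^T
print(np.isclose(np.linalg.eigvals(C).real.min(), 0.0, atol=1e-10))  # zero has the smallest real part
```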

Theorem 12.9 The matrix \mathbf{D} = \mathbf{I} - \mathbf{R}^T is a non-singular M-matrix with the following properties:

  • All eigenvalues have real parts bounded below: \text{Re}(\lambda_{\mathbf{D}}) \geq 1 - \rho(\mathbf{R})
  • The diagonal entries satisfy \mathbf{D}_{ii} \geq 0 for all i
  • The off-diagonal entries satisfy \mathbf{D}_{ij} \leq 0 for all i \neq j

As long as \rho(\mathbf{R}) < 1, \mathbf{D} is guaranteed to be non-singular.

Proof. Step 1: Eigenvalue bounds.

Since \mathbf{D} = \mathbf{I} - \mathbf{R}^T, if \lambda is an eigenvalue of \mathbf{R}, then 1 - \lambda is an eigenvalue of \mathbf{D}. From Theorem 12.5:

\text{Re}(\lambda_{\mathbf{D}}) \geq 1 - \rho(\mathbf{R}) \geq 1 - (1-\min_i \mathbf{b}_i)\frac{\|\mathbf{K}\|_\infty}{1-\|\mathbf{K}\|_\infty}

Step 2: Non-negative diagonal entries.

Since \mathbf{D} = \mathbf{C} + \mathbf{A}^T and both \mathbf{C} (from Theorem 12.8) and \mathbf{A}^T have non-negative diagonal entries:

\mathbf{D}_{ii} = \mathbf{C}_{ii} + \mathbf{A}_{ii} \geq 0

Step 3: Off-diagonal entries are non-positive by construction since \mathbf{D}_{ij} = -\mathbf{R}^T_{ij} \leq 0 for i \neq j.
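
A similar check confirms the eigenvalue bound and the non-singularity of \mathbf{D} stated in Theorem 12.9 on an illustrative instance.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 6
F = rng.random((n, n)); F /= F.sum(axis=1, keepdims=True)
b = rng.uniform(0.2, 0.6, size=n)
K = F * b
S_inf = np.linalg.solve(np.eye(n) - K, F)
R = np.diag(1.0 - b) @ S_inf @ np.diag(b)
D = np.eye(n) - R.T

rho_R = max(abs(np.linalg.eigvals(R)))
print(rho_R < 1.0)                                            # guarantees non-singularity of D
print(np.linalg.eigvals(D).real.min() >= 1.0 - rho_R - 1e-12) # Re(lambda_D) >= 1 - rho(R)
print(abs(np.linalg.det(D)) > 1e-12)                          # D is non-singular
```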

Theorem 12.10 Let \mathbf{A}, \mathbf{R}, \mathbf{C}, \mathbf{D} be the system matrices defined by the proposed exchange factor transformation, let \mathbf{F} be the irreducible and row stochastic exchange factor matrix, and let \mathbf{1}-\mathbf{b} be the Perron eigenvector of \mathbf{A}+\mathbf{R}. Then any matrix \mathbf{M} obtained by replacing at least one row of \mathbf{C} with the corresponding row from \mathbf{D} is non-singular.

Proof. Step 1: Application of established spectral properties.

By Theorem 12.3, \mathbf{A}+\mathbf{R} has spectral radius \rho(\mathbf{A}+\mathbf{R}) = 1 with corresponding Perron eigenvector (\mathbf{1}-\mathbf{b}) > \mathbf{0}. Since \mathbf{F} and \mathbf{S}_\infty are both irreducible and multiplication by the positive diagonal \mathbf{P}_{ii}=1-\mathbf{b}_i>0 cannot change connectivity, the matrix \mathbf{A}+\mathbf{R}=\mathbf{P}\mathbf{S}_\infty is irreducible. By the Perron-Frobenius theorem, eigenvalue 1 has geometric multiplicity exactly 1 (Horn and Johnson 2013).

Step 2: Positive column sums of \mathbf{A}.

Irreducibility ensures that each column j of the row-stochastic \mathbf{F} has positive sum:

\sum_i \mathbf{F}_{ij} > 0

The infinite series form of \mathbf{S}_\infty = (\mathbf{I}-\mathbf{K})^{-1}\mathbf{F} shows that the matrix \mathbf{S}_\infty inherits the property that each column has positive sum.

Therefore, \mathbf{A}, which can be formulated as \mathbf{A} = \mathbf{P}\mathbf{S}_\infty\mathbf{P} with positive diagonal \mathbf{P}, preserves this structure, ensuring that each column of \mathbf{A} has at least one positive entry.

Step 3: Non-singularity of \mathbf{M}.

By Theorem 12.8, together with the simplicity of the Perron eigenvalue established in Step 1, \mathbf{C}^T has nullity (dimension of null space) exactly 1 with \text{null}(\mathbf{C}^T) = \text{span}\{\mathbf{1}-\mathbf{b}\}. Since \text{nullity}(\mathbf{C}) = \text{nullity}(\mathbf{C}^T) = 1, it is obtained that \text{rank}(\mathbf{C}) = n-1.

Let \mathbf{M} be obtained by replacing at least one row of \mathbf{C} with the corresponding row from \mathbf{D}. Without loss of generality, assume the j-th row is replaced. Then, since \mathbf{D}=\mathbf{C}+\mathbf{A}^T:

\mathbf{M} = \mathbf{C} + \mathbf{e}_j (\mathbf{A}^T)_j

where (\mathbf{A}^T)_j denotes the j-th row of \mathbf{A}^T and \mathbf{e}_j is the j-th standard basis vector.

Since (\mathbf{1}-\mathbf{b})^T \mathbf{C} = \mathbf{0}, it is obtained that:

(\mathbf{1}-\mathbf{b})^T \mathbf{M} = (\mathbf{1}-\mathbf{b})^T \mathbf{C} + (\mathbf{1}-\mathbf{b})^T (\mathbf{e}_j (\mathbf{A}^T)_j) = (1-\mathbf{b}_j)(\mathbf{A}^T)_j

Since \mathbf{A} \geq \mathbf{0}, (1-\mathbf{b}_j) > 0, and step 2 ensures \mathbf{A} has at least one positive entry in each column, it is obtained that (\mathbf{1}-\mathbf{b})^T \mathbf{M} \neq \mathbf{0}.

Therefore (\mathbf{1}-\mathbf{b}) \notin \text{null}(\mathbf{M}^T), which means \text{nullity}(\mathbf{M}^T) < \text{nullity}(\mathbf{C}^T) = 1.

Since \text{nullity}(\mathbf{M}^T) = \text{nullity}(\mathbf{M}), it is obtained that \text{nullity}(\mathbf{M}) = 0, so \mathbf{M} is non-singular.

The irreducibility of the exchange factor matrix \mathbf{F} is crucial for this result. In radiative transfer applications, \mathbf{F} represents connectivity within a physical domain. The matrix \mathbf{F} is irreducible if and only if the underlying domain has no isolated regions, that is, for any two regions i and j, there exists a sequence of direct sight lines connecting them. This physical connectivity condition naturally ensures the mathematical irreducibility required for the theorem.
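
The row replacement construction of Theorem 12.10 is illustrated below: \mathbf{C} itself is numerically singular, while replacing a single row with the corresponding row of \mathbf{D} = \mathbf{C} + \mathbf{A}^T yields a non-singular matrix. The replaced row index is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 6
F = rng.random((n, n)); F /= F.sum(axis=1, keepdims=True)
b = rng.uniform(0.2, 0.6, size=n)
K = F * b
S_inf = np.linalg.solve(np.eye(n) - K, F)
P = np.diag(1.0 - b)
A = P @ S_inf @ P
R = P @ S_inf @ np.diag(b)
C = np.eye(n) - A.T - R.T
D = np.eye(n) - R.T                                     # D = C + A^T

print(np.isclose(np.linalg.det(C), 0.0, atol=1e-10))    # C itself is singular
M = C.copy()
M[2, :] = D[2, :]                                       # replace one (arbitrarily chosen) row of C
print(abs(np.linalg.det(M)) > 1e-12)                    # the mixed matrix is non-singular
```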

12.3 Physical Correctness

For a physical model to be useful, its results must be physically correct. The conditions which guarantee physical correctness of results calculated with the proposed exchange factor transformation are described in the following subsections.

12.3.1 Non-Negative Radiation

For radiative transfer, physical correctness means that the calculated rate of radiation must be non-negative, since negative radiation is not a physically meaningful quantity.

Theorem 12.11 Under the conditions established in the previous theorems, the exchange factor transformation guarantees physical correctness through non-negative radiation quantities:

  • Non-negative total radiant power: \mathbf{j} = \mathbf{M}^{-1}\mathbf{h} \geq \mathbf{0} for non-negative source terms \mathbf{h} \geq \mathbf{0}
  • Non-negative reflected-scattered power: \mathbf{r} = \mathbf{R}^T\mathbf{j} \geq \mathbf{0}
  • Non-negative emissive power: \mathbf{e} = (\mathbf{I}-\mathbf{R}^T)\mathbf{j} = \mathbf{q} + \mathbf{A}^T\mathbf{j} \geq \mathbf{0}

Proof. Part 1: Non-negative total radiant power.

From Theorems 12.8, 12.9 and 12.10, the mixed boundary matrix \mathbf{M} is a non-singular M-matrix. By the fundamental property of M-matrices, \mathbf{M}^{-1} \geq \mathbf{0}. Therefore, for non-negative source terms \mathbf{h} \geq \mathbf{0}: \mathbf{j} = \mathbf{M}^{-1}\mathbf{h} \geq \mathbf{0}

Part 2: Non-negative reflected-scattered power.

Since \mathbf{R} \geq \mathbf{0} by construction (representing physical reflection-scattering processes) and \mathbf{j} \geq \mathbf{0} from Part 1: \mathbf{r} = \mathbf{R}^T\mathbf{j} \geq \mathbf{0}

Part 3: Non-negative emissive power.

The emissive power can be expressed in two equivalent forms due to internal consistency of the proposed exchange factor transformation: \mathbf{e} = (\mathbf{I}-\mathbf{R}^T)\mathbf{j} = \mathbf{q} + \mathbf{A}^T\mathbf{j} \tag{12.12}

Since \mathbf{A} \geq \mathbf{0} by construction, \mathbf{j} \geq \mathbf{0} from Part 1, and assuming non-negative prescribed source terms \mathbf{q} \geq \mathbf{0}: \mathbf{e} = \mathbf{q} + \mathbf{A}^T\mathbf{j} \geq \mathbf{0}

The equivalence in Eq. 12.12 shows that the two formulations of the emissive power are mutually consistent for the computed \mathbf{j}, ensuring physical consistency.
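
The sketch below assembles an illustrative mixed system from rows of \mathbf{C} and \mathbf{D}, with an arbitrary choice of rows carrying non-negative boundary data, and confirms the non-negativity of \mathbf{M}^{-1}, \mathbf{j}, \mathbf{r} and \mathbf{e} claimed in Theorem 12.11.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 6
F = rng.random((n, n)); F /= F.sum(axis=1, keepdims=True)
b = rng.uniform(0.2, 0.6, size=n)
K = F * b
S_inf = np.linalg.solve(np.eye(n) - K, F)
P = np.diag(1.0 - b)
A = P @ S_inf @ P
R = P @ S_inf @ np.diag(b)
C = np.eye(n) - A.T - R.T
D = np.eye(n) - R.T

M = C.copy()
J = [0, 3]                                 # illustrative rows taken from D (prescribed boundary data)
M[J, :] = D[J, :]
h = np.zeros(n)
h[J] = rng.uniform(0.5, 2.0, size=len(J))  # non-negative source terms h >= 0

Minv = np.linalg.inv(M)
j = np.linalg.solve(M, h)                  # total radiant power
r = R.T @ j                                # reflected-scattered power
e = (np.eye(n) - R.T) @ j                  # emissive power
print((Minv >= -1e-12).all())              # M^{-1} >= 0 (non-singular M-matrix)
print((j >= -1e-12).all(), (r >= -1e-12).all(), (e >= -1e-12).all())
```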

12.3.2 Conservation of Energy

At the steady state, energy conservation requires that the source fluxes \mathbf{q}=\mathbf{C}\mathbf{j} calculated from any total radiant power solution vector \mathbf{j} sum to zero: \mathbf{1}^T\mathbf{C}\mathbf{j} = 0. This fundamental physical principle places constraints on the reflection-scattering coefficients and boundary conditions of the proposed exchange factor transformation.

Theorem 12.12 Let \mathbf{F} be the row-stochastic exchange factor matrix, and let \mathbf{b} = b\mathbf{1} be the uniform vector of coefficients for the column constant single reflection-scattering matrix \mathbf{B} for some scalar b \in [0,1), and define the remaining system matrices as given by the proposed exchange factor transformation.

Then for any mixed system \mathbf{M}\mathbf{j} = \mathbf{h} where \mathbf{M} is assembled from rows of \mathbf{C} and \mathbf{D}, the sum of the source fluxes equals zero, meaning the energy conservation property holds:

\mathbf{1}^T \mathbf{C} \mathbf{j} = 0

regardless of the choice of rows or non-negative boundary conditions in the mixed system.

Proof. For uniform \mathbf{b} = b\mathbf{1}, the eigenvalue condition (\mathbf{1} - \mathbf{b})^T \mathbf{C} = \mathbf{0}^T of Theorem 12.8 becomes:

(\mathbf{1} - b\mathbf{1})^T \mathbf{C} = (1-b)\mathbf{1}^T \mathbf{C} = \mathbf{0}^T

Since 1-b > 0, the following property is obtained: \mathbf{1}^T \mathbf{C} = \mathbf{0}^T. Therefore: \mathbf{1}^T \mathbf{C} \mathbf{j} = \mathbf{0}^T \mathbf{j} = 0 for any vector \mathbf{j}. This establishes energy conservation automatically, regardless of how the mixed system is constructed.

Theorem 12.13 Let \mathbf{F} be row stochastic, let 0 \leq \mathbf{B}_j < 1 be column constant, and define the system matrices as given by the proposed exchange factor transformation. Consider a mixed system \mathbf{M}\mathbf{j} = \mathbf{h} where:

  • Rows I of \mathbf{M} are taken from \mathbf{C} with corresponding uniform \mathbf{h}_I = \mathbf{0}
  • \mathbf{B} can be non-uniform on rows I: 0 \leq \mathbf{B}_i < 1 for all i \in I
  • Rows J of \mathbf{M} are taken from \mathbf{D} with corresponding potentially non-uniform \mathbf{h}_J \geq \mathbf{0}
  • \mathbf{B} is uniform on rows J: \mathbf{B}_j = \gamma < 1 for all j \in J

Then the sum of the source fluxes equals zero, meaning the energy conservation property holds:

\mathbf{1}^T \mathbf{C} \mathbf{j} = 0

Proof. The system \mathbf{M}\mathbf{j} = \mathbf{h} yields \mathbf{C}_I \mathbf{j} = \mathbf{0} and \mathbf{D}_J \mathbf{j} = \mathbf{h}_J. From Theorem 12.8, it is known that \mathbf{C}^T(\mathbf{1}-\mathbf{b}) = \mathbf{0}, which implies (\mathbf{1}-\mathbf{b})^T \mathbf{C} \mathbf{j} = 0. Expanding this:

(\mathbf{1}-\mathbf{b})^T \mathbf{C} \mathbf{j} = \sum_{i \in I}(1-\mathbf{b}_i)\mathbf{C}_{i \cdot}\mathbf{j} + \sum_{j \in J}(1-\gamma)\mathbf{C}_{j \cdot}\mathbf{j} = 0

The constraint \mathbf{C}_I \mathbf{j} = \mathbf{0} makes the first sum vanish, regardless of the values \mathbf{b}_i. Since 1-\gamma > 0, it is obtained that \sum_{j \in J}\mathbf{C}_{j \cdot}\mathbf{j} = 0. Combined with \sum_{i \in I}\mathbf{C}_{i \cdot}\mathbf{j} = 0 from \mathbf{C}_I \mathbf{j} = \mathbf{0}, this yields \mathbf{1}^T \mathbf{C} \mathbf{j} = 0. This establishes energy conservation through the eigenvalue structure, where the mixed constraints create effective uniformity.
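
The energy balance of the mixed system can be checked under the stated assumptions, with non-uniform coefficients on the rows taken from \mathbf{C} and a uniform coefficient \gamma on the rows taken from \mathbf{D}; the row split and numerical values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)
n, gamma = 6, 0.4
I_rows, J_rows = [0, 1, 2], [3, 4, 5]                 # illustrative row split
b = np.empty(n)
b[I_rows] = rng.uniform(0.1, 0.8, size=len(I_rows))   # non-uniform coefficients on I
b[J_rows] = gamma                                     # uniform coefficient gamma on J

F = rng.random((n, n)); F /= F.sum(axis=1, keepdims=True)
K = F * b
S_inf = np.linalg.solve(np.eye(n) - K, F)
P = np.diag(1.0 - b)
A = P @ S_inf @ P
R = P @ S_inf @ np.diag(b)
C = np.eye(n) - A.T - R.T
D = np.eye(n) - R.T

M = C.copy(); M[J_rows, :] = D[J_rows, :]             # rows I from C, rows J from D
h = np.zeros(n); h[J_rows] = rng.uniform(0.5, 2.0, size=len(J_rows))
j = np.linalg.solve(M, h)

print(np.isclose(np.ones(n) @ C @ j, 0.0))            # 1^T C j = 0: energy is conserved
```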