Good question — this gets at why the conjecture is open despite strong computational evidence. Let me break down the complexity landscape.
Computing the arborescences (the foundation for all approaches):
- Edmonds' algorithm finds $k$ arc-disjoint arborescences in $O(kn^2)$ or $O(knm)$ time. For $k=12$ vs $k=6$, this is just $2\times$ slower — linear scaling in $k$.
- The Lemma 3 edge-disjoint partition (Lovász splitting-off) is also polynomial. No qualitative difference between $k=6$ and $k=12$.
Computing HDR forwarding tables (with $\lceil \log k \rceil$ header bits):
- Once arborescences are computed, the tables are deterministic: the algorithm (HDR-Log-K) directly specifies the forwarding rule at each vertex. Table size is $O(nk)$.
- Total: polynomial, $O(knm)$ dominated by arborescence computation. $k=12$ is roughly $2\times$ the work of $k=6$. No qualitative difference.
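Concretely, the HDR forwarding rule is simple once the arborescences are in hand. Below is a minimal Python sketch of the circular header-rewriting scheme on a toy instance; the graph, the three in-trees, and the function names are illustrative, not taken from the paper:

```python
# Toy instance (illustrative, not from the paper): bidirected K4,
# destination t = 0, and k = 3 arc-disjoint in-trees given as parent maps.
t = 0
trees = [
    {1: 0, 2: 1, 3: 1},   # T0 uses arcs (1,0), (2,1), (3,1)
    {2: 0, 1: 2, 3: 2},   # T1 uses arcs (2,0), (1,2), (3,2)
    {3: 0, 1: 3, 2: 3},   # T2 uses arcs (3,0), (1,3), (2,3)
]
k = len(trees)  # the header needs ceil(log2(k)) = 2 bits for the tree index

def route(v, failed):
    """Circular-arborescence forwarding with a log(k)-bit header.

    `failed` is a set of frozenset undirected edges; hitting a dead edge
    rewrites the header (tree index i -> i+1 mod k) and retries from the
    same vertex.  With k arc-disjoint trees this survives k-1 failures.
    """
    i, hops, switches = 0, [v], 0
    while v != t and switches < k * k:   # cap is a safety net for the sketch
        p = trees[i][v]
        if frozenset({v, p}) in failed:
            i = (i + 1) % k              # header rewrite: next arborescence
            switches += 1
        else:
            v = p                        # forward along the current in-tree
            hops.append(v)
    return v, hops

failed = {frozenset({0, 1}), frozenset({0, 2})}   # k-1 = 2 edge failures
print(route(1, failed))   # -> (0, [1, 2, 3, 0]): reaches t despite both failures
```

Note that the only per-packet state is the $\lceil \log k \rceil$-bit tree index, which is exactly why the table-construction cost is dominated by computing the arborescences themselves.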
Computing BSC forwarding tables (no header bits):
This is where things get interesting. There are three sub-problems:
| Problem | Complexity | $k=6$ vs $k=12$ |
|---|---|---|
| Constructing BSC tables (for $k \leq 5$) | Polynomial (direct construction) | N/A (only proven for $k \leq 5$) |
| Verifying that given BSC tables are $c$-resilient | coNP-complete in general | Harder for $k=12$ (more failure patterns) |
| Finding BSC tables that are $(k{-}1)$-resilient | Open (existence unproven for $k > 5$) | See below |
The verification result (coNP-completeness) is from Bentert, Ceylan-Kettler, Hübner, Schmid, and Srba (OPODIS 2025). They show that checking whether a given set of static rerouting rules provides "perfect resilience" (surviving all non-disconnecting failures) is coNP-complete. For $(k{-}1)$-resilience specifically, the input includes the failure bound, but the core hardness likely persists.
Why $k=12$ is qualitatively harder than $k=6$ for BSC:
The difficulty isn't in the arborescence computation — it's in the strategy space and adversary space:
- Adversary space: The number of failure sets grows as $\binom{m}{k-1}$. For a graph with $m=100$ edges: $\binom{100}{5} \approx 7.5 \times 10^7$ for $k=6$, but $\binom{100}{11} \approx 1.4 \times 10^{14}$ for $k=12$. That is roughly $2 \times 10^6$ times as many failure patterns to handle.
- Strategy space: A BSC function at vertex $v$ with incoming arborescence $T_i$ must handle $2^{\deg(v)}$ possible active-edge configurations. For $k=12$ on a 12-connected graph, $\deg(v) \geq 12$, so at least $2^{12} = 4096$ configurations per $(v, T_i)$ pair, each requiring a forwarding choice. The total strategy space is roughly $k^{n \cdot k \cdot 2^{\deg}}$, i.e. super-exponential.
- Proof technique: For $k \leq 5$, the proofs work by exhaustive case analysis of how failures interact with arborescences. The number of cases grows super-exponentially with $k$. For $k=4$ the proof is already intricate (Theorem 4 in the paper); $k=5$ requires the full technical report. Extending this to $k=6$ would require analyzing all possible "bounce chains" (sequences of arborescence switches), which grow combinatorially. By $k=12$, this case analysis is completely infeasible by hand.
What my search actually showed:
My AND-OR reachability check answered an easier question: for each failure set independently, does there exist some valid forwarding choice? This is polynomial per failure set (fixpoint computation on $O(nk)$ states). It found that the answer is always YES.
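That per-failure-set check can be sketched as a backward fixpoint over the $O(nk)$ product states $(v, T_i)$: a state is good if some action (forwarding along the surviving tree arc, or switching to another tree) leads to a good state. A hedged Python sketch on a toy K4 instance (the trees, data, and names are illustrative, not the actual search code):

```python
def reachable_states(trees, t, failed):
    """One failure set: which (vertex, tree) states can reach t under SOME
    forwarding choice?  Every action is existential, so this is plain
    OR-reachability -- a backward fixpoint over O(nk) states.
    """
    k = len(trees)
    good = {(t, i) for i in range(k)}
    changed = True
    while changed:
        changed = False
        for i, tree in enumerate(trees):
            for v, p in tree.items():
                if (v, i) in good:
                    continue
                fwd = frozenset({v, p}) not in failed and (p, i) in good
                switch = any((v, j) in good for j in range(k) if j != i)
                if fwd or switch:
                    good.add((v, i))
                    changed = True
    return good

# Toy instance: bidirected K4, destination t = 0, three arc-disjoint in-trees.
trees = [{1: 0, 2: 1, 3: 1}, {2: 0, 1: 2, 3: 2}, {3: 0, 1: 3, 2: 3}]
failed = {frozenset({0, 1}), frozenset({0, 2})}
good = reachable_states(trees, 0, failed)
print(all((v, 0) in good for v in (1, 2, 3)))   # True: every source can survive
```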
But the hard question is: does a single BSC function work for all failure sets simultaneously? This requires that the forwarding choice at $(v, T_i, \text{active edges})$ is consistent across all failure sets producing the same local view. This is a constraint satisfaction problem whose size scales with:
- Number of distinct local views: up to $O(nk \cdot 2^{\deg(v)})$
- Number of constraints: one per (failure set, source) pair
For $k=12$, both quantities are much larger than for $k=6$, making the consistency check exponentially harder.
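The consistency requirement can be made concrete by grouping failure sets by the local view they induce: a single BSC rule commits to one action per view, so every failure set behind the same view is coupled by that one choice. A small illustrative count on K4 (toy numbers, assuming $k=3$):

```python
from itertools import combinations

# Toy graph: K4 as undirected edges (illustrative only).
edges = [frozenset(e) for e in
         [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]]
k, v = 3, 1
incident = [e for e in edges if v in e]

views = {}   # local view at v (surviving incident edges) -> failure sets
for F in combinations(edges, k - 1):
    view = frozenset(e for e in incident if e not in F)
    views.setdefault(view, []).append(F)

# The 15 failure sets collapse to 7 local views at v; a BSC rule picks ONE
# action per view, which must work for every failure set sharing that view.
print(len(views), max(len(fs) for fs in views.values()))
```

At real scales the collapse goes the other way: the number of views per state grows as $2^{\deg(v)}$ while the failure sets grow as $\binom{m}{k-1}$, and it is the coupling between them that makes the search hard.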
Summary:
| | $k=6$ | $k=12$ |
|---|---|---|
| HDR table computation | ~$6nm$ | ~$12nm$ ($2\times$ slower) |
| BSC existence | Unproven | Unproven |
| BSC verification | coNP-complete | coNP-complete (larger instances) |
| Failure sets to survive | $\binom{m}{5}$ | $\binom{m}{11}$ (~$2 \times 10^6\times$ more) |
| Case analysis complexity | Possibly feasible | Infeasible |
The bottom line: computing HDR tables scales linearly and is easy at any $k$. Computing BSC tables is a fundamentally different beast — not because of the arborescence computation, but because of the combinatorial explosion in the consistency requirement across exponentially many failure patterns. The gap between $k=6$ and $k=12$ is not just quantitative ($2\times$) but qualitative (the adversary and strategy spaces blow up super-exponentially).