The limits of multifunctionality in tunable networks
Nature is rife with networks that are functionally optimized to propagate inputs in order to perform specific tasks. Whether via genetic evolution or dynamic adaptation, many networks create functionality by locally tuning interactions between nodes. Here we explore this behavior in two contexts: strain propagation in mechanical networks and pressure redistribution in flow networks. By adding and removing links, we are able to optimize both types of networks to perform specific functions. We define a single function as a tuned response of a single “target” link when another, predetermined part of the network is activated. Using network structures generated via such optimization, we investigate how many simultaneous functions such networks can be programmed to fulfill. We find that both flow and mechanical networks display qualitatively similar phase transitions in the number of targets that can be tuned, along with the same robust finite-size scaling behavior. We discuss how these properties can be understood in the context of a new class of constraint-satisfaction problems.
Many naturally occurring and synthetic networks are endowed with a specific and efficient functionality. For example, allosteric proteins globally adjust their conformation upon binding a ligand in order to control the activity of a distant active site Motlagh2014 (); Ribeiro2016 (). Gene regulatory networks express specific proteins Barabasi2004 (). Biological and artificial neural networks retrieve memories based on a limited number of inputs Hertz1991 (); McCulloch1943 (); Hopfield1982 (). In some cases, networks can adapt and change their function depending on the needs of the system; venation networks in plants Katifori2010 (); Pitterman2010 (), animals Hu2013 (); Ronellenfitsch2016 (); LaBarbera1990 (), and slime molds Tero2010 (); Alim2013 () can reroute the transport of fluids, enhancing or depleting nutrient levels in order to support local growth or activity. Modern power grids must precisely distribute electrical energy generated from a limited number of sources to a large number of consumers with widely varying consumption needs at different times Fang2012 (). All of these networks are optimized to some degree, whether by evolution via natural selection, by dynamic reconfiguration, or by human planning and ingenuity.
A key aspect of such functionality is the complexity of a specific task. We define a “function” as an optimized response of a localized component of a network when another predefined, localized component of the system is activated. A “task” is then defined as the collective response of a set of individual functions due to a single input. The number of functions representing a specific task is the task complexity.
In this work we address the limits of complexity for a single task: that is, how many functions comprising a single task can be programmed into a network? We consider two examples: (i) mechanical networks - in which nodes are connected by central-force harmonic springs - locally flexing in response to an applied strain and (ii) flow (or resistor) networks - in which nodes are connected by linear resistors - locally producing a pressure drop due to an applied pressure at the source. These systems are related; flow networks are mathematically equivalent to mechanical networks embedded in one spatial dimension - but with a nontrivial node topology Tang1987 ().
The macroscopic properties of mechanical networks, such as their bulk and shear moduli, can be finely tuned by modifying only a small fraction of the springs between nodes Goodrich2015 (); Hexner2018 (); Reid2018 () (in contrast to random removal Ellenbroek2009 ()). Previously, this idea was extended to show that such networks can be tuned to develop allosteric behavior via selective spring removal Rocks2017 (); Yan2017 (); Flechsig2017 (). Allostery in these systems corresponds to designing a task composed of a single function in which a randomly selected spring (the target) responds in a specified way to a strain imposed on a separate pair of nodes (the source). Here we further develop this idea by simultaneously tuning multiple targets controlled by a single source in both elastic and flow networks. We address the question of how many individual targets can be tuned successfully (i.e., what is the scaling of the maximal task complexity) as a function of the size of the network. We find that while highly complex tasks with many functions are possible, the difficulty of tuning increases with network size, making larger networks relatively more difficult to tune, contrary to intuition. We discuss our findings in the context of constraint-satisfaction problems and SAT-UNSAT transitions.
Network Tuning Protocol
Our method for tuning networks follows the general scheme described in our previous work Rocks2017 () with some slight modifications. We start with two-dimensional configurations of soft spheres with periodic boundary conditions created using standard jamming algorithms. We extract the particle contact structure by placing nodes at the centers of each sphere and links (edges) between nodes corresponding to overlapping particles. This ensemble of networks is used for both spring and flow networks. For spring networks, edges are unstretched central-force springs, while for flow networks, edges are resistive conduits. By using the same set of nodes and edges for both systems, we can directly compare results.
For each network, a pair of source nodes is chosen randomly, along with a set of target edges. Our goal is to tune the extension (or pressure drop) of each target edge, labeled by $i$, in response to an extension (pressure drop) applied to the source nodes by adding and removing edges from the network. We explore two different types of sources: pairs of nodes connected by a randomly chosen edge and pairs of nodes that are each chosen randomly anywhere in the network (see Supporting Information for global compression and shear sources in mechanical networks).
To control the response of the targets, we define the response ratio $R_i$ for each target. Each response ratio is in general a collective property of the network; the response of each target is a function of the total network structure. Before tuning the network, we measure the initial extension (pressure drop) to obtain the initial response ratio $R_i^{(0)}$ of each target. We then tune the response ratio of each target so that its relative change as compared to the initial state is greater than or equal to a specified positive constant $\Delta$; that is, we tune each response ratio to satisfy the constraint
\[ \frac{R_i - R_i^{(0)}}{R_i^{(0)}} \geq \Delta. \tag{1} \]
Thus, for mechanical networks we require contracting edges to contract more, and expanding edges to expand more. For flow networks, we require the magnitude of the pressure drop to increase without changing the direction of the flow through each target link.
Our optimization scheme involves minimizing a loss function which penalizes deviations from the constraints in (1) (see Methods and Materials). Each optimization step consists of either removing a single link or reinserting a previously removed link, modifying the network topology in discrete steps. More specifically, at each step we measure the resulting change in the loss function for each possible single link removal or reinsertion and then remove or reinsert the link which minimizes the loss function at that step.
Fig. 1 depicts examples of both flow and mechanical networks which have been tuned using our prescribed method for the two different types of applied sources. Fig. 1(A) and (B) show flow and mechanical networks, respectively, tuned to respond to a source applied to a pair of nodes connected by an edge. Fig. 1(C) and (D) show the same networks, but with a pair of source nodes that are not connected by an edge.
We investigate the ease with which networks can be tuned as a function of the number of targets $N_T$, i.e., the task complexity. For both flow and mechanical networks, we explore the effects of various aspects of the tuning problem. Figs. 2(A) and (B) display typical results for the fraction of networks that can be tuned successfully, $P_{SAT}$, for flow and mechanical networks, respectively. Data are shown for a randomly chosen edge source and randomly chosen target edges with a fixed desired relative change $\Delta$ in target response, for a range of system sizes $N$ (measured in number of nodes). Each value of $P_{SAT}$ is calculated by tuning at least 512 independent randomly generated networks. At small $N_T$, $P_{SAT}$ is nearly unity, while at large $N_T$ it drops to zero. The insets show the same data with $N_T$ scaled by the system size $N$, revealing a crossover between these two regimes that narrows with increasing system size. This behavior suggests a transition which becomes sharp in the infinite-system-size limit.
Using the approximate interpolations of the data provided by smoothing splines shown in Fig. 2 (see Supporting Information), we estimate the number of targets $N_T^*$ marking the position of the transition, along with the width $W$ of the transition, taken as the length of the interval in $N_T$ over which $P_{SAT}$ drops between two fixed values. Fig. 3 shows that both flow networks and mechanical networks have approximately the same power-law behavior for $N_T^*$ and $W$. Both the transition location and width scale approximately as a sublinear power of $N$. Because the scaling exponent for $N_T^*$ is less than 1, the critical fraction of functions that can be tuned simultaneously approaches zero as $N$ goes to infinity, even though the number of simultaneously tuned functions diverges with system size. Thus small networks are relatively more tunable than large ones. In addition, the sublinear scaling of the transition width confirms the sharpening of the transition when normalized by system size, as seen in the insets of Fig. 2. At the same time, Fig. 4 shows that the average number of links that must be removed for a successful tuning operation grows approximately linearly with the number of targets. Thus, those networks that can be tuned successfully typically require only a constant fraction of edges to be removed.
The exponents governing the scaling of the position and width of the transition vary slightly depending on (i) the local or global nature of the source, (ii) the magnitude of the desired change in target response $\Delta$, (iii) disorder in the link topology, (iv) the initial coordination of the network, and (v) the choice of whether to tune the link tensions (currents) or extensions (pressure drops) (see Supporting Information for results). The exponents governing the divergence of the maximum number of tunable targets and the vanishing of the transition width lie in the range 0.6-0.8, with the exception of a single exponent of 1.0 for a very large relative change in target response (see Table S1). We find that the behavior is not well described by a power law for tuning negative relative changes in target response ($\Delta < 0$) or for tuning small changes in current or tension. The former case is still under investigation private (), while the latter exception has a simple explanation (see Supporting Information). Overall, the divergence of the maximum number of tunable targets with system size and the corresponding vanishing of the transition width (indicating the existence of a phase transition) are very robust observations for positive and sufficiently large relative changes in target response. We note also that mechanical networks and flow networks exhibit very similar quantitative behavior despite the fact that flow networks are purely topological, requiring no explicit spatial embedding.
We framed the problem of the maximum number of target edges that can be tuned successfully in a mechanical or flow network as a type of discrete constraint-satisfaction problem, in which we asked how many inequality constraints can be satisfied simultaneously. This places the tuning of multifunctionality in the context of a variety of other problems in physics, mathematics, and computer science, including jamming Ohern2003 (), spin glasses Berthier2011 (), the $k$-SAT problem Mezard2002 (), $k$-core percolation Schwarz2006 (), and the perceptron Franz2017 (). Much progress has been made by linking such transitions to the statistical physics of critical phenomena. The hallmark of these systems is the emergence of a SAT-UNSAT transition between regions in parameter space where the constraints can always (or with high probability) be satisfied and regions where the system is frustrated, such that not all constraints can be satisfied simultaneously Franz2017 (). In mean-field theory, and in some cases in finite dimensions, the SAT-UNSAT transition is a random first-order transition, with a discontinuous jump in the order parameter (the fraction of satisfiable instances) as in a first-order phase transition, but with power-law scaling as in a second-order transition.
We have shown that there is a SAT-UNSAT transition in the complexity of a single task that can be tuned into disordered mechanical and flow networks. In both cases, the maximum task complexity diverges with a power law that is sublinear in the number of nodes in the network. The width of the SAT-UNSAT transition (relative to the system size) vanishes as the network size diverges, showing that the transition is a true phase transition.
The SAT-UNSAT transition of the task complexity problem actually represents a new class of discrete constraint-satisfaction transitions due to a new complication that arises in the form of the constraints. When tuning a mechanical network, the removal of links can introduce soft modes, making it impossible to uniquely evaluate the network response, and subsequently tune a given target. Similarly, in a flow network the tuning process can lead to regions being disconnected from the source, making it impossible to tune any target in that region. To avoid such cases, at each step of the tuning process we are forced to exclude specific link removals (see Methods and Materials). In both mechanical and flow networks, we find that it becomes more and more likely to introduce a soft mode/disconnected region as the task complexity increases. This makes the problem more difficult to tackle both numerically and analytically compared to previously-studied constraint-satisfaction transitions, and may lead to differences in the nature of the transition.
For mechanical functions, a perfectly engineered mechanism (e.g., a pair of chopsticks, which creates a large displacement at the tips in response to strain applied where they are held) may perform exactly one function superlatively well, but we have shown that more complex network structures are able to adapt to a number of functions that diverges with the system size. The same argument holds for flow networks: an optimally engineered distribution network is a topological tree, perfectly suited for a specified task but at the same time "rigid," in the sense that it cannot easily adapt to other tasks. The networks that we have studied are more complex than a pair of chopsticks or a topological tree, and this allows them to be tuned successfully to perform arbitrarily complex tasks.
Our finding that a reasonably complex disordered network topology allows for great tunability may have relevance to real biological networks. For example, the development of certain vascular structures in the body of animals is characterized by the initial appearance of a tightly meshed disordered network of veins (the vascular plexus) that is subsequently pruned and tuned to its function (Preziosi2003 (), Chapter 1). The initial disordered network may be a prerequisite of the great variability and versatility seen in natural networks. In addition, our study gives insight into how to control, for example, blood and oxygen distribution in vascular systems, or power in an electrical network. Similarly, the tuned mechanical networks serve as simple models for multifunctional allostery in proteins (with a single regulatory site that can control more than one active site, e.g., Schlessinger1986 (); Light2013 ()) or multifunctional metamaterials.
Our results raise a number of questions for future investigation. The divergence of the task complexity and the vanishing of the transition width with system size are reasonably well approximated by power laws, but may deviate from perfect power laws at larger system sizes (see Supporting Information). Further work should be carried out to elucidate this behavior. As noted earlier, the measured exponents appear to depend on many specific properties of the problems studied. Is this because the behavior is not a simple power law? If power laws do hold, is there any universality to these transitions? How do the results depend on network structure, topology, and dimensionality? Do they depend on the tuning algorithm?
One further aspect of our results deserves mention: a simple function that controls only a single pair of target nodes can be achieved in an extremely large number of ways. We have shown that a task can be complex, with randomly chosen target nodes controlled by a single source. However, if one is only interested in controlling a single target, one can create different paths for its control by choosing any of the other nodes in the system also to be a target. Likewise, one could specify a third node to be controlled as well, and so on. This means that there are at least as many ways of creating that simple function as there are such choices of additional targets. Because we find $N_T^* \sim N^{\alpha}$ with $\alpha$ in the range 0.6-0.8, this lower bound is smaller than the prediction for the number of solutions in the large-$N$ limit Yan2017 ().
Here we studied the limits of the complexity of a single task. It would be interesting to understand how many different tasks can be designed successfully, and whether that is controlled by a similar SAT-UNSAT transition. Finally, we note that for the mechanical and flow networks studied here, the behavior is governed by a discrete Laplacian operator Redner2009 (): mechanical networks obey force balance on each node and flow networks obey Kirchhoff's law. However, many networks, such as gene regulatory networks, metabolic networks, and social networks, are non-conservative. Moreover, the problems we have studied are linear in their couplings, but ecological networks or neural networks, for example, are typically nonlinear. Presumably even non-conservative and/or nonlinear networks can support SAT-UNSAT transitions, and it would be interesting to see whether they differ from the ones studied here.
Materials and Methods
Our networks are described by a set of nodes and edges. The response of a flow network to external stimuli is represented by a pressure $p_i$ on each node $i$. Analogously, the response of a $d$-dimensional mechanical network is the $d$-dimensional displacement vector $\vec{u}_i$ of each node. Each edge linking nodes $i$ and $j$ is characterized by either a conductance or a stiffness, denoted $k_{ij}$ in both cases. For mechanical networks, $k_{ij} = \kappa_{ij}/\ell_{ij}$, where $\kappa_{ij}$ is the stretch modulus and $\ell_{ij}$ is the rest length. Initially, we set all stretch moduli identically to one. Similarly, for flow networks we set all conductances to one. Removing an edge corresponds to setting $k_{ij}$ to zero, whereas reinserting an edge corresponds to setting $k_{ij}$ back to its original value.
To calculate the response of each type of network, we minimize the corresponding functional. In the case of flow networks, we minimize the power loss through the network,
\[ P = \frac{1}{2}\sum_{\langle ij\rangle} k_{ij}\,(p_i - p_j)^2, \tag{2} \]
where $\langle ij\rangle$ indicates a sum over all edges. For mechanical networks, we minimize the elastic energy
\[ E = \frac{1}{2}\sum_{\langle ij\rangle} k_{ij}\left[\left(\vec{u}_i - \vec{u}_j\right)\cdot\hat{b}_{ij}\right]^2, \tag{3} \]
where $\hat{b}_{ij}$ is a unit vector pointing from node $i$ to node $j$ in the undeformed configuration. The power loss for a flow network can be mapped to the energy of a mechanical network for $d = 1$ by mapping the pressure on each node to a one-dimensional displacement Tang1987 (). In this case, the unit vectors are scalars with values of $\pm 1$, which drop out when squared; the embedding of the network in space does not matter, as is expected for flow networks.
Minimizing (2) for a flow network in the presence of externally applied boundary currents $q_i$ on each node $i$, we obtain a system of linear equations characterized by a graph Laplacian $L$,
\[ L\vec{p} = \vec{q}, \]
where $\vec{p}$ is an $N$-dimensional vector of node pressures and $\vec{q}$ is an $N$-dimensional vector of external currents on the nodes. We define the vector $\hat{e}_i$ so that the pressure and current on the $i$th node are $p_i = \hat{e}_i^T\vec{p}$ and $q_i = \hat{e}_i^T\vec{q}$, respectively. Similarly, for mechanical networks, minimizing (3) in the presence of externally applied forces, we obtain
\[ H\vec{u} = \vec{f}, \]
where $\vec{u}$ is a $dN$-dimensional vector of node displacements and $\vec{f}$ is a $dN$-dimensional vector of external forces on the nodes. Again, we define a matrix $E_i$ to pick out the displacement and force on the $i$th node, $\vec{u}_i = E_i^T\vec{u}$ and $\vec{f}_i = E_i^T\vec{f}$. The matrix $H$ is the matrix of second derivatives of the energy, known as the dynamical or Hessian matrix, and can be interpreted as a graph Laplacian in which each element is a $d\times d$ matrix. We define the $d$-Laplacian, denoted $L^{(d)}$, as a generalized version of the standard Laplacian matrix. The case $d = 1$ corresponds to the Laplacian of a flow network (or a one-dimensional mechanical network), such that $L^{(1)} = L$, while for $d > 1$, $L^{(d)}$ is the Hessian of a $d$-dimensional mechanical network, i.e., $L^{(d)} = H$. The $ij$th $d\times d$ block of the $d$-Laplacian is
\[ L^{(d)}_{ij} = \delta_{ij}\sum_{l} k_{il}\,\hat{b}_{il}\hat{b}_{il}^T - k_{ij}\,\hat{b}_{ij}\hat{b}_{ij}^T, \]
where $k_{ij}$ is nonzero only if edge $ij$ exists.
Consequently, the response of either type of network is calculated by solving the corresponding system of linear equations, which we now write as
\[ L^{(d)}\vec{u} = \vec{f}, \tag{7} \]
where $\vec{u}$ and $\vec{f}$ are the appropriate $dN$-dimensional response and source vectors, respectively. To apply a pressure-drop or edge-extension source, we use a bordered Laplacian formulation.
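As a concrete illustration of the $d = 1$ (flow) case, the following sketch builds the graph Laplacian of a small toy network with numpy and solves $L\vec{p} = \vec{q}$ by grounding one node; the network, conductances, and boundary currents here are arbitrary illustrative choices, not configurations from this work.

```python
import numpy as np

def graph_laplacian(n_nodes, edges, k):
    """Weighted graph Laplacian: L[i,j] = -k_ij and L[i,i] = sum_j k_ij."""
    L = np.zeros((n_nodes, n_nodes))
    for (i, j), kij in zip(edges, k):
        L[i, i] += kij
        L[j, j] += kij
        L[i, j] -= kij
        L[j, i] -= kij
    return L

# A small arbitrary network: 4 nodes, unit conductances.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
k = np.ones(len(edges))
L = graph_laplacian(4, edges, k)

# L is singular (a uniform pressure shift is a null vector), so fix the
# pressure of node 3 to zero ("grounding") before solving L p = q.
q = np.array([1.0, 0.0, 0.0, -1.0])   # unit current in at node 0, out at node 3
p = np.zeros(4)
p[:3] = np.linalg.solve(L[:3, :3], q[:3])

# Kirchhoff's law: the net current at each node matches the source vector.
assert np.allclose(L @ p, q)
```

Grounding a node is the simplest way to remove the singular direction; the bordered formulation described next accomplishes the same thing while also allowing pressure-drop sources.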
Bordered Laplacian Formulation
Calculating the linear response requires solving (7). However, there are two complications. The first is that the Laplacian operator is in general not invertible due to the presence of global degrees of freedom; for a periodic network in $d$ dimensions, there are $d$ global translational degrees of freedom that must be constrained. Second, we apply edge-extension (pressure-drop) sources rather than tension (current) sources. These sources can also be applied as constraints on the system. To implement these constraints, we use a bordered Laplacian formulation in which we add one constraint for each global translation and one for the source.
First, we define the extension (or pressure drop) of the source as
\[ \delta_S = \left(\vec{u}_{s_2} - \vec{u}_{s_1}\right)\cdot\hat{b}_S = \vec{c}_0^T\vec{u}, \]
with source nodes $s_1$ and $s_2$. The unit vector $\hat{b}_S$ points from node $s_1$ to node $s_2$ and is a scalar in the case of a flow network. The vector $\vec{c}_0$ is defined to extract the extension of the source from the full vector of node displacements. We specify the desired extension as $\delta_S^*$. Additionally, we define the vectors $\vec{c}_m$ for $m = 1,\dots,d$, which correspond to uniform translations of the entire system along the $m$th axis. We define the Lagrangian
\[ \mathcal{L} = \frac{1}{2}\vec{u}^T L^{(d)}\vec{u} + \lambda\left(\vec{c}_0^T\vec{u} - \delta_S^*\right) + \sum_{m=1}^{d}\mu_m\,\vec{c}_m^T\vec{u}, \]
where the parameters $\lambda$ and $\mu_m$ are Lagrange multipliers. We include the Lagrange multipliers as additional unknown parameters that must be determined in our calculations. Solutions are found by extremizing the Lagrangian with respect to both the displacements and the Lagrange multipliers. We can rewrite the Lagrangian in matrix form as
\[ \mathcal{L} = \frac{1}{2}\vec{u}^T L^{(d)}\vec{u} + \vec{\lambda}^T\left(C^T\vec{u} - \vec{\delta}^*\right), \]
where $\vec{\delta}^* = (\delta_S^*, 0, \dots, 0)$ collects the constraint values.
The vector $\vec{\lambda}$ has size $d+1$ with elements $(\lambda, \mu_1, \dots, \mu_d)$, and $C$ is a $dN\times(d+1)$ matrix with columns $\vec{c}_0, \dots, \vec{c}_d$. In this context we can further condense notation, writing the Lagrangian as
\[ \mathcal{L} = \frac{1}{2}\bar{u}^T\bar{L}\,\bar{u} - \bar{f}^T\bar{u}, \]
where we define the bordered Laplacian
\[ \bar{L} = \begin{pmatrix} L^{(d)} & C \\ C^T & 0 \end{pmatrix} \]
as the block matrix of second derivatives of the Lagrangian.
We also define the bordered displacement and force vectors, $\bar{u} = (\vec{u}, \vec{\lambda})$ and $\bar{f} = (\vec{f}, \vec{\delta}^*)$, respectively, each of size $dN + d + 1$, where the external force $\vec{f}$ vanishes when only extension sources are applied.
As a result, the system of equations we must solve is now $\bar{L}\bar{u} = \bar{f}$. The bordered Laplacian is invertible due to the presence of the constraints, and solving this equation is straightforward.
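A minimal numerical sketch of the bordered construction for a flow network, where the only border rows are the source constraint and the single global constant-pressure mode; the toy network and source choice are again arbitrary stand-ins, not configurations from this work.

```python
import numpy as np

def bordered_laplacian(L, C):
    """Assemble the block matrix [[L, C], [C^T, 0]]."""
    n, m = C.shape
    Lb = np.zeros((n + m, n + m))
    Lb[:n, :n] = L
    Lb[:n, n:] = C
    Lb[n:, :n] = C.T
    return Lb

# Toy 4-node flow network with unit conductances.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1; L[i, j] -= 1; L[j, i] -= 1

# Constraint columns: c0 imposes the source pressure drop p_0 - p_3,
# c1 pins the global constant-pressure mode.
c0 = np.array([1.0, 0.0, 0.0, -1.0])   # source nodes 0 and 3
c1 = np.ones(n)                         # global shift
C = np.column_stack([c0, c1])

Lb = bordered_laplacian(L, C)
fb = np.zeros(n + 2)
fb[n] = 1.0        # desired source pressure drop

ub = np.linalg.solve(Lb, fb)  # invertible thanks to the constraints
p = ub[:n]

assert np.isclose(p[0] - p[3], 1.0)     # source constraint satisfied
assert np.isclose(p.sum(), 0.0)         # global mode pinned
```

The last $d+1$ entries of the solution vector are the Lagrange multipliers; in this flow example the multiplier conjugate to the source constraint is (minus) the current that must be driven through the source pair to maintain the imposed pressure drop.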
Tuning Loss Function
Framed according to (1), the problem of tuning a complex task can be viewed as a constraint-satisfaction problem: the goal is to find a set of stiffnesses (conductivities) that simultaneously satisfies each constraint in (1). To study this problem numerically, we recast it as an optimization problem in the style of Ref. Franz2017 (), in which we define an objective function that penalizes deviation of the system's behavior from the desired multifunctionality. Thus, we introduce the loss function
\[ F\left(\{k_{ij}\}\right) = \frac{1}{2}\sum_{i\in T} r_i^2, \tag{14} \]
which is a function of the set of all spring constants (conductivities) $\{k_{ij}\}$ and is composed of a sum over the set $T$ of target edges to be tuned. For each target edge $i$ we define the residual
\[ r_i = \left(\Delta - \frac{R_i - R_i^{(0)}}{R_i^{(0)}}\right)\Theta\!\left(\Delta - \frac{R_i - R_i^{(0)}}{R_i^{(0)}}\right), \]
which measures how close each target is to being tuned successfully. The Heaviside function $\Theta$ is included so that if $\left(R_i - R_i^{(0)}\right)/R_i^{(0)} \geq \Delta$, that is, if the target response ratio has increased by at least the desired proportion $\Delta$, the residual does not contribute to the loss function.
Our method for tuning a network involves minimizing the loss function in (14). In the spirit of Goodrich2015 (); Rocks2017 (), our optimization consists of removing edges from the network, or reinserting previously removed edges, one at a time, modifying the network topology in discrete steps. More specifically, we utilize a greedy algorithm in which we measure the resulting change in the loss function for each possible single edge removal or reinsertion and remove or reinsert the edge which minimizes the loss function at that step. This requires the calculation of the new response for each possible move.
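The greedy scheme can be sketched as follows for a small flow network. This toy version penalizes shortfalls in the relative change of each target's pressure drop in the spirit of Eq. 14, drives a unit current through the source pair instead of imposing a pressure drop, and re-solves the network for every trial move rather than using the rank-one update described below; the network, source, and target are arbitrary illustrative choices.

```python
import numpy as np

def solve_pressures(n, edges, k, source=(0, 3)):
    """Build the Laplacian for the current conductances and solve L p = q
    with a unit current driven through the source pair; one source node is
    grounded to make the system invertible."""
    L = np.zeros((n, n))
    for (i, j), kij in zip(edges, k):
        if kij == 0.0:                      # removed edge
            continue
        L[i, i] += kij; L[j, j] += kij
        L[i, j] -= kij; L[j, i] -= kij
    q = np.zeros(n)
    q[source[0]], q[source[1]] = 1.0, -1.0
    keep = [m for m in range(n) if m != source[1]]
    p = np.zeros(n)
    p[keep] = np.linalg.solve(L[np.ix_(keep, keep)], q[keep])
    return p

def loss(p, edges, targets, R0, delta):
    """Each target contributes only while its signed relative change in
    pressure drop is still below the desired threshold delta."""
    F = 0.0
    for t in targets:
        i, j = edges[t]
        eta = ((p[i] - p[j]) - R0[t]) / R0[t]
        if eta < delta:
            F += 0.5 * (delta - eta) ** 2
    return F

def greedy_tune(n, edges, targets, delta, max_steps=20):
    """Toggle (remove or reinsert) the single edge that most lowers the
    loss, re-solving the network for every trial move."""
    k = np.ones(len(edges))
    p0 = solve_pressures(n, edges, k)
    R0 = [p0[edges[t][0]] - p0[edges[t][1]] for t in targets]
    for _ in range(max_steps):
        best = None
        for e in range(len(edges)):
            if e in targets:
                continue                    # never touch a target edge
            trial = k.copy()
            trial[e] = 1.0 - trial[e]       # remove, or reinsert if removed
            try:
                p = solve_pressures(n, edges, trial)
            except np.linalg.LinAlgError:
                continue                    # move would disconnect the network
            F = loss(p, edges, targets, R0, delta)
            if best is None or F < best[0]:
                best = (F, e)
        if best is None:
            break
        F, e = best
        k[e] = 1.0 - k[e]
        if F == 0.0:
            return k, True                  # all constraints satisfied
    return k, False

# Toy example: increase the pressure drop across edge (0, 1) by 10%.
edges = [(0, 1), (1, 3), (0, 2), (2, 3), (1, 2), (0, 3)]
k, ok = greedy_tune(4, edges, targets=[0], delta=0.1)
assert ok   # a single pruning step suffices on this toy network
```

Because each candidate move requires a fresh linear solve, a brute-force implementation costs one matrix solve per edge per step; the Sherman-Morrison update described below reduces each trial move to a few matrix-vector products.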
Suppose we have a network whose stiffnesses at the current step are $k_{ij}$ for all valid edges $ij$, where some may already be zero, having been removed at previous steps. Our goal is to measure the change in the response when the stiffness of edge $ij$ is changed by an amount $\Delta k$. First we note that the Laplacian can be decomposed as
\[ L^{(d)} = Q\,K\,Q^T, \]
where the equilibrium (or incidence) matrix $Q$, of size $dN\times N_E$ for a network with $N_E$ edges, defines the mapping of nodes to edges Pellegrino1993 (); Redner2009 (), and $K$ is an $N_E\times N_E$ diagonal matrix of edge stiffnesses, $K_{ee} = k_e$. We can define a bordered incidence matrix $\bar{Q}$ by appending $d+1$ rows of zeros to $Q$, giving us a corresponding decomposition of the bordered Laplacian. The change in the response is then
\[ \Delta\bar{u} = \left[\left(\bar{L} + \Delta\bar{L}\right)^{-1} - \bar{L}^{-1}\right]\bar{f}, \]
with the corresponding change in the bordered Laplacian $\Delta\bar{L} = \Delta k\,\bar{q}_{ij}\bar{q}_{ij}^T$, where the vector $\bar{q}_{ij}$ is the column of $\bar{Q}$ corresponding to edge $ij$. We now need to calculate the inverse of the updated bordered Laplacian. This can be done using the Sherman-Morrison formula Sherman1950 (),
\[ \left(\bar{L} + \Delta k\,\bar{q}_{ij}\bar{q}_{ij}^T\right)^{-1} = \bar{L}^{-1} - \frac{\Delta k\,\bar{L}^{-1}\bar{q}_{ij}\bar{q}_{ij}^T\bar{L}^{-1}}{1 + \Delta k\,\bar{q}_{ij}^T\bar{L}^{-1}\bar{q}_{ij}}. \]
The change in response is then
\[ \Delta\bar{u} = -\frac{\Delta k\,\left(\bar{L}^{-1}\bar{q}_{ij}\right)\left(\bar{q}_{ij}^T\bar{u}\right)}{1 + \Delta k\,\bar{q}_{ij}^T\bar{L}^{-1}\bar{q}_{ij}}. \]
The new response is then used to calculate an updated loss function.
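The rank-one update can be verified numerically. In the sketch below, an arbitrary well-conditioned symmetric matrix stands in for the bordered Laplacian and a random vector for the bordered incidence column, since only the algebra is being checked.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the bordered Laplacian: an arbitrary invertible symmetric matrix.
A = rng.normal(size=(6, 6))
Lb = A @ A.T + 25 * np.eye(6)
Lb_inv = np.linalg.inv(Lb)

q = rng.normal(size=6)     # stand-in for a bordered incidence column
dk = -1.0                  # removing a unit-stiffness edge: Delta k = -k_ij

# Sherman-Morrison: inverse of Lb + dk * q q^T from the old inverse.
denom = 1.0 + dk * q @ Lb_inv @ q
new_inv = Lb_inv - dk * np.outer(Lb_inv @ q, q @ Lb_inv) / denom

assert np.allclose(new_inv, np.linalg.inv(Lb + dk * np.outer(q, q)))

# Change in response for a fixed source vector, matching the update formula:
# if denom is near zero, the updated matrix is (close to) singular.
f = rng.normal(size=6)
u = Lb_inv @ f
du = -dk * (Lb_inv @ q) * (q @ u) / denom
assert np.allclose(u + du, new_inv @ f)
```

Once $\bar{L}^{-1}$ is stored, each trial edge move costs only a few matrix-vector products instead of a full factorization, which is what makes the exhaustive greedy search over all edges tractable.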
In order to reduce numerical error and maintain the numerical invertibility of the bordered Laplacian, we monitor the Sherman-Morrison denominator for each candidate edge removal ($\Delta k = -k_{ij}$),
\[ S_{ij} = 1 - k_{ij}\,\bar{q}_{ij}^T\bar{L}^{-1}\bar{q}_{ij}. \]
If $S_{ij}$ falls below a small numerical cutoff, we do not remove the edge. This quantity can be shown to be the contribution of an edge to the states of self-stress in mechanical systems Sussman2016 (); Hexner2018 (). By ensuring that every removed edge has some contribution to the states of self-stress, we are guaranteed by Maxwell-Calladine counting that no zero modes are introduced Calladine1978 ().
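For reference, the counting argument invoked here is the Maxwell-Calladine index theorem. Writing $N_0$ for the number of zero modes and $N_s$ for the number of states of self-stress of a network with $N$ nodes and $N_E$ edges in $d$ dimensions,
\[ N_0 - N_s = dN - N_E. \]
Removing an edge decreases $N_E$ by one; if the removed edge participates in a state of self-stress, $N_s$ also decreases by one and no new zero mode can appear.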
We repeatedly add or remove edges until one of two conditions is met: either the loss function is exactly zero, meaning that all constraints are satisfied, or the relative change in the objective function falls below a specified tolerance.
We thank S. Franz for instructive discussions, along with C. P. Goodrich, D. Hexner and N. Pashine. This research was supported by the US Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Awards DE-FG02-03ER46088 (S.R.N.) and DE-FG02-05ER46199 (J.W.R.), the University of Chicago MRSEC NSF DMR-1420709 (S.R.N.), the Simons Foundation for the collaboration Cracking the Glass Problem via awards 454945 (A.J.L.) and 348125 (S.R.N.), Simons Investigator Award 327939 (A.J.L.), the National Science Foundation under Award No. PHY-1554887 (E.K. and H.R.), the Burroughs Wellcome Career Award (E.K. and H.R.), and by a National Science Foundation Graduate Fellowship (J.W.R).
- (1) Motlagh HN, Wrabl JO, Li J, Hilser VJ (2014) The ensemble nature of allostery. Nature 508:331–339.
- (2) Ribeiro AAST, Ortiz V (2016) A chemical perspective on allostery. Chem Rev 116(11):6488–6502.
- (3) Barabási AL, Oltvai ZN (2004) Network biology: understanding the cell’s functional organization. Nat Rev Genet 5:101–113.
- (4) Hertz JA, Krogh AS, Palmer RG (1991) Introduction To The Theory Of Neural Computation, Addison-Wesley Computation and Neural Systems Series. (Avalon Publishing).
- (5) McCulloch WS, Pitts W (1943) A logical calculus of the ideas immanent in nervous activity. Bull Math Biophys 5(4):115–133.
- (6) Hopfield JJ (1982) Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci 79(8):2554–2558.
- (7) Katifori E, Szöllősi GJ, Magnasco MO (2010) Damage and fluctuations induce loops in optimal transport networks. Phys Rev Lett 104(4):048704.
- (8) Pitterman J (2010) The evolution of water transport in plants: an integrated approach. Geobiology 8(2):112–139.
- (9) Hu D, Cai D (2013) Adaptation and optimization of biological transport networks. Phys Rev Lett 111(13):138701.
- (10) Ronellenfitsch H, Katifori E (2016) Global optimization, local adaptation, and the role of growth in distribution networks. Phys Rev Lett 117(13):138301.
- (11) LaBarbera M (1990) Principles of design of fluid transport systems in zoology. Science 249(4972):992–1000.
- (12) Tero A, et al. (2010) Rules for biologically inspired adaptive network design. Science 327(5964):439–442.
- (13) Alim K, Amselem G, Peaudecerf F, Brenner MP, Pringle A (2013) Random network peristalsis in physarum polycephalum organizes fluid flows across an individual. Proc Natl Acad Sci 110(33):13306–13311.
- (14) Fang X, Misra S, Xue G, Yang D (2012) Smart grid – The new and improved power grid: A survey. IEEE Commun Surv Tutor 14(4):944–980.
- (15) Tang W, Thorpe MF (1987) Mapping between random central-force networks and random resistor networks. Phys Rev B 36(7):3798–3804.
- (16) Goodrich CP, Liu AJ, Nagel SR (2015) The principle of independent bond-level response: Tuning by pruning to exploit disorder for global behavior. Phys Rev Lett 114(22):225501.
- (17) Hexner D, Liu AJ, Nagel SR (2018) Role of local response in manipulating the elastic properties of disordered solids by bond removal. Soft Matter 14:312–318.
- (18) Reid DR, et al. (2018) Auxetic metamaterials from disordered networks. Proc Natl Acad Sci.
- (19) Ellenbroek WG, Zeravcic Z, van Saarloos W, van Hecke M (2009) Non-affine response: Jammed packings vs. spring networks. EPL 87(3):34004.
- (20) Rocks JW, et al. (2017) Designing allostery-inspired response in mechanical networks. Proc Natl Acad Sci 114(10):2520–2525.
- (21) Yan L, Ravasio R, Brito C, Wyart M (2017) Architecture and coevolution of allosteric materials. Proc Natl Acad Sci.
- (22) Flechsig H (2017) Design of elastic networks with evolutionary optimized long-range communication as mechanical models of allosteric proteins. Biophys J 113(3):558–571.
- (23) Rocks JW, Chen BG, Katifori E (2018) Personal communications.
- (24) O’Hern CS, Silbert LE, Liu AJ, Nagel SR (2003) Jamming at zero temperature and zero applied stress: The epitome of disorder. Phys Rev E 68(1):011306.
- (25) Berthier L, Biroli G (2011) Theoretical perspective on the glass transition and amorphous materials. Rev of Mod Phys 83(2):587–645.
- (26) Mézard M, Parisi G, Zecchina R (2002) Analytic and algorithmic solution of random satisfiability problems. Science 297(5582):812–815.
- (27) Schwarz JM, Liu AJ, Chayes LQ (2006) The onset of jamming as the sudden emergence of an infinite k-core cluster. EPL 73(4):560–566.
- (28) Franz S, Parisi G, Sevelev M, Urbani P, Zamponi F (2017) Universality of the SAT-UNSAT (jamming) threshold in non-convex continuous constraint satisfaction problems. SciPost Phys 2(3):019.
- (29) Preziosi L, ed. (2003) Cancer Modelling and Simulation. (Chapman & Hall/CRC Mathematical Biology and Medicine Series), p. 454.
- (30) Schlessinger J (1986) Allosteric regulation of the epidermal growth factor receptor kinase. J Cell Biol 103(6):2067–2072.
- (31) Light SH, Anderson WF (2013) The diversity of allosteric controls at the gateway to aromatic amino acid biosynthesis. Protein Sci 22(4):395–404.
- (32) Redner S (2009) Fractal and multifractal scaling of electrical conduction in random resistor networks in Encyclopedia of Complexity and Systems Science, ed. Meyers RA. (Springer New York, New York, NY), pp. 3737–3754.
- (33) Pellegrino S (1993) Structural computations with the singular value decomposition of the equilibrium matrix. Int J Solids Struct 30(21):3025–3035.
- (34) Sherman J, Morrison WJ (1950) Adjustment of an inverse matrix corresponding to a change in one element of a given matrix. Ann Math Stat 21(1):124–127.
- (35) Sussman DM, Goodrich CP, Liu AJ (2016) Spatial structure of states of self stress in jammed systems. Soft Matter 12:3982–3990.
- (36) Calladine CR (1978) Buckminster Fuller's "tensegrity" structures and Clerk Maxwell's rules for the construction of stiff frames. Int J Solids Struct 14(2):161–172.
- (37) Wilson EB (1927) Probable inference, the law of succession, and statistical inference. J Am Stat Assoc 22(158):209–212.
- (38) Wahba G (1990) Spline Models for Observational Data. (Society for Industrial and Applied Mathematics).
- (39) Press WH (2007) Numerical Recipes 3rd Edition: The Art of Scientific Computing. (Cambridge University Press).
Supporting Information (SI)
Variations of the network tuning problem
We performed many variations of the standard network tuning problem presented in the main text. By default, the source was a pressure drop (flow networks) or an extension (mechanical networks), with the default values of the target relative change in response and of the average node coordination. For both flow and mechanical networks, we studied the case in which the two source nodes were connected by a single edge and the case in which the two source nodes were chosen randomly from all the nodes, giving four cases altogether, which we discuss in the main text. Table S1 shows the many variations of these parameters that we explored, along with the resulting power-law exponents, where applicable, and the corresponding figures showing the satisfiability transitions and the scaling of the transition position and width. Each set of simulations comprised at least 128 simulations per data point for calculating the satisfaction probability. The first section of the table shows the data for the four cases discussed in the main text. The second and third sections show configurations for flow and mechanical networks, respectively, at higher values of the target relative change. The fourth and fifth sections show results for random networks with low coordination (close to isostaticity for mechanical networks) and for initially perfect triangular lattices. The sixth section shows configurations in which the response is tuned by applying global strains to mechanical networks. The seventh section shows results for tuning a target current in response to a current source (flow networks) or a target tension in response to a tension source (mechanical networks). Finally, the last two sections show results for a negative relative change in the target response for flow and mechanical networks.
All variations of the tuning problem show qualitatively similar power-law behavior, except for tuning small changes in current or tension (see Section "Tuning target current") and for negative desired relative changes in the target response (see Section "Tuning negative target change").
Tuning target current
In Table S1, we do not list exponents for flow networks tuned for a small target change in current, nor for mechanical networks tuned for a small target change in tension. As seen in Figs. S7(A) and (C), these cases do not exhibit the typical power-law behavior seen elsewhere; instead, we find that it is almost always possible to achieve the desired response. This stems from the fact that the magnitude of the current in flow networks, or of the tension in mechanical networks, can be trivially increased by simply removing the source edge. Typically, the source edge acts as a resistor or a spring in parallel with the rest of the network, diverting a significant fraction of the total current or tension through that edge. If the source edge is removed, then the magnitude of the current or tension increases without changing sign. We find that this increase is always enough to satisfy at least a 10% change in magnitude, but not the larger target changes we tested in flow and mechanical networks; for those latter cases, shown in Figs. S7(B) and (D), the transitions revert to the typical behavior seen elsewhere.
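The parallel-source-edge argument can be checked with a minimal current divider. The sketch below assumes a fixed injected current split between the source edge and an effective network conductance; the conductance values and the helper name `network_current` are illustrative, not parameters from our simulations.

```python
# Current divider: a fixed current I is injected between two nodes joined
# both by the source edge (conductance g_src) and by the rest of the
# network (effective conductance g_net).
def network_current(I, g_src, g_net):
    """Current carried by the network branch of the divider."""
    return I * g_net / (g_src + g_net)

I = 1.0       # total injected current (arbitrary units)
g_src = 1.0   # conductance of the source edge (illustrative value)
g_net = 1.0   # effective conductance of the remaining network (illustrative)

before = network_current(I, g_src, g_net)   # source edge diverts half the current
after = network_current(I, 0.0, g_net)      # source edge removed: all current rerouted

# Removing the source edge increases the magnitude without changing the sign.
assert after > before > 0.0
print(before, after)  # 0.5 1.0
```

This is why small increases in current magnitude are almost always achievable: deleting a single edge suffices.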
Tuning negative target change
The last two sections of Table S1 contain the sets of variables we tested for the alternate case of a negative relative target response. The resulting transitions are depicted in Figs. S8 and S9. For these cases, we flip the inequality in Eq. (1), so that the relative change in the target response must lie at or below the specified (negative) value.
Note that a positive relative change corresponds to increasing the magnitude of the response without changing its sign, a relative change between −1 and 0 corresponds to decreasing the magnitude without changing the sign, and a relative change below −1 corresponds to tuning target responses of the opposite sign. In the last of these regimes, we do not always see simple power-law behavior, for reasons that are still under investigation private ().
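These regimes follow directly from writing the tuned response as (1 + η) times the original response, with η the relative change; the quick numerical check below uses illustrative values of η and a hypothetical helper `tuned_response`.

```python
def tuned_response(original, eta):
    """Target response after imposing a relative change eta."""
    return (1.0 + eta) * original

r0 = 2.0  # original target response (illustrative value)

# eta > 0: magnitude increases, sign unchanged
assert abs(tuned_response(r0, 0.5)) > abs(r0) and tuned_response(r0, 0.5) * r0 > 0
# -1 < eta < 0: magnitude decreases, sign unchanged
assert abs(tuned_response(r0, -0.5)) < abs(r0) and tuned_response(r0, -0.5) * r0 > 0
# eta < -1: the sign of the response flips
assert tuned_response(r0, -1.5) * r0 < 0
```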
Transition power-law fitting and deviations
Fig. S10 shows the deviations from power-law behavior for the four systems displayed in Fig. 3. We plot the fractional difference of each measured transition position and width from its fitted power-law function, as a function of system size $N$. In both cases the fitted function is a power law in $N$, with a coefficient and an exponent as fit parameters. The data sets for the transition position and width are fit simultaneously with the same exponent but different coefficients, resulting in a total of three fit parameters. Error bars are estimated by dividing the uncertainty in the position or width by the value of the respective fit function at that point. It is apparent that the simple power-law form does not perfectly match the underlying data.
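A joint fit of this kind (shared exponent, separate coefficients) reduces to ordinary least squares on log-transformed data. The sketch below uses synthetic power-law data and an illustrative helper name `fit_shared_exponent`; it is not our production fitting code.

```python
import numpy as np

def fit_shared_exponent(N, y1, y2):
    """Fit y1 = c1 * N**theta and y2 = c2 * N**theta jointly:
    one shared exponent theta, separate coefficients c1, c2."""
    x = np.log(np.asarray(N, dtype=float))
    m = len(x)
    # Design matrix columns: [log N, indicator(data set 1), indicator(data set 2)]
    X = np.zeros((2 * m, 3))
    X[:m, 0] = x
    X[m:, 0] = x
    X[:m, 1] = 1.0
    X[m:, 2] = 1.0
    b = np.concatenate([np.log(y1), np.log(y2)])
    coef, *_ = np.linalg.lstsq(X, b, rcond=None)
    theta, log_c1, log_c2 = coef
    return theta, np.exp(log_c1), np.exp(log_c2)

# Synthetic data sets sharing the exponent 0.7 but with different coefficients.
N = np.array([64.0, 128.0, 256.0, 512.0, 1024.0])
theta, c1, c2 = fit_shared_exponent(N, 2.0 * N**0.7, 5.0 * N**0.7)
print(theta, c1, c2)  # ~0.7, ~2.0, ~5.0
```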
Satisfaction probability error bars
Each data point of the various satisfaction probability plots represents a binomial distribution,

$$ P(k) = \binom{n}{k}\, p^k (1 - p)^{n-k}, $$

where $n$ is the number of samples, $p$ is the underlying success probability, and $\hat{p} = k/n$ is the measured fraction of successful tuning attempts. To calculate the error bars depicted in the various satisfaction probability plots, we use the Wilson score interval Wilson1927 (),

$$ p \approx \frac{1}{1 + z^2/n} \left( \hat{p} + \frac{z^2}{2n} \pm z \sqrt{\frac{\hat{p}(1 - \hat{p})}{n} + \frac{z^2}{4n^2}} \right), $$

with a z-score of one, so that the resulting uncertainty estimate for each data point is analogous to the standard deviation of a Gaussian distribution. However, since the probability is restricted between zero and one, the error bars are not necessarily symmetric.
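A direct implementation of the Wilson score interval, written as a standalone helper for illustration, makes the asymmetry near the boundaries easy to see:

```python
from math import sqrt

def wilson_interval(p_hat, n, z=1.0):
    """Wilson score interval for an observed success fraction p_hat
    out of n trials, at z standard deviations."""
    center = (p_hat + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
    return center - half, center + half

lo, hi = wilson_interval(0.9, 128)
# The interval stays inside [0, 1] and is asymmetric about p_hat near the edges.
assert 0.0 <= lo < 0.9 < hi <= 1.0
assert (0.9 - lo) != (hi - 0.9)
```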
Satisfaction probability curve fitting
The satisfaction probability curves depicted in Fig. 2, along with those in many of the supplemental figures, were estimated using smoothing splines constructed from a basis of cubic B-splines. The procedure for constructing the splines and estimating the smoothing parameter was adapted, with some modification, from Ref. (Wahba1990, , Chapter 9.2).
To generate an estimate of a satisfaction probability curve, we start with a set of measured satisfaction probabilities $P_i$, each generated for a corresponding number of targets $N_i$, with $i$ running from 1 to the number of data points $M$. Each satisfaction probability counts the fraction of successfully tuned networks from a collection of $n$ samples. Our goal is to find a function $P(N)$ which approximates the underlying function sampled by the data. Since we do not know what functional form we should use, we would like to approximate this function using a spline. However, $P(N)$ should be limited to the interval $[0, 1]$, while splines are not typically limited in this way. Therefore, we write $P$ in terms of a more general function $f$ via the logistic transform

$$ P(N) = \frac{1}{1 + e^{-f(N)}}, $$

where $f(N)$ is the spline function, which can take on any real value.
In terms of B-splines, the approximating spline function is written

$$ f(N) = \sum_j c_j B_{j,k}(N), $$

with coefficients $c_j$ and degree-$k$ basis splines $B_{j,k}$. The coefficients $c_j$ are the fit parameters we would like to estimate.
We must address the specific choices made in the use of B-splines. First, we choose cubic splines ($k = 3$). One knot is placed at each data point, plus an extra knot below the smallest and above the largest value of $N$ for padding, giving a total of $M + 2$ knots. The result is a set of basis splines, one for each coefficient $c_j$.
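The basis splines themselves can be evaluated with the standard Cox–de Boor recursion. The sketch below is not our production code; the uniform knot vector is an illustrative choice, and the check exploits the partition-of-unity property of the basis on its fully supported interior.

```python
def bspline_basis(i, k, t, x):
    """Value of the i-th degree-k B-spline on knot vector t at point x
    (Cox-de Boor recursion)."""
    if k == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    out = 0.0
    if t[i + k] > t[i]:
        out += (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, x)
    if t[i + k + 1] > t[i + 1]:
        out += (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * bspline_basis(i + 1, k - 1, t, x)
    return out

# Uniform knots; with degree k there are len(t) - k - 1 basis functions.
k = 3
t = [float(j) for j in range(12)]
nbasis = len(t) - k - 1

# Inside the fully supported region [t[k], t[-k-1]] the basis sums to one.
x = 5.5
total = sum(bspline_basis(i, k, t, x) for i in range(nbasis))
print(total)  # ~1.0
```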
B-spline coefficient estimation
Typically, one would employ a least-squares approach to calculate the spline coefficients. However, least squares assumes that each data point is drawn from a normal distribution, while we know that in this case the data are drawn from a set of binomial distributions with success probabilities $P(N_i)$. Carrying out a standard maximum-likelihood estimation, the corresponding log-likelihood of the binomially distributed data is, up to a constant,

$$ \ln L = n \sum_{i=1}^{M} \left[ P_i \ln P(N_i) + (1 - P_i) \ln\!\left(1 - P(N_i)\right) \right]. $$

In terms of the spline function $f$, the log-likelihood can be written, again up to a constant,

$$ \ln L = n \sum_{i=1}^{M} \left[ P_i f(N_i) - \ln\!\left(1 + e^{f(N_i)}\right) \right]. $$
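The log-likelihood in this form can be checked numerically: it is maximized when the logistic transform of $f$ reproduces the observed fractions. The data values and helper names below are illustrative.

```python
from math import log, exp

def log_likelihood(f_vals, P_data, n):
    """Binomial log-likelihood (up to a constant) of observed fractions
    P_data given spline values f_vals, with n samples per point."""
    return n * sum(P * f - log(1.0 + exp(f)) for f, P in zip(f_vals, P_data))

def logit(p):
    return log(p / (1.0 - p))

P_data = [0.9, 0.6, 0.2]   # observed success fractions (illustrative)
n = 128

# The likelihood peaks where sigmoid(f_i) = P_i, i.e. f_i = logit(P_i).
best = log_likelihood([logit(p) for p in P_data], P_data, n)
worse = log_likelihood([logit(p) + 0.3 for p in P_data], P_data, n)
assert best > worse
```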
To implement smoothing, we introduce a term with penalty parameter $\lambda$ which penalizes the square of the curvature of $f$. This gives us the penalized generalized linear model

$$ \min_{\{c_j\}} \left[ -\ln L + \lambda \int dN \, \left( f''(N) \right)^2 \right]. \qquad (S11) $$
Next we must choose a good value of $\lambda$. This is accomplished using a generalized cross-validation (GCV) approach, allowing us to choose $\lambda$ in an agnostic manner. Using GCV effectively chooses $\lambda$ so that the approximating spline curve changes as little as possible if an arbitrary subset of the data is left out of the fit. For convenience, we collect the measured probabilities into a vector $y$ of size $M$ and relate them to the corresponding spline predictions through the influence matrix $A(\lambda)$. Finally, we minimize the generalized cross-validation function

$$ V(\lambda) = \frac{\frac{1}{M}\left\lVert \left(I - A(\lambda)\right) y \right\rVert^2}{\left[\frac{1}{M}\,\mathrm{tr}\!\left(I - A(\lambda)\right)\right]^2}, \qquad (S14) $$

where $I$ is the $M \times M$ identity matrix. When testing a particular value of $\lambda$, the spline coefficients are always chosen to minimize (S11) for that $\lambda$; the coefficients are therefore treated as functions of $\lambda$.
When minimizing (S14), there may sometimes be extraneous minima at $\lambda = 0$ or $\lambda \to \infty$. Since we would like some degree of smoothing, we never choose the minimum at zero. Likewise, moderate smoothing is generally preferable to infinite smoothing, so if a local minimum exists at finite $\lambda$, it is chosen even if it is not the global minimum.
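The GCV score is straightforward to evaluate for any linear smoother. The sketch below uses a tiny ridge-type smoother with influence matrix A(λ) = X(XᵀX + λI)⁻¹Xᵀ; the design matrix and data are illustrative stand-ins, not our spline basis.

```python
import numpy as np

def gcv(lam, X, y):
    """Generalized cross-validation score for the linear smoother
    A(lam) = X (X^T X + lam*I)^(-1) X^T."""
    m = len(y)
    p = X.shape[1]
    A = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    resid = (np.eye(m) - A) @ y
    num = resid @ resid / m
    den = (np.trace(np.eye(m) - A) / m) ** 2
    return num / den

# Two data points fit by a single constant basis function.
X = np.array([[1.0], [1.0]])
y = np.array([0.0, 2.0])

print(gcv(0.0, X, y), gcv(1.0, X, y))  # 4.0 2.5
```

Scanning such a score over a grid of λ values, and skipping the endpoints as described above, selects the smoothing parameter.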
We use the spline approximation of each satisfaction probability curve to estimate the position and width of each satisfiability transition. The center of the transition is chosen as the number of targets at which the probability of success is exactly one half. The width of the transition is found by locating the numbers of targets corresponding to a fixed pair of success rates on either side of the transition and taking their difference. In order to weight each point correctly when fitting the power-law scaling of the transition properties, we utilize Monte Carlo resampling to estimate the uncertainties press2007numerical (). To find the uncertainties in the position and width of a particular curve, each data point of that curve is resampled from its underlying binomial distribution. The spline approximation is then recalculated for the new set of data points, and new values of the position and width are extracted. This process is repeated numerous times, resulting in a distribution of values for each quantity; the uncertainty is calculated as the standard deviation of that distribution.
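The resampling loop can be sketched as follows. For simplicity, this toy version refits the transition midpoint by linear interpolation rather than with the full smoothing spline, and the underlying curve, sample counts, and helper names are all illustrative.

```python
import random
from math import exp
from statistics import mean, stdev

def resample_curve(P, n, rng):
    """Redraw each measured fraction from Binomial(n, P_i)."""
    return [sum(rng.random() < p for _ in range(n)) / n for p in P]

def half_crossing(targets, P):
    """Number of targets where the curve first crosses 1/2 (linear interpolation)."""
    for j in range(1, len(P)):
        if P[j - 1] >= 0.5 > P[j]:
            frac = (P[j - 1] - 0.5) / (P[j - 1] - P[j])
            return targets[j - 1] + frac * (targets[j] - targets[j - 1])
    return targets[-1]

rng = random.Random(0)
targets = list(range(1, 21))
# A sharp underlying transition centered at 10 targets (illustrative curve).
P_true = [1.0 / (1.0 + exp(x - 10)) for x in targets]

centers = []
for _ in range(200):
    P_resampled = resample_curve(P_true, 128, rng)
    centers.append(half_crossing(targets, P_resampled))

# The spread of refit centers estimates the uncertainty in the transition position.
print(mean(centers), stdev(centers))
```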
| Network | Source | Target | Topology | Exponent | Figure(s) |
|---|---|---|---|---|---|
| Flow | Edge Pressure Drop | Edge Pressure Drop | Random | | 1(A), 2(A), 3, S1(A) |
| Mechanical | Edge Extension | Edge Extension | Random | | 1(B), 2(B), 3, S1(B) |
| Flow | Node Pair Pressure Drop | Edge Pressure Drop | Random | | 1(C), 3, S1(C) |
| Mechanical | Node Pair Extension | Edge Extension | Random | | 1(D), 3, S1(D) |
| Flow | Edge Pressure Drop | Edge Pressure Drop | Random | | S2(A) |
| Flow | Edge Pressure Drop | Edge Pressure Drop | Random | | S2(B) |
| Mechanical | Edge Extension | Edge Extension | Random | | S3(A) |
| Mechanical | Edge Extension | Edge Extension | Random | | S3(B) |
| Mechanical | Edge Extension | Edge Extension | Random | | S3(C) |
| Mechanical | Edge Extension | Edge Extension | Random | | S3(D) |
| Flow | Edge Pressure Drop | Edge Pressure Drop | Random | | S4(A) |
| Mechanical | Edge Extension | Edge Extension | Random | | S4(B) |
| Flow | Edge Pressure Drop | Edge Pressure Drop | Triangular Lattice | | S5(C) |
| Mechanical | Edge Extension | Edge Extension | Triangular Lattice | | S5(D) |
| Mechanical | Global Shear | Edge Extension | Random | | S6(A) |
| Mechanical | Global Expansion | Edge Extension | Random | | S6(B) |
| Flow | Edge Current | Edge Current | Random | N/A | S7(A) |
| Flow | Edge Current | Edge Current | Random | | S7(B) |
| Mechanical | Edge Tension | Edge Tension | Random | N/A | S7(C) |
| Mechanical | Edge Tension | Edge Tension | Random | | S7(D) |
| Flow | Edge Pressure Drop | Edge Pressure Drop | Random | N/A | S8(A) |
| Flow | Edge Pressure Drop | Edge Pressure Drop | Random | N/A | S8(B) |
| Flow | Edge Pressure Drop | Edge Pressure Drop | Random | N/A | S8(C) |
| Mechanical | Edge Extension | Edge Extension | Random | N/A | S9(A) |
| Mechanical | Edge Extension | Edge Extension | Random | N/A | S9(B) |
| Mechanical | Edge Extension | Edge Extension | Random | N/A | S9(C) |
N/A indicates that power-law exponent estimates are not applicable, due either to the lack of a transition or to clearly non-power-law behavior. Bold text indicates changes from the default parameters.