From Modular to Distributed Open Architectures: A Unified Decision Framework

Note: This is the pre-print version of the following article: Heydari, B., Mosleh, M. and Dalili, K. (2016), From Modular to Distributed Open Architectures: A Unified Decision Framework. Systems Engineering, which has been published in final form.
This paper introduces a conceptual, yet quantifiable, architecture framework by extending the notion of system modularity in its broadest sense. Acknowledging that modularity is not a binary feature and comes in various types and levels, the proposed framework introduces higher levels of modularity that naturally incorporate decentralized architecture on the one hand and autonomy in agents and subsystems on the other. This makes the framework suitable for modularity decisions in Systems of Systems and for analyzing the impact of modularity on broader surrounding ecosystems. The stages of modularity in the proposed framework are naturally aligned with the level of variations and uncertainty in the system and its environment, a relationship that is central to the benefits of modularity. The conceptual framework is complemented with a decision layer that makes it suitable for use as a computational architecture decision tool to determine the appropriate stage and level of modularity of a system, for a given profile of variations and uncertainties in its environment. We further argue that the fundamental systemic driving forces and trade-offs of moving from monolithic to distributed architectures are essentially similar to those for moving from integral to modular architectures. The spectrum, in conjunction with the decision layer, can guide system architects in selecting appropriate parameters and building a system-specific computational tool from a combination of existing tools and techniques. To demonstrate the applicability of the framework, a case for fractionated satellite systems, based on a simplified demo of the DARPA System F6 program, is presented in which the value of the transition from a monolithic architecture to a fractionated architecture, as two consecutive levels of modularity in the proposed spectrum, is calculated and the ranges of parameters for which fractionation increases system value are determined.
Although modularity has long been suggested as an effective complexity management mechanism in natural and man-made systems, interest in modular design and modularity science has recently surged as a result of the increased complexity of most engineering systems and greater attention to new architecture schemes such as the Modular Open Systems Approach and flexible systems [1, 2]. The definition of modularity, in this sense, goes beyond simply having modular components, and refers to “a general set of principles that help with managing complexity through breaking up a complex system into discrete pieces, which can then communicate with one another only through standardized interfaces.”
The significance of modularity for complex systems was first identified by Herbert Simon in his classic 1962 paper, in which a complex system was regarded as one made up of a large number of distinct parts that interact in a non-trivial way. One way to reduce this complexity, Simon suggests, is to decrease the number of distinct parts by encapsulating some of them into modules, where the internal information of each module is hidden from other modules. He referred to the notion of near-decomposability as a common feature of many natural systems that enables them to respond effectively to external changes without disrupting the system as a whole. More recent studies further validate Simon’s hypothesis and demonstrate various forms of biological modularity in protein-protein networks, neural cells, and gene regulation networks [5, 6]. Modularity has also been recognized as an essential concept in architecting products, processes, and organizations and has been an active area of research in many academic disciplines such as management sciences, systems and mechanical engineering, and organizational design. It has been shown to increase product and organizational variety, the rate of technological and social innovation, market dominance through interface capture, and cooperation and trust in networked systems [10, 11], and to reduce cost through reuse. Following Conway’s law, it has been argued that modularity in products gives rise to modularity in the organizations that manufacture them and can result in benefits at the enterprise and organizational level. A comprehensive list of studies related to the advantages of modularity in products and organizations can be found in [14, 3, 15, 16].
With all these advantages, we might be encouraged to make systems as modular as possible, limited only by physical and practical considerations. However, observed trends in natural and engineered systems indicate that, under certain circumstances, some systems follow the opposite path, i.e., moving away from modularity toward more integration. The microelectronics industry provides the best example, where more integration has been pursued not only for electronic components but also for mechanical and biomedical parts, resulting in so-called system-on-a-chip solutions. Such reverse trends remind us to also investigate the costs of modularity and the disadvantages of over-modularity. Increasing modularity often requires developing additional interfaces and standards, and thus can increase the overall cost of the system, can result in static architectures and excessive product similarity, and might hamper innovation in design. Moreover, increased levels of modularity can adversely affect system performance when available margins are limited. For example, designing some microelectronics and communication systems requires combining various standard modules into an integral architecture to increase the overall system performance [20, 21, 22]. Finally, when certain modules are equipped with decision-making autonomy, they can cause coordination difficulties and sub-optimal aggregate behavior [23, 24, 25].
These opposing effects mean that system architects face a dilemma in deciding between modular and integral architectures. Moreover, since modularity is not a binary property, determining the right level of modularity becomes a complicated decision that requires formal frameworks and methods. Several conceptual frameworks and decision analysis methods and algorithms for modularizing an otherwise integral system have been suggested in the literature, examples of which are models based on complex network analysis, Fractal Product Design (FPD), Modular Product Development (MPD), Modeling the Product Modularity (MPM), Modular Function Deployment (MFD), Design Structure Matrix (DSM), Axiomatic Design (AD), and methods based on Real Options. However, the majority of these methods treat modularity as a binary feature rather than a continuum and do not relate modularity decisions to characteristics of the environment as defined earlier. DSM, a representation method for the interactions among system components, in particular has been widely used in many modularity decision methods [34, 35, 36]. Although the method uses a natural representation of internal system interactions and is simple to use, it is only effective in modularity decisions for relatively simple systems. DSMs have serious shortcomings when used in systems with higher levels of complexity, since they are static, do not incorporate the system’s interactions with the environment, and do not allow for a clear representation of multiple relations or time-domain evolution [37, 38]. Some extensions of the method, such as the Domain Mapping Matrix (DMM) and the Engineering System Matrix (ESM), have been proposed in order to overcome these limitations. However, they have not advanced much beyond framework definitions and multi-attribute relational descriptions of the system, and more research is needed to make these methods applicable to real problems.
Besides accommodating a non-binary notion of modularity and incorporating environment parameters, a novel modularity decision framework needs to extend to open architectures and Systems of Systems and be able to capture the impact of system modularity on various parameters of the surrounding ecosystem. These extensions must incorporate two important features, namely decentralized/distributed schemes and the autonomy of system components. These features contribute to system complexity, yet at the same time provide new capacities for complexity management mechanisms. They in turn require a transformation in the notion of modularity from static to dynamic, in which the modular structure of the system can dynamically and autonomously change in response to variations in the environment by leveraging the available autonomy in the system as well as its underlying decentralized network structure. This notion of modularity is generally missing in the literature on product and system design.
This paper introduces a conceptual, yet quantifiable, modularity framework that is a step toward addressing these shortcomings. First, it acknowledges that modularity is not a binary feature and comes in various types and levels. Second, the framework introduces higher stages of modularity that naturally incorporate decentralized architecture on the one hand and autonomy in agents and subsystems on the other. This makes the framework suitable for modularity decisions in systems of systems and for analyzing the impact of modularity on broader surrounding ecosystems. Third, the stages of modularity in the proposed framework are naturally aligned with the level of variations and uncertainty in the system and its environment, a relationship that is central to the benefits of modularity. Finally, the conceptual framework is complemented with a decision layer that makes it suitable for use as a computational architecture decision tool to determine the appropriate stage and level of modularity of a system, for a given profile of variations and uncertainties in its environment. We further argue that the fundamental systemic driving forces and trade-offs of moving from monolithic to distributed architectures are essentially similar to those for moving from integral to modular architectures. In both dichotomies, increased uncertainty, often in the environment, is a key contributor pushing a system toward a more decentralized architecture scheme, in which subsystems are loosely coupled. Trends in processing units in computer systems are illustrative here.
Depending on the relative rate of change and uncertainty, the CPU can be an integrated part of the system (e.g., a smartphone), be modular at the discretion of the user (e.g., a PC), transition to a client-server architecture to accommodate a smoother response to technology upgrades, security threats, or computational demand, or finally migrate to a fully flexible system with dynamic resource sharing (e.g., cloud computing).
The organization of the rest of the paper is as follows. In Section II, a method for characterizing the complexity of the environment and its impact on system modularity is proposed. Next, in Section III, a conceptual five-stage modularity spectrum is introduced, together with a computational decision layer that quantifies the value of architecture transitions along this spectrum. In Section IV, a formal method to quantify such transition decisions is presented. To show the applicability of the framework, one representative example is discussed, quantified, and simulated in detail using a case study related to fractionated spacecraft systems. Finally, Section V summarizes the conclusions and provides directions for future studies. Following the general theme of the paper, its structure is designed to be quite modular; thus, a reader who is interested in the conceptual parts, the theoretical foundation, and the setup of the framework can skip Section IV without losing much of the core message of the paper.
II. Three Drivers of Modularity and a Space-Time Model of a System’s Environment
Much of the increase in systems’ complexity can be attributed to mechanisms that enable systems to deal with external complexity resulting from variations and uncertainties in their environments. In other words, the complexity of the system is driven by the complexity of the environment in which it is planned, architected, and operated [41, 42]. The notion of environment goes beyond the physical context and includes factors such as consumer and stakeholder requirements; various market forces; and policy, budgetary, and regulatory issues that can affect the performance of the system, and to which the system is expected to respond. The increase in external complexity means that the system should be able to respond to a wide range of scenarios, many of which are not entirely known during the earlier phases of the system’s life cycle and can be subject to unanticipated changes. From this perspective, complexity management is a set of mechanisms that keep the system’s complexity (internal complexity) at an appropriate level that can respond to an expected level of external complexity, while staying robust, resilient, and within budget. Deviating from this level can result in performance degradation, where the system is unprepared to respond to the environment (under-complexity), or in unnecessary cost and damaging unintended consequences, where the complexity is above the required level (over-complexity).
In an earlier work, using a dynamic network formation model, we demonstrated that the appropriate level of modularity in a heterogeneous complex networked system can be described by three factors: the level of heterogeneity (diversity) in the environment, the average cost of resource exchange, and the resource processing capacity of the system’s constituents. “Resource” can have different interpretations depending on the context and can refer to information, power and energy, or materials. We showed, using analytical and computational methods, that an increase in the first two factors pushes networks toward more modularity, while the third factor has the opposite effect. We refer the reader to these earlier works for a thorough description of the assumptions, results, and implications of this framework.
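The qualitative direction of these three factors can be sketched in code. The toy score below is only an illustration of the stated relationships, not the authors’ network formation model; the function name `modularity_pressure` and its functional form are hypothetical.

```python
# Illustrative sketch (not the analytical model from the earlier work): a toy
# score capturing the qualitative direction of the three drivers. Higher
# environmental heterogeneity and higher resource-exchange cost push toward
# modularity; higher processing capacity of constituents pushes away from it.

def modularity_pressure(heterogeneity: float,
                        exchange_cost: float,
                        processing_capacity: float) -> float:
    """Toy indicator in [0, 1); the functional form is purely hypothetical."""
    if processing_capacity <= 0:
        raise ValueError("processing capacity must be positive")
    raw = (heterogeneity * exchange_cost) / processing_capacity
    return raw / (1.0 + raw)  # squash to [0, 1)

# Direction checks: each driver moves the score the way the text describes.
base = modularity_pressure(1.0, 1.0, 1.0)
assert modularity_pressure(2.0, 1.0, 1.0) > base   # more heterogeneity
assert modularity_pressure(1.0, 2.0, 1.0) > base   # costlier exchange
assert modularity_pressure(1.0, 1.0, 2.0) < base   # more capacity
```

The monotonicity, not the numeric value, is the point of this sketch; any increasing squashing function would serve equally well.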
These factors, once translated properly, help us identify and categorize key drivers of modularity in complex engineering systems. It is worth mentioning that even though many complex systems do not have a network architecture—which was a fundamental assumption that led to these results—the relationships and dependencies among their constituents can be represented using network structures, as is the case with the Design Structure Matrix, which can also be considered the adjacency matrix of a graph.
The first factor, i.e., the heterogeneity of the environment, can now be defined in a way that captures variations and uncertainties in the environment over time (temporal heterogeneity), as well as variations in the system’s environment at a given point in time (spatial heterogeneity). Examples of spatial heterogeneity include the diversity that exists in stakeholders’ requirements, in product consumers’ preferences, or in the expected missions of multi-mission systems. For a single stakeholder, a single mission, and no environmental uncertainty (e.g., technical failure, technology evolution, market fluctuation, funding availability, etc.), one will get the highest value by going for the most integral architecture. (Here we assume the possibility of the hypothetical scenario of a fully vertical design. In practice, even for zero space-time heterogeneity in the environment, some level of modularity still exists as a result of using standard components.) We then expect to gain more value from modularity if the stakeholders’ needs or expected missions become heterogeneous, since modularity provides the option for customization. From this perspective, having a single stakeholder whose needs might change over time (temporal heterogeneity), once adjusted using discount factors, creates a similar impact to having multiple heterogeneous stakeholders at a given point in time. Here, the basic intuition is that we can model the environment as a finite set of possible states and design the architecture to be able to respond to these states. From this perspective, an increase in both types of heterogeneity—i.e., spatial and temporal—adds to the number and diversity of such states and raises the level of adaptability required of the system.
The simple hypothetical case shown in Figure 1 can further clarify the notion of space-time heterogeneity. To keep things as simple as possible, we limit the number of time steps to two and assume the environment can be modeled by two binary parameters. These parameters, for example, can be taken to be market demand and temperature, each of which may take a low or a high value. This results in a total of four possible states of the environment. Depending on whether the environment is expected to be static or dynamic, and on the number of stakeholders (one or two), one can identify four scenarios, each represented by a separate panel in Figure 1. Panel (a) shows the states of the environment against time for a single stakeholder with a static environment. In Panel (b), the requirements of the single stakeholder are uncertain and can result in two different states of the environment. Panel (c) depicts the states of the environment for two stakeholders with static requirements over time, which result in two different states of the environment. Finally, Panel (d) shows the states of the environment for two stakeholders with uncertain requirements over time, which result in four possible states of the environment. This way, system (a) needs to respond to one environment state and system (d) to four; thus we can expect system (a) to have low modularity and system (d) to be highly modular. Systems (b) and (c), although different in the number of stakeholders and environment dynamics, both need to respond to two environment states; thus we can expect them to be similar in their level of modularity.
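The four-panel setup can be enumerated directly. The following is a hedged reconstruction of the Figure 1 scenario: the specific state assignments per panel are illustrative choices (the figure itself fixes which states occur), but the state counts per panel match the text.

```python
from itertools import product

# Two binary environment parameters (e.g., market demand and temperature)
# encoded as (demand, temperature) tuples with values 0 (low) / 1 (high).
# Required responsiveness is proxied by the number of distinct environment
# states the system must handle across time steps and stakeholders.

def distinct_states(states_per_time: list) -> int:
    """Union of possible environment states over all time steps."""
    seen = set()
    for states in states_per_time:
        seen |= states
    return len(seen)

# Panel (a): one stakeholder, static environment -> one state throughout.
a = [{(0, 0)}, {(0, 0)}]
# Panel (b): one stakeholder, uncertain requirements -> two possible states.
b = [{(0, 0)}, {(0, 0), (1, 0)}]
# Panel (c): two stakeholders, static but differing requirements -> two states.
c = [{(0, 0), (1, 0)}, {(0, 0), (1, 0)}]
# Panel (d): two stakeholders, uncertain requirements -> all four states.
d = [set(product([0, 1], repeat=2))] * 2

print([distinct_states(x) for x in (a, b, c, d)])  # -> [1, 2, 2, 4]
```

The count for panels (b) and (c) coming out equal mirrors the observation in the text that the two scenarios, though structurally different, demand a similar level of modularity.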
The heterogeneity level of the environment determines the level of responsiveness needed from the system, but to translate this into an actual modularity level, one needs to consider the other two factors. These two factors, namely the processing capacity of the system’s constituents and the cost of resource exchange among them, are intertwined in engineering systems. The cost of resource exchange, which defines the cost to establish and maintain a connection between two parts, includes various components such as the cost of interface design, maintaining an information link, losses and inefficiencies at the interface, and noise impacts. These cost components increase with the heterogeneity among the system’s constituents. Moreover, an increased range of customization options can result in more unintended consequences, including security risks that can ultimately compromise the robustness of the system. The processing capacity of nodes, as the third factor in the suggested model, captures various notions of budget in the system, and includes factors such as the monetary budget for the development/customization of interfaces, the information processing capacity of each subsystem, the information link and noise budget for wireless systems, and the cognitive budget for real-time decision-making when the system has a human in the loop.
III. Spectrum of Modular Architectures and Decision Operators
As discussed earlier, and as has been pointed out by other scholars in management sciences and engineering, modularity is not a binary property and needs to be considered as a spectrum of various levels and forms [47, 48]. This continuous nature applies to the level of a component’s modularity, as well as to the modularity level of certain subsystems or of the system as a whole. For some systems, continuous modularity can be modeled and quantified in a fairly intuitive and systematic way. For example, in the Complex Networks literature, network modularity is naturally defined as a continuous parameter ranging from zero to one, and continuous measures of modularity have been introduced for subsets of components in a network [49, 10]. For most other engineered systems, however, dealing with continuous spectra in systems architecture decisions can be challenging for several reasons. One reason is that using a general continuous spectrum would turn the decision problem into an optimization problem that can easily become computationally intractable and might not be easily reconcilable with the engineering design intuition that is needed for such decisions. Furthermore, treating modularity as a single continuous spectrum does not lend itself to hierarchical and layered architectures, where adding a new layer to the system results in a discontinuity in the level of modularity. As a result, the real-world interpretation of a given point on a continuous spectrum of modularity can become difficult and often subjective.
To address this dilemma, we suggest a hybrid solution that keeps the spectral nature of modularity, yet discretizes it into multiple stages, each representing a different modularity class. Within each stage, modularity can be approximated as continuous (or can be further discretized if needed), while a change in the modularity stage is considered a discontinuous shift.
III-A. Modularity Spectrum
Following the broad definition of modularity, our proposed framework is composed of five stages of modularity, denoted M1 to M5, and is shown in Figure 2. Higher stages represent more complex architectures from the perspective of a particular element in the functional domain and are able to respond to higher levels of complexity in the environment.
Stage M1 is considered the lowest level of modularity and describes integral products in which there is an unstructured mapping from functional elements to physical components. Engineering systems that are fully integral are rare, yet some products or subsystems such as pipes, communication transmission lines, or certain analog integrated electronic circuits belong to this stage. M1 can be considered the zero-modularity level and be used as a baseline in modularity quantification.
At the next stage, M2 represents systems of identifiable physical components and subsystems, each responsible for specific elements in the functional structure. For most engineering systems, this stage represents the lower bound of modularity. Components at this stage, although they have identifiable functions, cannot easily be customized, replaced, or upgraded during later stages of the system’s lifecycle. Some types of modularity introduced in the literature, such as modularity-in-design or certain types of slot modularity, can fall under the M2 stage. Smartphone or tablet hardware, most home appliances, and most car components and medical devices can be considered at this modularity stage. Following our stream of examples regarding computer processors, a smartphone’s central processor (CPU) is a good example: the design of the entire electronic board is such that it prevents users from replacing, customizing, or upgrading it. As with the other stages of the proposed spectrum, there are potentially many different architectures that can all be at M2, so it is important to keep in mind that stages do not represent unique architectures.
To allow additional flexibility, one needs to go to the next stage, M3. Here, similar to M2, there is a structured mapping between the functional and physical structure; however, components interact with each other through flexible standard interfaces. These standard interfaces allow the components to be replaced or upgraded without disrupting the rest of the system. This stage represents what is often referred to in the literature as product modularity in general and can itself take several different forms, such as component-sharing, component-swapping, mix, sectional, and bus modularity [47, 29, 50]. Returning to our processor example, contrary to M2 CPUs in smartphones, personal computers’ CPUs are at the M3 stage, which allows users to customize or upgrade the processing units according to their preferences. The extent of such customization or upgrade depends on the limitations of the standard interfaces. In the case of the CPU example, the limited bit-rate capacity or signal interference of on-board interconnections often makes users unable to keep upgrading beyond a few years, as newer technologies impose stricter requirements on interface latency, noise, and interference shielding.
Stages M1 to M3 represent modularity schemes for monolithic systems in which all components are in the same, and often compact, physical unit. These components might be decomposable or M3-modular, using standard interfaces. Extending the notion of modularity from monolithic to decentralized systems is crucial, since many of the trends in Cyber-Physical Systems, critical infrastructure systems, and the Internet of Things are moving toward decentralized architectures. Moreover, many of these systems include a set of autonomous agents, either in the form of autonomous machines or interactive human agents, that need to be rigorously considered in the design process. Decentralized schemes and autonomy contribute to systems’ complexity, yet at the same time provide new capacities for complexity management mechanisms. These in turn require a transformation in the notion of modularity from static to dynamic, in which the modular structure of the system can dynamically and autonomously change in response to variations in the environment by leveraging the available autonomy in the system as well as its underlying decentralized network structure. In light of this, the next two stages of the modularity spectrum are formulated.
Considering modularity as an architecture mechanism that enables the system to respond to a given complexity level in the environment (this was the motivation for moving from integral to modular in the first place), one can extend the notion of standard interfaces beyond what was defined at the M3 level, so that it includes platform architectures, wireless standards, and web protocols and thereby also covers decentralized systems. In such systems, certain functionalities of the otherwise monolithic system (we use monolithic as opposed to distributed/decentralized, and integral as opposed to modular) are transferred to different, and often remote, physical units. Here, we refer to these units as system fractions. Systems at the M4 and M5 stages are composed of more than one fraction (and, in certain systems, a large number of fractions). These distributed fractions then communicate and coordinate either in peer-to-peer schemes or using standard wireless or web-based protocols.
At M4, some critical resource-intensive functionalities are embedded in one fraction (or a small subset of fractions) that provides service to other fractions according to a pre-determined resource-allocation protocol. This creates a distributed yet static scheme, since the relationships of clients and servers are fixed and decided in advance. M4 has several advantages compared to M3. It improves system responsiveness to market and technology changes by facilitating and expediting the upgrade and maintenance of critical subsystems, thus adding to the overall system’s flexibility. It also provides some scalability (though not much, as will be discussed in the M5 description), since more client fractions can be added to the system during later stages of the system’s life cycle. Following our thread of examples related to computational components, starting from the iPhone CPU (M2) and moving to the desktop computer CPU (M3), at M4 multiple fractions can use the computational power of a dedicated fraction in a hub-and-spoke architecture. Other server-type fractions can also be considered for data transmitters, sensors, navigation units, or memory and storage, depending on the nature of the system.
Moving toward these stages adds another layer of complexity and a new set of parameters to modularity decision models. When moving from M3 to M4, for example, in addition to component-level modularity, one needs to also decide on the number of fractions; the allocation of functionality, physical components, and resources across these fractions; the connectivity structure (what is connected to what); and the communication protocols (peer-to-peer vs. web-based, for example). These decisions, together with the component modularity decisions (decided at the M3 level), determine the overall adaptability of the system in response to environment variations and uncertainties.
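The decision variables listed above can be collected into a simple data structure. The sketch below is a minimal, hypothetical representation of an M3-to-M4 fractionation decision; the class and field names (`FractionationDecision`, `allocation`, `links`, `protocol`) are illustrative, not part of the original framework.

```python
from dataclasses import dataclass, field

# Hypothetical container for the M3 -> M4 decision variables named in the
# text: number of fractions, allocation of functionality across fractions,
# connectivity structure, and communication protocol.

@dataclass
class FractionationDecision:
    n_fractions: int
    allocation: dict          # function name -> fraction id
    links: list = field(default_factory=list)   # (fraction, fraction) pairs
    protocol: str = "peer-to-peer"              # or "web-based"

    def validate(self) -> None:
        assert self.n_fractions >= 2, "M4 requires at least two fractions"
        assert all(0 <= f < self.n_fractions for f in self.allocation.values())
        assert all(0 <= u < self.n_fractions and 0 <= v < self.n_fractions
                   for u, v in self.links)

# Example: a hub-and-spoke M4 design with computation on a server fraction.
design = FractionationDecision(
    n_fractions=3,
    allocation={"computation": 0, "sensing": 1, "storage": 2},
    links=[(0, 1), (0, 2)],
)
design.validate()
```

A structure like this makes the M4 decision space explicit and easy to enumerate or feed into a value-evaluation routine.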
The static nature of M4 can limit the flexibility and scalability of the system. Moreover, systems with M4 architecture are in general not highly resilient, especially in response to targeted attacks, as the critical fractions are easily identifiable and the overall performance of the system depends on them. These limitations motivate moving to a dynamic scheme of connectivity and resource sharing, formulated as the M5 stage. The M5 stage is similar to M4 in having multiple fractions with heterogeneous functionality, which perform different functions and communicate and coordinate resource allocation among themselves. It is, however, different from M4 in the way the resource allocation is realized. Unlike M4, in which clients and servers are fixed and pre-planned, M5 systems have multiple resource-sharing possibilities that increase both flexibility and scalability. This difference significantly affects the network connectivity structure of these two schemes, where fractions are the nodes of the network and resource-sharing paths are the links. Whereas the connectivity structure of M4 systems is closer to tree or two-mode networks with no loops, the structure of M5 systems has numerous loops and multi-path connections that can take the complexity of the system to a much higher level and cause new systemic problems such as coordination, cooperation, and proneness to cascading failure [51, 24].
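The structural contrast drawn here, tree-like M4 networks versus loop-containing M5 networks, can be checked mechanically with a standard cycle test. The fraction network below is a hypothetical example, not a specific system from the paper.

```python
# Sketch: distinguish a tree-like (M4-style) resource-sharing network from a
# loopy (M5-style) one with an undirected cycle check via union-find.

def has_loop(n_nodes: int, edges: list) -> bool:
    """Return True if the undirected edge list contains a cycle."""
    parent = list(range(n_nodes))

    def find(x: int) -> int:
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:          # joining two already-connected fractions
            return True       # creates a second path, i.e., a loop
        parent[ru] = rv
    return False

# M4-like hub-and-spoke: one server fraction (0) serving clients 1..3.
m4_edges = [(0, 1), (0, 2), (0, 3)]
# M5-like dynamic sharing: an extra client-to-client link closes a loop.
m5_edges = m4_edges + [(1, 2)]

assert not has_loop(4, m4_edges)
assert has_loop(4, m5_edges)
```

The multi-path connections that make M5 flexible are exactly the links that make `has_loop` return True; the same redundancy that provides resilience is what complicates coordination.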
In addition to dynamic resource sharing, one can add another degree of flexibility at M5 by allowing for a dynamic connectivity structure in response to changes in the environment. Determining the connectivity structure of systems with heterogeneous components in response to environmental uncertainty is an important design decision and creates an array of interesting and challenging research problems at the junction of engineering, economics, and computer science. The sheer number of possibilities for connectivity structures and resource-sharing paths, even for a small set of fractions, makes pre-planning of such systems infeasible. As a result, in most M5 systems, network nodes (fractions) are given some level of autonomy to create and delete links and to prioritize resource allocation requests. The autonomous, dynamic, networked nature of systems at this stage makes the analysis and design of such systems challenging, and we can expect these challenges to motivate a considerable volume of interdisciplinary and systems-oriented research in the coming decade.
It is worth emphasizing that a given architecture can be at different stages at the same time for different elements in the functional domain. For example, a cloud computing system can be at the M5 stage for data processing, while being at M3 or even M2 from the perspective of data storage (relying on local hard drives). However, a portion of the cost of moving toward higher stages is shared by more than one functionality, which in turn means that once the system is transferred to a higher stage for one function (e.g., computation), the transfer of other functionalities to this new stage can be performed at less cost. This path dependency in architecture transitions needs to be considered in quantitative decision models.
III-B. M+ Decision Operators
In order to transform the proposed conceptual framework into a computational tool, we need to add a decision layer to the model that determines the optimal level of modularity for a given functionality of a system under a certain profile of the environment. This decision involves selecting the stage of modularity, as well as the design instantiation within that stage. Here, we focus on the former by introducing a set of operators (M+ operators) that calculate the value of the transition from one stage of modularity to its immediate next stage. The proposed decision operators compare the value of the system prior to the operation to the value of the system afterward by calculating the probability distribution of the value difference between the two consecutive stages. This allows decisions to be made not only on the basis of the average value difference but also on the basis of the decision-maker’s risk tolerance.
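The evaluation step of such an operator can be sketched as a Monte Carlo comparison of two stage-value models under environment uncertainty. This is a hedged illustration of the idea, not the paper’s evaluation engine: the function name `mplus_decision`, the uniform scenario draw, and the toy value models `v_m3`/`v_m4` are all assumptions made for the example.

```python
import random

# Hypothetical sketch of an M+ operator's evaluation: sample the value
# difference between stage M(i+1) and stage M(i) over uncertain environment
# scenarios, then decide using both the mean gain and a risk tolerance.

def mplus_decision(value_i, value_next, n_samples=10_000,
                   risk_tolerance=0.25, seed=7):
    """Transition if the expected gain is positive AND the probability of a
    loss stays below the decision-maker's risk tolerance."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_samples):
        scenario = rng.random()          # one draw of environment uncertainty
        diffs.append(value_next(scenario) - value_i(scenario))
    mean_gain = sum(diffs) / n_samples
    p_loss = sum(d < 0 for d in diffs) / n_samples
    return mean_gain > 0 and p_loss < risk_tolerance, mean_gain, p_loss

# Toy value models: the more modular stage pays a fixed interface cost but
# gains value as environment volatility (scenario near 1) grows.
v_m3 = lambda s: 10.0
v_m4 = lambda s: 9.0 + 3.0 * s

go, gain, risk = mplus_decision(v_m3, v_m4)
# Here the mean gain is positive (~0.5) but the loss probability (~1/3)
# exceeds the 0.25 tolerance, so the transition is rejected.
print(go, gain, risk)
```

The example shows why the distribution matters: a transition with a positive expected gain can still be declined by a risk-averse decision-maker, which is exactly the behavior the decision layer is meant to support.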
One can consider transitions between different levels of modularity as a value-seeking process that also enables future evolution of the system. This view of complex systems matches existing theoretical conjectures that natural selection favors more evolvable systems. Building on this metaphor, and considering variation and selection as two of the key elements of the value-seeking evolutionary process, one can think of each modularity stage as determining the limits of variation. Moreover, within each stage, a selection process is needed to decide on the fittest design instantiation.
Here, we introduce a set of operators that address the first element by determining the optimal stage of modularity. However, as has been noted in the literature, variation and selection in the human design process are more intertwined than they appear to be in biological systems. This underscores a simplifying assumption in the decision layer of our proposed framework, in which these two steps are performed sequentially. This can be a sound assumption if one uses multiple iterations of these value-determining steps. One can further assume that the value of a transition operator between two stages should use the best-case design instantiation of the source stage (the one with the highest value in its modularity stage), since this instantiation has already been determined in the previous round of iteration.
As mentioned in the previous section, M1 is the lowest level of modularity for most engineering systems; therefore, the first decision operation, the Splitting Operation, refers to the transition from M1 to M2 by developing and using proper standard interfaces. The Fractionation Operation takes a system from M2 to M3 by moving one or more of its subsystems to other fractions.
Although the specifics of M+ evaluation depend on individual systems and their parameters, we can provide a procedural algorithm that acts as the evaluation engine for decision operations. To measure the value of an M+ operation, we have to compare the value of the system prior to the operation to the value of the system afterward. Such evaluation requires knowledge of the system and its environment. Figure 3 shows the input and output of the evaluation engine. The value of the system at each modularity level can be calculated via any of the standard system evaluation methods (e.g., scenario analysis, discounted cash flow analysis) and should consider the following parameters:
Technical Parameters: for example, the probability density of time to failure, time to availability of an upgrade, maximum number of modules allowed, and maximum communication bandwidth.
Economic Parameters: for example, the number of modules in demand at a given time, the launch and operational cost of a module, and the rate of value generation for various module types.
Life Cycle Parameters: Total operation time, budget, and maximum time to initial deployment.
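The three parameter groups above can be bundled into a single input structure for the evaluation engine of Figure 3. The sketch below is illustrative only; all field names are our own assumptions, not prescribed by the framework:

```python
from dataclasses import dataclass

@dataclass
class EvaluationInputs:
    """Illustrative grouping of the evaluation engine's inputs."""
    # technical parameters
    failure_time_dist: dict    # e.g., {"payload": ("weibull", 15, 1.7)}
    upgrade_time_dist: dict    # time-to-upgrade distributions per module
    max_modules: int
    max_bandwidth: float
    # economic parameters
    module_demand: dict        # modules in demand at a given time
    launch_cost_per_kg: float
    value_rate: dict           # value generation rate per module type
    # life cycle parameters
    lifetime_years: float
    budget: float
    max_time_to_deploy: float
```

Grouping the inputs this way keeps the evaluation engine's interface stable while individual distributions or cost figures are swapped for different systems.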
Calculating the value of the decentralization operation, i.e., moving from M3 to M4, is more challenging because of the dynamic nature of the resulting system. The cost of moving to M4 depends on the ratio of clients to servers, the total heterogeneity of the system, and the resource capacity of the fractions. Given the autonomous behavior of system units at M4 and the dynamic nature of resource sharing and the connectivity structure, calculating the associated costs and benefits using analytical methods is difficult at this level of modularity.
The underlying network structure, together with the dynamic, autonomous behavior of some system constituents, requires designers to use multi-agent systems approaches that incorporate the dynamics and evolution of systems with the strategic, autonomous behavior of interconnected constituents. A high-level sketch of the method that calculates the value of the decentralization operation is depicted in Figure 4. Further details, more elaborate methods, and illustrative case studies for the M3 to M4 transition create a number of interesting and challenging research questions that can be pursued in future publications.
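As a toy illustration of the kind of autonomous behavior involved, consider agents that keep a link only when its benefit exceeds its cost. A full M3-to-M4 analysis would require a genuine multi-agent simulation with strategic dynamics; the sketch below, with invented names and payoffs, only hints at the modeling style:

```python
def form_links(benefit, link_cost):
    """Toy strategic link formation: each pair of agents keeps a link
    only if its benefit exceeds the link cost (illustrative only).

    benefit: dict mapping agent pairs (i, j) to the value of link (i, j).
    link_cost: uniform cost of maintaining any single link.
    """
    return {pair for pair, b in benefit.items() if b > link_cost}
```

In a real M4 system the benefits themselves would evolve with the network, so link decisions would be revisited each round rather than settled once.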
IV Value of the Fractionation Operation for a Spacecraft
As mentioned earlier, the specific details of the model and the implementation of the proposed operations depend on the context of the problem, the depth and time-scale of the analysis, the system boundary, and the types of uncertainties to consider. Moreover, similar to Real Options Analysis, there are different ways to implement the operations depending on the underlying assumptions and available computational resources. To show an illustrative example of how this framework can be implemented, we apply it to a simplified case of fractionating a spacecraft. We use this example to show a real-world realization of modularity stages (M2 and M3 in this case) on the one hand, and the way the proposed framework extends the notion of modularity by considering distributed systems on the other. It is worth emphasizing that the case in this section is not meant to demonstrate the full capability of the framework, the complexities of implementation, or a detailed solution for a fractionated spacecraft.
The presented case is constructed based on a simplified architecture proposed as a part of the DARPA System F6 program, whose objective is to determine the feasibility of replacing a number of large, expensive, and rigid monolithic satellite systems with agile, flexible, and evolvable systems based on reconfigurable fractions. Traditional monolithic satellites are at the M2 stage of modularity and have limited ability to respond to variations and uncertainties in the environment. In a fractionated architecture, however, subsystems are placed into separate fractions that communicate wirelessly to deliver the capability of the original monolithic system [55, 56], thus moving the system to level M3 (fractionated) in the proposed framework. Clearly, this transition does not make sense for all satellite systems, so the system architect needs to know the conditions in the system and its environment under which the transition toward a fractionated architecture increases the overall value of the system. While dynamic resource sharing (the M4 stage) has been proposed to further increase the flexibility of fractionated satellite systems, we restrict our attention to a static scheme, consider M3 as the ultimate level of flexibility for this case, and calculate the value operator for moving to this level.
The system in this case study contains four fractions flying in low Earth orbit; one fraction carries the payload (sensor), one fraction carries a high-performance computing unit, one fraction provides high-speed downlink capabilities, and a final fraction provides broadband access to a ground network through the Inmarsat I-4 GEO constellation. Data collected by the sensor needs to be processed and transmitted to Earth via a high-speed downlink, while a connection from Earth to the system is needed for maintenance. In the fractionated architecture, these functions are separated into four fractions, while the relations between fractions are fixed and no reconfiguration/reuse of assets is planned. Each fraction in the system carries an F6 Tech-Package (F6TP), which enables fractions to communicate wirelessly. Figure 5 illustrates the allocation of subsystems for the monolithic and fractionated architectures.
In order to make the calculations feasible for a small case study, we make a number of simplifying assumptions. Most of these assumptions, as will be explained, do not change the logic behind the proposed framework, yet they are needed to keep the case calculation tractable. In other cases, for example limiting the number of uncertainties, the simplifying assumptions are made in order to retain the focus of the paper. As with many other computational methods, such as NPV and Real Options, the curse of dimensionality can also create computational problems for the framework. We discuss this in the last part of the paper and in our future publications.
To evaluate the fractionation operation (the M2 to M3 transition), we compare this fractionated architecture with a monolithic system comprised of the same four subsystems at the M2 level of modularity. Note that since we are comparing the value of the fractionated system to the monolithic system, the design costs of all subsystems are already taken into account. For simplification, we assume the project lifetime is fixed and the system is managed to keep its functionalities uninterrupted until the end of the project lifetime. This assumption results in both systems gaining the same benefit. As a result, the value of the operation can be represented by the cost difference of the two systems. To simplify calculations, we compare the cost of running the project to the end of its lifetime at the two different modularity levels. However, a similar method can be used to compare the total value of the systems even if the benefits are not identical. We further simplify by considering only the following uncertainties in our calculations:
Component Failure: We assume time to failure for a subsystem follows a known distribution that can be approximated using historical data, and we assume various subsystems have independent failure times.
Technological Obsolescence: We assume that subsystems can become obsolete via technology upgrades and assume each technology has its own obsolescence time distribution. We also assume that different subsystems have independent time to obsolescence.
We also assume that obtaining a subsystem has a fixed cost through time, but future expenditures are discounted at a given interest rate. We effectively assume that replacement is immediate once it is needed; however, this assumption can easily be lifted without changing the underlying method. The parameters we use for the calculations include subsystem costs and masses; bus cost, mass, and capacity; distribution parameters for component failure and technological obsolescence; and launch costs, assumed to be proportional to mass.
First, let us consider the replacement time for a fraction in the fractionated system. The fraction has to be replaced either because it has failed or because it is obsolete. Given that these two are independent events, we can analytically compute the probability distribution function of the time to replacement from the probability distributions of the time to failure and the time to obsolescence. Similarly, the monolithic system has to be replaced when any of its components requires replacement, and its replacement time distribution can also be calculated analytically (Section IV-A). Once the distribution of the time to replacement is calculated, replacing the fractions becomes a renewal process with a known distribution. Because an analytical solution for the cost distribution is not easily tractable, we rely on simulation to approximate the cost distributions.
IV-A Computing the replacement time probability distribution
Given the probability distribution of the time to failure for each subsystem, the bus, and the F6TP, and the probability distribution of each subsystem's obsolescence time, we calculate the probability distribution of the replacement time. We assume that the times to failure and the technology obsolescence times are independent random variables.
For a fraction, we consider its payload, the F6TP, and the bus, each having a time to failure with $f_i$ and $F_i$ ($i = 1, 2, 3$) being its probability density function (PDF) and cumulative distribution function (CDF), respectively. The payload's obsolescence time is also given by a random variable with probability density and cumulative distribution functions $g$ and $G$. We can then compute the CDF of the fraction's replacement time, $F_R(t)$, as follows. Note that the fraction has to be replaced if any one of its three components fails, or if its payload becomes technologically obsolete. Therefore:

$$F_R(t) = 1 - \big(1 - F_1(t)\big)\big(1 - F_2(t)\big)\big(1 - F_3(t)\big)\big(1 - G(t)\big)$$

Thus, the probability density function of the replacement time is:

$$f_R(t) = \frac{dF_R(t)}{dt} = \sum_{i=1}^{3} f_i(t)\big(1 - G(t)\big)\prod_{j \neq i}\big(1 - F_j(t)\big) + g(t)\prod_{j=1}^{3}\big(1 - F_j(t)\big)$$
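The complement-of-survival-products construction for the replacement-time CDF can be checked numerically. The helper name below is ours; the example uses exponential components, for which the product form collapses to a single exponential (the minimum of Exp(a) and Exp(b) is Exp(a + b)):

```python
import math

def replacement_cdf(t, cdfs):
    """CDF of a fraction's replacement time: the fraction is replaced
    at the earliest of its independent component failure/obsolescence
    times, so F_R(t) = 1 - prod_k (1 - F_k(t))."""
    survive_all = 1.0
    for F in cdfs:
        survive_all *= 1.0 - F(t)
    return 1.0 - survive_all

# exponential component CDFs with rates 0.5 and 0.25
cdf_a = lambda t: 1.0 - math.exp(-0.5 * t)
cdf_b = lambda t: 1.0 - math.exp(-0.25 * t)
```

For these two components, `replacement_cdf(t, [cdf_a, cdf_b])` equals the CDF of an exponential with rate 0.75, confirming the construction.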
We can compute the probability density function and the cumulative distribution function of the replacement time of the monolithic system in a similar way, with minor differences: the F6 Tech-Package is not part of a monolithic system, so its time to failure does not enter the calculations. On the other hand, we have to consider all four subsystems, their times to failure, and their technology obsolescence times.
IV-B Simulation setup to calculate the value distribution
Table I: Parameter values used in the simulation.

Component                        | Weibull Alpha | Weibull Beta | Component Cost (k$) | Mass (kg)
Payload - Figure 6               | 15            | 1.7          | 27,000              | 50
Payload 1 - Figure 7             | 15            | 1.7          | 1,600               | 25
Payload 2 - Figure 7             | 15            | 1.7          | 11,600              | 350
Payload Bus (Fractionated)       | 108           | 1.7          | 28,000              | 180
Communication Bus (Fractionated) | 108           | 1.7          | 29,000              | 200
Downlink Bus (Fractionated)      | 108           | 1.7          | 25,000              | 150
Processor Bus (Fractionated)     | 108           | 1.7          | 26,000              | 160
In this section, we calculate the probability distribution of the value of the fractionation operation (the M2 to M3 transition) based on the cost of building and launching the subsystems and the probability distributions of their replacement times.
For a monolithic satellite system to remain functional, the whole system has to be replaced once one of its components becomes obsolete or fails. For a fractionated system, however, only the fraction associated with the dysfunctional component has to be deployed and launched again. Hence, a lower bound for the value of the fractionated architecture can be calculated by comparing the cost imposed by each component replacement in a fractionated architecture to that in an equivalent monolithic architecture over its lifetime. (Higher value can be achieved through the scalability and evolvability that are intrinsic to fractionated architectures.)
We can formulate the cost of running the system as follows. For each fraction $i$, let the sequence of random variables $T_{i,1}, T_{i,2}, \dots$ represent the times between two consecutive replacements. A new instance of fraction $i$ has to be deployed at times $t_{i,k} = \sum_{j=1}^{k} T_{i,j}$, $k = 1, \dots, n_i$, in order for the system to function without interruption until the end of its lifetime, where $n_i$ is the largest integer such that $t_{i,n_i} \leq L$ and $L$ is the project lifetime. Suppose that $c_i$ is the cost of building and launching a new instance of fraction $i$. The cost of running a system with $N$ fractions is the sum of the costs of its fractions, discounted to the present time at discount rate $r$:

$$C = \sum_{i=1}^{N} \sum_{k=1}^{n_i} c_i \, (1 + r)^{-t_{i,k}}$$

For a monolithic architecture, we can consider a single fraction in this model.
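Under this cost formulation, the discounted cost of a single fraction follows directly from its replacement intervals. A minimal sketch (the function name is ours):

```python
def fraction_cost(intervals, unit_cost, rate, lifetime):
    """Discounted replacement cost of one fraction: a new instance is
    deployed at each cumulative replacement time t_k <= lifetime,
    costing unit_cost discounted by (1 + rate) ** -t_k."""
    cost, t = 0.0, 0.0
    for dt in intervals:
        t += dt
        if t > lifetime:
            break
        cost += unit_cost * (1.0 + rate) ** (-t)
    return cost
```

With a zero discount rate, the cost reduces to the unit cost times the number of replacements that fall within the lifetime.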
In the Monte Carlo simulation setup, we sample components' replacement times from their probability distributions. We find the component with the earliest replacement time and calculate the cost associated with its replacement in both the monolithic and the fractionated architecture. We continue this for both architectures until the earliest replacement time is greater than the given lifetime. For each run of the simulation, we calculate the cost difference between running the fractionated system and the monolithic system and discount it to the present time. Repeating this process a large number of times yields an approximation of the value distribution of the fractionated architecture over the monolithic architecture.
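The simulation loop can be sketched as follows. This is a simplified variant that samples the two architectures independently (the setup described above reuses the same draws for both), and the samplers and cost figures are placeholders; real runs would draw Weibull/lognormal intervals per Table I:

```python
import random

def run_cost_fractionated(samplers, frac_costs, lifetime, rate, rng):
    # each fraction is replaced on its own clock
    total = 0.0
    for name, draw in samplers.items():
        t = draw(rng)
        while t <= lifetime:
            total += frac_costs[name] * (1.0 + rate) ** (-t)
            t += draw(rng)
    return total

def run_cost_monolithic(samplers, mono_cost, lifetime, rate, rng):
    # the whole system is replaced when its earliest component goes
    total, t = 0.0, 0.0
    while True:
        t += min(draw(rng) for draw in samplers.values())
        if t > lifetime:
            return total
        total += mono_cost * (1.0 + rate) ** (-t)

def fractionation_value(samplers, frac_costs, mono_cost, lifetime,
                        rate=0.02, runs=2000, seed=1):
    """Monte Carlo sketch of the fractionation value distribution:
    per run, (monolithic cost) - (fractionated cost), discounted."""
    rng = random.Random(seed)
    return [run_cost_monolithic(samplers, mono_cost, lifetime, rate, rng)
            - run_cost_fractionated(samplers, frac_costs, lifetime, rate, rng)
            for _ in range(runs)]
```

The returned sample approximates the value distribution from which the mean and the risk quantiles discussed earlier can be read off.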
Table I presents the values used in the simulation. The input to the simulation includes the subsystems' costs, masses, and failure and obsolescence probability distribution parameters. We adopt the typical values and distribution functions suggested for satellite systems design. To approximate subsystem failure, we use a Weibull probability distribution. To approximate technological obsolescence, we employ a lognormal distribution and assume subsystems' obsolescence times are independent. We assume a mean value of 1 year with a standard deviation of 3 years for the obsolescence distribution. We also assume that buses and the F6TP do not become obsolete. Moreover, we consider a launch cost of $30k per kg. We use an interest rate of 2% in our simulation for a project lifetime of 20 years. All of these assumptions and values can easily be tailored to other projects and circumstances.
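The stated obsolescence moments (mean 1 year, standard deviation 3 years) must be converted into the (mu, sigma) parameters of the underlying normal before lognormal sampling. A small sketch using the standard moment relations (the helper name is ours):

```python
import math

def lognormal_params(mean, std):
    """Convert a desired mean/std of a lognormal variable into the
    (mu, sigma) of the underlying normal, via
    mean = exp(mu + sigma^2 / 2) and
    var  = (exp(sigma^2) - 1) * exp(2*mu + sigma^2)."""
    sigma2 = math.log(1.0 + (std / mean) ** 2)
    mu = math.log(mean) - sigma2 / 2.0
    return mu, math.sqrt(sigma2)
```

For a mean of 1 and a standard deviation of 3, this gives sigma^2 = ln 10 and mu = -ln 10 / 2; the resulting (mu, sigma) can be passed to any lognormal sampler.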
Figure 6 illustrates the cost of operating a fractionated satellite system, where each main subsystem is assigned to a separate fraction, versus a monolithic system. The solid curves in Figure 6 represent the expected cost of each architecture during the lifetime. The curves have relatively low slopes at the beginning of the system lifetime due to the low probability of obsolescence and failure in the early years. The initial cost of running the fractionated system is greater than that of the monolithic system due to the fractionation cost, i.e., the cost of building additional subsystems such as the F6TP. However, the expected lifetime cost of the monolithic system increases more quickly over time because the whole system must be redeployed and launched when a component fails or becomes obsolete.
The boxplot in Figure 6 depicts the probability distribution of cost. The cost variance at each point is the result of two opposing forces. On the one hand, the intrinsic property of the underlying stochastic process increases the variance over time. On the other hand, whenever a fraction is deployed and launched during the lifetime, the replacement times of its components are reset, which suppresses the variance of the cost. In the monolithic architecture, when a component fails, the replacement time of the whole system is reset at a high cost. In case of failure of the equivalent component in the fractionated architecture, however, the replacement times of the other components do not change, and the failure results in a lower cost. Figure 6 shows that the cost variances of both systems increase with time. However, the monolithic architecture has a higher variance at every time step due to the dominant impact of the cost associated with each incident of subsystem replacement.
At this point, we only look at the value of the fractionation operation. Figure 7 shows the value of the fractionated architecture for the two different payloads listed in Table I. Payload 2 has a higher mass and is more expensive than Payload 1. It can be observed that fractionation does not result in significant value for the system with Payload 1 over the simulated lifetime. The system with Payload 2, however, attains a positive fractionation value earlier in the project lifetime.
Since the F6 Tech-Package will only be part of a fractionated system, it is important to analyze how its characteristics affect the value of fractionation. Figure 8 depicts the effect of the reliability parameters of the F6TP on the fractionation value of the system. In Figure 8, beta is the shape parameter and the average lifetime is the mean value of the Weibull distribution. The results in Figure 8 suggest that the value of fractionation is highly sensitive to the reliability of the F6TP. If the average lifetime of the F6TP is less than a certain value (e.g., an average lifetime of 35 years for beta = 5), fractionation imposes unnecessary costs on the system. This is due to the large number of fraction replacements caused by F6TP failures when compared to the monolithic architecture.
V Conclusion and Future Directions
As a common feature of many complex systems, modularity is a primary mechanism for complexity management in natural, social, and engineering systems, with several associated advantages and disadvantages identified in the literature. On the one hand, modularity increases the adaptability and evolvability of systems and enables local changes without disrupting the whole system. On the other hand, it introduces additional costs, makes finding globally optimized designs more difficult, and hinders innovations that require a major change in the architecture. These fundamental trade-offs make it necessary to determine under what conditions modularity increases the overall system value. Moreover, with the increasing complexity of large-scale engineering systems, in which features such as decentralized architecture and autonomy of system components are becoming increasingly common, we need to extend the notion of modularity to a complexity management mechanism that incorporates these novel schemes, so that system architectures can take advantage of them in future generations of engineering systems.
In this paper, we used a general definition that recognizes modularity as a set of principles that enhance the management of complexity by breaking up a complex system into discrete pieces that communicate through standard interfaces. This definition encompasses a wide spectrum of modularity in complex systems that goes beyond component modularity and extends to decentralized and autonomous networked systems. We proposed a domain-independent framework that helps with understanding the trade-offs of modularity and the dependency of these trade-offs on the characteristics of the system and its surrounding environment. The proposed framework accommodates different classes of architecture and allows designers to decide the class and the stage and level of modularity of a system as a function of uncertainty parameters in the environment. This unification originates from a theoretical complex network model in which the structural (architectural) mechanisms of complexity management are divided into three general categories: space-time heterogeneity in the environment, the transaction cost of resource exchange between system components, and the available resource budget per system component. This paper argues that the same combination of factors that push an integral system toward modularity, once intensified, is responsible for pushing it further to higher stages of structural complexity, as noted on the proposed spectrum. The paper also provides a novel way of looking at environment complexity by unifying complexity factors related to static variations, such as heterogeneity in customers' preferences or stakeholders' requirements, with factors related to the temporal variation of an uncertainty. Quantification of this notion of environment complexity and its impact on the architecture decisions of a system can be explored in future work.
To make this framework computationally feasible, we discretized the spectrum into five major stages of modularity: fully integral (M0), integral yet decomposable (M1), modular yet monolithic (M2), static distributed (client-server architecture, M3), and dynamic distributed architecture (M4). We introduced a set of value operators (M+ operators) that quantify the net value of changing the level of modularity between two adjacent stages on this spectrum. The spectrum, in conjunction with the M+ operators, can guide designers in selecting appropriate parameters and building a system-specific computational tool from a combination of existing tools and techniques. To illustrate the functionality of the proposed framework in a rather simple system, we applied it to the case of fractionated satellite systems, as a part of the DARPA System F6 program. We analyzed the value of fractionation as a function of uncertainty parameters by quantifying the fractionation operation in the proposed framework.
The proposed framework also has some limitations that can be addressed in future research. First, while conceptually general, the framework might not scale well as the number of uncertainty parameters increases. As a result, one proposed direction for the future of this research is to devise methods that alleviate the curse of dimensionality. Second, the key drivers of modularity in the paper are based on theoretical work that is mathematically verified but not empirically validated. While several examples are provided in this work to shape an intuition regarding these drivers, thorough empirical work to further validate the assumptions of the framework seems a natural next step. Third, this paper treated the M3 to M4 transition at a very general level. Systems at the M4 stage are becoming increasingly crucial, given that many socio-technical systems are now moving toward peer-to-peer resource sharing and autonomous schemes. The decision layer will require a more specific computational procedure to formulate this transition. Finally, the relationship between the proposed framework and other taxonomies of system architecture, especially its precise relationship to layered systems, which can span more than one of the proposed stages, can be further elaborated. As for the case study presented in this paper, our primary intention was to illustrate the applicability of the framework to real systems, so we made a series of simplifying assumptions: we limited the uncertainties to technology obsolescence and technical failure. The model of the environment can be expanded by adding more sources of uncertainty and including spatial heterogeneities such as diversity in stakeholders' preferences. Finally, we mainly focused on flexibility and uncertainty management, and ignored the value of scalability, resilience, and evolvability, which are all important aspects of distributed architecture. Hence, the value presented is a lower bound for the proposed architecture.
Integrating the added value related to resilience and evolvability, and the impact of architecture and modularity transitions on innovation, collaboration, and market competition, will add another set of interesting questions for future research by the Systems Engineering community.
This work was supported in part by DARPA/NASA Ames under Contract NNA11AB35C through the Fractionated Space Systems F6 Project. The authors would like to thank DARPA and all the government team members for supporting this work as a part of the System F6 program. In particular, we would like to thank Paul Eremenko, Owen Brown, and Paul Collopy, who led the program and inspired us at various stages of this work. We would also like to thank Steve Cornford from JPL for his constructive feedback at various stages of this project. Our colleagues at Stevens Institute of Technology, especially Dr. Roshanak Nilchiani, were a great support to this project. We would also like to thank other members of our research group, the Complex Evolving Networked Systems lab, especially Peter Ludlow, for various comments and feedback that improved this paper.
-  E. Fricke and A. P. Schulz, “Design for changeability (dfc): Principles to enable changes in systems throughout their entire lifecycle,” Systems Engineering, vol. 8, no. 4, 2005.
-  R. Nilchiani and D. E. Hastings, “Measuring the value of flexibility in space systems: A six-element framework,” Systems Engineering, vol. 10, no. 1, pp. 26–44, 2007.
-  R. N. Langlois, “Modularity in technology and organization,” Journal of Economic Behavior & Organization, vol. 49, no. 1, pp. 19–37, 2002.
-  H. A. Simon, “The architecture of complexity,” Proceedings of The American Philosophical Society, pp. 467–482, 1962.
-  J. Clune, J.-B. Mouret, and H. Lipson, “The evolutionary origins of modularity,” Proceedings of the Royal Society B: Biological Sciences, vol. 280, no. 1755, 2013.
-  D. M. Lorenz, A. Jeng, and M. W. Deem, “The emergence of modularity in biological systems,” Physics of Life Reviews, vol. 8, no. 2, pp. 129–160, 2011.
-  K. T. Ulrich and S. D. Eppinger, Product design and development. New York: McGraw-Hill, 2000, vol. 1.
-  C. Y. Baldwin and K. B. Clark, Design rules, Volume 1: The power of modularity. MIT Press, 2000, vol. 1.
-  W. L. Moore, J. J. Louviere, and R. Verma, “Using conjoint analysis to help design product platforms,” Journal of Product Innovation Management, vol. 16, no. 1, pp. 27–39, 1999.
-  D. Gianetto and B. Heydari, “Network modularity is essential for evolution of cooperation under uncertainty,” Scientific Reports, vol. 5, 2015.
-  ——, “Catalysts of cooperation in system of systems: The role of diversity and network structure,” Systems Journal, IEEE, vol. 9, no. 1, pp. 303–311, March 2015.
-  M. E. Conway, “How do committees invent?” Datamation, vol. 14, no. 4, pp. 28–31, 1968.
-  R. Sanchez and J. T. Mahoney, “Modularity, flexibility, and knowledge management in product and organization design,” Strategic Management Journal, vol. 17, pp. 63–76, 1996.
-  M. A. Schilling and H. K. Steensma, “The use of modular organizational forms: An industry-level analysis,” Academy of Management Journal, vol. 44, no. 6, pp. 1149–1168, 2001.
-  D. Campagnolo and A. Camuffo, “The concept of modularity in management studies: a literature review,” International Journal of Management Reviews, vol. 12, no. 3, pp. 259–283, 2010.
-  D. Doran and A. Hill, “A review of modular strategies and architecture within manufacturing operations,” Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering, vol. 223, no. 1, pp. 65–75, 2009.
-  D. M. Sharman and A. A. Yassine, “Characterizing complex product architectures,” Systems Engineering, vol. 7, no. 1, pp. 35–60, 2004.
-  A. Kusiak, “Integrated product and process design: a modularity perspective,” Journal of Engineering Design, vol. 13, no. 3, pp. 223–231, 2002.
-  S. K. Ethiraj and D. Levinthal, “Modularity and innovation in complex systems,” Management Science, vol. 50, no. 2, pp. 159–173, 2004.
-  B. Heydari, M. Bohsali, E. Adabi, and A. M. Niknejad, “Millimeter-wave devices and circuit blocks up to 104 ghz in 90 nm cmos,” Solid-State Circuits, IEEE Journal of, vol. 42, no. 12, pp. 2893–2903, 2007.
-  B. Heydari, E. Adabi, M. Bohsali, B. Afshar, A. Arbabian, and A. M. Niknejad, “Internal unilaterization technique for cmos mm-wave amplifiers,” in Radio Frequency Integrated Circuits (RFIC) Symposium, 2007 IEEE. IEEE, 2007, pp. 463–466.
-  V. Srivastava and M. Motani, “Cross-layer design: a survey and the road ahead,” Communications Magazine, IEEE, vol. 43, no. 12, pp. 112–119, 2005.
-  C. Papadimitriou, “Algorithms, games, and the internet,” in Proceedings of the thirty-third annual ACM symposium on Theory of computing. ACM, 2001, pp. 749–753.
-  D. A. Gianetto and B. Heydari, “Sparse cliques trump scale-free networks in coordination and competition,” Scientific Reports, vol. 6, p. 21870, 2016.
-  S. Brusoni, L. Marengo, A. Prencipe, and M. Valente, “The value and costs of modularity: A problem-solving perspective,” European Management Review, vol. 4, no. 2, pp. 121–132, 2007.
-  M. E. Sosa, S. D. Eppinger, and C. M. Rowles, “A network approach to define modularity of components in complex products,” Journal of Mechanical Design, vol. 129, no. 11, p. 1118, 2007.
-  M. Kahmeyer, H. Warnecke, and W. Sheider, “Fractal product design: Design for assembly and disassembly in fractal factory,” in Proceedings of DFMA Conference, 1994, pp. 1–9.
-  G. Pahl, W. Beitz, H.-J. Schulz, and U. Jarecki, Engineering design: A systematic approach. Springer, 2007.
-  C.-C. Huang and A. Kusiak, “Modularity in design of products and systems,” Systems, Man and Cybernetics, Part A: Systems and Humans, IEEE Transactions on, vol. 28, no. 1, pp. 66–77, 1998.
-  G. Erixon, “Modular function deployment: A systematic method and procedure for company supportive product modularization,” Ph.D. dissertation, The Royal Institute of Technology, Stockholm, Sweden, 1998.
-  T. U. Pimmler and S. D. Eppinger, Integration analysis of product decompositions. Alfred P. Sloan School of Management, Massachusetts Institute of Technology, 1994.
-  N. P. Suh, The principles of design. Oxford University Press New York, 1990, vol. 990.
-  D. V. Steward, “The design structure system: A method for managing the design of complex systems,” Engineering Management, IEEE Transactions on, vol. EM-28, no. 3, 1981.
-  T. R. Browning, “Applying the design structure matrix to system decomposition and integration problems: A review and new directions,” Engineering Management, IEEE Transactions on, vol. 48, no. 3, pp. 292–306, 2001.
-  N. Sangal, E. Jordan, V. Sinha, and D. Jackson, “Using dependency models to manage complex software architecture,” in ACM SIGPLAN Notices, vol. 40. ACM, 2005, pp. 167–176.
-  K. Hölttä-Otto and O. de Weck, “Metrics for assessing coupling density and modularity in complex products and systems,” in Proceedings of the 19th International Conference on Design Theory and Methodology, ASME. ASME, 2008, pp. 343–352.
-  Q. Dong, “Predicting and managing system interactions at early phase of the product development process,” Ph.D. dissertation, Massachusetts Institute of Technology, 2002.
-  J. E. Bartolomei, D. E. Hastings, R. de Neufville, and D. H. Rhodes, “Engineering systems multiple-domain matrix: An organizing framework for modeling large-scale complex systems,” Systems Engineering, 2012.
-  M. Danilovic and T. R. Browning, “Managing complex product development projects with design structure matrices and domain mapping matrices,” International Journal of Project Management, vol. 25, no. 3, pp. 300–314, 2007.
-  B. Heydari and K. Dalili, “Emergence of modularity in system of systems: Complex networks in heterogeneous environments,” Systems Journal, IEEE, vol. 9, no. 1, pp. 223–231, March 2015.
-  W. R. Ashby, “Requisite variety and its implications for the control of complex systems,” Cybernetica, vol. 1, pp. 83–99, 1958.
-  D. L. Alderson and J. C. Doyle, “Contrasting views of complexity and their implications for network-centric infrastructures,” Systems, Man and Cybernetics, Part A: Systems and Humans, IEEE Transactions on, vol. 40, no. 4, pp. 839–852, 2010.
-  J. Wade and B. Heydari, “Complexity: Definition and reduction techniques; some simple thoughts on complex systems,” Complex Systems Design & Management, p. 361, 2015.
-  M. O. Jackson, “Strategic network formation,” in Social and economic networks. Princeton, NJ: Princeton University Press, 2008, ch. 6.
-  B. Heydari, M. Mosleh, and K. Dalili, “Efficient network structures with separable heterogeneous connection costs,” Economics Letters, vol. 134, pp. 82–85, 2015.
-  S. D. Eppinger and T. R. Browning, Design structure matrix methods and applications. MIT press, 2012.
-  K. Ulrich, “The role of product architecture in the manufacturing firm,” Research Policy, vol. 24, no. 3, pp. 419–440, 1995.
-  K. Hölttä-Otto and O. de Weck, “Degree of modularity in engineering systems and products with technical and business constraints,” Concurrent Engineering, 2007.
-  M. E. Newman, “Modularity and community structure in networks,” Proceedings of the National Academy of Sciences, vol. 103, no. 23, pp. 8577–8582, 2006.
-  F. Salvador, C. Forza, and M. Rungtusanatham, “Modularity, product variety, production volume, and component sourcing: theorizing beyond generic prescriptions,” Journal of Operations Management, vol. 20, no. 5, pp. 549–575, 2002.
-  L. Dueñas-Osorio and S. M. Vemuru, “Cascading failures in complex infrastructure systems,” Structural safety, vol. 31, no. 2, pp. 157–167, 2009.
-  S. A. Kauffman, The origins of order: Self-organization and selection in evolution. Oxford university press, 1993.
-  G. Weiss, Multiagent systems: A modern approach to distributed artificial intelligence. MIT press, 1999.
-  DARPA, System F6 Program. Accessed: 2013-12-18. [Online]. Available: http://www.darpa.mil/Our_Work/TTO/Programs/System_F6.aspx
-  O. Brown and P. Eremenko, “The value proposition for fractionated space architectures,” AIAA Paper 2006-7506, 2006.
-  O. C. Brown, P. Eremenko, and P. D. Collopy, Value-centric design methodologies for fractionated spacecraft: progress summary from phase 1 of the DARPA System F6 program. Defense Technical Information Center, 2009.
-  M. Mosleh, K. Dalili, and B. Heydari, “Optimal modularity for fractionated spacecraft: The case of system f6,” Procedia Computer Science, vol. 28, pp. 164–170, 2014.
-  O. Brown, A. Long, N. Shah, and P. Eremenko, “System lifecycle cost under uncertainty as a design metric encompassing the value of architectural flexibility,” AIAA Paper, vol. 6023, 2007.
-  M. W. Maier, The art of systems architecting. CRC press, 2009.
-  I. Bignon and Z. Szajnfarber, “Technical professionals’ identities in the R&D context: Beyond the scientist versus engineer dichotomy,” Engineering Management, IEEE Transactions on, vol. 62, no. 4, pp. 517–528, 2015.
-  C. Y. Baldwin and J. Henkel, “Modularity and intellectual property protection,” Strategic Management Journal, vol. 36, no. 11, pp. 1637–1655, 2015.