Interactive Diversity Optimization of Environments

Glen Berseth* (University of British Columbia), Mahyar Khayatkhoei* (Rutgers University), Brandon Haworth*, Muhammad Usman* (York University), Mubbasir Kapadia (Rutgers University), and Petros Faloutsos (York University, Toronto Rehab Institute)
Abstract.

The design of a building requires an architect to balance a wide range of constraints: aesthetic, geometric, usability, lighting, safety, etc. At the same time, there is often a multiplicity of diverse designs that can meet these constraints equally well. Architects must use their skills and artistic vision to explore these rich but highly constrained design spaces. A number of computer-aided design tools use automation to provide useful analytical data and optimal designs with respect to certain fitness criteria. However, this automation can come at the expense of a designer’s creative control.

We propose DOME, a user-in-the-loop system for computer-aided design exploration that balances automation and control by efficiently exploring, analyzing, and filtering the space of environment layouts to better inform an architect’s decision-making. At each design iteration, DOME provides a set of diverse designs which satisfy user-defined constraints and optimality criteria within a user-defined parameterization of the design space. The user then selects a design and performs a similar optimization with the same or different parameters and objectives. This exploration process can be repeated as many times as the designer wishes. Our user studies indicate that DOME, with its diversity-based approach, improves the efficiency and effectiveness of even novice users with minimal training, without compromising the quality of their designs.

diversity optimization, space analysis, architecture
Journal: ACM Transactions on Graphics (TOG), Volume 36, Number 4, Article 39, 2017.
CCS Concepts: Computing methodologies → Animation; Computing methodologies → Physics-based simulation.

*These authors contributed equally to this work.

1. Introduction

Building design is both an art and a science. An architect must balance a wide variety of potentially competing objectives, such as space utilization, accessibility, visibility of certain areas, and safety regulations, while at the same time exercising artistic and creative control. The space of possible designs is extremely high-dimensional and continuous even for a small building, such as a single family home, let alone a museum the size of the Louvre. Searching this space for good solutions that meet different optimization criteria while balancing constraints is a challenging combinatorial problem.

Traditional manual design approaches rely on an architect’s intuition and expertise to find suitable design solutions by ignoring or simplifying constraints, making heuristic rather than optimal decisions, and accepting potentially sub-optimal results. Computer-aided design tools help address these challenges by leveraging automation to predictively analyze and evaluate building layouts. Earlier methods are limited to simply computing quantitative measures for a design, typically in the form of charts, tables, or heat maps. Recent computer-aided design approaches not only provide analysis information, but can also produce optimal designs using recent advances in optimization techniques and brute-force computing power. However, these methods do not account for how people act and interact in these environments, because human behaviour is hard to quantify and incorporate into the optimization process. Furthermore, these approaches present a trade-off between automation and human control, where designers are limited to automatically synthesized designs which meet optimality considerations but may disregard designer constraints.

In this work, we aim to combine combinatorial optimization and human insight into a framework for exploring creative designs. We propose DOME, a user-in-the-loop computer-aided design tool that employs diversity optimization to help architects and designers explore, analyze, and improve their work. A key aspect of our approach is that the optimization process itself is tuned for exploring alternatives (diversity) rather than simply producing one optimal design at each invocation.

Within DOME, a user first selects a set of environment elements and specifies which of the associated parameters may be explored by the system. Then, the user selects one or more metrics to serve as the optimization objectives, along with the regions of interest in the environment where the metrics should be computed. For example, a user may wish to increase the visibility of a painting in a room with respect to the entrance(s) while maintaining an ordered room layout with sufficient clearance between walls for people to pass through. The user’s selections define a constrained multi-objective optimization problem. A key novelty of our approach is that instead of simply solving for a single optimal configuration, we solve for a set of diverse candidate solutions. Our formulation introduces a diversity term in the objective. This requires the solver to focus its search to meet optimality criteria, while simultaneously broadening its exploration to maximize the diversity of its candidate solutions. The process of balancing multiple objectives during optimization is a well-known challenge, which is rendered even more difficult by the presence of a diversity term. To address this issue, we propose a novel hierarchical multi-objective optimization algorithm which balances optimality and diversity while remaining efficient for interactive use.

An important issue, for any application that computationally evaluates an environment, is the choice of evaluation quantities. There are different metrics that quantify useful aspects of an environment. In this work, we focus on the utility of an environment with respect to its inhabitants and use three measures defined by Space Syntax (Bafna, 2003). These metrics, in general terms, capture the way people interact with an environment by quantifying the visibility, accessibility, and organization of the space. These metrics are expensive to compute for large environments, especially as part of multiple optimization iterations. To mitigate their computational cost, we develop GPU-accelerated versions of the metrics. We perform a sensitivity analysis to identify the minimum critical resolution of the environment discretization beyond which the measures converge, thus allowing us to compute these measures at a coarse granularity without noticeable loss in accuracy. These improvements offer significant performance gains, enabling the system to operate interactively for mid-scale environments.

Our framework can serve in a range of assisting roles, from an efficient way to evaluate alternate configurations which accomplish similar objectives, all the way to a design brainstorming assistant. We have integrated DOME within an industry-standard architectural design system, Autodesk Revit®. Our results demonstrate the value of our approach in iteratively optimizing and refining existing floor plans for a wide range of environments including an office, an art gallery, a subway station, a museum, and a maze. We have also performed user studies with experts and novice users to evaluate the usability and efficacy of DOME. The mean SUS score of DOME is 70.83 (see Section 9.1), which suggests that even novices were able to use our system with minimal training. The results indicate that subjects using DOME were able to produce more optimal designs than subjects who did not use DOME, and that experts preferred designs produced with DOME. Our contributions can be summarized as follows:

  • We propose a user-in-the-loop system for computer-assisted exploration of building designs.

  • We introduce an efficient hierarchical multi-objective optimization method to balance optimality and diversity of alternative designs.

  • We develop GPU-accelerated measures for spatial analysis of scenes.

  • We integrate DOME within the Autodesk Revit® pipeline for demonstration and evaluation.

  • We perform user studies to show the effectiveness of DOME.

Figure 1. DOME Framework Overview. Starting with an initial environment design, the user specifies permissible alterations to the layout as bounds on the degree to which different environment elements may be transformed. The user then specifies one or more focal regions in the environment for which different spatial measures are computed, to quantify visibility, accessibility, and organization of the space. A multi-objective hierarchical diversity optimization produces a set of diverse near optimal solutions with respect to user-defined optimality criteria, from which the user may select one and repeat the process as desired.

2. Related Work

Computer-aided design (CAD) methods have garnered increasing attention from both researchers and practitioners in recent years, as they allow designers and casual users to leverage automation at all stages of the design process. This has led to an evolution of CAD tools for architectural design that increasingly use computational resources to analyze, evaluate, and optimize the layout of buildings subject to various criteria.

Automated Architectural Design. There is a growing interest in using optimization techniques to explore design spaces for near-optimal solutions given certain problem criteria (Block et al., 2014; Pottmann et al., 2014; Peng et al., 2016). Galle (1981) focused on exhaustively searching possible layout configurations for small-scale environments. Since then, evolutionary approaches (Michalek and Papalambros, 2002; Yi and Yi, 2014) have been used to curb the infeasibility of brute-force methods for larger design spaces. Liu et al. (2013) introduced functional, design, and fabrication constraints as objective measures to guide the optimization process. Data-driven approaches (Merrell et al., 2010) learn layout configurations from existing databases, which are used to automatically generate new layouts for computer graphics applications. Design objectives can be modelled as forces applied to physical features to generate layout designs automatically (Arvin and House, 2002). A sophisticated optimization scheme takes into account the visibility, accessibility, and other hierarchical spatial relationships between interior objects to produce realistic interior design configurations (Yu et al., 2011). Optimization methods can also successfully account for different physical aspects considered important to architecture, such as sunlight (Yi and Yi, 2014), materials, energy savings (Caldas and Norford, 2002), or even acoustics (Bassuet et al., 2014).

Interactive Design Solutions. While automated approaches can take into account objective criteria, architectural design inherently involves subjective decisions about aesthetics, domain expertise, and hard-to-quantify criteria such as human activity and its relationship to the environment. These challenges are mitigated by computer-assisted, interactive tools that keep the user in the design loop, while using automation to inform the designer's decision-making (Shi and Yang, 2013; Felkner et al., 2013; Turrin et al., 2011; Ma et al., 2014). Harada et al. (1995) use shape grammars to support the interactive manipulation of architectural layouts. Recent works have proposed optimization-based interactive design tools to facilitate furniture arrangement using interior design principles (Yu et al., 2011; Merrell et al., 2011). Akase et al. (2014) proposed an online room design framework where the objective function relies entirely upon the user’s evaluation.

Automatic Exploration of Diverse Designs. To better balance automation and the user’s creative control, researchers have proposed approaches for exploring multi-dimensional search spaces to find multiple, diverse, yet optimal solutions which can be provided as suggestions to the designer. This gives the designer more control, allowing them to harness the power of computation to efficiently explore large design spaces, in domains including multi-body dynamics (Twigg and James, 2007; Agrawal et al., 2014) and light selection and image rendering (Marks et al., 1997). Introducing diversity as part of the optimization formulation makes the problem significantly more challenging, with many proposed solutions including constraint programming (Hebrard et al., 2005), evolutionary methods (Ursem, 2002), and domain-independent methods (Srivastava et al., 2007; Coman and Muñoz-Avila, 2011). In this work, we use a round-robin approach that introduces a minimal number of optimization parameters.

Architectural Metrics. Space-Syntax is an established framework for spatial analysis (Hillier and Hanson, 1984; Peponis et al., 1990; Turner and Penn, 1999; Bafna, 2003). It includes a wide range of spatial measures, which have been shown to correlate with human behaviour (Dara-Abrams, 2006; Davies et al., 2006; Meilinger et al., 2012; Emo et al., 2012). In this work, we use a set of static measures grounded in Space-Syntax; however, our framework is independent of this particular choice and can easily incorporate other spatial measures.

Human-Factored Architectural Layout Analysis and Optimization. A key challenge in the analysis of environment designs is to account for factors related to their human occupants, which are difficult to quantify. Fruin (1971) uses crowd density as a proxy to estimate the level of service (LOS) of environments. Fisher et al. (2015) synthesized functional 3D scenes by deducing possible human activities. AlHalawani and Mitra (2015) proposed an approach for optimizing object placement in a warehouse by analyzing traffic congestion. Recent work has collected network-related city features and classified different cities based on these features (AlHalawani et al., 2014). Crowd simulation methods are perhaps the most accurate proxy for measuring real human movement, but are computationally too expensive for interactive optimization applications (Kapadia et al., 2015). Berseth et al. (2015) use crowd simulation to optimally place a small number of environmental elements in small-scale evacuation scenarios. Feng et al. (2016) learn the relationship between crowd flow and various layout alternatives; the estimated model is then used to automatically reconfigure the layout in order to optimize for human factors such as flow.

Our Work. Our work strives to keep the user central to the design process, while leveraging computation to inform the user of factors which are difficult to interpret (e.g., human occupancy), and efficiently explore the design spaces.

3. Overview

An overview of the major components of DOME is illustrated in Figure 1. In subsequent sections, each part of DOME is delineated in detail with examples.

Environment Parameterization. Given an initial environment layout, a user first selects elements (e.g., disjoint structures such as pillars, junctions, or walls), and specifies limits on different degrees of freedom of these elements. These attributes represent a user-defined parametrization of the environment layout, which, together with the associated limits, models the space of admissible configurations of the environment. This affords both subjective control and strict adherence to constraints such as the structural integrity of the building. See Section 4 for details.

Spatial Analysis. DOME constructs a discrete graph representation of free space in the environment, over which it computes different spatial metrics to quantify visibility, accessibility, and organization of the space. While any metrics may be computed over the environment, these measures are predictive of spatial utilization and human movement, and serve as the basis for quantitatively analysing the environment. The user may optionally restrict the computation of these measures to specific regions of interest. For example, the user may wish to maximize the visibility of a key location with respect to the exits in a room. See Section 5 for details.

Multi-Objective Diversity Optimization. The environment parameters, designer constraints, and spatial measures are used to formulate an optimization problem over the space of environment configurations. We desire to keep the user central to the design process while using automation to provide multiple diverse suggestions for improving the current design. To facilitate this, we formulate our objective to generate structurally diverse layouts, while preserving the aforementioned optimality criteria. DOME efficiently searches through the space of permissible environment configurations to identify diverse, yet optimal candidates using a novel hierarchical multi-objective optimization algorithm. See Section 6 for details.

User-in-the-Loop Iterative Design. The designer reviews each of the candidate designs, which are then used as the basis for subsequent alterations through a tightly coupled design and optimization process. Using DOME, designers can leverage computation to account for difficult-to-interpret features such as the accessibility and visibility of an environment with respect to its human occupants. The diverse layouts are provided to the user as suggestions, together with visualisations of the spatial measures. The designer may browse these and make an informed decision on which candidate best suits their vision. See Section 7 for details.

4. Environment Parameterization

The architectural elements of a building and their connections can be represented by an undirected architectural graph $G_A = (V_A, E_A)$, comprising a set of nodes $V_A$ and a set of edges $E_A$. Each node $v \in V_A$ specifies a location in 2D space. Each edge $e \in E_A$ is a pair of nodes $(v_i, v_j)$. An example of a building layout and the associated graph abstraction is shown in Fig. 2. In this representation, the walls are the edges ($E_A$) in the graph, while the nodes ($V_A$) represent end points and junctions between walls. If a connected component in an architectural graph contains a single node and no edges, such as in Fig. 2, then the node itself represents an element with fixed structure. The geometry of each element (wall, kiosk, etc.) is stored in a database and associated with the corresponding node or edge.
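To make the representation concrete, the following minimal Python sketch shows one possible encoding of the architectural graph; the class and field names are illustrative, not DOME's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ArchGraph:
    nodes: dict                 # node id -> (x, y) location in 2D
    edges: set                  # frozenset({u, v}) node-id pairs (walls)
    geometry: dict = field(default_factory=dict)  # node/edge -> stored geometry

    def wall_endpoints(self, edge):
        u, v = tuple(edge)
        return self.nodes[u], self.nodes[v]

# Three wall nodes and two walls; the isolated node (id 3) with no
# incident edges represents a fixed element such as a pillar or kiosk.
g = ArchGraph(
    nodes={0: (0.0, 0.0), 1: (4.0, 0.0), 2: (4.0, 3.0), 3: (2.0, 1.5)},
    edges={frozenset({0, 1}), frozenset({1, 2})},
)
```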


Figure 2. The layout of a floor plan and the corresponding graph parametrization of the walls, doors, and other rigid elements. User-selected nodes of the graph can be grouped, translated, scaled, and rotated within user-defined bounds, shown in colour and with arcs and arrows.

Given an architectural graph $G_A$, the user can define the design space by parametrizing and constraining the attributes of selected nodes or groups of nodes. For demonstration, we focus primarily on rigid-body transformations of position and orientation.

Each element of the parametrization, $g_i = (V_i, T_i, m_i, [m_i^{\min}, m_i^{\max}])$, contains a set of nodes $V_i \subseteq V_A$, a transformation $T_i$ that will be applied to the nodes, the magnitude $m_i$ of the transformation, and the limits or constraints $[m_i^{\min}, m_i^{\max}]$ on the magnitude. Grouping the free magnitudes in a vector $\mathbf{p} = (m_1, \ldots, m_n)$, the parametrization of the design space can be compactly represented as $G_A(\mathbf{p})$.

Fig. 2 shows an example of a floor plan with sixteen nodes. The arrows, arcs, and painted regions around a node show the user-specified range within which that node can translate and rotate. A group of nodes (in red) can rotate together within its specified range. Two further nodes can move along one axis but are constrained to maintain their initial distance, forming another group.
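A hedged sketch of how such a parametrization might be encoded follows; the element fields, transform names, and bounds below are assumptions for illustration, not DOME's API.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class ParamElement:
    node_ids: tuple   # the node group V_i the transform applies to
    transform: str    # T_i: 'translate_x', 'translate_y', or 'rotate'
    lo: float         # m_i^min, lower bound on the magnitude
    hi: float         # m_i^max, upper bound on the magnitude

elements = [
    ParamElement((3, 4), 'translate_y', -1.5, 1.5),  # paired walls move together
    ParamElement((7,),   'rotate',      -0.5, 0.5),  # radians about the node
]

# The free magnitudes m_i, stacked in order, form the design vector p,
# and the per-element limits form the box the optimizer may search.
p = np.zeros(len(elements))
bounds = np.array([(e.lo, e.hi) for e in elements])
```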

5. Spatial Analysis

(a) Visibility. (b) Tree depth. (c) Entropy. (d) Visibility for Region of Query.
Figure 3. Metrics values for a room in the Metropolitan Museum of Art. Heatmap colours indicate value from blue (low) to red (high). (a) Degree of visibility, where redder areas show more integrated regions, and are good candidates for placement of fire exits, signs, main event, etc. (b) Tree depth, where bluer areas have lower depth and are easier to access. (c) Entropy, where redder areas have high entropy (order), resulting in better human environmental cognition and easier planning at those points. (d) Degree of visibility in Region of Query with respect to Region of Reference which is shown in grey. Notice that the degree values (which are a function of Region of Reference) are different in comparison to (a).
Figure 4. Visibility graph ($G_V$) of the environment space, which is discretized as a finite grid ($S$). Note that the sampling frequency is reduced in this figure for visualization; real examples use denser sampling.

Spatial analysis aims to quantify attributes of an environment that directly affect how people use the environment. The ideal measures should be general and intuitive, and should cover all of the most important aspects of an environment. Spatial analysis focuses primarily on static measures that are computed geometrically.

There are different approaches to represent space for the purposes of defining and computing spatial measures (Desyllas and Duxbury, 2001). We chose visibility graphs, as they are easier to compute and tend to be more informative than alternative representations, such as axial maps (Turner and Penn, 1999; Desyllas and Duxbury, 2001). Our method is not restricted to specific metrics; however, for this work we compute Space-Syntax metrics.

5.1. Visibility Graph

To construct a visibility graph, $G_V = (V, E)$, we first sample the environment with a finite grid $S$, and then create an edge, $e_{ij}$, between every pair of nodes $(v_i, v_j)$ that share an unobstructed line of sight, see Fig. 4.

In most prior work, every vertex of the grid $S$ is a vertex of the visibility graph. In many cases it may be useful to define the visibility graph, and consequently the associated measures, on specific regions of interest. For instance, we may be more interested in the accessibility of certain doors, or the visibility of an exit sign, from specific hallways in the environment. To support this important feature, we allow the user to define two sets of grid vertices, the Region of Query with vertices $V_Q$, and the Region of Reference with vertices $V_R$, see Fig. 1. We then construct a visibility graph from these two sets of vertices by computing the lines of sight between the vertices in the Region of Query and the vertices in the Region of Reference. The user-defined regions provide greater flexibility to the user, giving them more control over the spatial queries to be performed on the layout. Putting everything together, the visibility graph depends on the architectural graph, its parametrization, and the regions of interest:

$G_V = G_V\big(G_A(\mathbf{p}),\ V_Q,\ V_R\big). \qquad (1)$

The spatial measures described in the next section are computed only for the vertices of the region of query.
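The construction can be sketched as follows; this version tests line of sight by dense point sampling against an obstacle predicate, a simplification of the exact segment-obstacle intersection tests one would use in practice.

```python
import numpy as np

def line_of_sight(a, b, inside_obstacle, step=0.1):
    """True if no sampled point on segment a-b falls inside an obstacle.
    `inside_obstacle` is a callable point -> bool."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n = max(2, int(np.linalg.norm(b - a) / step))
    return not any(inside_obstacle(a + t * (b - a))
                   for t in np.linspace(0.0, 1.0, n))

def visibility_graph(query_pts, ref_pts, inside_obstacle):
    """Edges between Region-of-Query and Region-of-Reference vertices only."""
    return {(i, j)
            for i, q in enumerate(query_pts)
            for j, r in enumerate(ref_pts)
            if line_of_sight(q, r, inside_obstacle)}
```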

5.2. Metrics

Given a visibility graph $G_V$, metrics are computed that characterize meaningful relationships between floor plans and human behaviour. While DOME can incorporate many metrics that have been proposed (Bafna, 2003; Turner, 2001; Hillier et al., 1987; Jiang et al., 2000), we find the following measures sufficient.

Degree of Visibility. The degree of visibility, $D(v)$, of a vertex $v$ is the number of edges incident to the vertex; in other words, the number of its immediate (1-hop) neighbours. Regions with a high degree of visibility can be considered more connected, safe, or important (Turner, 2001; Bafna, 2003). If one wants to install a public safety sign, then positioning it in a high-visibility region might be appropriate (Hölscher et al., 2004). In Fig. 3(a), red areas have the highest degree of visibility while blue areas indicate the lowest.

Tree Depth. Let $C(v)$ be the largest connected component that contains vertex $v$. The height of the minimum-height Trémaux tree rooted at $v$ is the tree depth, $T_d(v)$. Tree depth has a few intuitive interpretations. First, it measures how far $C(v)$ is from being a star (Nešetřil and de Mendez, 2012). Second, a vertex with large tree depth is connected to other regions of the environment through a long sequence of vertices. Thus, tree depth often relates to the notion of accessibility in an environment (Turner, 2001). Tree depth values, together with context-dependent information, allow a user to make flow and congestion estimations for specific areas of a layout. Fig. 3(b) shows the computed depth values in heatmap form, where a lower value (blue) is better.

Entropy. Let $C(v)$ be the largest connected component that contains vertex $v$. Given a Trémaux tree rooted at vertex $v$ with a set of vertices $V_l$ at each level $l$, we define a probability distribution $q_l = |V_l| / |V_{C(v)}|$ over the domain $l \in \{0, \ldots, T_d(v)\}$, where $V_{C(v)}$ is the set of vertices in $C(v)$, and through this distribution we define the entropy at vertex $v$ as follows:

$E_n(v) = -\sum_{l=0}^{T_d(v)} q_l \log q_l. \qquad (2)$

Technically, $q_l$ is the probability that a vertex in $C(v)$ will be at level $l$ of the tree. In more intuitive terms, entropy measures the organization of an environment. Low entropy at a vertex means that the decision tree rooted at the vertex is unbalanced; in other words, the branching factor varies widely from level to level. This imbalance can materialize as bottlenecks or as areas with too many options, which may disorient a person moving through the associated areas. In some sense, while tree depth relates to path lengths, entropy relates to the uniformity of the paths: the higher the entropy, the more uniform the branching, and thus the better the organization. Typically, higher uniformity affords easier pedestrian decision-making and navigation (Turner, 2001; Hölscher et al., 2004). Fig. 3(c) shows the entropy values in heat-map form over a sample environment. Notice how the top and bottom corridors have higher entropy because the decision sequences from those regions are balanced, i.e., the environment appears more organized from those regions' point of view.
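Both tree depth and entropy follow directly from the per-level sizes of a breadth-first (Trémaux) tree, as in this short sketch; the adjacency-list format is assumed for illustration.

```python
import math

def bfs_level_sizes(adj, root):
    """Number of vertices at each level of a BFS tree rooted at `root`.
    `adj` maps a vertex to an iterable of its neighbours; note that the
    degree of visibility is simply len(adj[v])."""
    seen, frontier, sizes = {root}, [root], []
    while frontier:
        sizes.append(len(frontier))
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    nxt.append(v)
        frontier = nxt
    return sizes

def tree_depth(adj, root):
    return len(bfs_level_sizes(adj, root)) - 1

def entropy(adj, root):
    sizes = bfs_level_sizes(adj, root)
    total = sum(sizes)
    return -sum((s / total) * math.log(s / total) for s in sizes if s)
```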

Fig. 3(d) shows the degree of visibility computed over the Region of Query (shown in heatmap) with respect to Region of Reference (shown in grey). Notice how changing the reference from the entire environment in part (a) to just the top and bottom hallways has affected the values of the metric and therefore our view of the space.

For an entire visibility graph with query vertices $V_Q$, our metrics are the averages of the corresponding per-vertex measures:

$\bar{D} = \frac{1}{|V_Q|} \sum_{v \in V_Q} D(v), \quad \bar{T_d} = \frac{1}{|V_Q|} \sum_{v \in V_Q} T_d(v), \quad \bar{E_n} = \frac{1}{|V_Q|} \sum_{v \in V_Q} E_n(v). \qquad (3)$
Figure 5. GPU model of the forest construction kernel. $|V_R|$ and $|V_Q|$ are the number of reference and query vertices, respectively. The red wavy arrows represent individual threads.

5.3. Metric Parallelization

The aforementioned metrics can be computationally expensive. The construction of the visibility graph is $O(n^2 m)$, where $n$ is the number of grid vertices and $m$ is the total number of obstacles in an environment. Furthermore, constructing the trees needed to calculate depth and entropy is of order $O(n\,b^{d})$, where $b$ is the maximum branching factor of the graph and $d$ is the maximum of all the minimum depths of the Trémaux trees constructed at different vertices. This process is $\Omega(n^2)$, which means that, at best, it is as complex as constructing the graph itself, although it is much more complex in practice. In order to mitigate this computational overhead, we off-load the construction of the visibility graph and the forest to the GPU.

For the purposes of parallelization, we consider that computing the metrics involves two main tasks: the construction of the visibility graph for the given environment layout, and the computation of a set of trees. Although we discuss each task separately, our implementation runs these computations concurrently, and not in isolation.

5.3.1. Graph Construction

We represent the strictly upper-triangular part of the $n$-dimensional symmetric adjacency matrix of the graph in row-major fashion as a vector of dimension $n(n-1)/2$. Each pair of vertices $(v_i, v_j)$, where $i < j$, is assigned to a thread which calculates the straight line between the vertices and checks whether the line intersects any obstacle. The load assignment is designed to exploit memory alignment and maximize GPU utilization.

5.3.2. Tree Construction

Consider the task of performing a Breadth-First Search starting at a vertex and branching until the whole visibility graph is traversed, i.e., all vertices are visited exactly once. We introduce three binary $n$-dimensional vectors: (a) frontier holds the elements of $V$ that must be expanded in the current level $l$, (b) children holds the elements of $V$ that must be expanded in the next level $l+1$, and (c) parents holds the elements that have already been expanded. We also keep an integer vector, levels, which stores the number of elements visited at each level.

A Naive Kernel. In a CUDA kernel, we assign each vertex $v_i \in V$ to one thread; that is, each thread $i$ is responsible for the row of the adjacency matrix corresponding to the vertex $v_i$. The kernel runs, level after level, until a flag is set showing that all vertices have been visited. At each level $l$, each thread $i$ first checks if its vertex is adjacent to the $j$-th vertex of the graph, second if the $j$-th element is to be expanded (frontier[$j$] is set), third if the $i$-th element has not been expanded (parents[$i$] is not set), and if so, the thread will set the $i$-th element to be expanded at the next level (set children[$i$]). After each level, the number of 1s in children is stored in levels[$l$], then the child vector is copied into the frontier (frontier ← children), and the child vector is reset (children is initialized to the zero vector). Note that other information can be stored depending on the required metrics, but in this case the number of visited vertices at each level suffices.

Cut-Off Threads. In the naive kernel, each thread has to check exactly $n$ elements of the frontier at each level, resulting in exactly $n \cdot d$ operations per thread, where $d$ is the number of levels. However, each vertex in the graph only needs to be visited once. This fact can be exploited by cutting off, at the start of each level, threads whose vertices have already been visited, that is, stopping thread $i$ whose assigned vertex has already been expanded (parents[$i$] is 1) from launching in the first place. This results in each thread having to check the frontier only at the levels before its vertex is expanded, and in practice greatly reduces the running time.

Indexed Frontier. So far, each thread, if not cut off, has to check all elements of the frontier, even though many may be zero (not to be expanded) at many levels. However, each vertex in the graph can only be expanded once; that is, each element of the binary frontier can be 1 exactly once over all levels. Thus, the frontier is changed from a binary vector to an integer vector which stores the indices of the elements to be expanded. This indexed frontier is populated by an intermediate process that first clears all elements of the frontier, then fills it from the start with the indices of the positions of 1s in children, instead of just copying the children vector into the frontier at the end of each level. When an empty entry is encountered in the frontier, the kernel is terminated. This process essentially shifts the burden of passing over the whole frontier from every single thread to one single preprocessing thread. The result is that no thread will pass over the frontier more than once.

Forest Construction. The tree construction process must start at all vertices in the graph. Because one tree construction is completely independent of another, all the tree constructions may run concurrently. Therefore, the same kernel as before is used, but a new dimension is introduced to all containers (this is essentially concatenation), and then we put each tree process on one row of the device grid. Fig. 5 shows our final model, where the inward dimension, whose size equals the number of trees, is the result of concatenation; each layer on this dimension belongs to a new tree. Thus, when the kernel runs at one level, the whole forest is expanded one level deeper on the device. This allows for a very large pool of threads, and therefore maximizes load sharing and consequently GPU utilization.
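For intuition, the following numpy sketch mirrors the frontier/children/parents scheme on the CPU; it is a readability stand-in for the CUDA kernels described above, not the kernels themselves.

```python
import numpy as np

def forest_level_sizes(adj, roots):
    """Level-synchronous BFS over a dense boolean adjacency matrix `adj`
    (shape n x n), one traversal per root. Returns a list of per-level
    visit counts for each root. On the GPU all roots run concurrently;
    masking out already-expanded vertices plays the role of the cut-off
    threads described above."""
    n = adj.shape[0]
    results = []
    for r in roots:
        frontier = np.zeros(n, dtype=bool)
        frontier[r] = True
        parents = frontier.copy()
        sizes = []
        while frontier.any():
            sizes.append(int(frontier.sum()))
            # expand the whole frontier in one vectorized step
            children = adj[frontier].any(axis=0) & ~parents
            parents |= children
            frontier = children
        results.append(sizes)
    return results
```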

5.4. Penalty Metrics

In the context of architectural optimization, a user may wish to impose a number of conditions on certain design elements, such as a minimum amount of open space in passages, aesthetic relationships, or building codes. These conditions can be modelled with penalty functions which are treated as soft constraints by the optimizer. A few practical examples of this are described below.

Clearance

A measure of open space between architectural elements, clearance is computed from the aggregate Minkowski sum of each wall and a disk of radius $r$, which approximates the minimum width of a hallway. The Minkowski sum between a polygon and a disk dilates the polygon, effectively adding a buffer area around an obstacle or wall for comfortable passage.

$c(\mathbf{p}) = \sum_{i \neq j} A\big( (w_i \oplus d_r) \cap (w_j \oplus d_r) \big), \qquad (4)$

where $A(\cdot)$ computes the area, $\oplus$ denotes the Minkowski sum between two polygons, and $\cap$ is the geometric intersection of the two Minkowski sums. Adjoining walls are excluded from this computation. The associated penalty is the total overlap area $c(\mathbf{p})$, which is zero when all passages have sufficient clearance.
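As a sketch, the clearance term can be computed with an off-the-shelf geometry library; here we use shapely, where buffering a wall segment by $r$ is exactly its Minkowski sum with a disk of radius $r$. The endpoint-sharing test for adjoining walls is a simplifying assumption.

```python
from shapely.geometry import LineString

def clearance(walls, r):
    """walls: list of ((x1, y1), (x2, y2)) segments; r approximates the
    minimum hallway width. Returns the total overlap area between the
    dilated walls; zero means all passages have sufficient clearance."""
    dilated = [LineString(w).buffer(r) for w in walls]
    overlap = 0.0
    for i in range(len(walls)):
        for j in range(i + 1, len(walls)):
            if set(walls[i]) & set(walls[j]):   # skip adjoining walls
                continue
            overlap += dilated[i].intersection(dilated[j]).area
    return overlap
```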

Total Wall Length

This compares the total wall length of the new environment, $L(G_A(\mathbf{p}))$, with that of the original environment, $L(G_A(\mathbf{p}_0))$, where $L(\cdot)$ computes a sum over the length of every edge/wall in the graph. This penalty function is used to constrain the repositioning of elements so as not to reduce or increase the quantity of wall surface area in an environment. This particular penalty is appropriate for museums and art galleries, where a certain amount of wall surface area is needed to display an art collection.

6. Optimization Formulation

The user-defined parametrization of the architectural graph, described in Section 4, defines the design domain: the vector of free parameters $\mathbf{p}$, with bounds $[\mathbf{p}^{\min}, \mathbf{p}^{\max}]$. In this section, we describe the key elements of our objective function.

6.1. Diversity Objective

Unlike a typical optimization that produces a single design solution $\mathbf{p}^*$, the DOME system must produce a set $\mathcal{M}$ of optimal solutions whose members are sufficiently diverse from each other. Therefore, a measure of diversity is introduced and maximized. There are a number of techniques to accomplish this, each with their own advantages and disadvantages. For efficiency, instead of augmenting the parameter vector $\mathbf{p}$ with additional elements for each member of the diversity set, a round-robin technique is used, where one member in $\mathcal{M}$ is optimized at a time while keeping the parameters of the other members constant.

In practice, enforcing diversity naively can lead to a clustering of solutions (Agrawal et al., 2014). To avoid clustering, we impose a minimum distance between members of $\mathcal{M}$. Our diversity metric is as follows:

$d_i = \min_{j \neq i} \big\lVert N(\mathbf{p}_i) - N(\mathbf{p}_j) \big\rVert, \qquad (5)$
$f_{div}(\mathcal{M}) = \sum_{\mathbf{p}_i \in \mathcal{M}} \big( \alpha\, d_i - \beta \max(0,\ \delta - d_i) \big), \qquad (6)$

where $N(\cdot)$ normalizes its arguments over the parameter constraints, $\lVert \cdot \rVert$ computes the Euclidean distance, and $d_i$ is the minimum distance between $\mathbf{p}_i$ and all other members in $\mathcal{M}$. Equation 6 ensures that diverse members do not cluster by adding a cost when the closest neighbour is less than $\delta$ away. The terms $\alpha$, $\beta$, and $\delta$ are experimentally determined hyper-parameters that control the influence of the diversity term.
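A minimal sketch of this diversity score follows; the exact penalty form and the hyper-parameter names (alpha, beta, delta) are our reconstruction of Eqs. (5)-(6), not a verbatim definition.

```python
import numpy as np

def diversity_score(members, bounds, alpha=1.0, beta=10.0, delta=0.1):
    """members: list of parameter vectors p_i; bounds: (n, 2) array of
    per-parameter limits used by N(.) to normalize before measuring
    Euclidean distances. Rewards nearest-neighbour distance and adds a
    cost whenever that distance falls below delta."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    normed = [(np.asarray(p) - lo) / (hi - lo) for p in members]
    score = 0.0
    for i, pi in enumerate(normed):
        d_i = min(np.linalg.norm(pi - pj)
                  for j, pj in enumerate(normed) if j != i)
        score += alpha * d_i - beta * max(0.0, delta - d_i)
    return score
```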

6.2. Optimization Formulation

For a set of optimal solutions, $\mathcal{M}$, the objective vector is aggregated over the entire set. This results in the following multi-objective optimization problem:

$\max_{\mathcal{M}}\ \big[\, -u(\mathcal{M}),\ \bar{D}(\mathcal{M}),\ -\bar{T_d}(\mathcal{M}),\ \bar{E_n}(\mathcal{M}),\ f_{div}(\mathcal{M}) \,\big] \quad \text{s.t. } \mathbf{p}_i \in [\mathbf{p}^{\min}, \mathbf{p}^{\max}]\ \ \forall\, \mathbf{p}_i \in \mathcal{M}, \qquad (7)$

where $[\mathbf{p}^{\min}, \mathbf{p}^{\max}]$ are the parameter bounds specified by the user and $u$ is the penalty function described in Section 5.4. Solving this problem produces a set of solutions with maximum spatial objectives in combination with minimum penalties, and maximum diversity.

The next section discusses our solution to the above optimization.

7. Multi-Objective Optimization

There exist several methods that can be used to perform multi-objective optimization (Marler and Arora, 2004). Scalarized multi-objective optimization combines a vector of objectives with a vector of weights; however, finding a good vector of weights can be challenging, especially when the objectives are of largely different scales, as they are in our case. Pareto-front-based approaches produce a collection of parameter settings that are optimal trade-offs between the objectives (Wagner et al., 2007). However, they tend to be computationally expensive, and it is unclear how they would handle the diversity term. Hierarchical methods optimize one objective at a time, in order, in a fashion similar to coordinate descent. Each optimized objective becomes part of the objective function, in the form of a soft constraint, for the optimization of the next objective. A hierarchical approach appears to be the most practical for this problem space. It allows for more practical and intuitive control of the trade-off between optimality and diversity, in the form of a lower bound with respect to the optimal solutions. See the Appendix for more details on the multi-objective optimization methods.

Similar optimization problems have been solved in the graphics literature with a combination of Simulated Annealing and the Metropolis-Hastings algorithm (Yu et al., 2011; Merrell et al., 2011). The convergence rates of these methods can make them prohibitive for interactive systems. This is shown in the engineering literature, where Covariance Matrix Adaptation (CMA) (Hansen and Ostermeier, 1996) is more popular for many design reasons (Nguyen et al., 2014); details are described in the Appendix. To address these design considerations, a hierarchical optimization solution based on CMA is used, which can manage the same number of parameters with faster convergence rates.

Our optimization approach aims to produce a set of diverse, near optimal solutions and is best described with two separate algorithmic steps.

7.1. Hierarchical Multi-Objective Optimization

Instead of optimizing a weighted combination of objectives, in this case the objectives defined in Eq. (7), the components are optimized as separate objectives. For each objective $f_i$ we specify an order (ranking) and a desired minimum improvement threshold $\gamma_i$. The threshold $\gamma_i$ is a ratio in $[0, 1]$ that dictates a cut-off between the default objective value and the optimal objective value. For example, if an objective is ranked first with a threshold of $\gamma_1$, then the optimization process will optimize it first, ignoring the other metrics. After converging to an optimal value $f_1^*$, a constraint is added to the optimization of the second objective that imposes a penalty if the first objective falls below $\gamma_1$ of its optimal value. The process repeats for all objectives in the order specified, with the near-optimality margins given.

To incorporate an objective as a constraint during the hierarchical optimization process, a threshold function $t_i$ is used. These functions are constant, or simply zero, when the input is within a given range, and rapidly increase when the input is outside this range:

$t_i(x) = \max\big(0,\ \gamma_i f_i^* - x\big)^2. \qquad (8)$

For a set of threshold functions $\{t_i\}$, the total threshold violation cost is

$u_t(\mathbf{p}) = \sum_i t_i\big(f_i(\mathbf{p})\big). \qquad (9)$
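In code, a threshold constraint might look like the following sketch; the quadratic hinge and the weight are illustrative choices consistent with the description above (flat inside the allowed range, rapidly increasing outside).

```python
def make_threshold(f_star, gamma, weight=1e3):
    """Returns t_i: zero while the objective value stays above the
    near-optimality floor gamma * f_star, and growing quadratically
    below it. Assumes a maximized, positive-valued objective."""
    floor = gamma * f_star
    return lambda value: weight * max(0.0, floor - value) ** 2

def total_violation(thresholds, values):
    """Total threshold violation cost of Eq. (9)."""
    return sum(t(v) for t, v in zip(thresholds, values))
```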

Algorithm 1 describes the hierarchical multi-objective optimization method over the objective vector

$\mathbf{F} = \big[\, -u,\ \bar{D},\ -\bar{T_d},\ \bar{E_n},\ f_{div} \,\big]. \qquad (10)$

It is important that the diversity metric be the last objective in this vector: the diversity metric creates and uses a set of diverse members, whereas the other metrics operate over a single member. Also, the penalty function should be first, as it is necessary to constrain the optimization of the subsequent metrics. For each objective, a CMA-based optimization is performed. At the end of each individual objective's optimization, a threshold constraint is created and added to the vector of threshold constraints. At the end of the main loop, the optimal parameter values are captured within the thresholding function vector, $\mathbf{t}$. The last objective, diversity, is optimized using DivOpt in Algorithm 2, which searches for a diverse set of near-optimal solutions given the set of threshold functions constructed. Note that the other objectives are now represented as penalties through the threshold functions.

1:  Input: Number of diversity members, $k$
2:  Input: Vector of objective thresholds, $\boldsymbol{\gamma}$
3:  Input: Initial parameter vector, $\mathbf{p}_0$
4:  Input: Vector of objective functions, $\mathbf{F}$
5:  Input: Variance, $\sigma$, Sample size, $\lambda$
6:  $\mathbf{t} \leftarrow \langle\rangle$; $\mathbf{p} \leftarrow \mathbf{p}_0$
7:  for each $f_i \in \mathbf{F}$ do
8:     initialize CMA with $\mathbf{p}$ and $\sigma$
9:     while not Terminate() do
10:        for each $j \in \{1, \ldots, \lambda\}$ do
11:           $\mathbf{p}_j \leftarrow$ CMA sample
12:           $c_j \leftarrow -f_i(\mathbf{p}_j) + \sum_{t \in \mathbf{t}} t(\mathbf{p}_j)$
13:        update CMA with $\{(\mathbf{p}_j, c_j)\}$
14:     $\mathbf{p} \leftarrow$ best candidate found
15:     $\mathbf{t} \leftarrow \mathbf{t} \cup \{t_i\}$, with $t_i$ built from $f_i(\mathbf{p})$ and $\gamma_i$
16:  $\mathcal{M} \leftarrow$ DivOpt($k$, $\mathbf{t}$, $\mathbf{p}$, $\sigma$, $\lambda$)
17:  return $\mathcal{M}$
Algorithm 1 Hierarchical Multi-Objective Optimization
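A compact sketch of Algorithm 1 using the open-source pycma package; the objective list, margins, and the `make_threshold` helper from the previous sketch are illustrative, and bound handling is omitted for brevity.

```python
import cma  # pycma: pip install cma

def hierarchical_opt(objectives, gammas, p0, sigma=0.3):
    """objectives: callables f_i(p) to MAXIMIZE, ordered by rank
    (penalty first, as in Eq. (10)); gammas: per-objective margins.
    Returns the constrained optimum and the accumulated constraints."""
    constraints = []          # (objective, threshold) pairs built so far
    p = list(p0)
    for f, gamma in zip(objectives, gammas):
        def cost(x, f=f):
            # CMA minimizes, so negate the objective and add the
            # violation penalties of all previously optimized objectives
            return -f(x) + sum(t(g(x)) for g, t in constraints)
        es = cma.CMAEvolutionStrategy(p, sigma)
        while not es.stop():
            xs = es.ask()
            es.tell(xs, [cost(x) for x in xs])
        p = list(es.result.xbest)
        constraints.append((f, make_threshold(f(p), gamma)))
    return p, constraints
```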

The next section describes the final step of our hierarchical optimization: the diversity objective.

7.2. Diversity Optimization

A round-robin method is used to select and optimize each diversity member one at a time; see Algorithm 2. Each member is initialized using $\mathbf{p}^*$, and the members progressively diverge from each other as the optimization unfolds. In each round, a single member is selected from $\mathcal{M}$ and candidate parameters are sampled using CMA (Hansen and Ostermeier, 1996) (lines 10-11). A simple in-order method is used to select the next member in each round. More complex, or random, selections may be employed, but we empirically found this strategy to work well. In line 12, the objective values for those candidates are calculated. In line 13, the structures in CMA that influence the optimization evolution are updated. The termination condition depends on the optimization progress with respect to the improvements made on the objective, and on the maximum number of function evaluations, which are parameters of CMA (Hansen and Ostermeier, 1996).

1:  function DivOpt($k$, $\mathbf{t}$, $\mathbf{p}^*$, $\sigma$, $\lambda$)
2:  Input: Number of diversity members, $k$
3:  Input: Objective function, $f_{div}$, with threshold functions, $\mathbf{t}$
4:  Input: Initial parameters, $\mathbf{p}^*$
5:  Given: Variance, $\sigma$, Sample size, $\lambda$
6:  for $i \in \{1, \ldots, k\}$ do
7:     $\mathcal{M}_i \leftarrow \mathbf{p}^*$
8:  while not Terminate() do
9:     Choose the next member $\mathcal{M}_i$ from $\mathcal{M}$
10:    for each $j \in \{1, \ldots, \lambda\}$ do
11:       $\mathbf{p}_j \leftarrow$ CMA sample
12:       $c_j \leftarrow -f_{div}(\mathbf{p}_j, \mathcal{M}) + \sum_{t \in \mathbf{t}} t(\mathbf{p}_j)$
13:    update CMA and $\mathcal{M}_i$ with $\{(\mathbf{p}_j, c_j)\}$
14:  return $\mathcal{M}$
Algorithm 2 Diversity Optimization
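The round-robin loop of Algorithm 2 can be sketched with the same package; `diversity_cost` is assumed to combine the threshold penalties of Eq. (9) with the diversity term of Eqs. (5)-(6), and the fixed round count stands in for CMA's termination tests.

```python
import numpy as np
import cma

def div_opt(diversity_cost, p_star, k, sigma=0.2, rounds=100):
    """All k members start at the constrained optimum p*, then are
    optimized one at a time in round-robin order so they progressively
    diverge. diversity_cost(p, others) -> scalar cost to minimize."""
    members = [np.asarray(p_star, float).copy() for _ in range(k)]
    solvers = [cma.CMAEvolutionStrategy(list(m), sigma) for m in members]
    for r in range(rounds):
        i = r % k                       # simple in-order member selection
        es = solvers[i]
        if es.stop():
            continue
        others = [m for j, m in enumerate(members) if j != i]
        xs = es.ask()
        es.tell(xs, [diversity_cost(x, others) for x in xs])
        members[i] = np.asarray(es.result.xbest)
    return members
```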

7.3. Diversity Set

(a) Default. (b)-(f) Diversity members. (g) All solutions superimposed.
Figure 6. A diversity set example. The solution with the highest metric value (f), although optimal, can be considered less aesthetically pleasing. It is interesting that all solutions effectively converted the wall, at left centre, into a pillar, which indicates that the layout could potentially be improved by completely removing the wall. (g) For comparison all solutions are shown superimposed.

A key advantage of our approach is the production of a diverse set of solutions. Fig. 6 shows an example set: (a) is the default layout, (b-f) are the five members of the diversity set, and (g) is a superimposition of all solutions to better illustrate their differences.

8. Results

In this section we discuss the capabilities and limitations of DOME. First, we explore performance issues that are important for the practical use of the system. Then we present examples that clarify aspects of the system, and demonstrate its effectiveness. Note that it can be difficult to convey the user-in-the-loop nature of the system with static pictures alone (Usman et al., 2017). For a more effective demonstration we refer the reader to the accompanying video.

8.1. Performance Analysis

Spatial Metrics. Fig. 7 illustrates the comparative performance of our spatial analysis framework (Section 5) using single-threaded CPU, multi-threaded CPU, and GPU implementations. It is evident that the GPU implementation completes the computation much faster: on the larger grids, the GPU generates the visibility graph, constructs the corresponding forest, and calculates the objective roughly an order of magnitude faster, on average, than the multi-threaded CPU implementation. This advantage increases as the number of vertices in the graph increases. This test compares an Intel Xeon at 3.5 GHz with a GeForce GTX GPU. Note that certain operations in our calculations (e.g., entropy calculations) are especially amenable to GPU parallelization. Moreover, the reported times include the initialization process for each granularity, which is executed once per optimization; therefore, the actual average times over objective calls would be considerably lower. In our current implementation, the spatial objectives are computed concurrently on the GPU, and a weighted sum of the spatial metrics may be used for efficiency. The performance analyses reported here encompass the entire spatial analysis pipeline, averaged over multiple runs.

Figure 7. Performance of the spatial analysis framework for the CPU and GPU implementations. The bars show the total time to calculate the three objectives using the corresponding hardware, with the typical use case highlighted in red. Each bar is also divided into darker and lighter shades to depict the time for graph generation and forest construction, respectively. Time is shown on a logarithmic scale.

GPU Memory Complexity. The GPU memory required for the objective calculation is $O(n^2)$; more strictly, it grows with $n(n-1)/2$, where $n$ is the total number of vertices included in the graph. All example environments in Table 1 fit comfortably within GPU memory. Note that the provided memory complexity is for one unified run of the optimization; a much larger environment can be processed in subsets (chunks) of vertices.

Critical Resolution. The grid resolution determines the number of vertices in the visibility graph; to identify the minimum resolution needed, we perform a sensitivity analysis over the granularity. Each metric is computed over a range of grid resolutions, aggregated over multiple environment layouts. Here, resolution is represented as a cells-per-meter ratio. The study results are illustrated in Fig. 8. These diagrams show that the metrics do not substantially change after a certain sampling frequency, suggesting that a critical value can be identified. The two jumps in the depth and entropy diagrams are caused by discovering new bottlenecks after a certain increase in resolution, which disappear at higher sampling frequencies. For the remaining experiments reported in this paper, we have used the identified critical sampling resolution.

Figure 8. Sensitivity of metric values to visibility graph resolution. The vertical axis is the metric average over randomly sampled environments. The standard deviation is also provided for each metric.
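The sensitivity analysis itself amounts to sweeping the grid resolution and detecting where the averaged metric stops changing, as in this hypothetical sketch; `metric(layout, res)` is assumed to build the visibility graph at `res` cells per meter and return the mean metric value.

```python
import numpy as np

def critical_resolution(metric, layouts, resolutions, tol=0.02):
    """Returns the coarsest resolution after which the metric, averaged
    over the sampled layouts, changes by less than `tol` (relative)."""
    means = [np.mean([metric(L, r) for L in layouts]) for r in resolutions]
    for k in range(1, len(means)):
        if abs(means[k] - means[k - 1]) <= tol * abs(means[k - 1]):
            return resolutions[k - 1], means
    return resolutions[-1], means
```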

Diversity Optimization. Table 1 provides the computation times of diversity optimization for three exemplar environments. These include the environment used in the user study Fig. 16, as well as the art gallery and museum illustrated in Fig. 9 and Fig. 3 respectively. For moderately complex designs (hundreds of vertices in the visibility graph), the results show that DOME maintains interactive running times, taking a few seconds to compute diverse solution candidates. For most practical purposes, we anticipate that users will define optimization problems in focused environment regions, such as a particular room in a larger building, by specifying appropriately sized query and reference regions. For more complex designs with tens of thousands of vertices, such as Fig. 11, optimizations take close to one hour to complete. While this prevents an interactive design session, the results of our framework can still provide valuable design suggestions and feedback to the designer.

Environment | Ref Vertices | Query Vertices | Total Vertices | Effective Size (m²) | Objective Calls (c) | Graph CPU / GPU (ms/c) | Forest CPU / GPU (ms/c) | Penalty CPU (ms/c) | Total Time CPU / GPU (s)
Simple Room | 361 | 25 | 361 | 1444 | 692 | 3.9 / 0.88 | 0.7 / 0.56 | 0.08 | 4.35 / 2.25
Large Room | 1369 | 81 | 1369 | 5476 | 692 | 61.08 / 1.68 | 19.29 / 1.76 | 0.12 | 57.26 / 3.62
Museum | 588 | 208 | 588 | 2352 | 772 | 23.37 / 1.04 | 15.98 / 1.91 | 1.06 | 34.04 / 5.06
Art Gallery | 487 | 438 | 915 | 3660 | 732 | 67.59 / 1.35 | 22.73 / 1.93 | 5.36 | 72.69 / 7.41
Table 1. Diversity optimization running times, computed using a GeForce GTX GPU and an Intel Xeon at 3.5 GHz on a range of environments from simple and small to large and complex. Note that while the system is not real-time, it is sufficiently fast for interactive use.

Optimization Convergence. The convergence, or stopping, conditions of the optimization algorithm have a dramatic impact on the computational performance as well as on the quality of the results. The default termination conditions are overly conservative for this application, leading to long optimization times with negligible quality gains after the first few iterations (Hansen and Ostermeier, 1996). We therefore adjust the termination conditions to return results once the optimization has converged to within a fixed fraction of optimal, which leads to significant performance gains.

8.2. Examples

We demonstrate the application of DOME on a variety of real environments including a portion of the NYC Penn Station subway, the Metropolitan Museum of Art, and a layout based on the Washington Art Museum. DOME can also be applied in other interesting ways, for example, to increase or reduce the complexity of maze-like environments.

Art Gallery A.  Fig. 9 illustrates the iterative design of an art gallery. In particular, we are interested in increasing the degree of visibility, reducing the depth (which indicates an increase in accessibility), and increasing the entropy, or organization, of the gallery. The design process is performed over three optimization rounds, and a substantial improvement in the combined objective measure is discovered. In the heatmaps, red and blue show high and low values, respectively.

Figure 9. Optimizing an art gallery. From top to bottom, the default gallery and three consecutive rounds of optimization. Each round is performed with a combination of degree, depth, and entropy. The combined objective at the end of each round is visualized as a heat map. Red is high value, blue is low.

Art Gallery B.  Fig. 10 illustrates the benefits of the diversity member set in the design process of an art gallery. This gallery design was parametrized to allow for interesting reconfigurations of the exhibit rooms which directly modify the open space of the main corridor. The optimization process produced a diversity set that includes both highly angular and interesting designs as well as more balanced designs that carefully reconfigure the view of individual exhibits and the open space in the corridor.

(a) (b) (c) (d)* (e)* (f)*
Figure 10. Optimizing an art gallery and exemplifying the power of having diverse near optimal designs. The top row of figures shows the blueprint of the wall designs for the art gallery with a particular viewpoint shown in magenta. The middle row of figures shows the rendered environment from the viewpoint shown in the top row. The final row of figures shows the combined metric values as a heatmap over the entirety of each design. Column (a) is the original design of the art gallery. The columns (b) - (f) show the diversity members provided by the DOME system for a particular parametrization of the environment. (b) is a member that opens up the floor space and the overall visibility down the corridor of the gallery. (c) is a member that balances the corridor visibility of (b) with the visibility of particular exhibits. (d) is a member that balances the visibility from (b) while reducing the number of path decisions further down the corridor and being particularly accessible. (e) is a member that mainly reduces path decisions while increasing gallery sizes. (f) is a member that balances the best of all designs being open, accessible, and easy to navigate. The (*)s identify the designs that six expert architects independently designated as preferred.

Subway Station. We use DOME to optimize a level of the NYC Penn Station. The user-in-the-loop approach affords an iterative design process, where a user may initially set up the problem by defining the movable elements, and the Region of Query and Region of Reference. Upon selecting a suitable revision to the layout from a set of diverse candidates provided by the system, the user may modify the problem formulation. Fig. 11 illustrates results from three iterations. By adding additional parameters or changing the regions, the user can resolve issues that may have been identified over the course of previous optimization rounds. In this example, the user iteratively includes new query regions for the stairwell and elevators to account for additional aspects of the layout. What appear as minor alterations to the wall configuration in the subway noticeably increase the objective, leading to a design that significantly improves the pedestrian environment.

(a) Initial (b) Round 1 (c) Round 2 (d) Round 3
Figure 11. Optimization of Penn Station, NYC. This figure illustrates how the framework can be used on a large, complex environment with tens of thousands of vertices. Additional Regions of Query are incrementally added to resolve issues in the layout that were identified during the previous design optimization rounds. The light grey area is the Region of Reference. The heat-map areas are Regions of Query, with significant changes outlined in brown rectangles. The dashed cyan lines show the structures of interest that were optimized between rounds. The green boxes highlight the new areas of interest that were considered during the optimization round. In Round 1 (a-b), regions are chosen to increase the accessibility and visibility of subway platform access. In Round 2 (c), regions are chosen to increase the accessibility and visibility of exits. In Round 3 (d), the placement of washrooms and elevators is improved by making them more viewable and accessible from additional areas in the environment.

Metropolitan Museum of Art. In Fig. 12 we visually analyze the layout of the museum by inspecting its degree, depth, and entropy values over the entire layout. The top-right corner of the museum contains an area with very low visibility, specifically of the entrance. Therefore, we optimize the top-right area, shown in Fig. 13, to improve its visibility while maintaining the amount of wall surface area, which is necessary for displaying works of art.

Degree Depth Entropy
Figure 12. Degree, depth, and entropy for the Metropolitan Museum of Art.
Figure 13. Analysis of the metrics reveals low visibility in the top-right section of the museum, which we mitigate using DOME.

Maze. Interestingly, DOME can be used to alter the complexity of environments. Fig. 14 illustrates this approach on a maze-like environment. Starting with a standard maze, we maximize the visibility, minimize the depth (which maximizes accessibility), and maximize the entropy (which maximizes order). The resulting diverse set of layouts aligns the doorways to minimize long, windy passageways, which have high depth (b,c). The more ordered environment (b) is then fed back into the system, with the objective measure inverted. The resulting diverse layouts (d,e,f) are of similar complexity to the original maze, thus providing variations of the original design. Our method is able to automatically remove or introduce complexity in an environment by altering the objective definition, while producing several diverse designs that meet user-defined criteria.

(a) (b) (c) (d) (e) (f)
Figure 14. The top row of figures shows blueprints for maze-like designs. The middle row shows the rendered environments. The final row shows the combined metric values. (a) Initial maze configuration, with the query region (pink) and reference region (gray). (b,c) Optimized mazes with reduced environment complexity. (d,e,f) The result from (b) is optimized to increase complexity, producing new mazes. The maze size is approximately 20x20 m.

9. User Studies

A series of user studies was conducted to assess the usability and design-task performance of the DOME system. Participants were invited to a two-part study session. Their first task was to complete an unstructured usability experiment in which they made use of DOME and rated the general usability of DOME as an assisted user-in-the-loop method. Participants were then randomly assigned to one of three experimental groups, including a control group, to assess the general performance of assisted and unassisted design tools.

For this experiment, eighteen subjects (see Table 2) volunteered to participate and gave informed consent. All participants were graduate-level students in computer science or a closely related field.

9.1. Usability

The goal of this experiment is to evaluate, from a user's perspective, the DOME method of automated optimization with high-value, diverse candidate solutions.

Materials and Methods: All participants interacted with the method on a desktop PC (Windows 7 64-bit, 8GB RAM, AMD FX(tm)-8320, 8 cores, 3.5GHz). Using a simple room as a teaching tool, the participants are given short instructions on how to manipulate and set parameter bounds for translations and rotations of environment elements. Participants are then shown how to select candidates from the diversity set.

The metrics are explained in colloquial, general terms: Degree, Tree Depth, and Entropy are translated to visibility, accessibility, and organization, respectively, according to previous interpretations (Turner, 2001). Since the formal terms are unfamiliar to novice users outside the field of computational architectural analysis, the task description uses simple language. Participants were told that visibility relates to how visible any portion of the environment may be from another; accessibility relates to how accessible the environment is; and organization relates to how confusing the layout of the environment would be.

After this introduction, participants were presented with a complex real-world art gallery environment in which the ROIs, the Region of Query and the Region of Reference, were already defined. The participants were tasked with increasing the visibility, accessibility, and organization of the environment using DOME for a fixed amount of time ( minutes). At the end of this task, participants were immediately given a System Usability Scale (SUS) questionnaire to measure the usability of the system (Brooke, 2013; Brooke et al., 1996). The SUS is a well-established and tested method for evaluating the usability of a product.

Results: The summary statistics of the SUS scores are reported in Table 2, and the quartiles in Table 3.

Count Mean Median Standard Deviation
18 70.83 73.75 14.70
Table 2. Summary statistics for SUS results, where the score range is from 0 to 100.
Quartile    Range
First       22.5 – 68.12
Second      68.12 – 73.75
Third       73.75 – 76.87
Fourth      76.87 – 87.5
IQR         8.75
Table 3. SUS quartile ranges. The ranges for each quartile of the data are reported to show the distribution of the results. The Interquartile Range (IQR) is also reported.

Discussion:

The SUS score is a composite measure of usability that has been tested on a variety of tasks and proved to be robust and reliable (Sauro and Lewis, 2011). A particular advantage of the SUS is its ability to provide a reliable measure of usability even with small numbers of participants (Tullis and Stetson, 2004). It has been found that the SUS in fact measures two factors: the "usability" and the "learnability" of a system (Lewis and Sauro, 2009; Borsci et al., 2009). SUS scores are scaled to the range of 0 to 100, with 68 being the average score taken over many tasks from different domains; scores above this are considered above average and acceptable (Sauro and Lewis, 2011). A mapping of scaled SUS scores to common adjectives, based on responses from many participants on several tasks across different domains, provides an intuitive interpretation for each score range (Bangor et al., 2009).
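For concreteness, the standard SUS scoring procedure is a few lines of arithmetic. The Python sketch below (with hypothetical responses) maps ten 1-5 Likert answers to the 0-100 scale: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the raw sum is multiplied by 2.5.

```python
def sus_score(responses):
    """Compute the 0-100 SUS score from ten 1-5 Likert responses.

    Odd-numbered items (1-indexed) are positively worded and contribute
    (response - 1); even-numbered items are negatively worded and
    contribute (5 - response). The raw sum (0-40) is scaled by 2.5.
    """
    assert len(responses) == 10
    raw = sum(r - 1 if i % 2 == 0 else 5 - r
              for i, r in enumerate(responses))
    return raw * 2.5

# Hypothetical participant responses (items 1..10).
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 2]))  # -> 82.5
```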

The results show that the participants' mean and median scores fall within the adjective ranges of "good" and "excellent" (Bangor et al., 2009). Furthermore, the quartile ranges show a strong skew toward high SUS scores. This can be interpreted as meaning the DOME system is highly usable and learnable with a reasonable degree of confidence.

9.2. Design Performance

In this experiment, the effective performance of the method with respect to real-world use is evaluated. The hypothesis is that DOME is better, in terms of objective metric values and efficiency, than both manual unassisted design approaches and a version of DOME that does not provide the diversity set. A secondary hypothesis is that, as the complexity of the environment increases, so does the value of automated optimization and diverse candidate suggestions.

Materials and Methods: This experiment takes the form of an A-B-C group design wherein the participant pool is divided into thirds and randomly assigned one of three tools as follows: group A is given the unassisted tool (the standard Autodesk Revit interface); group B is given a tool which exposes the optimization portion of DOME, providing only the single most optimal candidate without the diversity set; and group C is given the full assisted user-in-the-loop DOME method with diversity.

Participants are given two different environments. The first is a simple art gallery room with three parametrizable walls of the same dimensions. The second is a more complex art gallery with two sides, each with an asymmetric set of parametrizable objects (four square pillars and three walls), connected by a small hallway at the centre.

For each environment, the participant was tasked with improving the metrics described in Section 9.1. The participants were given up to minutes per environment to make as many adjustments as they wished, and could conclude their design at any point within that time once satisfied.

Results: The mean and standard deviation of the scalarized objective values are shown in Fig. 15. The mean number of design iterations made by participants in the optimization-only group (B) was: for the simple environment; and for the complex environment. The mean number of design iterations made by participants in the full DOME group (C) was: for the simple environment; and for the complex environment.

Figure 15. Comparison between participants in (A) the manual group, (B) assisted without diversity, and (C) assisted with diversity by DOME. DOME participants designed layouts with significantly higher objective measures on average and lower deviation, indicating greater consistency across users. The mean and variance are calculated over the scalarized objective f-values.
Figure 16. Selected layouts generated by user study participants, showing the default floor plan, a manual design, and a DOME-assisted design. The top row shows the simple layout, while the bottom row shows the complex one. Pink regions form the Region of Query, and grey regions form the Region of Reference. Note the difference between the manual and DOME-assisted results. The corresponding objective values are reported under each layout.

Discussion: The results show that, for groups (B) and (C), participants who were given access to optimal results generally performed better in terms of the objective than those who made their designs manually (Fig. 15). Group (C), who had access to the diversity set, performed on par with participants in group (B), who were given only the single optimal result. In summary, using DOME produces environments with much higher objective values.

The number of design iterations performed before participants were satisfied and submitted their work is lower when using the full DOME method. These results indicate that providing solution diversity is helpful, especially for more complex environments: as the scenario, and thus the task, grows in complexity, diversity becomes more valuable.

It is also noteworthy that the variance in the complex environment is significantly lower when using DOME with the full diversity set, and that the group A results show a significantly larger standard deviation than those of B and C. These results suggest that manual optimization can be very inconsistent among different users, while our system can effectively guide the user and keep the design exploration more focused. This may also be a sign that diversity helps users avoid local optima in the design space.

However, it is important to note that solutions returned by group (C), the DOME users, were still quite diverse, with different users finding new ways to maximize the objective, even for these simple layouts.

9.3. Expert Validation

The goal of this experiment was to validate the designs created by novice participants from the perspective of architectural and design experts. Experts were asked for their perceptual preference between design outputs from novice participant sessions using either the unassisted or the assisted tool. Our hypothesis is that designs produced with the assisted tool and its diverse results are preferred over those produced with the standard unassisted tools.

Seven experts in the fields of architectural, interior, and civil design participated in the expert survey. An online questionnaire with a series of binary A/B choices was provided to each participant.

The questionnaire consisted of randomized environment pairs, each with one selection from the manual design set and one from the DOME design set, corresponding to participant designs from groups (A) and (C) respectively (described in Section 9.2). Each expert was asked to make a binary choice for each environment pairing based on their expert intuition for which design best fulfilled the metrics of degree of visibility, tree depth, and entropy. The task objectives and metrics described for the A-B-C study were provided to the experts for additional guidance.

Results: The Interquartile Range (IQR) (Rousseeuw and Croux, 1992) is computed and shown in Fig. 17. The horizontal lines at the centres of the boxes indicate the medians, and the boxes cover the interquartile ranges, for the users' designs from the DOME and manual tools respectively.
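The medians and interquartile ranges shown in Fig. 17 follow the usual percentile definition; a short sketch using NumPy, with hypothetical preference percentages in place of the actual survey data:

```python
import numpy as np

def box_stats(values):
    """Median, quartile boundaries, and IQR of a sample."""
    q1, med, q3 = np.percentile(values, [25, 50, 75])
    return med, (q1, q3), q3 - q1

# Hypothetical expert preference percentages for DOME designs.
print(box_stats([60, 70, 75, 80, 85, 90, 95]))  # median 80.0, IQR 15.0
```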

Discussion: The results show a strong preference for DOME designs with diverse results over those from the standard unassisted tools. This also suggests that DOME guides participants toward preferable design patterns, even when the designers are novices or from an unrelated field.

Figure 17. Distribution of experts’ perceptual preferences (%) of the environments designed by novice users.

A survey was given to a diverse set of experts in architectural design (N=) to select a preferred design from Fig. 10. There were no outlier selections: all three chosen candidates received the same number of votes, and no expert chose the original gallery design given the context. This indicates that in the multi-objective building design space experts have their own preferences, and that diversity optimization can accommodate these many preferences. The experts also agreed that an iterative, non-prescriptive (i.e., no single solution) approach is necessary and beneficial.

10. Conclusion

We have presented DOME, a user-in-the-loop system for computer-aided environment design that analyzes and optimizes environments with respect to human behaviour. The user studies indicate that not just design optimization but design diversity can be beneficial to the user. Results revealed that providing multiple diverse designs, especially for more complex environments, allows the user to find better solutions in fewer iterations. By providing the user with several candidate designs rather than just analytical data or a single optimal solution, the user remains a crucial part of the process at all stages of the design. In a sense, the system enhances the creative process rather than controlling it.

Limitations and Future Work.

Like most multi-objective optimization frameworks, our approach includes a variety of weights that the user can set to tweak the results. Although one can rely on default values, it might be beneficial for the user to adjust them. We plan to study the effects of these parameters on the resulting configurations with a large scale experiment, and attempt to identify specific relationships which might serve as guidelines.

The system is interactive for moderate scale designs. We have identified possibilities for improving performance, such as employing approximate and incremental algorithms for computing the spatial analysis metrics, which we plan to investigate in the future. We also want to investigate the use of more dynamic metrics, for example, crowd flow.

It is worth noting that these measures could be estimated or supplanted by crowd simulations (Berseth et al., 2015; Feng et al., 2016). However, such methods are impractical for repeated use in an interactive application such as ours. Furthermore, they tend to be sensitive to the particular crowd simulator and to the simulator's internal parameters. Learning the relationship between an environment parametrization and a realistic crowd from examples is an interesting future project.

References

  • Agrawal et al. (2014) Shailen Agrawal, Shuo Shen, and Michiel van de Panne. 2014. Diverse Motions and Character Shapes for Simulated Skills. IEEE Transactions on Visualization and Computer Graphics 20, 10 (Oct 2014), 1345–1355. DOI:https://doi.org/10.1109/TVCG.2014.2314658 
  • Akase and Okada (2014) Ryuya Akase and Yoshihiro Okada. 2014. Web-based Multiuser 3D Room Layout System Using Interactive Evolutionary Computation with Conjoint Analysis. In Proceedings of the 7th International Symposium on Visual Information Communication and Interaction (VINCI ’14). ACM, New York, NY, USA, Article 178, 10 pages. DOI:https://doi.org/10.1145/2636240.2636849 
  • AlHalawani and Mitra (2015) Sawsan AlHalawani and Niloy J. Mitra. 2015. Congestion-Aware Warehouse Flow Analysis and Optimization. In Advances in Visual Computing - 11th International Symposium, ISVC 2015, December 14-16, 2015, Proceedings, Part II. 702–711. DOI:https://doi.org/10.1007/978-3-319-27863-6_66 
  • AlHalawani et al. (2014) Sawsan AlHalawani, Yong-Liang Yang, Peter Wonka, and Niloy J. Mitra. 2014. What Makes London Work Like London? Comput. Graph. Forum 33, 5 (Aug. 2014), 157–165. DOI:https://doi.org/10.1111/cgf.12441 
  • Arvin and House (2002) Scott A. Arvin and Donald H. House. 2002. Modeling architectural design objectives in physically based space planning. Automation in Construction 11, 2 (2002), 213–225.
  • Bafna (2003) Sonit Bafna. 2003. Space Syntax: A Brief Introduction to Its Logic and Analytical Techniques. Environment and Behavior 35, 1 (2003), 17–29. DOI:https://doi.org/10.1177/0013916502238863 
  • Bangor et al. (2009) Aaron Bangor, Philip Kortum, and James Miller. 2009. Determining what individual SUS scores mean: Adding an adjective rating scale. Journal of usability studies 4, 3 (2009), 114–123.
  • Bassuet et al. (2014) Alban Bassuet, Dave Rife, and Luca Dellatorre. 2014. Computational and Optimization Design in Geometric Acoustics. Building Acoustics 21, 1 (2014), 75–86.
  • Berseth et al. (2015) Glen Berseth, Muhammad Usman, Brandon Haworth, Mubbasir Kapadia, and Petros Faloutsos. 2015. Environment optimization for crowd evacuation. Computer Animation and Virtual Worlds 26, 3–4 (2015), 377–386.
  • Block et al. (2014) Philippe Block, Jan Knippers, Niloy J. Mitra, and Wenping Wang. 2014. Advances in Architectural Geometry 2014. (2014).
  • Borsci et al. (2009) Simone Borsci, Stefano Federici, and Marco Lauriola. 2009. On the dimensionality of the System Usability Scale: a test of alternative measurement models. Cognitive processing 10, 3 (2009), 193–197.
  • Brooke (2013) John Brooke. 2013. SUS: a retrospective. Journal of usability studies 8, 2 (2013), 29–40.
  • Brooke et al. (1996) John Brooke and others. 1996. SUS-A quick and dirty usability scale. Usability evaluation in industry 189, 194 (1996), 4–7.
  • Caldas and Norford (2002) Luisa Gama Caldas and Leslie K. Norford. 2002. A design optimization tool based on a genetic algorithm. Automation in construction 11, 2 (2002), 173–184.
  • Coman and Muñoz-Avila (2011) Alexandra Coman and Héctor Muñoz-Avila. 2011. Generating Diverse Plans Using Quantitative and Qualitative Plan Distance Metrics.. In AAAI. Citeseer, 946–951.
  • Dara-Abrams (2006) Drew Dara-Abrams. 2006. Architecture of mind and world: How urban form influences spatial cognition. In Proceedings of the Space Syntax and Spatial Cognition of the Workshop at Spatial Cognition, Bremen, Germany, Vol. 24.
  • Davies et al. (2006) Clare Davies, Rodrigo Mora, and David Peebles. 2006. Isovists for Orientation: can space syntax help us predict directional confusion?. In Space syntax and spatial cognition: Proceedings of the workshop held in Bremen, Vol. 2. 81–92.
  • Desyllas and Duxbury (2001) Jake Desyllas and Elspeth Duxbury. 2001. Axial Maps and Visibility Graph Analysis: A comparison of their methodology and use in models of urban pedestrian movement. In 3rd International Space Syntax Symposium. 27.1–27.13.
  • Ding et al. (2006) Yichuan Ding, Sandra Gregov, Oleg Grodzevich, Itamar Halevy, Zanin Kavazovic, Oleksandr Romanko, Tamar Seeman, Romy Shioda, and Fabien Youbissi. 2006. Discussions on normalization and other topics in multiobjective optimization. In Fields-MITACS, Fields Industrial Problem Solving Workshop.
  • Emo et al. (2012) Beatrix Emo, Christoph Hoelscher, Jan Wiener, and Ruth Dalton. 2012. Wayfinding and spatial configuration: evidence from street corners. (2012).
  • Felkner et al. (2013) Juliana Felkner, Eleni Chatzi, and Toni Kotnik. 2013. Interactive particle swarm optimization for the architectural design of truss structures. In Computational Intelligence for Engineering Solutions (CIES), 2013 IEEE Symposium on. IEEE, 15–22.
  • Feng et al. (2016) Tian Feng, Lap-Fai Yu, Sai-Kit Yeung, KangKang Yin, and Kun Zhou. 2016. Crowd-driven Mid-scale Layout Design. ACM Trans. Graph. 35, 4, Article 132 (July 2016), 14 pages. DOI:https://doi.org/10.1145/2897824.2925894 
  • Fisher et al. (2015) Matthew Fisher, Manolis Savva, Yangyan Li, Pat Hanrahan, and Matthias Niessner. 2015. Activity-centric Scene Synthesis for Functional 3D Scene Modeling. ACM Trans. Graph. 34, 6, Article 179 (Oct. 2015), 13 pages. DOI:https://doi.org/10.1145/2816795.2818057 
  • Fruin (1971) John J Fruin. 1971. Pedestrian planning and design. Technical Report.
  • Galle (1981) Per Galle. 1981. An Algorithm for Exhaustive Generation of Building Floor Plans. Commun. ACM 24, 12 (Dec. 1981), 813–825. DOI:https://doi.org/10.1145/358800.358804 
  • Hansen and Ostermeier (1996) Nikolaus Hansen and Andreas Ostermeier. 1996. Adapting arbitrary normal mutation distributions in evolution strategies: the covariance matrix adaptation. In IEEE International Conference on Evolutionary Computation. 312–317.
  • Harada et al. (1995) Mikako Harada, Andrew Witkin, and David Baraff. 1995. Interactive Physically-based Manipulation of Discrete/Continuous Models. In Proceedings of the 22Nd Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ’95). ACM, New York, NY, USA, 199–208. DOI:https://doi.org/10.1145/218380.218443 
  • Hebrard et al. (2005) Emmanuel Hebrard, Brahim Hnich, Barry O’Sullivan, and Toby Walsh. 2005. Finding diverse and similar solutions in constraint programming. In AAAI, Vol. 5. 372–377.
  • Hillier and Hanson (1984) Bill Hillier and Julienne Hanson. 1984. The Social Logic of Space. Cambridge: Press Syndicate of the University of Cambridge.
  • Hillier et al. (1987) WRG Hillier, Julienne Hanson, and John Peponis. 1987. Syntactic analysis of settlements. Architecture et comportement/Architecture and Behaviour 3, 3 (1987), 217–231.
  • Hölscher et al. (2004) Christoph Hölscher, Tobias Meilinger, Georg Vrachliotis, Martin Brösamle, and Markus Knauff. 2004. Finding the way inside: Linking architectural design analysis and cognitive processes. In Spatial Cognition IV. Reasoning, Action, Interaction. Springer, 1–23.
  • Jiang et al. (2000) Bin Jiang, Christophe Claramunt, and Björn Klarqvist. 2000. Integration of space syntax into GIS for modelling urban spaces. International Journal of Applied Earth Observation and Geoinformation 2, 3 (2000), 161–171.
  • Kapadia et al. (2015) Mubbasir Kapadia, Nuria Pelechano, Jan Allbeck, and Norm Badler. 2015. Virtual Crowds: Steps Toward Behavioral Realism. Synthesis Lectures on Visual Computing 7, 4 (2015), 1–270. DOI:https://doi.org/10.2200/S00673ED1V01Y201509CGR020  arXiv:http://dx.doi.org/10.2200/S00673ED1V01Y201509CGR020
  • Krause et al. (2016) Oswin Krause, Dídac Rodríguez Arbonès, and Christian Igel. 2016. CMA-ES with Optimal Covariance Update and Storage Complexity. In Advances in Neural Information Processing Systems 29, D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (Eds.). Curran Associates, Inc., 370–378. http://papers.nips.cc/paper/6457-cma-es-with-optimal-covariance-update-and-storage-complexity.pdf
  • Lewis and Sauro (2009) James R Lewis and Jeff Sauro. 2009. The factor structure of the system usability scale. In Human centered design. Springer, 94–103.
  • Liu et al. (2013) Han Liu, Yong-Liang Yang, Sawsan AlHalawani, and Niloy J. Mitra. 2013. Constraint-aware interior layout exploration for pre-cast concrete-based buildings. The Visual Computer 29, 6-8 (2013), 663–673. DOI:https://doi.org/10.1007/s00371-013-0825-1 
  • Ma et al. (2014) Chongyang Ma, Nicholas Vining, Sylvain Lefebvre, and Alla Sheffer. 2014. Game level layout from design specification. Computer Graphics Forum 33, 2 (2014), 95–104. DOI:https://doi.org/10.1111/cgf.12314 
  • Marks et al. (1997) Joe Marks, Brad Andalman, Paul A. Beardsley, William Freeman, Sarah Gibson, Jessica Hodgins, Thomas Kang, Brian Mirtich, Hanspeter Pfister, Wheeler Ruml, Kathy Ryall, Joshua Seims, and Stuart Shieber. 1997. Design Galleries: A General Approach to Setting Parameters for Computer Graphics and Animation. In Proceedings of ACM SIGGRAPH. 389–400. DOI:https://doi.org/10.1145/258734.258887 
  • Marler and Arora (2004) R. Timothy Marler and Jasbir S. Arora. 2004. Survey of multi-objective optimization methods for engineering. Structural and Multidisciplinary Optimization 26, 6 (2004), 369–395. DOI:https://doi.org/10.1007/s00158-003-0368-6 
  • Meilinger et al. (2012) Tobias Meilinger, Gerald Franz, and Heinrich H Bülthoff. 2012. From isovists via mental representations to behaviour: first steps toward closing the causal chain. Environment and Planning B: Planning and Design 39, 1 (2012), 48–62.
  • Merrell et al. (2010) Paul Merrell, Eric Schkufza, and Vladlen Koltun. 2010. Computer-generated Residential Building Layouts. ACM Trans. Graph. 29, 6, Article 181 (Dec. 2010), 12 pages. DOI:https://doi.org/10.1145/1882261.1866203 
  • Merrell et al. (2011) Paul Merrell, Eric Schkufza, Zeyang Li, Maneesh Agrawala, and Vladlen Koltun. 2011. Interactive Furniture Layout Using Interior Design Guidelines. ACM Trans. Graph. 30, 4, Article 87 (July 2011), 10 pages. DOI:https://doi.org/10.1145/2010324.1964982 
  • Michalek and Papalambros (2002) Jeremy Michalek and Panos Papalambros. 2002. Interactive design optimization of architectural layouts. Engineering Optimization 34, 5 (2002), 485–501. DOI:https://doi.org/10.1080/03052150214021 
  • Milgo et al. (2017) Edna Milgo, Nixon Ronoh, Peter Waiganjo, and Bernard Manderick. 2017. Adaptiveness of CMA Based Samplers. In Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO ’17). ACM, New York, NY, USA, 179–180. DOI:https://doi.org/10.1145/3067695.3075611 
  • Müller and Sbalzarini (2010) C. L. Müller and I. F. Sbalzarini. 2010. Gaussian Adaptation as a unifying framework for continuous black-box optimization and adaptive Monte Carlo sampling. In IEEE Congress on Evolutionary Computation. 1–8. DOI:https://doi.org/10.1109/CEC.2010.5586491 
  • Nešetřil and Ossona de Mendez (2012) Jaroslav Nešetřil and Patrice Ossona de Mendez. 2012. Sparsity: Graphs, Structures, and Algorithms. Springer Publishing Company, Incorporated.
  • Nguyen et al. (2014) Anh-Tuan Nguyen, Sigrid Reiter, and Philippe Rigo. 2014. A review on simulation-based optimization methods applied to building performance analysis. Applied Energy 113, Supplement C (2014), 1043 – 1058. DOI:https://doi.org/10.1016/j.apenergy.2013.08.061 
  • Peng et al. (2016) Chi-Han Peng, Yong-Liang Yang, Fan Bao, Daniel Fink, Dong-Ming Yan, Peter Wonka, and Niloy J. Mitra. 2016. Computational Network Design from Functional Specifications. ACM Trans. Graph. 35, 4, Article 131 (July 2016), 12 pages. DOI:https://doi.org/10.1145/2897824.2925935 
  • Peponis et al. (1990) John Peponis, Craig Zimring, and Yoon Kyung Choi. 1990. Finding the building in wayfinding. Environment and behavior 22, 5 (1990), 555–590.
  • Pottmann et al. (2014) Helmut Pottmann, Michael Eigensatz, Amir Vaxman, and Johannes Wallner. 2014. Architectural geometry. Computers & Graphics (2014).
  • Rousseeuw and Croux (1992) Peter J. Rousseeuw and Christophe Croux. 1992. Explicit scale estimators with high breakdown point. L1-Statistical analysis and related methods 1 (1992), 77–92.
  • Sauro and Lewis (2011) Jeff Sauro and James R Lewis. 2011. When designing usability questionnaires, does it hurt to be positive?. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2215–2224.
  • Shi and Yang (2013) Xing Shi and Wenjie Yang. 2013. Performance-driven architectural design and optimization technique from a perspective of architects. Automation in Construction 32 (2013), 125–135.
  • Srivastava et al. (2007) Biplav Srivastava, Tuan Anh Nguyen, Alfonso Gerevini, Subbarao Kambhampati, Minh Binh Do, and Ivan Serina. 2007. Domain Independent Approaches for Finding Diverse Plans.. In IJCAI. 2016–2022.
  • Tullis and Stetson (2004) Thomas S. Tullis and Jacqueline N. Stetson. 2004. A comparison of questionnaires for assessing website usability. In Usability professional association conference. 1–12.
  • Turner (2001) Alasdair Turner. 2001. A program to perform visibility graph analysis. In Proceedings of the 3rd Space Syntax Symposium, Atlanta, University of Michigan. 31–1.
  • Turner and Penn (1999) Alasdair Turner and Alan Penn. 1999. Making isovists syntactic: isovist integration analysis. In 2nd International Symposium on Space Syntax, Brasilia. Citeseer.
  • Turrin et al. (2011) Michela Turrin, Peter von Buelow, and Rudi Stouffs. 2011. Design explorations of performance driven geometry in architectural design using parametric modeling and genetic algorithms. Advanced Engineering Informatics 25, 4 (2011), 656–675.
  • Twigg and James (2007) Christopher D. Twigg and Doug L. James. 2007. Many-worlds Browsing for Control of Multibody Dynamics. In Proceedings of ACM SIGGRAPH. ACM, New York, NY, USA, Article 14. DOI:https://doi.org/10.1145/1275808.1276395 
  • Ursem (2002) Rasmus K. Ursem. 2002. Parallel Problem Solving from Nature — PPSN VII: 7th International Conference Granada, Spain, September 7–11, 2002 Proceedings. Springer Berlin Heidelberg, Berlin, Heidelberg, Chapter Diversity-Guided Evolutionary Algorithms, 462–471. DOI:https://doi.org/10.1007/3-540-45712-7_45 
  • Usman et al. (2017) Muhammad Usman, Brandon Haworth, Glen Berseth, Mubbasir Kapadia, and Petros Faloutsos. 2017. Perceptual Evaluation of Space in Virtual Environments. In Proceedings of the Tenth International Conference on Motion in Games (MIG ’17). ACM, New York, NY, USA, Article 16, 10 pages. DOI:https://doi.org/10.1145/3136457.3136458 
  • Wagner et al. (2007) Tobias Wagner, Nicola Beume, and Boris Naujoks. 2007. Pareto-, aggregation-, and indicator-based methods in many-objective optimization. In Evolutionary multi-criterion optimization. Springer, 742–756.
  • Yi and Yi (2014) Hwang Yi and Yun Kyu Yi. 2014. Performance Based Architectural Design Optimization: Automated 3D Space Layout Using Simulated Annealing. In ASHRAE/IBPSA-USA Building Simulation Conference.
  • Yu et al. (2011) Lap-Fai Yu, Sai Kit Yeung, Chi-Keung Tang, Demetri Terzopoulos, Tony F. Chan, and Stanley Osher. 2011. Make it home: automatic optimization of furniture arrangement. ACM Transactions on Graphics 30, 4 (2011), 86.

Appendix A

A.1. Metrics

This section describes additional details related to the methods used in this work.

A.2. CMA vs. Simulated Annealing + MCMC

The choice of optimization algorithm for this type of design problem is an important consideration. A recent review of building-architecture optimization frameworks highlights the numerous optimization techniques used in the area, and the reasons why some are better suited than others to particular design problems (Nguyen et al., 2014). Here we list the most relevant reasons for using CMA. Simulated annealing (SA) may need carefully designed parameter-selection methods, like the ones used in (Feng et al., 2016), and it is a poor choice given our desire to impose design constraints. SA can handle noisy objective functions, but only under certain conditions that cannot be guaranteed for most building metrics. Genetic algorithms (GAs), like CMA, are often parallelizable, making the method more efficient, and CMA should be better at escaping local minima. Last, GAs have been shown to exhibit better early convergence, quickly finding good local minima that are often good enough for these types of design problems. CMA can be viewed as a form of MCMC where the chain is the series of generated covariance distributions (Krause et al., 2016); MCMC can even be formulated to use a variant of CMA for sampling to improve convergence (Müller and Sbalzarini, 2010), and such samplers outperform many variants of MCMC (Milgo et al., 2017).
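As an illustration of the generational ask/tell loop that a CMA-based search over layout parameters could follow, here is a minimal sketch using the pycma package; the quadratic layout_objective is a stand-in for the real scalarized building metrics, and the bounds stand in for the user-defined parameter limits:

```python
import cma  # pycma: pip install cma

def layout_objective(x):
    # Stand-in for the scalarized building metrics (degree, tree depth,
    # entropy); a simple quadratic so the sketch runs end to end.
    return sum(xi ** 2 for xi in x)

x0 = [0.5] * 8  # hypothetical initial element translations/rotations
es = cma.CMAEvolutionStrategy(x0, 0.3, {'bounds': [-2.0, 2.0]})
while not es.stop():
    candidates = es.ask()  # sample one generation
    # Candidate evaluations are independent and could be parallelized.
    es.tell(candidates, [layout_objective(c) for c in candidates])
print(es.result.xbest)
```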

A.3. Multi-Objective Optimization Methods

A.3.1. Scalarized

This method computes a weighted combination of the objectives, weighting each objective term with respect to some relative weighting. It is challenging to use for two reasons. First, determining the weights for a combination of objectives can be a daunting task; the objectives themselves may be non-linear, with some growing faster than others, which usually precludes finding a single set of weights that works well when the environment changes. Second, if a relative weighting is used, it helps to normalize the metrics in some way. The maximum value for the Degree metric can be found by removing all of the items from the simulation and calculating the degree. There is no simple calculation for the diversity bound; however, an upper bound can be found via optimization: since the diversity metric is very cheap to compute (relative to Degree, etc.), an optimization for diversity alone can be performed first to find its upper bound. Both degree and diversity are non-linear functions; this is acceptable and can give desirable results when performing a scalarized optimization, but it would still be challenging to find objective weights (Ding et al., 2006).
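A minimal sketch of this scalarization in Python; the metric callables and normalization bounds are assumptions for illustration (e.g., the empty-room degree as the Degree upper bound, or a diversity bound found by a diversity-only optimization), not the paper's exact code:

```python
def scalarized_objective(env, metric_fns, weights, bounds):
    """Weighted sum of min-max-normalized metric values.

    metric_fns: dict name -> callable(env) -> float (hypothetical metrics)
    bounds:     dict name -> (lo, hi) normalization bounds
    """
    total = 0.0
    for name, fn in metric_fns.items():
        lo, hi = bounds[name]
        total += weights[name] * (fn(env) - lo) / (hi - lo)
    return total

# Hypothetical usage with stand-in metric functions:
fns = {'degree': lambda e: e['deg'], 'diversity': lambda e: e['div']}
w = {'degree': 0.7, 'diversity': 0.3}
b = {'degree': (0.0, 100.0), 'diversity': (0.0, 5.0)}
print(scalarized_objective({'deg': 60.0, 'div': 2.0}, fns, w, b))
```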

A.3.2. Pareto-Front Optimization

This method finds a set of non-dominated points that represent optimal trade-offs between a set of objectives. The issue with using a Pareto-front method here is that computing diversity between the members is non-trivial: diversity becomes a measure of the distance between points on the Pareto front, and it is not clear how to accomplish this without introducing a large number of parameters. Possibly, two different objectives could be chosen as the axes of the trade-off, but such objectives are only proxies for diversity and could be very similar, producing results with minimal dissimilarity.
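For reference, extracting the non-dominated set from a finite pool of candidates is straightforward; a minimal O(n^2) sketch in Python, with hypothetical (visibility, accessibility) values, both to be maximized:

```python
def pareto_front(points):
    """Return the non-dominated points (all objectives maximized).

    A point p is dominated if some q is >= p in every objective and
    differs from p. Quadratic scan, fine for small candidate sets.
    """
    front = []
    for p in points:
        dominated = any(all(qi >= pi for qi, pi in zip(q, p)) and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (visibility, accessibility) values for five candidates:
cands = [(0.9, 0.2), (0.7, 0.7), (0.6, 0.6), (0.2, 0.9), (0.5, 0.5)]
print(pareto_front(cands))  # -> [(0.9, 0.2), (0.7, 0.7), (0.2, 0.9)]
```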

A.3.3. Hierarchical Optimization

With hierarchical optimization, an ordering and objective-specific thresholds are used instead of only relative weights. The objectives are optimized in the given order: each objective is optimized to find its optimum, and from this a constraint is added to the optimization of the next objective. This constraint adds a penalty whenever a previous objective's value drops below its threshold value. This gives more control over the trade-offs between objectives. The method works well and converges quickly, as can be seen in Fig. 18. In this experiment, we optimized art gallery B in Fig. 10 with a diversity set of size . The optimization completed in a few seconds and converged before it was terminated.
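A minimal sketch of this lexicographic scheme in Python, assuming a generic black-box minimize routine (the toy random search below stands in for the CMA loop of Section A.2) and objectives to be maximized; each stage re-optimizes with penalties whenever an earlier objective falls below its threshold:

```python
import random

def hierarchical_optimize(objectives, thresholds, minimize):
    """Optimize objectives in priority order (all to be maximized).

    objectives: callables f_i(x) -> float, highest priority first
    thresholds: after stage i, later stages pay a penalty whenever
                f_i(x) < thresholds[i]
    """
    constraints, x_best = [], None
    for f, tau in zip(objectives, thresholds):
        def staged(x, f=f, cs=tuple(constraints)):
            penalty = sum(max(0.0, t - g(x)) for g, t in cs)
            return -f(x) + 1e3 * penalty  # negate to maximize f
        x_best = minimize(staged)
        constraints.append((f, tau))
    return x_best

def random_search_minimize(f, dim=2, iters=2000):
    # Toy stand-in for the real optimizer (e.g., CMA).
    best_x, best_v = None, float('inf')
    for _ in range(iters):
        x = [random.uniform(-1.0, 1.0) for _ in range(dim)]
        if f(x) < best_v:
            best_x, best_v = x, f(x)
    return best_x

f1 = lambda x: -(x[0] ** 2)  # highest priority: drive x[0] to 0
f2 = lambda x: x[1]          # then push x[1] up, subject to f1's threshold
print(hierarchical_optimize([f1, f2], [-0.01, 0.0], random_search_minimize))
```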

Figure 18. Diversity optimization convergence.